WO2020238805A1 - Face recognition device and access control equipment - Google Patents

Face recognition device and access control equipment

Info

Publication number
WO2020238805A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
exposure
information
light
Application number
PCT/CN2020/091910
Other languages
English (en)
French (fr)
Inventor
於敏杰
聂鑫鑫
罗丽红
Original Assignee
Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd.
Publication of WO2020238805A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/00174 - Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563 - Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns

Definitions

  • This application relates to the field of computer vision, in particular to a face recognition device and access control equipment.
  • The related art provides a photographing device that includes a multispectral filter array sensor. Some pixels in this sensor sense only near-infrared light, while the remaining pixels sense both near-infrared light and visible light.
  • Such a device collects an original image signal containing both visible light information and near-infrared light information, and separates from it an RGB image (containing both visible light and near-infrared light information) and a near-infrared image (containing only near-infrared light information). The near-infrared light information contained in each pixel of the RGB image is then removed to obtain a visible light image containing only visible light information.
  • Because a photographing device with a multispectral filter array sensor must separate the near-infrared light information from the visible light information in a later processing stage, the process is complicated, and the resulting near-infrared image and visible light image are of relatively low quality. Consequently, when face recognition is performed on images obtained by such a device, its accuracy is low.
  • This application provides a face recognition device and access control equipment, which can solve the problem of low face recognition accuracy in the related art.
  • the technical solution is as follows:
  • a face recognition device includes: an image acquisition unit, an image processor, and a face analysis unit;
  • the image acquisition unit includes a filter assembly, the filter assembly includes a first filter, and the first filter passes visible light and part of near-infrared light;
  • the image acquisition unit is configured to acquire a first image signal and a second image signal, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
  • the image processor is configured to process at least one of the first image signal and the second image signal to obtain first image information
  • the face analysis unit is configured to perform face analysis on the first image information to obtain a face analysis result.
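The claim above describes a three-stage flow: the acquisition unit produces two image signals under different fill-light conditions, the image processor derives first image information from at least one of them, and the face analysis unit analyzes that information. A minimal sketch of this data flow, where every function and dictionary key is hypothetical and not taken from the patent:

```python
def acquire_signals():
    """Hypothetical acquisition step: the first preset exposure runs with
    near-infrared fill light active; the second runs without it."""
    first_signal = {"preset": 1, "nir_fill_light": True}
    second_signal = {"preset": 2, "nir_fill_light": False}
    return first_signal, second_signal

def image_processor(first_signal, second_signal):
    """Process at least one of the two signals into 'first image information'."""
    return {"kind": "first_image_information",
            "sources": [s["preset"] for s in (first_signal, second_signal)]}

def face_analysis(first_image_information):
    """Run face analysis on the first image information."""
    return {"kind": "face_analysis_result",
            "input": first_image_information["kind"]}

first, second = acquire_signals()
info = image_processor(first, second)
result = face_analysis(info)
```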
  • the image acquisition unit includes: an image sensor and a light supplement device, and the image sensor is located on the light exit side of the filter assembly;
  • the image sensor is configured to generate and output the first image signal and the second image signal through multiple exposures, and the first preset exposure and the second preset exposure are two of the multiple exposures;
  • the light supplement device includes a first light supplement device, and the first light supplement device is used for near-infrared supplementary light.
  • the center wavelength of the near-infrared supplementary light emitted by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, and the center wavelength and/or the waveband width of the near-infrared light passing through the first filter satisfies the constraint conditions.
  • the center wavelength of the near-infrared supplementary light emitted by the first light supplement device is any wavelength within the wavelength range of 750±10 nanometers;
  • the center wavelength of the near-infrared supplementary light emitted by the first light supplement device is any wavelength within the wavelength range of 780±10 nanometers; or
  • the center wavelength of the near-infrared supplementary light emitted by the first light supplement device is any wavelength within the wavelength range of 940±10 nanometers.
  • the constraint conditions include:
  • the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared light supplemented by the first light-filling device lies within the wavelength fluctuation range, and the wavelength fluctuation range is 0 to 20 nanometers; or
  • the half bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers; or
  • the first waveband width is smaller than the second waveband width; wherein, the first waveband width refers to the waveband width of the near-infrared light passing through the first filter, and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter.
  • the third waveband width is smaller than a reference waveband width; wherein, the third waveband width refers to the waveband width of the near-infrared light whose pass rate is greater than a set ratio, and the reference waveband width is any waveband width within the range of 50 nm to 150 nm.
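The alternative constraints above can be written as a single predicate over wavelength quantities. This is an illustrative formalization only; the function name, the argument names, and the reading of the "second waveband width" as the blocked band are assumptions, not taken from the patent:

```python
def meets_filter_constraints(filter_center_nm, fill_center_nm,
                             half_bandwidth_nm,
                             first_band_nm, second_band_nm,
                             third_band_nm, reference_band_nm):
    """Return True if any one of the claimed 'or' branches holds."""
    # Center wavelengths differ by no more than the 0-20 nm fluctuation range.
    center_ok = abs(filter_center_nm - fill_center_nm) <= 20
    # Half bandwidth of the passed near-infrared light is at most 50 nm.
    half_bw_ok = half_bandwidth_nm <= 50
    # First waveband width smaller than the second waveband width.
    band_ok = first_band_nm < second_band_nm
    # Third waveband width (pass rate above the set ratio) smaller than the
    # reference waveband width (any value in the 50-150 nm range).
    third_ok = third_band_nm < reference_band_nm
    return center_ok or half_bw_ok or band_ok or third_ok
```

For example, a 940 nm fill light paired with a filter centered at 945 nm already satisfies the center-wavelength branch regardless of the band widths.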
  • the image sensor includes a plurality of photosensitive channels, and each photosensitive channel is used to sense at least one kind of light in the visible light band and to sense light in the near-infrared band.
  • the image sensor adopts a global exposure mode for multiple exposures.
  • the time period of near-infrared supplementary light does not overlap with the exposure time period of the nearest second preset exposure.
  • the time period of the near-infrared supplementary light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary light and the exposure time period of the first preset exposure overlap, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary light.
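For global exposure, the two timing rules above (no overlap with the nearest second preset exposure, and some prescribed relation to the first preset exposure) reduce to checks on interval endpoints. A hypothetical sketch, treating each period as a half-open [start, end) interval on one time axis:

```python
def fill_light_timing_ok_global(fill, first_exposure, second_exposure):
    """Each argument is a (start, end) tuple on a common time axis.

    The near-infrared fill-light period must not intersect the nearest
    second preset exposure, and must intersect the first preset exposure
    (as a subset of it, a partial overlap, or a superset of it).
    """
    f_start, f_end = fill
    e1_start, e1_end = first_exposure
    e2_start, e2_end = second_exposure
    no_overlap_second = f_end <= e2_start or f_start >= e2_end
    intersects_first = f_start < e1_end and f_end > e1_start
    return no_overlap_second and intersects_first
```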
  • the image sensor adopts a rolling shutter exposure method to perform multiple exposures.
  • the time period of the near-infrared supplementary light does not overlap with the exposure time period of the nearest second preset exposure;
  • the start time of the near-infrared supplementary light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared supplementary light is no later than the exposure end time of the first line of the effective image in the first preset exposure; or
  • the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared supplementary light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure; or
  • the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared supplementary light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
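The rolling-shutter alternatives above differ only in which row boundaries bracket the fill-light interval. A hypothetical formalization (timestamps are arbitrary numbers on one time axis; all names are assumptions, not from the patent):

```python
def fill_light_timing_ok_rolling(fill_start, fill_end,
                                 first_row_start, first_row_end,
                                 last_row_start, last_row_end,
                                 prev_second_last_row_end,
                                 next_second_first_row_start):
    """first_row_*/last_row_* are the exposure windows of the first and last
    effective-image rows of the first preset exposure; the prev/next arguments
    locate the adjacent second preset exposures. True if any branch holds."""
    # Branch 1: fill light confined to the window where every row is exposing.
    branch1 = fill_start >= last_row_start and fill_end <= first_row_end
    # Branch 2: fill light may start once the previous second exposure's last
    # row has finished, and must end before the next second exposure begins.
    branch2 = (prev_second_last_row_end <= fill_start <= first_row_end
               and last_row_start <= fill_end <= next_second_first_row_start)
    # Branch 3: fill light fully brackets the first preset exposure's rows.
    branch3 = (prev_second_last_row_end <= fill_start <= first_row_start
               and last_row_end <= fill_end <= next_second_first_row_start)
    return branch1 or branch2 or branch3
```

For instance, with the first row exposing over [5, 15], the last row over [10, 20], and neighboring second exposures ending at 0 and starting at 30, a fill interval of [10, 15] satisfies branch 1 and [2, 25] satisfies branch 3.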
  • At least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter is one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes Analog gain, and/or, digital gain.
  • At least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes Analog gain, and/or, digital gain.
  • the image processor is configured to process at least one of the first image signal and the second image signal by using a first processing parameter to obtain the first image information
  • the image processor is further configured to use a second processing parameter to process at least one of the first image signal and the second image signal to obtain second image information;
  • the image processor is further configured to transmit the second image information to a display device, and the display device displays the second image information.
  • the first processing parameter and the second processing parameter are different.
  • the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
  • the image processor includes a buffer;
  • the buffer is used to store at least one of the first image signal and the second image signal, or to store at least one of the first image information and the second image information.
  • the image processor is further configured to adjust the exposure parameters of the image acquisition unit in the process of processing at least one of the first image signal and the second image signal.
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is configured to perform face detection on the first image information, output the detected face image, and perform liveness detection on the face image;
  • the face recognition subunit is configured to, when the face image passes liveness detection, extract the face information of the face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
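The compare step in the subunit above is typically a nearest-neighbor search over stored reference features. A minimal sketch, assuming Euclidean feature vectors and an illustrative match threshold (neither the feature representation nor the threshold is specified by the patent):

```python
import math

def face_distance(a, b):
    """Euclidean distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def compare_with_database(features, face_database, threshold=0.6):
    """Compare extracted face features against every reference entry in the
    face database and report the best match if it is close enough."""
    best_id, best_dist = None, float("inf")
    for ref_id, ref_features in face_database.items():
        d = face_distance(features, ref_features)
        if d < best_dist:
            best_id, best_dist = ref_id, d
    if best_dist <= threshold:
        return {"matched": True, "identity": best_id}
    return {"matched": False, "identity": None}
```

Real systems usually normalize the feature vectors and tune the threshold on a validation set; the constant 0.6 here is purely illustrative.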
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is configured to perform face detection on the first image information, output the detected first face image, perform liveness detection on the first face image, perform face detection on the second image information, output the detected second face image, and perform liveness detection on the second face image;
  • the face recognition subunit is configured to, when both the first face image and the second face image pass liveness detection, extract the face information of the first face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal, and the second image information is color image information obtained by processing the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is used to perform face detection on the color image information, output the detected color face image, perform liveness detection on the color face image, and, when the color face image passes liveness detection, perform face detection on the grayscale image information and output the detected grayscale face image;
  • the face recognition subunit is configured to extract the face information of the grayscale face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal
  • the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is configured to perform face detection on the fused image information, output the detected fused face image, perform liveness detection on the fused face image, and, when the fused face image passes liveness detection, perform face detection on the grayscale image information and output the detected grayscale face image;
  • the face recognition subunit is configured to extract the face information of the grayscale face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and the second image information is grayscale image information obtained by processing the first image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is used to perform face detection on the grayscale image information, output the detected grayscale face image, perform liveness detection on the grayscale face image, and, when the grayscale face image passes liveness detection, perform face detection on the fused image information and output the detected fused face image;
  • the face recognition subunit is used to extract the face information of the fused face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and the second image information is color image information obtained by processing the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is used to perform face detection on the color image information, output the detected color face image, perform liveness detection on the color face image, and, when the color face image passes liveness detection, perform face detection on the fused image information and output the detected fused face image;
  • the face recognition subunit is used to extract the face information of the fused face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the first image information is first fused image information obtained by performing image fusion processing on the first image signal and the second image signal
  • the second image information It is the second fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database ;
  • At least one piece of reference face information is stored in the face database;
  • the face detection subunit is used to perform face detection on the second fused image information, output the detected second fused face image, perform liveness detection on the second fused face image, and, when the second fused face image passes liveness detection, perform face detection on the first fused image information and output the detected first fused face image;
  • the face recognition subunit is configured to extract the face information of the first fused face image and compare it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
  • the face analysis unit is further configured to transmit the face analysis result to a display device, and the display device displays the face analysis result.
  • an access control device includes an access control controller and the aforementioned face recognition device;
  • the face recognition device is used to transmit the face analysis result to the access controller
  • the access controller is configured to output a control signal for opening the door when the face analysis result indicates that face recognition succeeded.
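The access controller's gating logic amounts to a single conditional on the analysis result. An illustrative sketch, where the result encoding and signal names are assumptions rather than anything the patent prescribes:

```python
def access_controller(face_analysis_result):
    """Output a door-open control signal only on a successful analysis result."""
    if face_analysis_result.get("matched") is True:
        return "OPEN_DOOR"
    return "NO_ACTION"
```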
  • a face recognition method which is applied to a face recognition device, the face recognition device includes: an image acquisition unit, an image processor, and a face analysis unit, and the image acquisition unit includes a filter component, The filter assembly includes a first filter, and the method includes:
  • a first image signal and a second image signal are collected by the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
  • the face recognition device includes an image acquisition unit, an image processor, and a face analysis unit.
  • the image acquisition unit includes a filter assembly, and the filter assembly includes a first filter, and the first filter passes visible light and part of the near-infrared light.
  • the image acquisition unit can simultaneously acquire a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information through the first preset exposure and the second preset exposure.
  • the image acquisition unit in this application can directly collect the first image signal and the second image signal, so the acquisition process is simple and efficient.
  • the image processor processes at least one of the first image signal and the second image signal to obtain first image information of higher quality; the face analysis unit then performs face analysis on this first image information to obtain a more accurate face analysis result, which effectively improves the accuracy of face recognition.
  • Fig. 1 is a schematic structural diagram of a first face recognition device provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a first image acquisition unit provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a principle of generating a first image signal by an image acquisition unit according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the principle of generating a second image signal by an image acquisition unit provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplement light performed by a first light supplement device according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the relationship between the wavelength of the light passing through the first filter and the pass rate according to an embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a second image acquisition unit provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a sensing curve of an image sensor provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a rolling shutter exposure method provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the global exposure mode provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the timing relationship between the second near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
  • FIG. 16 is a schematic diagram of the timing relationship between the third near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
  • FIG. 17 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the first near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the second near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the third near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of a third image acquisition unit provided by an embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of a second face recognition device provided by an embodiment of the present application.
  • Fig. 22 is a schematic structural diagram of a third face recognition apparatus shown in an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of a fourth face recognition device shown in an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of an access control device shown in an embodiment of the present application.
  • Fig. 25 is a flowchart of a face recognition method shown in an embodiment of the present application.
  • 1: Image acquisition unit, 2: Image processor, 3: Face analysis unit, 01: Image sensor, 02: Light supplement device, 03: Filter assembly, 04: Lens, 021: First light supplement device, 022: Second light supplement device, 031: First filter, 032: Second filter, 033: Switching component, 311: Face detection subunit, 312: Face recognition subunit, 313: Face database, 001: Access controller, 002: Face recognition device.
  • Fig. 1 is a schematic structural diagram of a face recognition device provided by an embodiment of the present application.
  • the face recognition device includes: an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
  • the image acquisition unit 1 is used to acquire a first image signal and a second image signal.
  • the image processor 2 is configured to process at least one of the first image signal and the second image signal to obtain first image information.
  • the face analysis unit 3 is used to perform face analysis on the first image information to obtain a face analysis result.
  • the first image signal is an image signal generated according to a first preset exposure
  • the second image signal is an image signal generated according to a second preset exposure.
  • the near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and the near-infrared supplementary light is not performed during the exposure time period of the second preset exposure.
  • Through the first preset exposure and the second preset exposure, the image acquisition unit 1 can simultaneously acquire the first image signal containing near-infrared light information (such as near-infrared light brightness information) and the second image signal containing visible light information.
  • Compared with an image processing method that must later separate the near-infrared light information and the visible light information from a collected original image signal, the image acquisition unit 1 in this application can directly collect the first image signal and the second image signal, so the acquisition process is simple and efficient.
  • The image processor 2 processes at least one of the first image signal and the second image signal to obtain first image information of higher quality.
  • The face analysis unit 3 then performs face analysis on the first image information, so a more accurate face analysis result can be obtained, which effectively improves the accuracy of face recognition.
  • the image acquisition unit 1, the image processor 2, and the face analysis unit 3 included in the face recognition device will be separately described below.
  • Image acquisition unit 1
  • the image acquisition unit includes an image sensor 01, a light supplement device 02, and a filter assembly 03, and the image sensor 01 is located on the light exit side of the filter assembly 03.
  • the image sensor 01 is used to generate and output a first image signal and a second image signal through multiple exposures.
  • the first preset exposure and the second preset exposure are two of the multiple exposures.
  • the light supplement device 02 includes a first light supplement device 021, and the first light supplement device 021 is used for near-infrared supplementary light.
  • the filter assembly 03 includes a first filter 031, and the first filter 031 passes visible light and part of the near-infrared light.
  • when the first light supplement device 021 performs near-infrared supplementary light, the intensity of the near-infrared light passing through the first filter 031 is higher than the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 does not perform near-infrared supplementary light.
  • the image acquisition unit 1 may further include a lens 04.
  • the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light exit side of the filter assembly 03.
  • Alternatively, the lens 04 is located between the filter assembly 03 and the image sensor 01, and the image sensor 01 is located on the light exit side of the lens 04.
  • the first filter 031 can be a filter film; in this case, the first filter 031 can be attached to the light exit side of the lens 04.
  • the light supplement device 02 may be located inside the image acquisition unit 1 or outside it.
  • the light supplement device 02 can be a part of the image acquisition unit 1 or a device independent of the image acquisition unit 1.
  • the light supplement device 02 can be connected to the image acquisition unit 1 to ensure that the exposure timing of the image sensor 01 in the image acquisition unit 1 and the near-infrared supplementary light timing of the first light supplement device 021 maintain a certain relationship: near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and is not performed during the exposure time period of the second preset exposure.
  • the first light supplement device 021 is a device that can emit near-infrared light, such as a near-infrared fill lamp. The first light supplement device 021 can perform near-infrared supplementary light in a stroboscopic manner or in another similar intermittent manner, which is not limited in the embodiments of the present application.
  • when the first light supplement device 021 performs near-infrared supplement light in a stroboscopic manner, the first light supplement device 021 can be controlled manually, or by a software program or a specific device, to perform the near-infrared supplement light in the strobe mode, which is not limited in the embodiment of the present application.
  • the time period during which the first light supplement device 021 performs near-infrared light supplementation may coincide with the exposure time period of the first preset exposure, or may be greater than the exposure time period of the first preset exposure or less than the exposure time period of the first preset exposure.
  • the near-infrared supplementary light is performed during the entire exposure period or part of the exposure period of the first preset exposure, and the near-infrared supplementary light is not performed during the exposure time period of the second preset exposure.
  • for the global exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of effective images of the second image signal and the end exposure time of the last row of effective images, but it is not limited to this.
  • the exposure time period of the second preset exposure may also be the exposure time period corresponding to the target image in the second image signal, and the target image is a number of rows of effective images corresponding to the target object or target area in the second image signal.
  • the time period between the start exposure time and the end exposure time of several rows of effective images can be regarded as the exposure time period of the second preset exposure.
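The notion of taking a row subset's exposure span as the second preset exposure's time period can be sketched in code. This is an illustrative sketch only; the function name, the row-window representation, and the sample timings are assumptions, not part of the patent disclosure.

```python
# Hypothetical helper: for a rolling shutter, take the exposure time period
# of the second preset exposure as the span from the exposure start of the
# first target row to the exposure end of the last target row.

def target_exposure_period(row_windows, target_rows):
    """row_windows: list of (start, end) exposure times, one per effective row.
    target_rows: indices of the rows covering the target object or area."""
    starts = [row_windows[r][0] for r in target_rows]
    ends = [row_windows[r][1] for r in target_rows]
    return min(starts), max(ends)

# Six rows, each starting 1 ms after the previous and exposing for 10 ms.
rows = [(i * 1.0, i * 1.0 + 10.0) for i in range(6)]
period = target_exposure_period(rows, [2, 3, 4])  # rows of the target area
assert period == (2.0, 14.0)
```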
  • the near-infrared light incident on the surface of the object may be reflected by the object and enter the first filter 031.
  • the ambient light may include visible light and near-infrared light, and near-infrared light in the ambient light is also reflected by the object when it is incident on the surface of the object, thereby entering the first filter 031.
  • the near-infrared light that passes through the first filter 031 when performing near-infrared light supplementation may include the near-infrared light that is reflected by the object and enters the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation.
  • the near-infrared light passing through the first filter 031 when the near-infrared light supplement is not performed may include the near-infrared light reflected by the object into the first filter 031 when the first light supplement device 021 is not performing the near-infrared light supplement.
  • the near-infrared light that passes through the first filter 031 when performing near-infrared supplementary light includes the near-infrared light emitted by the first supplementary light device 021 and reflected by the object, and the near-infrared light in the ambient light reflected by the object.
  • the near-infrared light passing through the first filter 031 when the near-infrared supplementary light is not performed includes near-infrared light reflected by an object in the ambient light.
  • the process by which the image acquisition unit 1 acquires the first image signal and the second image signal is as follows: referring to FIG. 3, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplement light; at this time, the near-infrared light reflected by objects in the scene, coming both from the ambient light in the shooting scene and from the near-infrared fill light of the first light supplement device, passes through the lens 04 and the first filter 031, and the image sensor 01 generates the first image signal through the first preset exposure.
  • when the image sensor 01 performs the second preset exposure, the first fill light device 021 does not perform near-infrared fill light; at this time, the image sensor 01 generates the second image signal through the second preset exposure.
  • there may be M first preset exposures and N second preset exposures in one frame period of image acquisition.
  • the values of M and N, and the magnitude relationship between M and N, can be set according to actual requirements; for example, the values of M and N may be equal or different.
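The M-and-N arrangement above can be illustrated with a small scheduling sketch; the alternating policy and the names used here are assumptions made for illustration, since the patent does not prescribe a particular ordering.

```python
# Hypothetical sketch: interleave M "first" preset exposures (near-infrared
# fill light on) with N "second" preset exposures (fill light off) within
# one frame period; leftover exposures of the larger group trail at the end.

def build_exposure_sequence(m, n):
    seq = []
    while m > 0 or n > 0:
        if m > 0:
            seq.append("first")   # near-infrared fill light active
            m -= 1
        if n > 0:
            seq.append("second")  # no near-infrared fill light
            n -= 1
    return seq

assert build_exposure_sequence(1, 1) == ["first", "second"]
assert build_exposure_sequence(2, 3) == ["first", "second", "first", "second", "second"]
```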
  • the first filter 031 can pass part of the near-infrared light band. The near-infrared light band passing through the first filter 031 can be part of the near-infrared light band, or it can be the entire near-infrared light band, which is not limited in the embodiment of the present application.
  • since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 performs near-infrared supplement light is higher than the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 does not perform near-infrared supplement light.
  • the wavelength range of the first light supplement device 021 for near-infrared supplement light may be the second reference wavelength range, and the second reference wavelength range may be 700 nanometers to 800 nanometers, or 900 nanometers to 1000 nanometers, which can reduce the interference caused by the common 850-nanometer near-infrared light.
  • the wavelength range of the near-infrared light incident on the first filter 031 may be the first reference wavelength range, and the first reference wavelength range is 650 nanometers to 1100 nanometers.
  • the near-infrared light passing through the first filter 031 during the near-infrared light supplementation may include the near-infrared light reflected by the object and entering the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation, and The near-infrared light reflected by an object in the ambient light. Therefore, the intensity of the near-infrared light entering the filter assembly 03 is relatively strong at this time. However, when the near-infrared supplementary light is not performed, the near-infrared light passing through the first filter 031 includes the near-infrared light reflected by the object in the ambient light and entering the filter assembly 03.
  • the intensity of the near-infrared light passing through the first filter 031 is weak at this time. Therefore, the intensity of the near infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near infrared light included in the second image signal generated and output according to the second preset exposure.
  • there are multiple choices for the center wavelength and/or the wavelength range of the near-infrared supplement light performed by the first light supplement device 021.
  • the center wavelength of the near-infrared supplement light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 can be selected, so that the center wavelength and/or the band width of the near-infrared light passing through the first filter 031 meets the constraint conditions.
  • this constraint is mainly used to keep the center wavelength of the near-infrared light passing through the first filter 031 as accurate as possible, and the band width of the near-infrared light passing through the first filter 031 as narrow as possible, so as to avoid wavelength interference introduced by a near-infrared band width that is too wide.
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device 021 may be the average value in the wavelength range of the highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021, or it may be understood as the wavelength at the middle position of the wavelength range, in that spectrum, whose energy exceeds a certain threshold.
  • the set characteristic wavelength or the set characteristic wavelength range can be preset.
  • the center wavelength of the first light supplement device 021 for near-infrared supplement light may be any wavelength within the wavelength range of 750±10 nanometers; or, it may be any wavelength within the wavelength range of 780±10 nanometers; or, it may be any wavelength within the wavelength range of 940±10 nanometers. That is, the set characteristic wavelength range may be the wavelength range of 750±10 nanometers, or the wavelength range of 780±10 nanometers, or the wavelength range of 940±10 nanometers.
  • for example, when the center wavelength of the near-infrared supplement light performed by the first light supplement device 021 is 940 nanometers, the relationship between the wavelength and the relative intensity of the near-infrared supplement light is shown in FIG. 5. It can be seen from FIG. 5 that the wavelength range of the first light supplement device 021 for near-infrared supplement light is 900 nanometers to 1000 nanometers, and the relative intensity of the near-infrared light is highest at 940 nanometers.
  • the above constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplement light of the first light supplement device 021 lies within the wavelength fluctuation range; as an example, the wavelength fluctuation range may be 0-20 nanometers.
  • the center wavelength of the near-infrared light passing through the first filter 031 can be the wavelength at the peak position in the near-infrared band of the near-infrared light pass rate curve of the first filter 031, or it can be understood as the wavelength at the middle position of the near-infrared waveband, in that curve, whose pass rate exceeds a certain threshold.
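The wavelength-fluctuation constraint above can be expressed as a one-line check. The function name and the default fluctuation value are assumptions for this sketch; the 0-20 nanometer range mirrors the example given.

```python
# Hypothetical check of the constraint: the difference between the center
# wavelength passed by the first filter and the center wavelength of the
# near-infrared fill light must lie within the wavelength fluctuation range.

def center_wavelength_ok(filter_center_nm, fill_center_nm, fluctuation_nm=20):
    return abs(filter_center_nm - fill_center_nm) <= fluctuation_nm

assert center_wavelength_ok(945, 940)       # 5 nm difference: within range
assert not center_wavelength_ok(970, 940)   # 30 nm difference: out of range
```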
  • the above constraint conditions may include: the first band width may be smaller than the second band width.
  • the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 031
  • the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 031.
  • the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
  • for example, if the near-infrared light passing through the first filter 031 lies in the band from 700 nanometers to 800 nanometers, the first wavelength band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
  • the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
  • FIG. 6 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter 031 and the pass rate.
  • the wavelength band of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers.
  • the first filter 031 can pass visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength between 650 nanometers and 900 nanometers as well as between 1000 nanometers and 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, that is, 100 nanometers.
  • the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, that is, 350 nanometers. Since 100 nanometers is smaller than 350 nanometers, the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
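The band-width arithmetic above can be verified with a short sketch, assuming the stated example (incident near-infrared band 650-1100 nm, pass band 900-1000 nm); the helper names are invented for illustration.

```python
# Band-width arithmetic for the first/second band width comparison.

def band_width(band):
    lo, hi = band
    return hi - lo

def blocked_width(incident_band, pass_band):
    """Width of the incident near-infrared band that the filter blocks."""
    return band_width(incident_band) - band_width(pass_band)

first_band_width = band_width((900, 1000))                   # 100 nm
second_band_width = blocked_width((650, 1100), (900, 1000))  # 350 nm
assert first_band_width < second_band_width  # constraint is satisfied
```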
  • the above relationship curve is just an example.
  • for different filters, the wavelength range of the near-infrared light that can pass through the filter can be different, and the wavelength range of the near-infrared light blocked by the filter can also be different.
  • the above constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers.
  • the half bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
  • the above constraint condition may include: the third band width may be smaller than the reference band width.
  • the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
  • the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
  • the set ratio can be any ratio from 30% to 50%.
  • the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the present application.
  • the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
  • the wavelength band of the near-infrared light incident on the first filter 031 is 650 nm to 1100 nm, the setting ratio is 30%, and the reference wavelength band width is 100 nm. It can be seen from FIG. 6 that in the wavelength band of near-infrared light from 650 nanometers to 1100 nanometers, the band width of near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
  • since the first light supplement device 021 provides near-infrared supplementary light at least during a partial exposure period of the first preset exposure, does not provide near-infrared supplementary light during the entire exposure period of the second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplement light during the exposure time periods of some exposures of the image sensor 01 and does not provide it during the exposure time periods of the other exposures.
  • the number of times of supplementary light per unit time length of the first supplementary light device 021 may therefore be lower than the number of exposures of the image sensor 01 per unit time length, wherein one or more exposures are performed within the interval between two adjacent supplementary light operations.
  • the light supplement 02 may also include a second light supplement device 022, and the second light supplement device 022 is used for visible light supplement light.
  • the second light supplement device 022 provides visible light supplement light at least during a part of the exposure time of the first preset exposure, that is, it performs near-infrared supplement light and visible light supplement light at least during the partial exposure time period of the first preset exposure.
  • the mixed color of the two lights can be distinguished from the color of the red light in the traffic light, thereby avoiding the human eye from confusing the color of the light fill 02 for near-infrared fill light with the color of the red light in the traffic light.
  • if the second light supplement device 022 provides visible light supplement light during the exposure time period of the second preset exposure, then when the intensity of the visible light in the scene is not particularly high during that time period, the brightness of the visible light in the second image signal can be increased, thereby ensuring the quality of image collection.
  • the second light supplement device 022 may be used to perform visible light supplement light in a constant light mode; or, the second light supplement device 022 may be used to perform visible light supplement light in a stroboscopic manner, wherein visible light supplement light exists at least in part of the exposure time period of the first preset exposure and no visible light supplement light exists during the entire exposure time period of the second preset exposure; or, the second light supplement device 022 may be used to perform visible light supplement light in a stroboscopic manner, wherein no visible light supplement light exists at least during the entire exposure time period of the first preset exposure and visible light supplement light exists during a partial exposure time period of the second preset exposure.
  • when the second light supplement device 022 performs visible light supplement light in a constant light mode, it can not only prevent human eyes from confusing the color of the near-infrared supplement light of the first supplement light device 021 with the color of the red light in a traffic light, but also improve the brightness of the visible light in the second image signal, thereby ensuring the quality of image collection.
  • when the second light supplement device 022 performs visible light supplement light in a stroboscopic manner, it can prevent human eyes from confusing the color of the near-infrared supplement light of the first light supplement device 021 with the color of the red light in a traffic light, or improve the brightness of the visible light in the second image signal and in turn ensure the quality of image collection; it can also reduce the number of times the second supplementary light device 022 performs supplementary light, thereby prolonging the service life of the second supplementary light device 022.
  • the aforementioned multiple exposure refers to multiple exposures within one frame period, that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
  • 1 second includes 25 frame periods, and the image sensor 01 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal, and the The first image signal and the second image signal are called a group of image signals, so that 25 groups of image signals are generated within 25 frame periods.
  • the first preset exposure and the second preset exposure can be two adjacent exposures in multiple exposures in one frame period, or two non-adjacent exposures in multiple exposures in one frame period. The application embodiment does not limit this.
  • the first image signal is generated and output by the first preset exposure
  • the second image signal is generated and output by the second preset exposure.
  • the first image signal and the second image signal can then be processed.
  • the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
  • the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
  • the intensity of the near-infrared light sensed by the image sensor 01 is stronger when the near-infrared light supplement is performed, and the brightness of the near-infrared light included in the first image signal generated and output accordingly will also be higher.
  • near-infrared light with higher brightness is not conducive to the acquisition of external scene information.
  • the greater the exposure gain, the higher the brightness of the image signal output by the image sensor 01; the smaller the exposure gain, the lower the brightness of the image signal output by the image sensor 01.
  • the exposure gain of the first preset exposure may be less than the exposure gain of the second preset exposure. In this way, when the first light supplement device 021 performs near-infrared supplement light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high due to the first light supplement device 021 performing near-infrared supplement light.
  • the longer the exposure time, the higher the brightness included in the image signal obtained by the image sensor 01 and the longer the motion trail of a moving object in the external scene in the image signal; the shorter the exposure time, the lower the brightness included in the image signal obtained by the image sensor 01 and the shorter the motion trail of a moving object in the external scene in the image signal. Therefore, it is necessary to ensure that the brightness of the near-infrared light contained in the first image signal is within an appropriate range and that moving objects in the external scene have short motion trails in the first image signal.
  • the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
  • in this way, when the first light supplement device 021 performs near-infrared supplement light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high due to the near-infrared supplement light.
  • the shorter exposure time makes the motion trailing of the moving object in the external scene appear shorter in the first image signal, thereby facilitating the recognition of the moving object.
  • for example, the exposure time of the first preset exposure is 40 milliseconds, and the exposure time of the second preset exposure is 60 milliseconds, and so on.
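The two parameter sets can be written down side by side as a sketch; the dataclass, field names, and gain values are assumptions made for illustration, with only the 40 ms / 60 ms exposure times taken from the example above.

```python
from dataclasses import dataclass

# Hypothetical parameter sets for the two preset exposures.
@dataclass
class ExposureParams:
    exposure_time_ms: float
    analog_gain: float
    digital_gain: float

first_preset = ExposureParams(exposure_time_ms=40, analog_gain=1.0, digital_gain=1.0)
second_preset = ExposureParams(exposure_time_ms=60, analog_gain=2.0, digital_gain=1.0)

# Shorter time and smaller gain keep the near-infrared brightness of the
# first image signal moderate and its motion trails short.
assert first_preset.exposure_time_ms < second_preset.exposure_time_ms
```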
  • it should be noted that the exposure time of the first preset exposure may not only be less than the exposure time of the second preset exposure, but may also be equal to it.
  • the exposure gain of the first preset exposure may be less than the exposure gain of the second preset exposure, or may be equal to it.
  • the purpose of the first image signal and the second image signal may be the same.
  • the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure; if the two exposure times differ, motion smearing will exist in the image signal with the longer exposure time, resulting in different definitions of the two image signals.
  • the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
  • the exposure gain of the first preset exposure may be less than the exposure gain of the second preset exposure, or may be equal to it.
  • the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure, or may be equal to it.
  • the image sensor 01 may include multiple photosensitive channels, and each photosensitive channel may be used to sense at least one type of light in the visible light band and to sense light in the near-infrared band. That is, each photosensitive channel can not only sense at least one kind of light in the visible light band, but also can sense light in the near-infrared band. In this way, it can be ensured that the first image signal and the second image signal have complete resolution without missing Pixel values.
  • the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
  • the R photosensitive channel is used to sense the light in the red and near-infrared bands
  • the G photosensitive channel is used to sense the light in the green and near-infrared bands
  • the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
  • the Y photosensitive channel is used to sense light in the yellow band and the near-infrared band.
  • both W and C can be used to represent a photosensitive channel for sensing full-waveband light; therefore, when the multiple photosensitive channels include a photosensitive channel for sensing full-waveband light, this photosensitive channel may be a W photosensitive channel or a C photosensitive channel. That is, in practical applications, the photosensitive channel used for sensing full-waveband light can be selected according to the use requirements.
  • the image sensor 01 may be an RGB sensor, RGBW sensor, or RCCB sensor, or RYYB sensor.
  • the distribution of the R photosensitive channel, the G photosensitive channel and the B photosensitive channel in the RGB sensor can be seen in Figure 8.
  • the distribution of the R photosensitive channel, G photosensitive channel, B photosensitive channel and W photosensitive channel in the RGBW sensor can be seen in the figure 9.
  • the distribution of the R photosensitive channel, the C photosensitive channel and the B photosensitive channel in the RCCB sensor can be seen in Figure 10
  • the distribution of the R photosensitive channel, the Y photosensitive channel and the B photosensitive channel in the RYYB sensor can be seen in Figure 11.
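The repeating photosensitive-channel layouts named above can be modeled as 2x2 tiles. The tile orientations below are assumptions for illustration, since FIGS. 8 to 11 are not reproduced here (the RGBW layout, which repeats over a larger cell, is omitted).

```python
# Hypothetical 2x2 repeating cells for three of the sensor types above.
PATTERNS = {
    "RGB":  [["R", "G"], ["G", "B"]],
    "RCCB": [["R", "C"], ["C", "B"]],
    "RYYB": [["R", "Y"], ["Y", "B"]],
}

def channel_at(pattern, row, col):
    """Photosensitive channel of pixel (row, col) for a repeating 2x2 tile."""
    tile = PATTERNS[pattern]
    return tile[row % 2][col % 2]

assert channel_at("RGB", 0, 0) == "R"
assert channel_at("RYYB", 0, 1) == "Y"
```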
  • some photosensitive channels may only sense light in the near-infrared waveband, but not light in the visible light waveband. In this way, it can be ensured that the first image signal has a complete resolution without missing pixel values.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels. Among them, the R photosensitive channel is used to sense red light and near-infrared light, the G photosensitive channel is used to sense green light and near-infrared light, and the B photosensitive channel is used to sense blue light and near-infrared light. IR The photosensitive channel is used to sense light in the near-infrared band.
  • the image sensor 01 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
  • compared with other image sensors such as RGBIR sensors, when the image sensor 01 is an RGB sensor, the RGB information collected by the RGB sensor is more complete; since some photosensitive channels of the RGBIR sensor cannot collect visible light, the color details of the image collected by the RGB sensor are more accurate.
  • the multiple photosensitive channels included in the image sensor 01 may correspond to multiple sensing curves.
  • the R curve in FIG. 12 represents the sensing curve of the image sensor 01 to light in the red light band
  • the G curve represents the sensing curve of the image sensor 01 to light in the green light band
  • the B curve represents the sensing curve of the image sensor 01 to light in the blue light band, the W (or C) curve represents the sensing curve of the image sensor 01 to light in the full band, and the NIR (near-infrared) curve represents the sensing curve of the image sensor 01 to light in the near-infrared band.
  • the image sensor 01 may adopt a global exposure method or a rolling shutter exposure method.
  • the global exposure mode means that the exposure start time of each row of effective images is the same, and the exposure end time of each row of effective images is the same.
  • the global exposure mode is an exposure mode in which all rows of effective images are exposed at the same time and the exposure ends at the same time.
  • rolling shutter exposure means that the exposure time periods of different rows of effective images do not completely overlap, that is, the exposure start time of one row of effective images is later than the exposure start time of the previous row of effective images, and the exposure end time of one row of effective images is later than the exposure end time of the previous row of effective images.
  • in the rolling shutter exposure mode, data can be output after the exposure of each row of effective images ends; therefore, the time from when the data of the first row of effective images starts to be output until the data of the last row of effective images has finished being output can be expressed as the readout time.
  • FIG. 13 is a schematic diagram of a rolling shutter exposure method. It can be seen from FIG. 13 that the effective image of row 1 starts to be exposed at time T1 and ends at time T3, and the effective image of row 2 starts to be exposed at time T2 and ends at time T4, where time T2 is a period of time later than time T1 and time T4 is a period of time later than time T3. In addition, the effective image of row 1 ends exposure at time T3 and begins to output data, with the output ending at time T5; the effective image of row n ends exposure at time T6 and begins to output data, with the output ending at time T7. The time between T3 and T7 is the readout time.
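The row-by-row timing described for FIG. 13 can be modeled with a small sketch; the fixed per-row offset and all numeric values are assumptions made for illustration.

```python
# Hypothetical rolling shutter model: each row starts exposing a fixed
# offset after the previous row and exposes for the same duration.

def row_exposure_windows(n_rows, t_start, t_exposure, row_offset):
    """Return (start, end) exposure times for each of n_rows effective rows."""
    return [(t_start + i * row_offset, t_start + i * row_offset + t_exposure)
            for i in range(n_rows)]

rows = row_exposure_windows(n_rows=4, t_start=0.0, t_exposure=10.0, row_offset=2.0)
# Row 1 exposes over [0, 10] and row 4 over [6, 16]: the windows overlap
# but are shifted, matching the rolling shutter definition above.
assert rows[0] == (0.0, 10.0) and rows[3] == (6.0, 16.0)
```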
  • in some embodiments, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure.
  • in addition, the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light and the exposure time period of the first preset exposure overlap, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
  • in this way, the near-infrared supplementary light is performed at least during a part of the exposure time period of the first preset exposure, and no near-infrared supplementary light is performed during the entire exposure time period of the second preset exposure, so the second preset exposure is not affected.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of near-infrared fill light is the first preset A subset of the exposure time period for exposure.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of near-infrared fill light is equal to that of the first preset exposure. There is an intersection of exposure time periods.
  • or, the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light. FIGS. 14 to 16 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
  • the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • in FIGS. 17 to 19, the slanted dotted line indicates the exposure start time, the slanted solid line indicates the exposure end time, and the vertical dotted line indicates the time period of the near-infrared fill light corresponding to the first preset exposure. FIGS. 17 to 19 are only examples, and the order of the first preset exposure and the second preset exposure is not limited to these examples.
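The core fill-light constraints above (no intersection with the nearest second preset exposure, and containment in the first preset exposure) can be checked with simple interval arithmetic. This is an illustrative sketch, not patent text; all names and values are made up:

```python
# Hypothetical interval check for a near-infrared fill-light window.
# Intervals are (start, end) tuples in an arbitrary time unit.

def intersects(a, b):
    """True if half-open-style intervals a and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def is_subset(inner, outer):
    """True if interval `inner` lies entirely within `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def fill_light_ok(fill, first_exposure, second_exposure):
    """True if the fill-light window affects only the first preset exposure."""
    return is_subset(fill, first_exposure) and not intersects(fill, second_exposure)

# Fill light inside the first exposure and clear of the second exposure:
ok = fill_light_ok(fill=(12, 18), first_exposure=(10, 20), second_exposure=(0, 8))
# Fill light spilling outside the first exposure fails the check:
bad = fill_light_ok(fill=(6, 14), first_exposure=(10, 20), second_exposure=(0, 8))
```

The subset condition models the case of FIG. 14-style containment; the overlap and superset variants described above would relax `is_subset` to `intersects` or swap its arguments.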
  • the multiple exposures may include odd-numbered exposures and even-numbered exposures.
  • the first preset exposure and the second preset exposure may include but are not limited to the following methods:
  • the first preset exposure is one exposure in an odd number of exposures
  • the second preset exposure is one exposure in an even number of exposures.
  • the multiple exposures may include the first preset exposure and the second preset exposure arranged in parity order. For example, odd-numbered exposures in the multiple exposures, such as the first exposure, the third exposure, and the fifth exposure, are all first preset exposures, and even-numbered exposures, such as the second exposure, the fourth exposure, and the sixth exposure, are all second preset exposures.
  • the first preset exposure is one exposure in an even number of exposures
  • the second preset exposure is one exposure in an odd number of exposures.
  • the multiple exposures may include the first preset exposure and the second preset exposure arranged in parity order. For example, odd-numbered exposures in the multiple exposures, such as the first exposure, the third exposure, and the fifth exposure, are all second preset exposures, and even-numbered exposures, such as the second exposure, the fourth exposure, and the sixth exposure, are all first preset exposures.
  • the first preset exposure is one exposure in the specified odd number of exposures
  • the second preset exposure is one exposure in the other exposures except the specified odd number of exposures; that is, the second preset exposure may be an odd-numbered exposure in the multiple exposures, or an even-numbered exposure in the multiple exposures.
  • the first preset exposure is one exposure in the specified even number of exposures
  • the second preset exposure is one exposure in the other exposures except the specified even number of exposures; that is, the second preset exposure may be an odd-numbered exposure in the multiple exposures, or an even-numbered exposure in the multiple exposures.
  • the first preset exposure is one exposure in the first exposure sequence
  • the second preset exposure is one exposure in the second exposure sequence.
  • the first preset exposure is one exposure in the second exposure sequence
  • the second preset exposure is one exposure in the first exposure sequence
  • the aforementioned multiple exposure includes multiple exposure sequences
  • the first exposure sequence and the second exposure sequence are the same exposure sequence or two different exposure sequences in the multiple exposure sequences
  • each exposure sequence includes N exposures
  • the N exposures include 1 first preset exposure and N-1 second preset exposures, or the N exposures include 1 second preset exposure and N-1 first preset exposures, where N is a positive integer greater than 2.
  • each exposure sequence includes 3 exposures, and these 3 exposures may include 1 first preset exposure and 2 second preset exposures.
  • the first exposure of each exposure sequence may be the first preset exposure, and the second and third exposures are the second preset exposure. That is, each exposure sequence can be expressed as: first preset exposure, second preset exposure, second preset exposure.
  • alternatively, these 3 exposures may include 1 second preset exposure and 2 first preset exposures, so that the first exposure of each exposure sequence may be the second preset exposure, and the second and third exposures are the first preset exposure. That is, each exposure sequence can be expressed as: second preset exposure, first preset exposure, first preset exposure.
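The parity and sequence arrangements above can be sketched as small label generators; the helper names and the 'first'/'second' labels below are hypothetical:

```python
# Hypothetical sketch of the exposure arrangements described above.

def parity_schedule(num_exposures):
    """Exposure i (1-indexed): odd-numbered -> first preset, even -> second."""
    return ['first' if i % 2 == 1 else 'second'
            for i in range(1, num_exposures + 1)]

def sequence_schedule(num_sequences, n):
    """Each sequence of N exposures: 1 first preset, then N-1 second preset."""
    return (['first'] + ['second'] * (n - 1)) * num_sequences

sched = parity_schedule(6)       # alternating first/second preset exposures
seqs = sequence_schedule(2, 3)   # two sequences of the form first,second,second
```

Swapping the labels in either helper yields the mirrored arrangements (even-numbered exposures as first preset, or sequences of the form second, first, first).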
  • the filter assembly 03 further includes a second filter 032 and a switching component 033, and both the first filter 031 and the second filter 032 are connected to the switching component 033.
  • the switching component 033 is used to switch the second filter 032 to the light incident side of the image sensor 01.
  • after the second filter 032 is switched to the light incident side of the image sensor 01, the second filter 032 allows light in the visible light waveband to pass through and blocks light in the near-infrared light band, and the image sensor 01 is used to generate and output a third image signal through exposure.
  • the switching component 033 switching the second filter 032 to the light incident side of the image sensor 01 can also be understood as the second filter 032 replacing the first filter 031 at the position on the light incident side of the image sensor 01.
  • the first light supplement device 021 may be in the off state or in the on state.
  • the first light supplement device 021 can be used to perform stroboscopic supplementary light, so that the image sensor 01 generates and outputs a first image signal containing near-infrared brightness information and a second image signal containing visible light brightness information. Because the first image signal and the second image signal are both acquired by the same image sensor 01, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so that complete information of the external scene can be acquired through the first image signal and the second image signal.
  • when the intensity of visible light is strong, for example during the daytime, the proportion of near-infrared light is also relatively high, and the color reproduction of the collected image is not good.
  • with the second filter 032 in place, the image sensor 01 can generate and output a third image signal containing visible light brightness information, so that images with good color reproduction can be collected even during the day. The real color information of the external scene can thus be obtained efficiently and simply regardless of the intensity of visible light, or whether it is day or night, which improves the flexibility of the image acquisition unit 1 and makes it easily compatible with other image acquisition units.
  • the image processor 2 may process the third image signal to output third image information, and the face analysis unit 3 may perform face analysis on the third image information to obtain a face analysis result.
  • This application uses the exposure timing of the image sensor 01 to control the near-infrared supplementary light timing of the light supplement device, so that near-infrared supplementary light is performed and the first image signal is generated during the first preset exposure, and near-infrared supplementary light is not performed and the second image signal is generated during the second preset exposure.
  • This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost.
  • One image sensor 01 can acquire two different image signals, which allows the image acquisition unit 1 to acquire the first image signal and the second image signal more easily and efficiently.
  • the first image signal and the second image signal are both generated and output by the same image sensor 01, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, and there is no misalignment between the image generated from the first image signal and the image generated from the second image signal.
  • the image processor 2 may be a logic platform containing signal processing algorithms or programs.
  • the image processor 2 may be a computer based on the X86 or ARM architecture, or may be an FPGA (Field-Programmable Gate Array) logic circuit.
  • the image processor 2 is configured to process at least one of the first image signal and the second image signal by using the first processing parameter to obtain first image information.
  • the image processor 2 is also used to process at least one of the first image signal and the second image signal by using the second processing parameter to obtain second image information, and then transmit the second image information to the display device, and the display device displays the second image information.
  • in this way, the first image signal and the second image signal can be flexibly combined according to the two different application requirements of face analysis and display, so that both requirements can be well satisfied.
  • the processing performed by the image processor 2 on at least one of the first image signal and the second image signal may include at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, image fusion, and the like.
  • the first processing parameter and the second processing parameter may be the same or different.
  • the first processing parameter and the second processing parameter may be different.
  • the first processing parameter can be set in advance according to the face analysis requirement, and the second processing parameter can be set in advance according to the display requirement.
  • the first processing parameter and the second processing parameter are the parameters required when performing black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, image fusion, and the like on at least one of the first image signal and the second image signal.
  • the image processor 2 can flexibly select a more appropriate combination of first processing parameters and image signals to obtain the first image information, achieving an image effect more favorable for face analysis and improving the accuracy of face recognition.
  • the image processor 2 can flexibly select a more appropriate combination of second processing parameters and image signals to obtain the second image information, so as to achieve a better quality image display effect.
  • the image processor 2 may use the first processing parameter to process the first image signal containing the near-infrared light information, and output the gray image information as the first image information.
  • the image quality of the grayscale image information obtained by processing the first image signal is better, which is more suitable for face analysis and can improve the face recognition accuracy rate.
  • the image processor 2 may use the second processing parameter to process the second image signal containing the visible light information, and output the color image information as the second image information.
  • because the second image signal contains visible light information, the color reproduction of the color image information obtained by processing the second image signal is more accurate, which is more suitable for display and can improve the image display effect.
  • the image processor 2 may use the first processing parameter to process the first image signal and the second image signal, and output the first image information. In this case, the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal.
  • the image processor 2 may use the second processing parameter to process the first image signal and the second image signal, and output second image information. In this case, the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal.
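As a rough illustration of the fusion step, a minimal per-pixel weighted blend is sketched below. A real device would use a far more elaborate fusion algorithm; the function name and the weight here are hypothetical, not the patent's actual processing parameters:

```python
# Hypothetical minimal image fusion: blend a near-infrared (first) image
# with a visible-light (second) image, pixel by pixel.

def fuse(nir, visible, nir_weight=0.5):
    """Blend two equal-sized grayscale images given as lists of pixel values."""
    if len(nir) != len(visible):
        raise ValueError("images must have the same size")
    w = nir_weight
    return [round(w * a + (1 - w) * b) for a, b in zip(nir, visible)]

# Toy 3-pixel example: equal weighting of the two signals.
fused = fuse([200, 100, 0], [100, 100, 200], nir_weight=0.5)
```

In practice the fusion would be spatially varying (e.g. weighting by local detail or brightness), but the idea of combining the two signals into one output image is the same.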
  • the first image signal and the second image signal do not enter the image processor 2 at the same time. If the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal, the first image signal and the second image signal need to be synchronized first.
  • the image processor 2 may include a buffer for storing at least one of the first image signal and the second image signal, so as to achieve synchronization of the first image signal and the second image signal.
  • the image processor 2 may perform image fusion processing on the synchronized first image signal and the second image signal to obtain the first image information.
  • the buffer can also be used to store other information; for example, it can be used to store at least one of the first image information and the second image information.
  • the image processor 2 may store the first image signal in the buffer first, and after the second image signal also enters the image processor 2, Then perform image fusion processing on the first image signal and the second image signal.
  • the image processor 2 may first store the second image signal in the buffer, and wait until the first image signal also enters the image processor 2. , And then perform image fusion processing on the first image signal and the second image signal.
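The buffering scheme above can be sketched as a small synchronizer that holds whichever signal arrives first until its counterpart arrives; the class and method names are hypothetical:

```python
# Hypothetical synchronizer: because the first and second image signals do
# not enter the image processor at the same time, one is buffered until the
# other arrives, and only then is the pair handed to fusion.

class FrameSynchronizer:
    def __init__(self):
        self.buffer = {}  # kind ('first' or 'second') -> pending frame

    def push(self, kind, frame):
        """Return a (first, second) pair once both signals are present."""
        other = 'second' if kind == 'first' else 'first'
        if other in self.buffer:
            if kind == 'first':
                return (frame, self.buffer.pop(other))
            return (self.buffer.pop(other), frame)
        self.buffer[kind] = frame  # wait for the counterpart
        return None

sync = FrameSynchronizer()
first_result = sync.push('first', 'F0')   # buffered, no pair yet
pair = sync.push('second', 'S0')          # counterpart arrived, pair released
```

The same structure works in either order: whichever of the two signals arrives first is stored, exactly as the two cases in the text describe.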
  • the image processor 2 is also used to adjust the exposure parameters of the image acquisition unit 1 in the process of processing at least one of the first image signal and the second image signal. Specifically, in the process of processing at least one of the first image signal and the second image signal, the image processor 2 may determine the exposure parameter adjustment value according to the attribute parameters generated in the processing process, and then send a control signal carrying the exposure parameter adjustment value to the image acquisition unit 1, and the image acquisition unit 1 adjusts its own exposure parameters according to the exposure parameter adjustment value.
  • the attribute parameters generated in the process of processing at least one of the first image signal and the second image signal may include image resolution, image brightness, image contrast, and the like.
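A minimal sketch of how an exposure parameter adjustment value might be derived from one such attribute parameter (image brightness), assuming a simple proportional control law; the patent does not specify the actual rule, so every name and constant below is hypothetical:

```python
# Hypothetical exposure feedback: step the exposure time toward a target
# mean brightness, with the step clamped to avoid oscillation.

def exposure_adjustment(mean_brightness, target=128, gain=0.01, limit=0.5):
    """Return a multiplicative exposure-time correction factor."""
    error = target - mean_brightness
    step = max(-limit, min(limit, gain * error))
    return 1.0 + step

# Too dark -> lengthen exposure; too bright -> shorten it.
dark = exposure_adjustment(78)     # brightness 50 below target
bright = exposure_adjustment(178)  # brightness 50 above target
```

The returned factor would be carried in the control signal to the image acquisition unit, which multiplies its current exposure time by it.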
  • the image processor 2 adjusts the exposure parameters of the image acquisition unit 1, that is, adjusts the exposure parameters of the image sensor 01 in the image acquisition unit 1.
  • while adjusting the exposure parameters of the image sensor 01, the image processor 2 can also control the working state of the light supplement 02 and the working state of the filter assembly 03.
  • for example, the image processor 2 can control the on-off state of the first light supplement device 021 in the light supplement 02, control the on-off state of the second light supplement device 022 in the light supplement 02, or control the switching between the first filter 031 and the second filter 032 in the filter assembly 03.
  • the face analysis unit 3 is a logic platform containing a face analysis algorithm or program.
  • the face analysis unit 3 may be a computer based on X86 or ARM architecture, or an FPGA logic circuit.
  • the face analysis unit 3 can share hardware with the image processor 2.
  • the face analysis unit 3 and the image processor 2 can run on the same FPGA logic circuit.
  • the face analysis unit 3 and the image processor 2 may not share hardware, which is not limited in the embodiment of the present application.
  • the face analysis unit 3 may include: a face detection subunit 311, a face recognition subunit 312, and a face database 313.
  • At least one reference face information is stored in the face database 313.
  • the face detection subunit 311 is configured to perform face detection on the first image information, output the detected face image, and perform living body identification on the face image.
  • the face recognition subunit 312 is used to extract the face information of the face image when the face image passes living body identification, and compare the face information of the face image with at least one reference face information stored in the face database 313 to obtain the face analysis result.
  • At least one reference face information stored in the face database 313 may be set in advance.
  • the at least one reference face information may be preset face information of face images of users who have a certain authority (such as the authority to open a door).
  • the face detection subunit 311 can perform face detection on the first image information, and perform living body identification on the detected face image to prevent camouflage attacks such as photos, videos, and masks.
  • if the face image fails living body identification, the operation can be directly ended, and it is determined that the face analysis result is a face recognition failure.
  • the face recognition subunit 312 may, when the face image passes living body identification, compare the face information of the face image with at least one reference face information stored in the face database 313. If the face information of the face image is successfully compared with any reference face information, it can be determined that the face analysis result is a successful recognition; if the comparison of the face information of the face image with the at least one reference face information fails, it can be determined that the face analysis result is a recognition failure.
  • the face information may be face feature data, etc.
  • the face feature data may include the curvature of the face and the attributes (such as size, position, distance, etc.) of facial contour points (such as iris, nose, and corner of the mouth).
  • when the face recognition subunit 312 compares the face information of the face image with at least one piece of reference face information stored in the face database 313, then for any reference face information, the face recognition subunit 312 can calculate the matching degree between that reference face information and the face information of the face image, determine that the comparison succeeds when the matching degree is greater than or equal to the matching degree threshold, and determine that the comparison fails when the matching degree is less than the matching degree threshold.
  • the matching degree threshold can be set in advance.
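The matching-degree comparison can be sketched as below, assuming, purely for illustration, that face information is a feature vector and the matching degree is cosine similarity; the patent does not fix either representation:

```python
# Hypothetical matching sketch: face info as a feature vector, matching
# degree as cosine similarity, compared against every reference entry
# with a preset matching-degree threshold.
import math

def matching_degree(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognize(face, database, threshold=0.9):
    """Return 'success' if any reference meets the threshold, else 'failure'."""
    for ref in database:
        if matching_degree(face, ref) >= threshold:
            return 'success'
    return 'failure'

db = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]       # two reference entries
result = recognize([0.99, 0.05, 0.0], db)     # close to the first entry
miss = recognize([0.0, 0.0, 1.0], db)         # matches nothing
```

Real systems use learned embeddings and carefully calibrated thresholds, but the compare-against-each-reference loop with a threshold is exactly the structure the text describes.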
  • At least one reference face information is stored in the face database 313.
  • the face detection subunit 311 is used to perform face detection on the first image information, output the detected first face image, and perform living body identification on the first face image; and to perform face detection on the second image information, output the detected second face image, and perform living body identification on the second face image.
  • the face recognition subunit 312 is used to extract the face information of the first face image when the first face image and the second face image both pass living body identification, and compare the face information of the first face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
  • At least one reference face information stored in the face database 313 may be set in advance.
  • the at least one reference face information may be preset face information of a face image of a user with a certain authority.
  • the face detection subunit 311 can perform face detection on both the first image information and the second image information, and perform living body identification on both the detected first face image and the second face image.
  • if either face image fails living body identification, the operation can be directly ended, and it is determined that the face analysis result is a face recognition failure.
  • the face detection sub-unit 311 realizes multi-spectral living body identification through the first image information and the second image information, thereby effectively improving the accuracy of living body identification.
  • the face recognition subunit 312 may, when both the first face image and the second face image pass living body identification, compare the face information of the first face image with at least one reference face information stored in the face database 313. If the face information of the first face image is successfully compared with any reference face information, it can be determined that the face analysis result is a successful recognition; if the comparison of the face information of the first face image with the at least one reference face information fails, it can be determined that the face analysis result is a recognition failure.
  • the facial information may be facial feature data, etc.
  • the facial feature data may include facial curvature, attributes of facial contour points, and the like.
  • when the face recognition subunit 312 compares the face information of the first face image with the at least one reference face information stored in the face database 313, then for any one reference face information, the face recognition subunit 312 can calculate the matching degree between that reference face information and the face information of the first face image, determine that the comparison succeeds when the matching degree is greater than or equal to the matching degree threshold, and determine that the comparison fails when the matching degree is less than the matching degree threshold.
  • the matching degree threshold can be set in advance.
  • At least one reference face information is stored in the face database 313.
  • the face detection subunit 311 is used to perform face detection on the second image information, output the detected second face image, perform living body identification on the second face image, and, when the second face image passes living body identification, perform face detection on the first image information and output the detected first face image.
  • the face recognition subunit 312 is used to extract the face information of the first face image, and compare the face information of the first face image with at least one reference face information stored in the face database 313 to obtain the face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal
  • the second image information is color image information obtained by processing the second image signal
  • the face database 313 stores at least one reference face information;
  • the face detection subunit 311 is used to perform face detection on the color image information, output the detected color face image, perform living body identification on the color face image, and, when the color face image passes living body identification, perform face detection on the grayscale image information and output the detected grayscale face image;
  • the face recognition subunit 312 is used to extract the face information of the grayscale face image, and compare the face information of the grayscale face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal
  • the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
  • at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the fused image information, output the detected fused face image, perform living body identification on the fused face image, and, when the fused face image passes living body identification, perform face detection on the grayscale image information and output the detected grayscale face image; the face recognition subunit 312 is used to extract the face information of the grayscale face image, and compare the face information of the grayscale face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
  • the second image information is grayscale image information obtained by processing the first image signal.
  • at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the grayscale image information, output the detected grayscale face image, perform living body identification on the grayscale face image, and, when the grayscale face image passes living body identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit 312 is used to extract the face information of the fused face image, and compare the face information of the fused face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
  • the second image information is color image information obtained by processing the second image signal.
  • at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the color image information, output the detected color face image, perform living body identification on the color face image, and, when the color face image passes living body identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit 312 is used to extract the face information of the fused face image, and compare the face information of the fused face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
  • the first image information is the first fused image information obtained by performing image fusion processing on the first image signal and the second image signal
  • the second image information is second fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
  • At least one reference face information is stored in the face database 313;
  • the face detection subunit 311 is used to perform face detection on the second fused image information, output the detected second fused face image, perform living body identification on the second fused face image, and, when the second fused face image passes living body identification, perform face detection on the first fused image information and output the detected first fused face image;
  • the face recognition subunit 312 is used to extract the face information of the first fused face image, and compare the face information of the first fused face image with at least one reference face information stored in the face database 313 to obtain the face analysis result.
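The staged flows described in the embodiments above share one shape: detect and liveness-check on the second image information, and only if that passes, detect and recognize on the first image information. A hedged sketch, where `detect`, `is_live`, and `recognize` are stand-ins for the real algorithms (none of them is specified by the patent):

```python
# Hypothetical staged face-analysis flow with injected stand-in functions.

def analyze(first_info, second_info, detect, is_live, recognize):
    """Run detection/liveness on second_info, then recognition on first_info."""
    face2 = detect(second_info)
    if face2 is None or not is_live(face2):
        return 'recognition failure'      # end the operation early
    face1 = detect(first_info)
    if face1 is None:
        return 'recognition failure'
    return 'recognition success' if recognize(face1) else 'recognition failure'

# Toy stand-ins for the real detector, liveness check, and recognizer:
result = analyze(
    first_info='gray-frame', second_info='color-frame',
    detect=lambda img: img,               # always "finds" a face
    is_live=lambda face: True,            # liveness always passes here
    recognize=lambda face: face == 'gray-frame',
)
```

Passing the algorithms in as parameters mirrors how the same control flow serves the grayscale/color, grayscale/fused, fused/grayscale, fused/color, and fused/fused variants listed above.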
  • after obtaining the face analysis result, the face analysis unit 3 can also transmit the face analysis result to the display device, and the display device displays the face analysis result. In this way, the user can learn the face analysis result in time.
  • the face recognition device includes an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
  • the image acquisition unit 1 includes a filter assembly 03, which includes a first filter 031 that allows visible light and part of the near-infrared light to pass.
  • the image acquisition unit 1 can simultaneously acquire, through the first preset exposure and the second preset exposure, a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information.
  • compared with image processing schemes that must separate near-infrared and visible light information from a collected raw image signal afterwards, the image acquisition unit 1 in this application can directly collect the first image signal and the second image signal, making the acquisition process simple and effective.
  • the image processor 2 processes at least one of the first image signal and the second image signal, so the resulting first image information is of higher quality; the face analysis unit 3 then performs face analysis on the first image information and obtains a more accurate face analysis result, effectively improving the accuracy of face recognition.
  • FIG. 24 is a schematic structural diagram of an access control device provided by an embodiment of the present application.
  • the access control device includes an access control controller 001 and the face recognition device 002 shown in any one of FIGS. 1 to 23.
  • the face recognition device 002 is used to transmit the face analysis result to the access controller 001.
  • the access controller 001 is used to output a control signal for opening the door when the face analysis result indicates successful recognition.
  • the access controller 001 performs no operation when the face analysis result indicates that recognition failed.
  • the access control equipment includes an access control controller 001 and a face recognition device 002.
  • the face recognition device 002 has a relatively high face recognition accuracy rate, which ensures the control accuracy of the access control controller 001 and the security of the access control system.
  • the face recognition device provided in the embodiment of the present application can be applied not only to access control equipment, but also to other equipment with facial recognition requirements, such as payment equipment, which is not limited in the embodiment of the present application.
  • the face recognition method includes:
  • Step 251: Pass visible light and part of the near-infrared light through the first filter.
  • Step 252: Collect a first image signal and a second image signal through the image acquisition unit, where the first image signal is an image signal generated according to a first preset exposure and the second image signal is an image signal generated according to a second preset exposure.
  • Near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
  • Step 253: Process at least one of the first image signal and the second image signal through the image processor to obtain first image information.
  • Step 254: Perform face analysis on the first image information through the face analysis unit to obtain a face analysis result.
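Steps 251 to 254 above can be sketched as a simple pipeline. This is an illustrative sketch only, not part of the claimed method: the patent defines hardware units, not a software API, so every class and function name below is a hypothetical stand-in.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the hardware units described in steps 251-254.
# The patent defines no software API; everything here is an illustrative sketch.

@dataclass
class Signal:
    exposure: str    # "first" or "second" preset exposure
    nir_fill: bool   # whether NIR fill light was on during this exposure

def collect(preset: str) -> Signal:
    # Steps 251/252: NIR fill light is performed at least during part of the
    # first preset exposure and never during the second preset exposure.
    return Signal(exposure=preset, nir_fill=(preset == "first"))

def process(first: Signal, second: Signal) -> dict:
    # Step 253: process at least one signal into "first image information".
    return {"sources": [first.exposure, second.exposure],
            "has_nir": first.nir_fill}

def analyze(image_info: dict) -> str:
    # Step 254: face analysis placeholder; a real unit would run detection,
    # liveness identification and comparison against a face database.
    return "recognized" if image_info["has_nir"] else "failed"

result = analyze(process(collect("first"), collect("second")))
print(result)  # prints "recognized"
```

The point of the sketch is the data flow: two exposures with complementary fill-light states feed one processing stage, whose output feeds the analysis stage.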
  • the image acquisition unit includes an image sensor and a light supplement unit; the image sensor is located on the light exit side of the filter assembly, and the light supplement unit includes a first light supplement device;
  • multiple exposures are performed by the image sensor to generate and output the first image signal and the second image signal, where the first preset exposure and the second preset exposure are two of the multiple exposures; near-infrared supplementary light is performed by the first light supplement device.
  • when the center wavelength of the near-infrared supplementary light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the band width of the near-infrared light passing through the first filter meets the constraint conditions.
  • the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 750±10 nanometers; or
  • the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 780±10 nanometers; or
  • the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 940±10 nanometers.
  • the constraints include:
  • the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared supplementary light performed by the first light supplement device lies within a wavelength fluctuation range of 0 to 20 nanometers; or
  • the half-bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers; or
  • the first band width is smaller than the second band width, where the first band width refers to the band width of the near-infrared light passing through the first filter and the second band width refers to the band width of the near-infrared light blocked by the first filter; or
  • the third band width is smaller than a reference band width, where the third band width refers to the band width of near-infrared light whose pass rate is greater than a set ratio, and the reference band width is any band width in the range of 50 nanometers to 150 nanometers.
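The alternative constraint conditions above are simple numeric comparisons and can be sketched as a checker. A minimal sketch only; the example wavelengths are illustrative, and only the first three alternatives are modeled:

```python
# Sketch of the filter/fill-light constraint conditions listed above.
# All wavelengths are in nanometers; the example numbers are illustrative.

def meets_constraints(filter_center_nm, fill_center_nm,
                      half_bandwidth_nm, passed_band_nm, blocked_band_nm):
    """Return True if any one of the alternative constraints holds."""
    # Center-wavelength difference within the 0-20 nm fluctuation range.
    if abs(filter_center_nm - fill_center_nm) <= 20:
        return True
    # Half-bandwidth of the passed NIR light at most 50 nm.
    if half_bandwidth_nm <= 50:
        return True
    # First band width (passed) smaller than second band width (blocked).
    if passed_band_nm < blocked_band_nm:
        return True
    return False

# Example: a 940 nm fill light with a filter pass band centered at 945 nm.
print(meets_constraints(945, 940, 60, 100, 350))  # prints True (5 nm <= 20 nm)
```

Because the constraints are alternatives ("or"), satisfying any one branch is enough, which is why the function returns on the first match.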
  • the image sensor includes a plurality of light-sensing channels, and each light-sensing channel is used to sense at least one kind of light in the visible light waveband and to sense light in the near-infrared waveband.
  • the image sensor uses a global exposure method for multiple exposures.
  • for any near-infrared fill light, the time period of the near-infrared fill light has no intersection with the exposure time period of the nearest second preset exposure; the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light and the exposure time period of the first preset exposure intersect, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
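The global-exposure timing relationships above reduce to interval containment and intersection tests. A hedged sketch using half-open [start, end) intervals in arbitrary time units (the numbers below are illustrative):

```python
# Sketch of the global-exposure timing relationships described above.

def classify_fill_light(fill, first_exposure, second_exposure):
    """Each argument is a (start, end) interval in arbitrary time units."""
    def intersects(a, b):
        return a[0] < b[1] and b[0] < a[1]
    def subset(a, b):  # a is contained in b
        return b[0] <= a[0] and a[1] <= b[1]

    # Required: no intersection with the nearest second preset exposure.
    if intersects(fill, second_exposure):
        return "invalid"
    if subset(fill, first_exposure):
        return "fill light within first preset exposure"
    if subset(first_exposure, fill):
        return "first preset exposure within fill light"
    if intersects(fill, first_exposure):
        return "partial overlap with first preset exposure"
    return "no overlap"

print(classify_fill_light((2, 8), (0, 10), (20, 30)))
# prints "fill light within first preset exposure"
```

The three permitted cases (subset, overlap, superset) all keep the fill-light pulse clear of the second preset exposure, which is the invariant the claim actually enforces.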
  • the image sensor adopts rolling shutter exposure for multiple exposures; for any near-infrared fill light, the time period of the near-infrared fill light has no intersection with the exposure time period of the nearest second preset exposure;
  • the start time of the near-infrared fill light is no earlier than the exposure start time of the last effective image line in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first effective image line in the first preset exposure; or
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first effective image line in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure; or
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first effective image line in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure.
  • at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
  • at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter including one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
  • the image processor uses a first processing parameter to process at least one of the first image signal and the second image signal to obtain the first image information; the image processor uses a second processing parameter to process at least one of the first image signal and the second image signal to obtain second image information; the image processor transmits the second image information to a display device, which displays the second image information.
  • when the first image information and the second image information are both obtained by processing the first image signal, or both obtained by processing the second image signal, or both obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter are different.
  • the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
  • the image processor includes a cache;
  • the cache is used to store at least one of the first image signal and the second image signal, or at least one of the first image information and the second image information.
  • the image processor is further used to adjust the exposure parameters of the image acquisition unit while processing at least one of the first image signal and the second image signal.
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face recognition subunit extracts the face information of the face image when the face image passes liveness identification, and compares the face information of the face image with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face recognition subunit extracts the face information of the first face image when both the first face image and the second face image pass liveness identification, and compares the face information of the first face image with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal;
  • the second image information is color image information obtained by processing the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face information of the grayscale face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the first image information is grayscale image information obtained by processing the first image signal;
  • the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face information of the grayscale face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal;
  • the second image information is grayscale image information obtained by processing the first image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face information of the fused face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal;
  • the second image information is color image information obtained by processing the second image signal;
  • the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • the face information of the fused face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
  • the first image information is the first fused image information obtained by performing image fusion processing on the first image signal and the second image signal;
  • the second image information is the second fused image information obtained by performing image fusion processing on the first image signal and the second image signal; the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
  • face detection is performed on the second fused image information, the detected second fused face image is output and undergoes liveness identification; when it passes, face detection is performed on the first fused face image and the detected first fused face image is output;
  • the face information of the first fused face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
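The detect, liveness-identify, and compare-against-database flow that recurs in the variants above can be sketched as follows. This is a hedged sketch: the embedding format, the toy similarity metric, and the 0.8 threshold are assumptions, since the patent only specifies comparison against stored reference face information.

```python
# Sketch of the detect -> liveness -> recognize flow of the face analysis unit.

def similarity(a, b):
    # Toy similarity: fraction of matching elements. A real system would use
    # e.g. cosine similarity on learned feature embeddings.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

def analyze_face(image, face_database, detect, is_live, extract, threshold=0.8):
    face = detect(image)                  # face detection subunit
    if face is None:
        return "no face detected"
    if not is_live(face):                 # liveness identification
        return "liveness check failed"
    info = extract(face)                  # extracted face information
    # Compare with the reference face information stored in the database.
    best = max((similarity(info, ref) for ref in face_database), default=0.0)
    return "recognition success" if best >= threshold else "recognition failed"

# Toy usage with stub callables standing in for the subunits:
db = [[1, 0, 1, 1], [0, 0, 1, 0]]
result = analyze_face([1, 0, 1, 1], db,
                      detect=lambda img: img,
                      is_live=lambda f: True,
                      extract=lambda f: f)
print(result)  # prints "recognition success"
```

Note the early returns: liveness identification gates recognition, mirroring the claims in which face information is extracted only after the liveness check passes.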
  • the face analysis result is transmitted to a display device by the face analysis unit, and the display device displays the face analysis result.
  • the face recognition device includes an image acquisition unit, an image processor, and a face analysis unit.
  • the image acquisition unit includes a filter assembly, and the filter assembly includes a first filter that passes visible light and part of the near-infrared light.
  • the image acquisition unit can simultaneously acquire, through the first preset exposure and the second preset exposure, a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information.
  • the image acquisition unit in this application can directly collect the first image signal and the second image signal, making the acquisition process simple and effective.
  • the image processor processes at least one of the first image signal and the second image signal, so the resulting first image information is of higher quality; the face analysis unit then performs face analysis on the first image information to obtain a more accurate face analysis result, effectively improving the accuracy of face recognition.


Abstract

This application discloses a face recognition device and an access control device, belonging to the field of computer vision. The face recognition device includes an image acquisition unit, an image processor, and a face analysis unit. A first filter in the filter assembly of the image acquisition unit allows visible light and part of the near-infrared light to pass. The image acquisition unit collects a first image signal and a second image signal, the first image signal being generated according to a first preset exposure and the second image signal according to a second preset exposure, where near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure. The image processor processes at least one of the first image signal and the second image signal to obtain first image information, and the face analysis unit performs face analysis on the first image information to obtain a face analysis result. The face recognition device in this application achieves a relatively high face recognition accuracy.

Description

Face Recognition Device and Access Control Device
This application claims priority to Chinese Patent Application No. 201910472703.7, entitled "Face Recognition Device and Access Control Device" and filed on May 31, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision, and in particular to a face recognition device and an access control device.
Background
At present, various photographing devices are widely used in fields such as intelligent transportation and security. To improve the quality of captured images, the related art provides a photographing device that includes a multispectral filter array sensor, in which some pixels sense only near-infrared light and the remaining pixels sense both near-infrared light and visible light. Such a device can collect a raw image signal containing both visible light information and near-infrared light information, and separate from it an RGB image containing both visible light and near-infrared light information as well as a near-infrared image containing only near-infrared light information. The near-infrared light information contained in each pixel of the RGB image is then removed to obtain a visible light image containing only visible light information.
However, such a photographing device needs to separate the near-infrared light information and the visible light information from the collected raw image signal in post-processing, which is a relatively complicated process, and the image quality of the resulting near-infrared and visible light images is relatively low. As a result, when face recognition is performed on images obtained by this device, the face recognition accuracy is low.
Summary
This application provides a face recognition device and an access control device, which can solve the problem of low face recognition accuracy in the related art. The technical solutions are as follows:
In one aspect, a face recognition device is provided, including an image acquisition unit, an image processor, and a face analysis unit;
the image acquisition unit includes a filter assembly, the filter assembly includes a first filter, and the first filter allows visible light and part of the near-infrared light to pass;
the image acquisition unit is configured to collect a first image signal and a second image signal, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, where near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
the image processor is configured to process at least one of the first image signal and the second image signal to obtain first image information;
the face analysis unit is configured to perform face analysis on the first image information to obtain a face analysis result.
In a possible implementation of this application, the image acquisition unit includes an image sensor and a light supplement unit, the image sensor being located on the light exit side of the filter assembly;
the image sensor is configured to generate and output the first image signal and the second image signal through multiple exposures, the first preset exposure and the second preset exposure being two of the multiple exposures;
the light supplement unit includes a first light supplement device configured to perform near-infrared supplementary light.
In a possible implementation of this application,
when the center wavelength of the near-infrared supplementary light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter meets the constraint conditions.
In a possible implementation of this application,
the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 750±10 nanometers; or
the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 780±10 nanometers; or
the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 940±10 nanometers.
In a possible implementation of this application, the constraint conditions include:
the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared supplementary light performed by the first light supplement device lies within a wavelength fluctuation range of 0 to 20 nanometers; or
the half-bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers; or
the first band width is smaller than the second band width, where the first band width refers to the band width of the near-infrared light passing through the first filter and the second band width refers to the band width of the near-infrared light blocked by the first filter; or
the third band width is smaller than a reference band width, where the third band width refers to the band width of near-infrared light whose pass rate is greater than a set ratio, and the reference band width is any band width in the range of 50 nanometers to 150 nanometers.
In a possible implementation of this application, the image sensor includes a plurality of light-sensing channels, each of which is used to sense light of at least one visible light band and to sense light of the near-infrared band.
In a possible implementation of this application,
the image sensor performs multiple exposures in a global exposure manner; for any near-infrared fill light, the time period of the near-infrared fill light has no intersection with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light and the exposure time period of the first preset exposure intersect, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
In a possible implementation of this application,
the image sensor performs multiple exposures in a rolling shutter manner; for any near-infrared fill light, the time period of the near-infrared fill light has no intersection with the exposure time period of the nearest second preset exposure;
the start time of the near-infrared fill light is no earlier than the exposure start time of the last effective image line in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first effective image line in the first preset exposure;
or,
the start time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first effective image line in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure; or
the start time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first effective image line in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure.
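The first rolling-shutter alternative (the fill-light pulse must fall inside the window in which every effective image line is simultaneously exposing) can be expressed as a simple interval check. A hedged sketch; the row exposure times below are illustrative, and only this one alternative is modeled:

```python
# Sketch of one rolling-shutter fill-light constraint described above: the
# fill light may start no earlier than the exposure START of the LAST
# effective line and must end no later than the exposure END of the FIRST
# effective line, so that every line sees the full fill-light pulse.
# Times are arbitrary units; the values are illustrative.

def fill_light_valid(fill_start, fill_end,
                     first_line_exposure, last_line_exposure):
    """Each *_line_exposure is a (start, end) tuple of that line's exposure."""
    return (fill_start >= last_line_exposure[0] and
            fill_end <= first_line_exposure[1])

# In a rolling shutter, lines start later one by one but overlap in time:
first_line = (0, 10)   # first effective image line
last_line = (4, 14)    # last effective image line
print(fill_light_valid(5, 9, first_line, last_line))   # prints True
print(fill_light_valid(2, 9, first_line, last_line))   # prints False (too early)
```

The other two alternatives relax this window toward the neighboring second preset exposures and would be expressed with the corresponding boundary times.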
In a possible implementation of this application,
at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In a possible implementation of this application,
at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter including one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In a possible implementation of this application,
the image processor is configured to process at least one of the first image signal and the second image signal with a first processing parameter to obtain the first image information;
the image processor is further configured to process at least one of the first image signal and the second image signal with a second processing parameter to obtain second image information;
the image processor is further configured to transmit the second image information to a display device, which displays the second image information.
In a possible implementation of this application, when the first image information and the second image information are both obtained by processing the first image signal, or both obtained by processing the second image signal, or both obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter are different.
In a possible implementation of this application, the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
In a possible implementation of this application, the image processor includes a cache;
the cache is used to store at least one of the first image signal and the second image signal, or at least one of the first image information and the second image information.
In a possible implementation of this application, the image processor is further configured to adjust the exposure parameters of the image acquisition unit while processing at least one of the first image signal and the second image signal.
In a possible implementation of this application, the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the first image information, output the detected face image, and perform liveness identification on the face image;
the face recognition subunit is used to extract the face information of the face image when the face image passes liveness identification, and compare the face information of the face image with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the first image information, output the detected first face image, and perform liveness identification on the first face image, as well as to perform face detection on the second image information, output the detected second face image, and perform liveness identification on the second face image;
the face recognition subunit is used to extract the face information of the first face image when both the first face image and the second face image pass liveness identification, and compare the face information of the first face image with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the first image information is grayscale image information obtained by processing the first image signal, the second image information is color image information obtained by processing the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the color image information, output the detected color face image, and perform liveness identification on the color face image; when the color face image passes liveness identification, it performs face detection on the grayscale image information and outputs the detected grayscale face image;
the face recognition subunit is used to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the first image information is grayscale image information obtained by processing the first image signal, the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the fused image information, output the detected fused face image, and perform liveness identification on the fused face image; when the fused face image passes liveness identification, it performs face detection on the grayscale image information and outputs the detected grayscale face image;
the face recognition subunit is used to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is grayscale image information obtained by processing the first image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the grayscale image information, output the detected grayscale face image, and perform liveness identification on the grayscale face image; when the grayscale face image passes liveness identification, it performs face detection on the fused image information and outputs the detected fused face image;
the face recognition subunit is used to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is color image information obtained by processing the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the color image information, output the detected color face image, and perform liveness identification on the color face image; when the color face image passes liveness identification, it performs face detection on the fused image information and outputs the detected fused face image;
the face recognition subunit is used to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the first image information is first fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is second fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored in the face database;
the face detection subunit is used to perform face detection on the second fused image information, output the detected second fused face image, and perform liveness identification on the second fused face image; when the second fused face image passes liveness identification, it performs face detection on the first fused image information and outputs the detected first fused face image;
the face recognition subunit is used to extract the face information of the first fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation of this application, the face analysis unit is further configured to transmit the face analysis result to a display device, which displays the face analysis result.
In another aspect, an access control device is provided, including an access control controller and the face recognition device described above;
the face recognition device is configured to transmit the face analysis result to the access control controller;
the access control controller is configured to output a control signal for opening the door when the face analysis result indicates successful recognition.
In another aspect, a face recognition method is provided, applied to a face recognition device that includes an image acquisition unit, an image processor, and a face analysis unit, the image acquisition unit including a filter assembly and the filter assembly including a first filter, the method including:
passing visible light and part of the near-infrared light through the first filter;
collecting a first image signal and a second image signal through the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, where near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
processing at least one of the first image signal and the second image signal through the image processor to obtain first image information;
performing face analysis on the first image information through the face analysis unit to obtain a face analysis result.
The technical solutions provided in this application can bring at least the following beneficial effects:
In this application, the face recognition device includes an image acquisition unit, an image processor, and a face analysis unit. The image acquisition unit includes a filter assembly, the filter assembly includes a first filter, and the first filter allows visible light and part of the near-infrared light to pass. The image acquisition unit can simultaneously acquire, through the first preset exposure and the second preset exposure, a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information. Compared with image processing schemes that must separate the near-infrared and visible light information from a collected raw image signal afterwards, the image acquisition unit in this application can directly collect the first image signal and the second image signal, making the acquisition process simple and effective. As a result, the first image information obtained after the image processor processes at least one of the first image signal and the second image signal is of higher quality, and the face analysis unit can then obtain a more accurate face analysis result from the first image information, effectively improving the accuracy of face recognition.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a first face recognition device provided by an embodiment of this application.
FIG. 2 is a schematic structural diagram of a first image acquisition unit provided by an embodiment of this application.
FIG. 3 is a schematic diagram of the principle by which an image acquisition unit generates a first image signal according to an embodiment of this application.
FIG. 4 is a schematic diagram of the principle by which an image acquisition unit generates a second image signal according to an embodiment of this application.
FIG. 5 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplementary light performed by a first light supplement device according to an embodiment of this application.
FIG. 6 is a schematic diagram of the relationship between the wavelength of light passing through a first filter and the pass rate according to an embodiment of this application.
FIG. 7 is a schematic structural diagram of a second image acquisition unit provided by an embodiment of this application.
FIG. 8 is a schematic diagram of an RGB sensor provided by an embodiment of this application.
FIG. 9 is a schematic diagram of an RGBW sensor provided by an embodiment of this application.
FIG. 10 is a schematic diagram of an RCCB sensor provided by an embodiment of this application.
FIG. 11 is a schematic diagram of an RYYB sensor provided by an embodiment of this application.
FIG. 12 is a schematic diagram of the sensing curves of an image sensor provided by an embodiment of this application.
FIG. 13 is a schematic diagram of a rolling shutter exposure manner provided by an embodiment of this application.
FIG. 14 is a schematic diagram of a first timing relationship between near-infrared fill light and the first and second preset exposures in the global exposure manner according to an embodiment of this application.
FIG. 15 is a schematic diagram of a second timing relationship between near-infrared fill light and the first and second preset exposures in the global exposure manner according to an embodiment of this application.
FIG. 16 is a schematic diagram of a third timing relationship between near-infrared fill light and the first and second preset exposures in the global exposure manner according to an embodiment of this application.
FIG. 17 is a schematic diagram of a first timing relationship between near-infrared fill light and the first and second preset exposures in the rolling shutter exposure manner according to an embodiment of this application.
FIG. 18 is a schematic diagram of a second timing relationship between near-infrared fill light and the first and second preset exposures in the rolling shutter exposure manner according to an embodiment of this application.
FIG. 19 is a schematic diagram of a third timing relationship between near-infrared fill light and the first and second preset exposures in the rolling shutter exposure manner according to an embodiment of this application.
FIG. 20 is a schematic structural diagram of a third image acquisition unit provided by an embodiment of this application.
FIG. 21 is a schematic structural diagram of a second face recognition device provided by an embodiment of this application.
FIG. 22 is a schematic structural diagram of a third face recognition device according to an embodiment of this application.
FIG. 23 is a schematic structural diagram of a fourth face recognition device according to an embodiment of this application.
FIG. 24 is a schematic structural diagram of an access control device according to an embodiment of this application.
FIG. 25 is a flowchart of a face recognition method according to an embodiment of this application.
Reference numerals:
1: image acquisition unit; 2: image processor; 3: face analysis unit; 01: image sensor; 02: light supplement unit; 03: filter assembly; 04: lens; 021: first light supplement device; 022: second light supplement device; 031: first filter; 032: second filter; 033: switching component; 311: face detection subunit; 312: face recognition subunit; 313: face database; 001: access control controller; 002: face recognition device.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are further described in detail below with reference to the drawings.
FIG. 1 is a schematic structural diagram of a face recognition device provided by an embodiment of this application. As shown in FIG. 1, the face recognition device includes an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
The image acquisition unit 1 is used to collect a first image signal and a second image signal. The image processor 2 is used to process at least one of the first image signal and the second image signal to obtain first image information. The face analysis unit 3 is used to perform face analysis on the first image information to obtain a face analysis result.
It should be noted that the first image signal is an image signal generated according to a first preset exposure and the second image signal is an image signal generated according to a second preset exposure, where near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
In this embodiment of this application, the image acquisition unit 1 can simultaneously acquire, through the first preset exposure and the second preset exposure, a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information. Compared with image processing schemes that must separate the near-infrared and visible light information from a collected raw image signal afterwards, the image acquisition unit 1 in this application can directly collect the first image signal and the second image signal, making the acquisition process simple and effective. As a result, the first image information obtained after the image processor 2 processes at least one of the first image signal and the second image signal is of higher quality, and the face analysis unit 3 can then obtain a more accurate face analysis result from the first image information, effectively improving the accuracy of face recognition.
The image acquisition unit 1, the image processor 2, and the face analysis unit 3 of the face recognition device are described separately below.
1. Image acquisition unit 1
As shown in FIG. 2, the image acquisition unit includes an image sensor 01, a light supplement unit 02, and a filter assembly 03, the image sensor 01 being located on the light exit side of the filter assembly 03. The image sensor 01 is used to generate and output the first image signal and the second image signal through multiple exposures, the first preset exposure and the second preset exposure being two of the multiple exposures. The light supplement unit 02 includes a first light supplement device 021 used to perform near-infrared supplementary light. The filter assembly 03 includes a first filter 031 that allows visible light and part of the near-infrared light to pass, where the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared supplementary light is higher than the intensity of the near-infrared light passing through the first filter 031 when it does not.
In this embodiment of this application, referring to FIG. 2, the image acquisition unit 1 may further include a lens 04. In this case, the filter assembly 03 may be located between the lens 04 and the image sensor 01 with the image sensor 01 on the light exit side of the filter assembly 03; alternatively, the lens 04 may be located between the filter assembly 03 and the image sensor 01 with the image sensor 01 on the light exit side of the lens 04. As an example, the first filter 031 may be a filter film; thus, when the filter assembly 03 is located between the lens 04 and the image sensor 01, the first filter 031 may be attached to the surface of the light exit side of the lens 04, or, when the lens 04 is located between the filter assembly 03 and the image sensor 01, the first filter 031 may be attached to the surface of the light entry side of the lens 04.
It should be noted that the light supplement unit 02 may be located inside or outside the image acquisition unit 1, and may be part of the image acquisition unit 1 or a device independent of it. When the light supplement unit 02 is located outside the image acquisition unit 1, it can be communicatively connected with the image acquisition unit 1, thereby ensuring that the exposure timing of the image sensor 01 in the image acquisition unit 1 has a certain relationship with the near-infrared fill light timing of the first light supplement device 021 included in the light supplement unit 02: for example, near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
In addition, the first light supplement device 021 is a device capable of emitting near-infrared light, such as a near-infrared fill light lamp. The first light supplement device 021 may perform near-infrared supplementary light in a strobe manner or in other strobe-like manners, which is not limited in this embodiment of this application. In some examples, when the first light supplement device 021 performs near-infrared supplementary light in a strobe manner, this may be controlled manually or by a software program or a specific device, which is not limited in this embodiment of this application. The time period during which the first light supplement device 021 performs near-infrared supplementary light may coincide with, be longer than, or be shorter than the exposure time period of the first preset exposure, as long as near-infrared supplementary light is performed during the entire exposure time period or part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
It should be noted that no near-infrared supplementary light is performed during the exposure time period of the second preset exposure. For the global exposure manner, the exposure time period of the second preset exposure may be the period between the exposure start time and the exposure end time; for the rolling shutter exposure manner, the exposure time period of the second preset exposure may be the period between the exposure start time of the first effective image line and the exposure end time of the last effective image line of the second image signal, but it is not limited thereto. For example, the exposure time period of the second preset exposure may also be the exposure time period corresponding to a target image in the second image signal, the target image being the several effective image lines of the second image signal corresponding to a target object or target area; the period between the exposure start time and the exposure end time of these lines may be regarded as the exposure time period of the second preset exposure.
It should also be noted that when the first light supplement device 021 performs near-infrared supplementary light on an external scene, near-infrared light incident on object surfaces may be reflected by the objects and thus enter the first filter 031. Moreover, since ambient light normally includes visible light and near-infrared light, the near-infrared light in the ambient light is also reflected by objects into the first filter 031. Therefore, the near-infrared light passing through the first filter 031 during near-infrared supplementary light includes both the near-infrared light emitted by the first light supplement device 021 and reflected by objects and the near-infrared light in the ambient light reflected by objects, while the near-infrared light passing through the first filter 031 when no supplementary light is performed includes only the near-infrared light in the ambient light reflected by objects.
Taking the structure in which the filter assembly 03 is located between the lens 04 and the image sensor 01 with the image sensor 01 on the light exit side of the filter assembly 03 as an example, the process by which the image acquisition unit 1 collects the first and second image signals is as follows. Referring to FIG. 3, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplementary light; the ambient light in the scene and the near-infrared light reflected by objects in the scene during the supplementary light pass through the lens 04 and the first filter 031, and the image sensor 01 generates the first image signal through the first preset exposure. Referring to FIG. 4, when the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not perform near-infrared supplementary light; the ambient light in the scene passes through the lens 04 and the first filter 031, and the image sensor 01 generates the second image signal through the second preset exposure. Within one frame period of image acquisition there may be M first preset exposures and N second preset exposures, which can be ordered in many combinations; the values of M and N and their relationship can be set according to actual requirements; for example, M and N may be equal or different.
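The interleaving of M first preset exposures (with NIR fill light) and N second preset exposures (without) inside one frame period, described above, can be sketched as a scheduler. The simple alternating order below is only one of the many possible arrangements the text allows:

```python
# Sketch of interleaving M first preset exposures (fill light on) and
# N second preset exposures (fill light off) inside one frame period.

def frame_schedule(m, n):
    """Return an exposure order for one frame period as a list of labels."""
    schedule = []
    # Alternate while both kinds remain, then append the leftovers.
    while m > 0 or n > 0:
        if m > 0:
            schedule.append("first+NIR")   # fill light on for this exposure
            m -= 1
        if n > 0:
            schedule.append("second")      # no fill light
            n -= 1
    return schedule

print(frame_schedule(2, 2))
# prints ['first+NIR', 'second', 'first+NIR', 'second']
```

Any ordering is acceptable in principle; what matters for the device is that the fill-light state is tied to the exposure type, never to wall-clock time alone.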
It should be noted that the first filter 031 may pass light of part of the near-infrared band; in other words, the near-infrared band passing through the first filter 031 may be part of the near-infrared band or the entire near-infrared band, which is not limited in this embodiment of this application.
In addition, since the intensity of the near-infrared light in the ambient light is lower than that of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared supplementary light is higher than when it does not.
The band range of the near-infrared supplementary light performed by the first light supplement device 021 may be a second reference band range, which may be 700 to 800 nanometers or 900 to 1000 nanometers, so as to reduce the interference caused by common 850-nanometer infrared lamps. The band range of the near-infrared light incident on the first filter 031 may be a first reference band range of 650 to 1100 nanometers.
During near-infrared supplementary light, the near-infrared light passing through the first filter 031 includes the near-infrared light emitted by the first light supplement device 021 and reflected by objects into the first filter 031, as well as the near-infrared light in the ambient light reflected by objects, so the intensity of the near-infrared light entering the filter assembly 03 is relatively strong. When no near-infrared supplementary light is performed, the near-infrared light passing through the first filter 031 includes only the near-infrared light in the ambient light reflected by objects into the filter assembly 03; since there is no supplementary near-infrared light from the first light supplement device 021, its intensity is relatively weak. Therefore, the intensity of the near-infrared light included in the first image signal generated and output according to the first preset exposure is higher than that included in the second image signal generated and output according to the second preset exposure.
The center wavelength and/or band range of the near-infrared supplementary light performed by the first light supplement device 021 can be chosen in multiple ways. In this embodiment of this application, in order to make the first light supplement device 021 and the first filter 031 cooperate better, the center wavelength of the near-infrared supplementary light can be designed and the characteristics of the first filter 031 selected so that, when the center wavelength of the near-infrared supplementary light is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meets the constraint conditions. These constraint conditions are mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible and that its band width is as narrow as possible, thereby avoiding wavelength interference caused by an excessively wide near-infrared band.
The center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 may be the average over the wavelength range with the greatest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021, or may be understood as the wavelength at the middle position of the wavelength range in which the energy exceeds a certain threshold in that spectrum.
The set characteristic wavelength or set characteristic wavelength range may be preset. As an example, the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 may be any wavelength within the range of 750±10 nanometers, 780±10 nanometers, or 940±10 nanometers; that is, the set characteristic wavelength range may be the wavelength range of 750±10 nanometers, 780±10 nanometers, or 940±10 nanometers. For example, with a center wavelength of 940 nanometers, the relationship between the wavelength and relative intensity of the near-infrared supplementary light performed by the first light supplement device 021 is shown in FIG. 5. As can be seen from FIG. 5, the band range of the near-infrared supplementary light is 900 to 1000 nanometers, with the highest relative intensity at 940 nanometers.
Since most of the near-infrared light passing through the first filter 031 during near-infrared supplementary light is the near-infrared light emitted by the first light supplement device 021 and reflected by objects, in some embodiments the constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 lies within a wavelength fluctuation range, which, as an example, may be 0 to 20 nanometers.
The center wavelength of the near-infrared light passing through the first filter 031 may be the wavelength at the peak position within the near-infrared band range of the near-infrared pass-rate curve of the first filter 031, or may be understood as the wavelength at the middle position of the near-infrared band range in which the pass rate exceeds a certain threshold in that curve.
To avoid wavelength interference caused by an excessively wide band of near-infrared light passing through the first filter 031, in some embodiments the constraint conditions may include: the first band width may be smaller than the second band width, where the first band width refers to the band width of the near-infrared light passing through the first filter 031 and the second band width refers to the band width of the near-infrared light blocked by the first filter 031. It should be understood that a band width refers to the width of the wavelength range in which the wavelength of the light lies. For example, if the wavelength of the near-infrared light passing through the first filter 031 lies in the range of 700 to 800 nanometers, the first band width is 800 nanometers minus 700 nanometers, i.e. 100 nanometers. In other words, the band width of the near-infrared light passing through the first filter 031 is smaller than the band width of the near-infrared light blocked by the first filter 031.
For example, referring to FIG. 6, which is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter 031 and the pass rate: the band of near-infrared light incident on the first filter 031 is 650 to 1100 nanometers, and the first filter 031 passes visible light with wavelengths of 380 to 650 nanometers and near-infrared light with wavelengths of 900 to 1100 nanometers, while blocking near-infrared light with wavelengths of 650 to 900 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, i.e. 100 nanometers, and the second band width is 900 nanometers minus 650 nanometers plus 1100 nanometers minus 1000 nanometers, i.e. 350 nanometers. Since 100 nanometers is less than 350 nanometers, the band width of the near-infrared light passing through the first filter 031 is smaller than that of the near-infrared light blocked by it. This curve is only an example; for different filters, the band range of near-infrared light that can pass through and the band range that is blocked may differ.
To avoid wavelength interference caused by an excessively wide band of near-infrared light passing through the first filter 031 during periods without near-infrared supplementary light, in some embodiments the constraint conditions may include: the half-bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers, where the half-bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
To avoid wavelength interference caused by an excessively wide band of near-infrared light passing through the first filter 031, in some embodiments the constraint conditions may include: the third band width may be smaller than a reference band width, where the third band width refers to the band width of near-infrared light whose pass rate is greater than a set ratio. As an example, the reference band width may be any band width in the range of 50 to 100 nanometers, and the set ratio may be any ratio from 30% to 50%; of course, the set ratio may also be set to other values according to usage requirements, which is not limited in this embodiment of this application. In other words, the band width of near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
For example, referring to FIG. 6, with a near-infrared band of 650 to 1100 nanometers incident on the first filter 031, a set ratio of 30%, and a reference band width of 100 nanometers, it can be seen from FIG. 6 that in the 650 to 1100 nanometer near-infrared band, the band width of near-infrared light with a pass rate greater than 30% is clearly smaller than 100 nanometers.
Since the first light supplement device 021 provides near-infrared supplementary light at least during part of the exposure time period of the first preset exposure and does not provide it during the entire exposure time period of the second preset exposure, and since the first and second preset exposures are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplementary light during the exposure time periods of some exposures of the image sensor 01 and not during others. Therefore, the number of fill light operations of the first light supplement device 021 per unit time may be lower than the number of exposures of the image sensor 01 in the same unit time, with one or more exposures occurring in the interval between every two adjacent fill light operations.
In a possible implementation, since the human eye easily confuses the color of the near-infrared supplementary light performed by the first light supplement device 021 with the color of the red traffic light, referring to FIG. 7, the light supplement unit 02 may further include a second light supplement device 022 used to perform visible supplementary light. If the second light supplement device 022 provides visible supplementary light at least during part of the exposure time of the first preset exposure, that is, near-infrared and visible supplementary light are both performed at least during part of the exposure time period of the first preset exposure, the mixed color of the two lights can be distinguished from the color of the red traffic light, preventing the human eye from confusing the color of the supplementary light with the red traffic light. In addition, if the second light supplement device 022 provides visible supplementary light during the exposure time period of the second preset exposure, since the intensity of visible light during that period is not particularly high, performing visible supplementary light then can also increase the brightness of the visible light in the second image signal and thus ensure the quality of image acquisition.
In some embodiments, the second light supplement device 022 may perform visible supplementary light in an always-on manner; or in a strobe manner, where visible supplementary light exists at least during part of the exposure time period of the first preset exposure and does not exist during the entire exposure time period of the second preset exposure; or in a strobe manner, where visible supplementary light does not exist at least during the entire exposure time period of the first preset exposure and exists during part of the exposure time period of the second preset exposure. When the second light supplement device 022 performs visible supplementary light in an always-on manner, it not only prevents the human eye from confusing the color of the near-infrared supplementary light with the red traffic light, but also increases the brightness of the visible light in the second image signal, ensuring image acquisition quality. When it performs visible supplementary light in a strobe manner, it can prevent the color confusion or increase the visible light brightness in the second image signal, and can also reduce the number of fill light operations of the second light supplement device 022, thereby prolonging its service life.
In some embodiments, the multiple exposures refer to multiple exposures within one frame period; that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and at least one frame of the second image signal. For example, one second includes 25 frame periods, and the image sensor 01 performs multiple exposures within each frame period, generating at least one frame of the first image signal and at least one frame of the second image signal; the first and second image signals generated within one frame period are called a group of image signals, so that 25 groups of image signals are generated within 25 frame periods. The first preset exposure and the second preset exposure may be adjacent or non-adjacent among the multiple exposures within one frame period, which is not limited in this embodiment of this application.
The first image signal is generated and output by the first preset exposure and the second image signal by the second preset exposure; after they are generated and output, they may be processed. In some cases the uses of the first and second image signals may differ, so in some embodiments at least one exposure parameter of the first and second preset exposures may differ. As an example, the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In some embodiments, it can be understood that, compared with the second preset exposure, the intensity of the near-infrared light sensed by the image sensor 01 during near-infrared supplementary light is stronger, and accordingly the brightness of the near-infrared light included in the generated and output first image signal is higher. However, near-infrared light of high brightness is not conducive to acquiring information about the external scene. Moreover, in some embodiments, the larger the exposure gain, the higher the brightness of the image signal output by the image sensor 01, and the smaller the exposure gain, the lower the brightness. Therefore, to ensure that the brightness of the near-infrared light contained in the first image signal is within a suitable range, when at least one exposure parameter of the first and second preset exposures differs, as an example, the exposure gain of the first preset exposure may be smaller than that of the second preset exposure. In this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not become excessively high because of the supplementary light.
In other embodiments, the longer the exposure time, the higher the brightness included in the image signal obtained by the image sensor 01 and the longer the motion trail of moving objects in the external scene in the image signal; the shorter the exposure time, the lower the brightness and the shorter the motion trail. Therefore, to ensure that the brightness of the near-infrared light contained in the first image signal is within a suitable range and that the motion trail of moving objects in the first image signal is short, when at least one exposure parameter of the first and second preset exposures differs, as an example, the exposure time of the first preset exposure may be shorter than that of the second preset exposure. In this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal will not become excessively high, and the shorter exposure time makes the motion trail of moving objects in the first image signal shorter, which facilitates the recognition of moving objects. For example, the exposure time of the first preset exposure is 40 milliseconds and that of the second preset exposure is 60 milliseconds.
It is worth noting that, in some embodiments, when the exposure gain of the first preset exposure is smaller than that of the second preset exposure, the exposure time of the first preset exposure may be shorter than or equal to that of the second preset exposure. Similarly, when the exposure time of the first preset exposure is shorter than that of the second preset exposure, the exposure gain of the first preset exposure may be smaller than or equal to that of the second preset exposure.
In other embodiments, the first and second image signals may have the same use; for example, when both are used for intelligent analysis, to allow a face or target undergoing intelligent analysis to have the same clarity while moving, at least one exposure parameter of the first and second preset exposures may be the same. As an example, the exposure time of the first preset exposure may equal that of the second preset exposure; if they differ, the image signal with the longer exposure time will exhibit motion trails, causing the two image signals to have different clarity. Similarly, as another example, the exposure gain of the first preset exposure may equal that of the second preset exposure.
It is worth noting that, in some embodiments, when the exposure time of the first preset exposure equals that of the second preset exposure, the exposure gain of the first preset exposure may be smaller than or equal to that of the second preset exposure. Similarly, when the exposure gains are equal, the exposure time of the first preset exposure may be shorter than or equal to that of the second preset exposure.
其中,图像传感器01可以包括多个感光通道,每个感光通道可以用于感应至少一种可见光波段的光,以及感应近红外波段的光。也即是,每个感光通道既能感应至少一种可见光波段的光,又能感应近红外波段的光,这样,可以保证第一图像信号和第二图像信号中具有完 整的分辨率,不缺失像素值。在一种可能的实现方式中,该多个感光通道可以用于感应至少两种不同的可见光波段的光。
在一些实施例中,该多个感光通道可以包括R感光通道、G感光通道、B感光通道、Y感光通道、W感光通道和C感光通道中的至少两种。其中,R感光通道用于感应红光波段和近红外波段的光,G感光通道用于感应绿光波段和近红外波段的光,B感光通道用于感应蓝光波段和近红外波段的光,Y感光通道用于感应黄光波段和近红外波段的光。由于在一些实施例中,可以用W来表示用于感应全波段的光的感光通道,在另一些实施例中,可以用C来表示用于感应全波段的光的感光通道,所以当该多个感光通道包括用于感应全波段的光的感光通道时,这个感光通道可以是W感光通道,也可以是C感光通道。也即是,在实际应用中,可以根据使用需求来选择用于感应全波段的光的感光通道。示例性地,图像传感器01可以为RGB传感器、RGBW传感器,或RCCB传感器,或RYYB传感器。其中,RGB传感器中的R感光通道、G感光通道和B感光通道的分布方式可以参见图8,RGBW传感器中的R感光通道、G感光通道、B感光通道和W感光通道的分布方式可以参见图9,RCCB传感器中的R感光通道、C感光通道和B感光通道分布方式可以参见图10,RYYB传感器中的R感光通道、Y感光通道和B感光通道分布方式可以参见图11。
在另一些实施例中,有些感光通道也可以仅感应近红外波段的光,而不感应可见光波段的光,这样,可以保证第一图像信号中具有完整的分辨率,不缺失像素值。作为一种示例,该多个感光通道可以包括R感光通道、G感光通道、B感光通道、IR感光通道中的至少两种。其中,R感光通道用于感应红光波段和近红外波段的光,G感光通道用于感应绿光波段和近红外波段的光,B感光通道用于感应蓝光波段和近红外波段的光,IR感光通道用于感应近红外波段的光。
示例性地,图像传感器01可以为RGBIR传感器,其中,RGBIR传感器中的每个IR感光通道都可以感应近红外波段的光,而不感应可见光波段的光。
其中,当图像传感器01为RGB传感器时,相比于其他图像传感器,如RGBIR传感器等,RGB传感器采集的RGB信息更完整,RGBIR传感器有一部分的感光通道采集不到可见光,所以RGB传感器采集的图像的色彩细节更准确。
值得注意的是,图像传感器01包括的多个感光通道可以对应多条感应曲线。示例性地,参见图12,图12中的R曲线代表图像传感器01对红光波段的光的感应曲线,G曲线代表图像传感器01对绿光波段的光的感应曲线,B曲线代表图像传感器01对蓝光波段的光的感应曲线,W(或者C)曲线代表图像传感器01感应全波段的光的感应曲线,NIR(Near infrared,近红外光)曲线代表图像传感器01感应近红外波段的光的感应曲线。
作为一种示例,图像传感器01可以采用全局曝光方式,也可以采用卷帘曝光方式。其中,全局曝光方式是指每一行有效图像的曝光开始时刻均相同,且每一行有效图像的曝光结束时刻均相同。换句话说,全局曝光方式是所有行有效图像同时进行曝光并且同时结束曝光的一种曝光方式。卷帘曝光方式是指不同行有效图像的曝光时间不完全重合,也即是,一行有效图像的曝光开始时刻都晚于上一行有效图像的曝光开始时刻,且一行有效图像的曝光结束时刻都晚于上一行有效图像的曝光结束时刻。另外,卷帘曝光方式中每一行有效图像结束曝光后可以进行数据输出,因此,从第一行有效图像的数据开始输出时刻到最后一行有效图像的数据结束输出时刻之间的时间可以表示为读出时间。
示例性地,参见图13,图13为一种卷帘曝光方式的示意图。从图13可以看出,第1行 有效图像在T1时刻开始曝光,在T3时刻结束曝光,第2行有效图像在T2时刻开始曝光,在T4时刻结束曝光,T2时刻相比于T1时刻向后推移了一个时间段,T4时刻相比于T3时刻向后推移了一个时间段。另外,第1行有效图像在T3时刻结束曝光并开始输出数据,在T5时刻结束数据的输出,第n行有效图像在T6时刻结束曝光并开始输出数据,在T7时刻结束数据的输出,则T3~T7时刻之间的时间即为读出时间。
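图13所示"逐行推移"的卷帘曝光时序可以用如下 Python 草图计算各行的曝光起止时刻(行间推移量 row_delay 等参数名与取值均为假设):

```python
def rolling_rows(t_start, exposure, row_delay, rows):
    """卷帘曝光示意:第 i 行的曝光开始时刻比上一行晚 row_delay,
    返回 [(该行曝光开始时刻, 该行曝光结束时刻), ...]。"""
    return [(t_start + i * row_delay, t_start + i * row_delay + exposure)
            for i in range(rows)]

# 第 1 行在 0 时刻开始曝光、10 时刻结束;后续各行依次推移 2 个时间单位
times = rolling_rows(0, 10, 2, 3)
```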
在一些实施例中,当图像传感器01采用全局曝光方式进行多次曝光时,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与第一预设曝光的曝光时间段存在交集,或者第一预设曝光的曝光时间段是近红外补光的时间段的子集。这样,即可实现至少在第一预设曝光的部分曝光时间段内进行近红外补光,在第二预设曝光的整个曝光时间段内不进行近红外补光,从而不会对第二预设曝光造成影响。
例如,参见图14,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集。参见图15,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段与第一预设曝光的曝光时间段存在交集。参见图16,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,第一预设曝光的曝光时间段是近红外补光的时间段的子集。图14至图16仅是一种示例,第一预设曝光和第二预设曝光的排序可以不限于这些示例。
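全局曝光方式下近红外补光时间段的三种合法关系(图14至图16)可以用区间运算示意如下(Python 草图,区间按左闭右开处理,函数名均为假设):

```python
def _intersect(a, b):
    """两个时间段是否存在交集(区间按左闭右开处理)。"""
    return max(a[0], b[0]) < min(a[1], b[1])

def _subset(a, b):
    """时间段 a 是否为时间段 b 的子集。"""
    return b[0] <= a[0] and a[1] <= b[1]

def valid_global_fill(fill, first_exp, second_exp):
    """示意:补光时间段与最邻近第二预设曝光无交集,
    且与第一预设曝光满足子集/存在交集/反向包含三种关系之一。"""
    if _intersect(fill, second_exp):
        return False
    return (_subset(fill, first_exp) or _intersect(fill, first_exp)
            or _subset(first_exp, fill))
```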
在另一些实施例中,当图像传感器01采用卷帘曝光方式进行多次曝光时,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集。并且,近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。或者,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。或者,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
例如,参见图17,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。参见图18,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。参见图19,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之 前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。图17至图19中,针对第一预设曝光和第二预设曝光,倾斜虚线表示曝光开始时刻,倾斜实线表示曝光结束时刻,针对第一预设曝光,竖直虚线之间表示第一预设曝光对应的近红外补光的时间段,图17至图19仅是一种示例,第一预设曝光和第二预设曝光的排序可以不限于这些示例。
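以图17的情形为例,卷帘曝光下近红外补光的时刻约束可以用如下 Python 草图表达(仅覆盖第一种约束情形,参数名均为假设):

```python
def valid_rolling_fill(fill_start, fill_end,
                       first_row_exp_end, last_row_exp_start, second_exp):
    """示意(对应图17情形):补光开始时刻不早于第一预设曝光中
    最后一行有效图像的曝光开始时刻,补光结束时刻不晚于第一行有效
    图像的曝光结束时刻,且补光时间段与最邻近第二预设曝光无交集。"""
    no_overlap = fill_end <= second_exp[0] or fill_start >= second_exp[1]
    return (no_overlap
            and fill_start >= last_row_exp_start
            and fill_end <= first_row_exp_end)
```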
其中,多次曝光可以包括奇数次曝光和偶数次曝光,这样,第一预设曝光和第二预设曝光可以包括但不限于如下几种方式:
第一种可能的实现方式,第一预设曝光为奇数次曝光中的一次曝光,第二预设曝光为偶数次曝光中的一次曝光。这样,多次曝光可以包括按照奇偶次序排列的第一预设曝光和第二预设曝光。例如,多次曝光中的第1次曝光、第3次曝光、第5次曝光等奇数次曝光均为第一预设曝光,第2次曝光、第4次曝光、第6次曝光等偶数次曝光均为第二预设曝光。
第二种可能的实现方式,第一预设曝光为偶数次曝光中的一次曝光,第二预设曝光为奇数次曝光中的一次曝光,这样,多次曝光可以包括按照奇偶次序排列的第一预设曝光和第二预设曝光。例如,多次曝光中的第1次曝光、第3次曝光、第5次曝光等奇数次曝光均为第二预设曝光,第2次曝光、第4次曝光、第6次曝光等偶数次曝光均为第一预设曝光。
第三种可能的实现方式,第一预设曝光为指定的奇数次曝光中的一次曝光,第二预设曝光为除指定的奇数次曝光之外的其他曝光中的一次曝光,也即是,第二预设曝光可以为多次曝光中的奇数次曝光,也可以为多次曝光中的偶数次曝光。
第四种可能的实现方式,第一预设曝光为指定的偶数次曝光中的一次曝光,第二预设曝光为除指定的偶数次曝光之外的其他曝光中的一次曝光,也即是,第二预设曝光可以为多次曝光中的奇数次曝光,也可以为多次曝光中的偶数次曝光。
第五种可能的实现方式,第一预设曝光为第一曝光序列中的一次曝光,第二预设曝光为第二曝光序列中的一次曝光。
第六种可能的实现方式,第一预设曝光为第二曝光序列中的一次曝光,第二预设曝光为第一曝光序列中的一次曝光。
其中,上述多次曝光包括多个曝光序列,第一曝光序列和第二曝光序列为该多个曝光序列中的同一个曝光序列或者两个不同的曝光序列,每个曝光序列包括N次曝光,该N次曝光包括1次第一预设曝光和N-1次第二预设曝光,或者,该N次曝光包括1次第二预设曝光和N-1次第一预设曝光,N为大于2的正整数。
例如,每个曝光序列包括3次曝光,这3次曝光可以包括1次第一预设曝光和2次第二预设曝光,这样,每个曝光序列的第1次曝光可以为第一预设曝光,第2次和第3次曝光为第二预设曝光。也即是,每个曝光序列可以表示为:第一预设曝光、第二预设曝光、第二预设曝光。或者,这3次曝光可以包括1次第二预设曝光和2次第一预设曝光,这样,每个曝光序列的第1次曝光可以为第二预设曝光,第2次和第3次曝光为第一预设曝光。也即是,每个曝光序列可以表示为:第二预设曝光、第一预设曝光、第一预设曝光。
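上述"每个曝光序列含 N 次曝光"的组织方式可以用一个简单的 Python 草图生成(这里仅示意"第一预设曝光在前"的一种排列,函数名与参数均为假设):

```python
def exposure_sequence(n, first_count=1):
    """生成一个曝光序列:n 次曝光中含 first_count 次第一预设曝光,
    其余为第二预设曝光;n 为大于 2 的正整数。"""
    assert n > 2 and 0 < first_count < n
    return ['first'] * first_count + ['second'] * (n - first_count)

# 文中示例:每个曝光序列包括 3 次曝光
seq1 = exposure_sequence(3)                 # 1 次第一预设曝光 + 2 次第二预设曝光
seq2 = exposure_sequence(3, first_count=2)  # 2 次第一预设曝光 + 1 次第二预设曝光
```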
上述仅提供了六种第一预设曝光和第二预设曝光的可能的实现方式,实际应用中,不限于上述六种可能的实现方式,本申请实施例对此不做限定。
在一些实施例中,参见图20,滤光组件03还包括第二滤光片032和切换部件033,第一 滤光片031和第二滤光片032均与切换部件033连接。切换部件033,用于将第二滤光片032切换到图像传感器01的入光侧,在第二滤光片032切换到图像传感器01的入光侧之后,第二滤光片032使可见光波段的光通过,阻挡近红外光波段的光,图像传感器01,用于通过曝光产生并输出第三图像信号。
需要说明的是,切换部件033用于将第二滤光片032切换到图像传感器01的入光侧,也可以理解为第二滤光片032替换第一滤光片031在图像传感器01的入光侧的位置。在第二滤光片032切换到图像传感器01的入光侧之后,第一补光装置021可以处于关闭状态也可以处于开启状态。
综上,当环境光中的可见光强度较弱时,例如夜晚,可以通过第一补光装置021进行频闪式的补光,使图像传感器01产生并输出包含近红外亮度信息的第一图像信号,以及包含可见光亮度信息的第二图像信号,且由于第一图像信号和第二图像信号均由同一个图像传感器01获取,所以第一图像信号的视点与第二图像信号的视点相同,从而通过第一图像信号和第二图像信号可以获取完整的外部场景的信息。在可见光强度较强时,例如白天,环境光中近红外光的占比也较高,采集的图像的色彩还原度不佳,此时可以通过图像传感器01产生并输出包含可见光亮度信息的第三图像信号,这样即使在白天也可以采集到色彩还原度较好的图像。由此,不论可见光强度强弱,或者说不论白天还是夜晚,均能高效、简便地获取外部场景的真实色彩信息,提高了图像采集单元1的使用灵活性,并且还可以方便地与其他图像采集单元兼容。并且,这种情况下,图像处理器2可以对第三图像信号进行处理,输出第三图像信息,人脸分析单元3可以对第三图像信息进行人脸分析,得到人脸分析结果。
本申请利用图像传感器01的曝光时序来控制补光装置的近红外补光时序,以便在第一预设曝光的过程中进行近红外补光并产生第一图像信号,在第二预设曝光的过程中不进行近红外补光并产生第二图像信号,这样的数据采集方式,可以在结构简单、降低成本的同时直接采集到亮度信息不同的第一图像信号和第二图像信号,也即通过一个图像传感器01就可以获取两种不同的图像信号,使得该图像采集单元1更加简便,进而使得获取第一图像信号和第二图像信号也更加高效。并且,第一图像信号和第二图像信号均由同一个图像传感器01产生并输出,所以第一图像信号对应的视点与第二图像信号对应的视点相同。因此,通过第一图像信号和第二图像信号可以共同获取外部场景的信息,且不会存在因第一图像信号对应的视点与第二图像信号对应的视点不相同,而导致根据第一图像信号和第二图像信号生成的图像不对齐。
2、图像处理器2
图像处理器2可以是一个包含信号处理算法或程序的逻辑平台。例如,图像处理器2可以是基于X86或ARM架构的计算机,也可以是FPGA(Field-Programmable Gate Array,现场可编程门阵列)逻辑电路。
参见图21,图像处理器2用于采用第一处理参数对第一图像信号和第二图像信号中的至少一个进行处理,得到第一图像信息。并且,图像处理器2还用于采用第二处理参数对第一图像信号和第二图像信号中的至少一个进行处理,得到第二图像信息,然后将第二图像信息传输到显示设备,由显示设备显示第二图像信息。
本申请实施例中可以根据人脸分析和显示这两种不同的应用需求,对第一图像信号和第二图像信号进行灵活地组合处理,从而可以使这两种不同的应用需求均能得到较好的满足。
需要说明的是,图像处理器2对第一图像信号和第二图像信号中的至少一个进行的处理 可以包括黑电平、图像插值、数字增益、白平衡、图像降噪、图像增强、图像融合等中的至少一种。
另外,第一处理参数和第二处理参数可以相同,也可以不同。可选地,当第一图像信息和第二图像信息均是对第一图像信号处理得到时,或者,当第一图像信息和第二图像信息均是对第二图像信号处理得到时,或者,当第一图像信息和第二图像信息均是对第一图像信号和第二图像信号处理得到时,第一处理参数和第二处理参数可以不同。第一处理参数可以根据显示需求预先进行设置,第二处理参数可以根据人脸分析需求预先进行设置。第一处理参数和第二处理参数是对第一图像信号和第二图像信号中的至少一个进行黑电平、图像插值、数字增益、白平衡、图像降噪、图像增强、图像融合等处理时所需的参数。
再者,由于第一图像信息用于进行人脸分析,所以图像处理器2可以灵活地选取较为合适的第一处理参数和图像信号组合来得到第一图像信息,以达到更利于人脸分析的图像效果,提高人脸识别准确率。同理,由于第二图像信息用于进行显示,所以图像处理器2可以灵活地选取较为合适的第二处理参数和图像信号组合来得到第二图像信息,以达到质量更优的图像显示效果。
作为一种示例,图像处理器2可以采用第一处理参数对包含有近红外光信息的第一图像信号进行处理,输出灰度图像信息作为第一图像信息。这种情况下,由于第一图像信号包含有近红外光信息,所以对第一图像信号进行处理得到的灰度图像信息的图像质量较好,比较适合用于进行人脸分析,可以提高人脸识别准确率。
作为一种示例,图像处理器2可以采用第二处理参数对包含有可见光信息的第二图像信号进行处理,输出彩色图像信息作为第二图像信息。这种情况下,由于第二图像信号包含有可见光信息,所以对第二图像信号进行处理得到的彩色图像信息的色彩还原较为准确,比较适合用于进行显示,可以提高图像显示效果。
作为一种示例,图像处理器2可以采用第一处理参数对第一图像信号和第二图像信号进行处理,输出第一图像信息。这种情况下,图像处理器2需要对第一图像信号和第二图像信号进行图像融合处理。
作为一种示例,图像处理器2可以采用第二处理参数对第一图像信号和第二图像信号进行处理,输出第二图像信息。这种情况下,图像处理器2需要对第一图像信号和第二图像信号进行图像融合处理。
需要说明的是,由于图像采集单元1产生第一图像信号的时间与产生第二图像信号的时间不同,所以第一图像信号和第二图像信号并不是同一时间进入图像处理器2。如果图像处理器2需要对第一图像信号和第二图像信号进行图像融合处理,则需要先对第一图像信号和第二图像信号进行同步。
因而,图像处理器2可以包括缓存,该缓存用于存储第一图像信号和第二图像信号中的至少一个,以实现第一图像信号和第二图像信号的同步。这种情况下,图像处理器2可以对同步后的第一图像信号和第二图像信号进行图像融合处理,来得到第一图像信息。当然,该缓存也可以用于存储其它信息,如可以用于存储第一图像信息和第二图像信息中的至少一个。
例如,当第一图像信号早于第二图像信号进入图像处理器2时,图像处理器2可以先将第一图像信号存储在该缓存中,待第二图像信号也进入图像处理器2后,再对第一图像信号和第二图像信号进行图像融合处理。又例如,当第二图像信号早于第一图像信号进入图像处理器2时,图像处理器2可以先将第二图像信号存储在该缓存中,待第一图像信号也进入图 像处理器2后,再对第一图像信号和第二图像信号进行图像融合处理。
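上述"利用缓存同步先后到达的两路图像信号"的过程,可以用如下 Python 草图示意(类名与配对策略均为假设,实际实现还需考虑帧号对齐、超时丢弃等):

```python
class FrameSync:
    """缓存先到达的一路图像信号,待另一路到达后配成一对再送去图像融合。"""
    def __init__(self):
        self.pending = {}

    def push(self, kind, frame):
        """kind 为 'first'(第一图像信号)或 'second'(第二图像信号)。
        凑齐一对时返回 (第一图像信号, 第二图像信号),否则返回 None。"""
        other = 'second' if kind == 'first' else 'first'
        if other in self.pending:
            buffered = self.pending.pop(other)
            return (buffered, frame) if other == 'first' else (frame, buffered)
        self.pending[kind] = frame
        return None
```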
进一步地,图像处理器2还用于在对第一图像信号和第二图像信号中的至少一个进行处理的过程中,调整图像采集单元1的曝光参数。具体地,图像处理器2可以在对第一图像信号和第二图像信号中的至少一个进行处理的过程中,根据处理过程中产生的属性参数,确定曝光参数调整值,然后将携带有该曝光参数调整值的控制信号发送给图像采集单元1,由图像采集单元1根据该曝光参数调整值对自身的曝光参数进行调整。
需要说明的是,在对第一图像信号和第二图像信号中的至少一个进行处理的过程中产生的属性参数可以包括图像分辨率、图像亮度、图像对比度等。
另外,图像处理器2对图像采集单元1的曝光参数进行调整,即是对图像采集单元1中的图像传感器01的曝光参数进行调整。
再者,由于补光器02的工作状态和滤光组件03的工作状态均与图像传感器01的曝光参数紧密关联,所以图像处理器2在对图像传感器01的曝光参数进行调整的同时,也可以对补光器02的工作状态和滤光组件03的工作状态进行控制。例如,图像处理器2可以控制补光器02中第一补光装置021的开关状态,也可以控制补光器02中第二补光装置022的开关状态,也可以控制滤光组件03中第一滤光片031与第二滤光片032之间的切换。
3、人脸分析单元3
人脸分析单元3是包含人脸分析算法或程序的逻辑平台。例如,人脸分析单元3可以是基于X86或ARM架构的计算机,也可以是FPGA逻辑电路。人脸分析单元3可以与图像处理器2共用硬件,如人脸分析单元3与图像处理器2可以运行于同一FPGA逻辑电路上。当然,人脸分析单元3与图像处理器2也可以不共用硬件,本申请实施例对此不作限定。
参见图22,人脸分析单元3可以包括:人脸检测子单元311、人脸识别子单元312和人脸数据库313。
一种可能的实现方式中,人脸数据库313中存储有至少一个参考人脸信息。人脸检测子单元311用于对第一图像信息进行人脸检测,输出检测到的人脸图像,并对该人脸图像进行活体鉴别。人脸识别子单元312用于在该人脸图像通过活体鉴别时,提取该人脸图像的人脸信息,将该人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
需要说明的是,人脸数据库313中存储的至少一个参考人脸信息可以预先进行设置。例如,该至少一个参考人脸信息可以是预先设置的拥有某种权限(如打开门禁权限)的用户的人脸图像的人脸信息。
另外,人脸检测子单元311可以对第一图像信息进行人脸检测,并对检测到的人脸图像进行活体鉴别,以防止照片、录像、面具等的伪装攻击。并且,当该人脸图像未通过活体鉴别时,可以直接结束操作,确定人脸分析结果为人脸识别失败。
再者,人脸识别子单元312可以在该人脸图像通过活体鉴别时,将该人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对。如果该人脸图像的人脸信息与任一参考人脸信息比对成功,则可以确定人脸分析结果为识别成功;如果该人脸图像的人脸信息与该至少一个参考人脸信息均比对失败,则可以确定人脸分析结果为识别失败。
需要说明的是,人脸信息可以为人脸特征数据等,人脸特征数据可以包括脸型曲率、面部轮廓点(如眼虹膜、鼻翼和嘴角等)的属性(如大小、位置和距离等)等。
作为一种示例,人脸识别子单元312在将该人脸图像的人脸信息与人脸数据库313中存 储的至少一个参考人脸信息进行比对时,对于该至少一个参考人脸信息中的任意一个参考人脸信息,人脸识别子单元312可以计算这个参考人脸信息与该人脸图像的人脸信息之间的匹配度,当该匹配度大于或等于匹配度阈值时,确定这个参考人脸信息与该人脸图像的人脸信息比对成功,当该匹配度小于匹配度阈值时,确定这个参考人脸信息与该人脸图像的人脸信息比对失败。匹配度阈值可以预先进行设置。
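上述"计算匹配度并与匹配度阈值比较"的比对流程可以用如下 Python 草图示意(这里以余弦相似度作为匹配度的一种假设性实现,阈值 0.8 亦为假设,实际的人脸特征比对算法由具体产品决定):

```python
import math

def match_degree(a, b):
    """以余弦相似度作为人脸信息匹配度的一种示例。"""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def compare_face(face_info, references, threshold=0.8):
    """与人脸数据库中的各参考人脸信息逐一比对:
    任一参考人脸信息的匹配度不小于匹配度阈值即识别成功。"""
    for ref in references:
        if match_degree(face_info, ref) >= threshold:
            return '识别成功'
    return '识别失败'
```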
另一种可能的实现方式中,人脸数据库313中存储有至少一个参考人脸信息。人脸检测子单元311用于对第一图像信息进行人脸检测,输出检测到的第一人脸图像,对第一人脸图像进行活体鉴别,以及对第二图像信息进行人脸检测,输出检测到的第二人脸图像,对第二人脸图像进行活体鉴别。人脸识别子单元312用于在第一人脸图像和第二人脸图像均通过活体鉴别时,提取第一人脸图像的人脸信息,将第一人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
需要说明的是,人脸数据库313中存储的至少一个参考人脸信息可以预先进行设置。例如,该至少一个参考人脸信息可以是预先设置的拥有某种权限的用户的人脸图像的人脸信息。
另外,人脸检测子单元311可以对第一图像信息和第二图像信息均进行人脸检测,并对检测到的第一人脸图像和第二人脸图像均进行活体鉴别,当第一人脸图像和第二人脸图像中的任意一个未通过活体鉴别时,可以直接结束操作,确定人脸分析结果为人脸识别失败。如此,人脸检测子单元311是通过第一图像信息和第二图像信息实现了多光谱的活体鉴别,从而有效提高了活体鉴别的准确率。
再者,人脸识别子单元312可以在第一人脸图像和第二人脸图像均通过活体鉴别时,将第一人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对。如果第一人脸图像的人脸信息与任一参考人脸信息比对成功,则可以确定人脸分析结果为识别成功;如果第一人脸图像的人脸信息与该至少一个参考人脸信息均比对失败,则可以确定人脸分析结果为识别失败。
需要说明的是,人脸信息可以为人脸特征数据等,人脸特征数据可以包括脸型曲率、面部轮廓点的属性等。
作为一种示例,人脸识别子单元312在将第一人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对时,对于该至少一个参考人脸信息中的任意一个参考人脸信息,人脸识别子单元312可以计算这个参考人脸信息与第一人脸图像的人脸信息之间的匹配度,当该匹配度大于或等于匹配度阈值时,确定这个参考人脸信息与第一人脸图像的人脸信息比对成功,当该匹配度小于匹配度阈值时,确定这个参考人脸信息与第一人脸图像的人脸信息比对失败。匹配度阈值可以预先进行设置。
又一种可能的实现方式中,人脸数据库313中存储有至少一个参考人脸信息。人脸检测子单元311用于对第二图像信息进行人脸检测,输出检测到的第二人脸图像,对第二人脸图像进行活体鉴别,以及在第二人脸图像通过活体鉴别时,对第一图像信息进行人脸检测,输出检测到的第一人脸图像。人脸识别子单元312用于提取第一人脸图像的人脸信息,将第一人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
例如,第一图像信息是对第一图像信号处理得到的灰度图像信息,第二图像信息是对第二图像信号处理得到的彩色图像信息,人脸数据库313中存储有至少一个参考人脸信息;人脸检测子单元311用于对彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对彩色人脸图像进行活体鉴别,以及在彩色人脸图像通过活体鉴别时,对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;人脸识别子单元312用于提取灰度人脸图像的人脸信息,将灰度人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
又例如,第一图像信息是对第一图像信号处理得到的灰度图像信息,第二图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,人脸数据库313中存储有至少一个参考人脸信息;人脸检测子单元311用于对融合图像信息进行人脸检测,输出检测到的融合人脸图像,对融合人脸图像进行活体鉴别,以及在融合人脸图像通过活体鉴别时,对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;人脸识别子单元312用于提取灰度人脸图像的人脸信息,将灰度人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
又例如,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,第二图像信息是对第一图像信号处理得到的灰度图像信息,人脸数据库313中存储有至少一个参考人脸信息;人脸检测子单元311用于对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像,对灰度人脸图像进行活体鉴别,以及在灰度人脸图像通过活体鉴别时,对融合图像信息进行人脸检测,输出检测到的融合人脸图像;人脸识别子单元312用于提取融合人脸图像的人脸信息,将融合人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
再例如,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,第二图像信息是对第二图像信号处理得到的彩色图像信息,人脸数据库313中存储有至少一个参考人脸信息;人脸检测子单元311用于对彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对彩色人脸图像进行活体鉴别,以及在彩色人脸图像通过活体鉴别时,对融合图像信息进行人脸检测,输出检测到的融合人脸图像;人脸识别子单元312用于提取融合人脸图像的人脸信息,将融合人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
还例如,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的第一融合图像信息,第二图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的第二融合图像信息,人脸数据库313中存储有至少一个参考人脸信息;人脸检测子单元311用于对第二融合图像信息进行人脸检测,输出检测到的第二融合人脸图像,对第二融合人脸图像进行活体鉴别,以及在第二融合人脸图像通过活体鉴别时,对第一融合图像信息进行人脸检测,输出检测到的第一融合人脸图像;人脸识别子单元312用于提取第一融合人脸图像的人脸信息,将第一融合人脸图像的人脸信息与人脸数据库313中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
进一步地,如图23所示,本申请实施例中不仅可以由图像处理器2将第二图像信息传输给显示设备进行显示,人脸分析单元3在得到人脸分析结果后,也可以将人脸分析结果传输到显示设备,由显示设备对该人脸分析结果进行显示。如此,用户就可以及时获知该人脸分析结果。
在本申请实施例中,人脸识别装置包括图像采集单元1、图像处理器2和人脸分析单元3。图像采集单元1包括滤光组件03,滤光组件包括第一滤光片031,第一滤光片031使可见光和部分近红外光通过。图像采集单元1可以通过第一预设曝光和第二预设曝光同时采集到包 含近红外光信息(如近红外光亮度信息)的第一图像信号和包含可见光信息的第二图像信号。相对于需要通过后期将采集的原始图像信号中的近红外光信息和可见光信息进行分离的图像处理方式,本申请中图像采集单元1可以直接采集到第一图像信号和第二图像信号,采集过程简单有效。如此,图像处理器2对第一图像信号和第二图像信号中的至少一个进行处理后得到的第一图像信息的质量更高,继而人脸分析单元3对第一图像信息进行人脸分析后就可以得到更为准确的人脸分析结果,从而可以有效提高人脸识别准确率。
图24是本申请实施例提供的一种门禁设备的结构示意图。参见图24,该门禁设备包括门禁控制器001和上述图1-图23任一所示的人脸识别装置002。
人脸识别装置002用于将人脸分析结果传输到门禁控制器001。门禁控制器001用于在该人脸分析结果为识别成功时,输出用于打开门禁的控制信号。门禁控制器001在该人脸分析结果为识别失败时,不执行操作。
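门禁控制器根据人脸分析结果决定是否输出开门控制信号的逻辑,可以用如下 Python 草图示意(函数名与控制信号取值均为假设):

```python
def access_control(face_result):
    """示意:人脸分析结果为识别成功时输出打开门禁的控制信号,
    识别失败时不执行操作(返回 None)。"""
    if face_result == '识别成功':
        return 'OPEN_DOOR'
    return None
```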
在本申请实施例中,门禁设备包括门禁控制器001和人脸识别装置002,人脸识别装置002的人脸识别准确率较高,因而可以保证门禁控制器001的控制准确性,保证门禁安全。
需要说明的是,本申请实施例提供的人脸识别装置不仅可以应用于门禁设备,也可以应用于其它有人脸识别需求的设备中,如支付设备等,本申请实施例对此不作限定。
下面以基于上述图1-图23所示的实施例提供的人脸识别装置来对人脸识别方法进行说明。参见图25,该方法包括:
步骤251:通过第一滤光片使可见光和部分近红外光通过。
步骤252:通过图像采集单元采集第一图像信号和第二图像信号,第一图像信号是根据第一预设曝光产生的图像信号,第二图像信号是根据第二预设曝光产生的图像信号,其中,至少在第一预设曝光的部分曝光时间段内进行近红外补光,在第二预设曝光的曝光时间段内不进行近红外补光。
步骤253:通过图像处理器对第一图像信号和第二图像信号中的至少一个进行处理,得到第一图像信息。
步骤254:通过人脸分析单元对第一图像信息进行人脸分析,得到人脸分析结果。
在一种可能的实现方式中,图像采集单元包括:图像传感器和补光器,图像传感器位于所述滤光组件的出光侧,补光器包括第一补光装置;
通过图像传感器进行多次曝光,以产生并输出第一图像信号和第二图像信号,第一预设曝光和第二预设曝光为所述多次曝光的其中两次曝光;通过第一补光装置进行近红外补光。
在一种可能的实现方式中,第一补光装置进行近红外补光的中心波长为设定特征波长或者落在设定特征波长范围时,通过第一滤光片的近红外光的中心波长和/或波段宽度达到约束条件。
在一种可能的实现方式中,
第一补光装置进行近红外补光的中心波长为750±10纳米的波长范围内的任一波长;或者
第一补光装置进行近红外补光的中心波长为780±10纳米的波长范围内的任一波长;或者
第一补光装置进行近红外补光的中心波长为940±10纳米的波长范围内的任一波长。
在一种可能的实现方式中,约束条件包括:
通过第一滤光片的近红外光的中心波长与第一补光装置进行近红外补光的中心波长之间的差值位于波长波动范围内,波长波动范围为0~20纳米;或者
通过第一滤光片的近红外光的半带宽小于或等于50纳米;或者
第一波段宽度小于第二波段宽度;其中,第一波段宽度是指通过第一滤光片的近红外光的波段宽度,第二波段宽度是指被第一滤光片阻挡的近红外光的波段宽度;或者
第三波段宽度小于参考波段宽度,第三波段宽度是指通过率大于设定比例的近红外光的波段宽度,参考波段宽度为50纳米~150纳米的波段范围内的任一波段宽度。
在一种可能的实现方式中,图像传感器包括多个感光通道,每个感光通道用于感应至少一种可见光波段的光,以及感应近红外波段的光。
在一种可能的实现方式中,图像传感器采用全局曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与第一预设曝光的曝光时间段存在交集,或者第一预设曝光的曝光时间段是近红外补光的时间段的子集。
在一种可能的实现方式中,图像传感器采用卷帘曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集;
近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻;
或者,
近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻;或者
近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
在一种可能的实现方式中,第一预设曝光与第二预设曝光的至少一个曝光参数不同,至少一个曝光参数为曝光时间、曝光增益、光圈大小中的一种或多种,曝光增益包括模拟增益,和/或,数字增益。
在一种可能的实现方式中,第一预设曝光和第二预设曝光的至少一个曝光参数相同,至少一个曝光参数包括曝光时间、曝光增益、光圈大小中的一种或多种,曝光增益包括模拟增益,和/或,数字增益。
在一种可能的实现方式中,
通过图像处理器采用第一处理参数对第一图像信号和第二图像信号中的至少一个进行处理,得到第一图像信息;通过图像处理器采用第二处理参数对第一图像信号和第二图像信号中的至少一个进行处理,得到第二图像信息;通过图像处理器将第二图像信息传输到显示设备,由显示设备显示第二图像信息。
在一种可能的实现方式中,当第一图像信息和第二图像信息均是对第一图像信号处理得 到时,或者,当第一图像信息和第二图像信息均是对第二图像信号处理得到时,或者,当第一图像信息和第二图像信息均是对第一图像信号和第二图像信号处理得到时,第一处理参数和第二处理参数不同。
在一种可能的实现方式中,图像处理器对第一图像信号和第二图像信号中的至少一个进行的处理包括黑电平、图像插值、数字增益、白平衡、图像降噪、图像增强、图像融合中的至少一种。
在一种可能的实现方式中,图像处理器包括缓存;
通过缓存存储第一图像信号和第二图像信号中的至少一个,或者,通过缓存存储第一图像信息和第二图像信息中的至少一个。
在一种可能的实现方式中,通过图像处理器在对第一图像信号和第二图像信号中的至少一个进行处理的过程中,调整图像采集单元的曝光参数。
在一种可能的实现方式中,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对第一图像信息进行人脸检测,输出检测到的人脸图像,并对人脸图像进行活体鉴别;
通过人脸识别子单元在人脸图像通过活体鉴别时,提取人脸图像的人脸信息,将人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对第一图像信息进行人脸检测,输出检测到的第一人脸图像,对第一人脸图像进行活体鉴别,以及对第二图像信息进行人脸检测,输出检测到的第二人脸图像,对第二人脸图像进行活体鉴别;
通过人脸识别子单元在第一人脸图像和第二人脸图像均通过活体鉴别时,提取第一人脸图像的人脸信息,将第一人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,第一图像信息是对第一图像信号处理得到的灰度图像信息,第二图像信息是对第二图像信号处理得到的彩色图像信息,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对彩色人脸图像进行活体鉴别,以及在彩色人脸图像通过活体鉴别时,对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;
通过人脸识别子单元提取灰度人脸图像的人脸信息,将灰度人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,第一图像信息是对第一图像信号处理得到的灰度图像信息,第二图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对融合图像信息进行人脸检测,输出检测到的融合人脸图像,对融合人脸图像进行活体鉴别,以及在融合人脸图像通过活体鉴别时,对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;
通过人脸识别子单元提取灰度人脸图像的人脸信息,将灰度人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,第二图像信息是对第一图像信号处理得到的灰度图像信息,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对灰度图像信息进行人脸检测,输出检测到的灰度人脸图像,对灰度人脸图像进行活体鉴别,以及在灰度人脸图像通过活体鉴别时,对融合图像信息进行人脸检测,输出检测到的融合人脸图像;
通过人脸识别子单元提取融合人脸图像的人脸信息,将融合人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的融合图像信息,第二图像信息是对第二图像信号处理得到的彩色图像信息,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对彩色人脸图像进行活体鉴别,以及在彩色人脸图像通过活体鉴别时,对融合图像信息进行人脸检测,输出检测到的融合人脸图像;
通过人脸识别子单元提取融合人脸图像的人脸信息,将融合人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,第一图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的第一融合图像信息,第二图像信息是对第一图像信号和第二图像信号进行图像融合处理得到的第二融合图像信息,人脸分析单元包括:人脸检测子单元、人脸识别子单元和人脸数据库;
通过人脸数据库存储至少一个参考人脸信息;
通过人脸检测子单元对第二融合图像信息进行人脸检测,输出检测到的第二融合人脸图像,对第二融合人脸图像进行活体鉴别,以及在第二融合人脸图像通过活体鉴别时,对第一融合图像信息进行人脸检测,输出检测到的第一融合人脸图像;
通过人脸识别子单元提取第一融合人脸图像的人脸信息,将第一融合人脸图像的人脸信息与人脸数据库中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
在一种可能的实现方式中,通过人脸分析单元将人脸分析结果传输到显示设备,由显示设备对人脸分析结果进行显示。
需要说明的是,由于本实施例与上述图1-图23所示的实施例可以采用同样的发明构思,因此,关于本实施例内容的解释可以参考上述图1-图23所示实施例中相关内容的解释,此处不再赘述。
在本申请实施例中,人脸识别装置包括图像采集单元、图像处理器和人脸分析单元。图像采集单元包括滤光组件,滤光组件包括第一滤光片,第一滤光片使可见光和部分近红外光 通过。图像采集单元可以通过第一预设曝光和第二预设曝光同时采集到包含近红外光信息(如近红外光亮度信息)的第一图像信号和包含可见光信息的第二图像信号。相对于需要通过后期将采集的原始图像信号中的近红外光信息和可见光信息进行分离的图像处理方式,本申请中图像采集单元可以直接采集到第一图像信号和第二图像信号,采集过程简单有效。如此,图像处理器对第一图像信号和第二图像信号中的至少一个进行处理后得到的第一图像信息的质量更高,继而人脸分析单元对第一图像信息进行人脸分析后就可以得到更为准确的人脸分析结果,从而可以有效提高人脸识别准确率。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (25)

  1. 一种人脸识别装置,其特征在于,所述人脸识别装置包括:图像采集单元(1)、图像处理器(2)和人脸分析单元(3);
    所述图像采集单元(1)包括滤光组件(03),所述滤光组件(03)包括第一滤光片(031),所述第一滤光片(031)使可见光和部分近红外光通过;
    所述图像采集单元(1),用于采集第一图像信号和第二图像信号,所述第一图像信号是根据第一预设曝光产生的图像信号,所述第二图像信号是根据第二预设曝光产生的图像信号,其中,至少在所述第一预设曝光的部分曝光时间段内进行近红外补光,在所述第二预设曝光的曝光时间段内不进行近红外补光;
    所述图像处理器(2),用于对所述第一图像信号和所述第二图像信号中的至少一个进行处理,得到第一图像信息;
    所述人脸分析单元(3),用于对所述第一图像信息进行人脸分析,得到人脸分析结果。
  2. 如权利要求1所述的人脸识别装置,其特征在于,所述图像采集单元(1)包括:图像传感器(01)和补光器(02),所述图像传感器(01)位于所述滤光组件(03)的出光侧;
    所述图像传感器(01),用于通过多次曝光产生并输出所述第一图像信号和所述第二图像信号,所述第一预设曝光和所述第二预设曝光为所述多次曝光的其中两次曝光;
    所述补光器(02)包括第一补光装置(021),所述第一补光装置(021)用于进行近红外补光。
  3. 如权利要求2所述的人脸识别装置,其特征在于,
    所述第一补光装置(021)进行近红外补光的中心波长为设定特征波长或者落在设定特征波长范围时,通过所述第一滤光片(031)的近红外光的中心波长和/或波段宽度达到约束条件。
  4. 如权利要求3所述的人脸识别装置,其特征在于,
    所述第一补光装置(021)进行近红外补光的中心波长为750±10纳米的波长范围内的任一波长;或者
    所述第一补光装置(021)进行近红外补光的中心波长为780±10纳米的波长范围内的任一波长;或者
    所述第一补光装置(021)进行近红外补光的中心波长为940±10纳米的波长范围内的任一波长。
  5. 如权利要求3所述的人脸识别装置,其特征在于,所述约束条件包括:
    通过所述第一滤光片(031)的近红外光的中心波长与所述第一补光装置(021)进行近红外补光的中心波长之间的差值位于波长波动范围内,所述波长波动范围为0~20纳米;或者
    通过所述第一滤光片(031)的近红外光的半带宽小于或等于50纳米;或者
    第一波段宽度小于第二波段宽度;其中,所述第一波段宽度是指通过所述第一滤光片 (031)的近红外光的波段宽度,所述第二波段宽度是指被所述第一滤光片(031)阻挡的近红外光的波段宽度;或者
    第三波段宽度小于参考波段宽度,所述第三波段宽度是指通过率大于设定比例的近红外光的波段宽度,所述参考波段宽度为50纳米~150纳米的波段范围内的任一波段宽度。
  6. 如权利要求2所述的人脸识别装置,其特征在于,所述图像传感器(01)包括多个感光通道,每个感光通道用于感应至少一种可见光波段的光,以及感应近红外波段的光。
  7. 如权利要求2所述的人脸识别装置,其特征在于,
    所述图像传感器(01)采用全局曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的所述第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是所述第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与所述第一预设曝光的曝光时间段存在交集,或者所述第一预设曝光的曝光时间段是近红外补光的时间段的子集。
  8. 如权利要求2所述的人脸识别装置,其特征在于,
    所述图像传感器(01)采用卷帘曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的所述第二预设曝光的曝光时间段不存在交集;
    近红外补光的开始时刻不早于所述第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于所述第一预设曝光中第一行有效图像的曝光结束时刻;
    或者,
    近红外补光的开始时刻不早于所述第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于所述第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于所述第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻;或者
    近红外补光的开始时刻不早于所述第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于所述第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
  9. 如权利要求1所述的人脸识别装置,其特征在于,
    所述第一预设曝光与所述第二预设曝光的至少一个曝光参数不同,所述至少一个曝光参数为曝光时间、曝光增益、光圈大小中的一种或多种,所述曝光增益包括模拟增益,和/或,数字增益。
  10. 如权利要求1所述的人脸识别装置,其特征在于,
    所述第一预设曝光和所述第二预设曝光的至少一个曝光参数相同,所述至少一个曝光参数包括曝光时间、曝光增益、光圈大小中的一种或多种,所述曝光增益包括模拟增益,和/或,数字增益。
  11. 如权利要求1-10中任一项所述的人脸识别装置,其特征在于,
    所述图像处理器(2),用于采用第一处理参数对所述第一图像信号和所述第二图像信号中的至少一个进行处理,得到所述第一图像信息;
    所述图像处理器(2),还用于采用第二处理参数对所述第一图像信号和所述第二图像信号中的至少一个进行处理,得到第二图像信息;
    所述图像处理器(2),还用于将所述第二图像信息传输到显示设备,由所述显示设备显示所述第二图像信息。
  12. 如权利要求11所述的人脸识别装置,其特征在于,当所述第一图像信息和所述第二图像信息均是对所述第一图像信号处理得到时,或者,当所述第一图像信息和所述第二图像信息均是对所述第二图像信号处理得到时,或者,当所述第一图像信息和所述第二图像信息均是对所述第一图像信号和所述第二图像信号处理得到时,所述第一处理参数和所述第二处理参数不同。
  13. 如权利要求11所述的人脸识别装置,其特征在于,所述图像处理器(2)对所述第一图像信号和所述第二图像信号中的至少一个进行的处理包括黑电平、图像插值、数字增益、白平衡、图像降噪、图像增强、图像融合中的至少一种。
  14. 如权利要求11所述的人脸识别装置,其特征在于,所述图像处理器(2)包括缓存;
    所述缓存,用于存储所述第一图像信号和所述第二图像信号中的至少一个,或者,用于存储所述第一图像信息和所述第二图像信息中的至少一个。
  15. 如权利要求1-10中任一项所述的人脸识别装置,其特征在于,所述图像处理器(2),还用于在对所述第一图像信号和所述第二图像信号中的至少一个进行处理的过程中,调整所述图像采集单元(1)的曝光参数。
  16. 如权利要求1-10中任一项所述的人脸识别装置,其特征在于,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述第一图像信息进行人脸检测,输出检测到的人脸图像,并对所述人脸图像进行活体鉴别;
    所述人脸识别子单元(312),用于在所述人脸图像通过活体鉴别时,提取所述人脸图像的人脸信息,将所述人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  17. 如权利要求11所述的人脸识别装置,其特征在于,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述第一图像信息进行人脸检测,输出检测到的第一人脸图像,对所述第一人脸图像进行活体鉴别,以及对所述第二图像信息进行人脸检测,输出检测到的第二人脸图像,对所述第二人脸图像进行活体鉴别;
    所述人脸识别子单元(312),用于在所述第一人脸图像和所述第二人脸图像均通过活体鉴别时,提取所述第一人脸图像的人脸信息,将所述第一人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  18. 如权利要求11所述的人脸识别装置,其特征在于,所述第一图像信息是对所述第一图像信号处理得到的灰度图像信息,所述第二图像信息是对所述第二图像信号处理得到的彩色图像信息,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对所述彩色人脸图像进行活体鉴别,以及在所述彩色人脸图像通过活体鉴别时,对所述灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;
    所述人脸识别子单元(312),用于提取所述灰度人脸图像的人脸信息,将所述灰度人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  19. 如权利要求11所述的人脸识别装置,其特征在于,所述第一图像信息是对所述第一图像信号处理得到的灰度图像信息,所述第二图像信息是对所述第一图像信号和所述第二图像信号进行图像融合处理得到的融合图像信息,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述融合图像信息进行人脸检测,输出检测到的融合人脸图像,对所述融合人脸图像进行活体鉴别,以及在所述融合人脸图像通过活体鉴别时,对所述灰度图像信息进行人脸检测,输出检测到的灰度人脸图像;
    所述人脸识别子单元(312),用于提取所述灰度人脸图像的人脸信息,将所述灰度人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  20. 如权利要求11所述的人脸识别装置,其特征在于,所述第一图像信息是对所述第一图像信号和所述第二图像信号进行图像融合处理得到的融合图像信息,所述第二图像信息是对所述第一图像信号处理得到的灰度图像信息,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述灰度图像信息进行人脸检测,输出检测到的灰度人脸图像,对所述灰度人脸图像进行活体鉴别,以及在所述灰度人脸图像通过活体鉴别时,对所述融合图像信息进行人脸检测,输出检测到的融合人脸图像;
    所述人脸识别子单元(312),用于提取所述融合人脸图像的人脸信息,将所述融合人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  21. 如权利要求11所述的人脸识别装置,其特征在于,所述第一图像信息是对所述第一图像信号和所述第二图像信号进行图像融合处理得到的融合图像信息,所述第二图像信息是对所述第二图像信号处理得到的彩色图像信息,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述彩色图像信息进行人脸检测,输出检测到的彩色人脸图像,对所述彩色人脸图像进行活体鉴别,以及在所述彩色人脸图像通过活体鉴别时,对所述融合图像信息进行人脸检测,输出检测到的融合人脸图像;
    所述人脸识别子单元(312),用于提取所述融合人脸图像的人脸信息,将所述融合人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  22. 如权利要求11所述的人脸识别装置,其特征在于,所述第一图像信息是对所述第一图像信号和所述第二图像信号进行图像融合处理得到的第一融合图像信息,所述第二图像信息是对所述第一图像信号和所述第二图像信号进行图像融合处理得到的第二融合图像信息,所述人脸分析单元(3)包括:人脸检测子单元(311)、人脸识别子单元(312)和人脸数据库(313);
    所述人脸数据库(313)中存储有至少一个参考人脸信息;
    所述人脸检测子单元(311),用于对所述第二融合图像信息进行人脸检测,输出检测到的第二融合人脸图像,对所述第二融合人脸图像进行活体鉴别,以及在所述第二融合人脸图像通过活体鉴别时,对所述第一融合图像信息进行人脸检测,输出检测到的第一融合人脸图像;
    所述人脸识别子单元(312),用于提取所述第一融合人脸图像的人脸信息,将所述第一融合人脸图像的人脸信息与所述人脸数据库(313)中存储的至少一个参考人脸信息进行比对,得到人脸分析结果。
  23. 如权利要求1-10中任一项所述的人脸识别装置,其特征在于,所述人脸分析单元(3),还用于将所述人脸分析结果传输到显示设备,由所述显示设备对所述人脸分析结果进行显示。
  24. 一种门禁设备,其特征在于,所述门禁设备包括门禁控制器和上述权利要求1-23中任一项所述的人脸识别装置;
    所述人脸识别装置,用于将所述人脸分析结果传输到所述门禁控制器;
    所述门禁控制器,用于在所述人脸分析结果为识别成功时,输出用于打开门禁的控制信号。
  25. 一种人脸识别方法,应用于人脸识别装置,所述人脸识别装置包括:图像采集单元、图像处理器和人脸分析单元,所述图像采集单元包括滤光组件,所述滤光组件包括第一滤光片,其特征在于,所述方法包括:
    通过所述第一滤光片使可见光和部分近红外光通过;
    通过所述图像采集单元采集第一图像信号和第二图像信号,所述第一图像信号是根据第一预设曝光产生的图像信号,所述第二图像信号是根据第二预设曝光产生的图像信号,其中,至少在所述第一预设曝光的部分曝光时间段内进行近红外补光,在所述第二预设曝光的曝光时间段内不进行近红外补光;
    通过所述图像处理器对所述第一图像信号和所述第二图像信号中的至少一个进行处理,得到第一图像信息;
    通过所述人脸分析单元对所述第一图像信息进行人脸分析,得到人脸分析结果。
PCT/CN2020/091910 2019-05-31 2020-05-22 人脸识别装置和门禁设备 WO2020238805A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910472703.7 2019-05-31
CN201910472703.7A CN110490042B (zh) 2019-05-31 2019-05-31 人脸识别装置和门禁设备

Publications (1)

Publication Number Publication Date
WO2020238805A1 true WO2020238805A1 (zh) 2020-12-03

Family

ID=68546292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091910 WO2020238805A1 (zh) 2019-05-31 2020-05-22 人脸识别装置和门禁设备

Country Status (2)

Country Link
CN (1) CN110490042B (zh)
WO (1) WO2020238805A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490042B (zh) * 2019-05-31 2022-02-11 杭州海康威视数字技术股份有限公司 人脸识别装置和门禁设备
CN110493492B (zh) * 2019-05-31 2021-02-26 杭州海康威视数字技术股份有限公司 图像采集装置及图像采集方法
CN110493491B (zh) * 2019-05-31 2021-02-26 杭州海康威视数字技术股份有限公司 一种图像采集装置及摄像方法
CN112989866B (zh) * 2019-12-02 2024-04-09 浙江宇视科技有限公司 对象识别方法、装置、电子设备和可读存储介质
CN113128259B (zh) * 2019-12-30 2023-08-29 杭州海康威视数字技术股份有限公司 人脸识别设备及人脸识别方法
CN116978104A (zh) * 2023-08-11 2023-10-31 泰智达(北京)网络科技有限公司 一种人脸识别系统

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226587A (zh) * 2007-01-15 2008-07-23 中国科学院自动化研究所 图像采集装置及应用该装置的人脸识别系统和方法
CN101931755A (zh) * 2010-07-06 2010-12-29 上海洪剑智能科技有限公司 一种人脸识别用的调制光滤光装置和滤光方法
KR20110128574A (ko) * 2010-05-24 2011-11-30 주식회사 다음커뮤니케이션 이미지내 생체 얼굴 인식 방법 및 인식 장치
CN107220621A (zh) * 2017-05-27 2017-09-29 北京小米移动软件有限公司 终端进行人脸识别的方法及装置
CN108289179A (zh) * 2018-02-08 2018-07-17 深圳泰华安全技术工程有限公司 一种提高视频信号采集抗干扰能力的方法
CN110312079A (zh) * 2018-03-20 2019-10-08 北京中科奥森科技有限公司 图像采集装置及其应用系统
CN110490187A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 车牌识别设备和方法
CN110490041A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 人脸图像采集装置及方法
CN110490042A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 人脸识别装置和门禁设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203193649U (zh) * 2013-04-16 2013-09-11 北京天诚盛业科技有限公司 电子签名装置
JP2016096430A (ja) * 2014-11-13 2016-05-26 パナソニックIpマネジメント株式会社 撮像装置及び撮像方法
JP6597636B2 (ja) * 2014-12-10 2019-10-30 ソニー株式会社 撮像装置、撮像方法、およびプログラム、並びに画像処理装置
CN105187727A (zh) * 2015-06-17 2015-12-23 广州市巽腾信息科技有限公司 一种图像信息采集装置、图像采集方法及其用途
CN106449617B (zh) * 2015-08-05 2019-04-12 杭州海康威视数字技术股份有限公司 用于产生光的光源设备及其补光方法和装置
CN105868753B (zh) * 2016-04-05 2019-10-18 浙江宇视科技有限公司 蓝色车牌颜色的识别方法及装置
CN108234898A (zh) * 2018-02-07 2018-06-29 信利光电股份有限公司 多摄像头的同步拍摄方法、拍摄装置、移动终端和可读存储介质
CN109635760A (zh) * 2018-12-18 2019-04-16 深圳市捷顺科技实业股份有限公司 一种人脸识别方法及相关设备

Also Published As

Publication number Publication date
CN110490042B (zh) 2022-02-11
CN110490042A (zh) 2019-11-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20815121

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20815121

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.09.2022)
