WO2020238903A1 - Face image acquisition device and method (人脸图像采集装置及方法) - Google Patents

Face image acquisition device and method

Info

Publication number
WO2020238903A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, light, exposure, face, infrared
Application number
PCT/CN2020/092357
Other languages
English (en)
French (fr)
Inventor
罗丽红
聂鑫鑫
於敏杰
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2020238903A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 — Detection; Localisation; Normalisation
    • G06V40/166 — Detection; Localisation; Normalisation using acquisition arrangements

Definitions

  • This application relates to the field of information processing technology, and in particular to a face image acquisition device and method.
  • In the related art, the image acquisition circuit in a face recognition camera first collects a visible light image and an infrared light image through two image sensors, then fuses the visible light image and the infrared light image, and finally encodes and analyzes the fused image to obtain the face image.
  • The aforementioned cameras place extremely high requirements on the process structure of the two image sensors and on the registration and synchronization between them, which is not only costly; if the registration is not up to standard, the quality of the obtained face image will also be poor.
  • The present application provides a face image acquisition device and method to reduce the cost of face image acquisition and improve the quality of the acquired face image.
  • A face image acquisition device provided by the first aspect of the present application includes: an image sensor, a light supplementer, a filter assembly, and an image processor;
  • the image sensor is used to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two exposures among the multiple exposures;
  • the light supplementer includes a first light supplement device, and the first light supplement device is used to provide near-infrared supplemental light, wherein the near-infrared supplemental light is provided at least during part of the exposure time period of the first preset exposure and is not provided during the exposure time period of the second preset exposure;
  • the filter assembly includes a first filter, and the first filter is used to pass visible light and part of near-infrared light;
  • the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
  • A second aspect of the application provides a face image acquisition method, which is applied to a face image acquisition device.
  • The face image acquisition device includes an image sensor, a light supplementer, a filter assembly, and an image processor.
  • The light supplementer includes a first light supplement device,
  • the filter assembly includes a first filter
  • the image sensor is located on the light exit side of the filter assembly, and the method includes:
  • Near-infrared supplemental light is provided by the first light supplement device, wherein the supplemental light is provided at least during part of the exposure time period of the first preset exposure and not during the exposure time period of the second preset exposure; the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor;
  • multiple exposures are performed by the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image signal generated according to the first preset exposure and the second image signal is an image signal generated according to the second preset exposure;
  • the image processor performs image processing and face detection on the first image signal and the second image signal to obtain a face image.
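The alternating control of the two preset exposures and the near-infrared fill light described above can be sketched as follows. This is a minimal illustration: the `Frame` record and the `acquire` loop are invented names, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    nir_fill: bool  # whether near-infrared fill light was on during this exposure
    kind: str       # "first" (NIR-assisted) or "second" (visible-light-only)

def acquire(num_pairs: int) -> list:
    """Alternate the two preset exposures: near-infrared fill light is
    switched on only during the first preset exposure and kept off
    during the second preset exposure."""
    frames = []
    for i in range(num_pairs):
        # first preset exposure: fill light on -> first image signal
        frames.append(Frame(index=2 * i, nir_fill=True, kind="first"))
        # second preset exposure: fill light off -> second image signal
        frames.append(Frame(index=2 * i + 1, nir_fill=False, kind="second"))
    return frames
```

A real implementation would drive the sensor's exposure trigger and the fill-light driver from the same timing source, so that the fill light never overlaps the second preset exposure.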
  • The image sensor generates and outputs a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, the first preset exposure and the second preset exposure are two of the multiple exposures, and the light supplementer includes a first light supplement device.
  • The first light supplement device provides near-infrared supplemental light at least during the exposure time period of the first preset exposure and does not provide it during the exposure time period of the second preset exposure.
  • the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
  • Only one image sensor is needed to obtain visible light images and infrared light images, which reduces cost and avoids the poor face image quality caused when the images from two image sensors are out of sync due to their process structure, registration, and synchronization problems.
  • FIG. 1 is a schematic structural diagram of a face image acquisition device provided by an embodiment of the application.
  • FIG. 2 is a schematic structural diagram of an image processor in an embodiment of the application.
  • FIG. 3 is a schematic flowchart of processing the first image signal and the second image signal by the processing component.
  • FIG. 4 is a schematic structural diagram of another image processor in an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of the fusion component for fusion processing of color images and grayscale images.
  • FIG. 6 is a schematic structural diagram of still another image processor in an embodiment of the application.
  • FIG. 7 is a flow diagram of face detection processing performed by a detection component in an embodiment of the application.
  • FIG. 8 is a schematic diagram of processing color images and grayscale images by the detection component in this embodiment.
  • FIG. 9 is another schematic diagram of processing color images and grayscale images by the detection component in this embodiment.
  • FIG. 10 is a schematic structural diagram of another image processor in an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of another image processor in an embodiment of this application.
  • FIG. 12 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplemental light provided by the first light supplement device according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter and the pass rate.
  • FIG. 14 is a schematic structural diagram of another face image acquisition device provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a sensing curve of an image sensor according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of a rolling shutter exposure method.
  • FIG. 21 is a schematic diagram of a first type of first preset exposure and second preset exposure provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a second type of first preset exposure and second preset exposure provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a third type of first preset exposure and second preset exposure provided by an embodiment of the present application.
  • FIG. 24 is a schematic diagram of the first rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of a second rolling shutter exposure method and near-infrared supplemental light provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of a third rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
  • FIG. 27 is a schematic flowchart of an embodiment of a method for acquiring a face image provided by an embodiment of the application.
  • the embodiment of the application proposes a face image acquisition device and method, which can at least reduce the cost of the camera and improve the face image quality.
  • the image sensor generates and outputs a first image signal and a second image signal through multiple exposures.
  • The first image signal is an image signal generated according to a first preset exposure,
  • and the second image signal is an image signal generated according to a second preset exposure.
  • the first preset exposure and the second preset exposure are two of the multiple exposures.
  • The light supplementer includes a first light supplement device that provides near-infrared supplemental light, wherein the supplemental light is present at least during the exposure time period of the first preset exposure and absent during the exposure time period of the second preset exposure, and the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
  • Only one image sensor is needed to obtain visible light images and infrared light images, which reduces cost and avoids the poor face image quality caused when the images from two image sensors are out of sync due to their process structure, registration, and synchronization problems.
  • FIG. 1 is a schematic structural diagram of a face image acquisition device provided by an embodiment of the application.
  • The face image acquisition device may include: an image sensor 01, a light supplementer 02, a filter assembly 03, a lens assembly 04, and an image processor 05.
  • the image sensor 01 is located on the light exit side of the filter assembly 03
  • the image processor 05 is located behind the image sensor 01.
  • the image sensor 01 is used to generate and output the first image signal and the second image signal through multiple exposures.
  • the first image signal is an image signal generated according to a first preset exposure
  • the second image signal is an image signal generated according to a second preset exposure
  • The first preset exposure and the second preset exposure are two of the multiple exposures.
  • The first image signal and the second image signal are obtained by photographing a person, that is, both the first image signal and the second image signal include a face area.
  • The light supplementer 02 includes a first light supplement device 021.
  • The first light supplement device 021 is used to provide near-infrared supplemental light, wherein the supplemental light is present at least during part of the exposure period of the first preset exposure and absent during the exposure time period of the second preset exposure.
  • Supplementing light through the first light supplement device 021 improves the signal collection capability, which helps improve image quality.
  • the filter assembly 03 includes a first filter 031.
  • the first filter 031 allows visible light and part of the near-infrared light to pass.
  • When the first light supplement device 021 provides near-infrared supplemental light, the intensity of the near-infrared light passing through the first filter 031 is higher than the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 does not provide near-infrared supplemental light.
  • the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light exit side of the filter assembly 03.
  • Alternatively, the lens 04 may be located between the filter assembly 03 and the image sensor 01, in which case the image sensor 01 is located on the light exit side of the lens 04.
  • The first filter 031 can be a filter film. In this way, when the filter assembly 03 is located between the lens 04 and the image sensor 01, the first filter 031 can be attached to the surface of the lens 04 on the light exit side; or, when the lens 04 is located between the filter assembly 03 and the image sensor 01, the first filter 031 can be attached to the surface of the lens 04 on the light incident side.
  • The filter assembly 03 can control the spectral range received by the image sensor: the supplemental light generated by the first light supplement device and visible light can pass through, while light in other spectral bands is blocked, ensuring effective use of the supplemental light while minimizing the influence of other light sources.
  • the image processor 05 is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
  • The image processor 05 may receive the first image signal and the second image signal transmitted by the image sensor 01 and, after performing face analysis and processing on them, obtain the face images in the first image signal and the second image signal, thereby realizing the face capture function.
  • The face image acquisition device includes an image sensor, a light supplementer, a filter assembly, and an image processor; through the multiple exposures of the image sensor, the supplemental lighting of the light supplementer, and the filtering of the filter assembly, a single image sensor can obtain multiple first image signals and second image signals with different spectral ranges, which expands the image acquisition capability of a single sensor and improves image quality in different scenarios.
  • The image processor processes and analyzes the obtained first image signal and second image signal to output a face image, thereby realizing the face capture or collection function of the device.
  • The face image acquisition device may include an image acquisition unit and an image processing unit, and the image acquisition unit may include the above-mentioned image sensor, light supplementer, filter assembly, and lens assembly.
  • The image acquisition unit may be an image acquisition device that includes the above-mentioned components, where the light supplementer is a built-in part of the device realizing the supplemental lighting function, for example a camera, a capture machine, a face recognition camera, a code-reading camera, a vehicle-mounted camera, a panoramic detail camera, etc.; alternatively, the image acquisition unit can be realized by connecting an image acquisition device to a light supplementer 02 that is located outside the image acquisition device and connected to it.
  • The image processing unit may be an image processor with data processing and analysis capabilities that analyzes the face image in the image signal. Since the quality of the first image signal and the second image signal in this application is good, the accuracy of face detection is correspondingly improved.
  • The following describes the exposure timing of the above-mentioned image sensor 01 and the near-infrared supplemental light timing of the first light supplement device 021 included in the light supplementer 02: near-infrared supplemental light is present at least during part of the exposure time period of the first preset exposure, and absent during the exposure time period of the second preset exposure.
  • FIG. 2 is a schematic structural diagram of an image processor in an embodiment of this application.
  • the aforementioned image processor 05 may include: a processing component 051 and a detection component 052.
  • the processing component 051 is configured to perform first preprocessing on the first image signal to generate a first image, and perform second preprocessing on the second image signal to generate a second image.
  • the detection component 052 is used to perform face detection processing on the first image and the second image generated by the processing component 051 to obtain a face image.
  • The detection component 052 can perform content analysis on the images (for example, the first image and the second image); if it detects facial feature information in an image, it can obtain the location of the face area, extract the face image, and realize the face capture function.
  • the image processor is a computing platform that processes image signals, and has many typical implementations.
  • the implementation of the image processor shown in FIG. 2 is a typical implementation that saves computing resources.
  • The first image signal and the second image signal collected by the image sensor 01 undergo the image preprocessing of the processing component 051 to generate the first image and the second image; the detection component 052 then performs detection processing on the first image and the second image received from the processing component 051, thereby outputting a face image.
  • the first image may be a grayscale image
  • the second image may be a color image
  • the grayscale image can be embodied in the form of a black and white image.
  • The grayscale images described below can all be embodied as black-and-white images or as grayscale images with different black-and-white ratios, which can be set according to actual conditions and will not be repeated here.
  • the first preprocessing may include any one or a combination of the following operations: image interpolation, gamma mapping, color conversion, and image noise reduction.
  • the second preprocessing may include any one or a combination of the following: white balance, image interpolation, gamma mapping, and image noise reduction.
  • The processing component may include conventional image processing operations such as white balance, image interpolation, color conversion, gamma mapping, and image noise reduction. Different processing procedures and parameters may be used for the above-mentioned first image signal and second image signal, so as to obtain a first image and a second image with different quality or color rendition.
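The two preprocessing chains can be sketched as below. Every stage function is a simplified placeholder for the real ISP operation of the same name, and the parameter values (gamma 2.2, white-balance gain 1.1) are illustrative assumptions; pixels are modeled as a flat list of values in [0, 1].

```python
# Placeholder stages standing in for the real ISP operations; each takes
# and returns a flat list of pixel values in [0, 1].
def interpolate(px):              return px                       # image interpolation
def gamma_map(px, g=2.2):         return [v ** (1.0 / g) for v in px]
def color_convert(px):            return px                       # e.g. RGB -> Y (grayscale)
def white_balance(px, gain=1.1):  return [min(1.0, v * gain) for v in px]
def denoise(px):                  return px                       # image noise reduction

def first_preprocessing(px):
    """First image signal -> grayscale image: image interpolation,
    gamma mapping, color conversion, image noise reduction."""
    for stage in (interpolate, gamma_map, color_convert, denoise):
        px = stage(px)
    return px

def second_preprocessing(px):
    """Second image signal -> color image: white balance, image
    interpolation, gamma mapping, image noise reduction."""
    for stage in (white_balance, interpolate, gamma_map, denoise):
        px = stage(px)
    return px
```

Keeping the stages as a tuple of callables mirrors the text's point that different processing procedures and parameters can be selected per image signal.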
  • FIG. 3 is a schematic diagram of a flow of processing the first image signal and the second image signal by the processing component.
  • The processing component uses first processing parameters to perform one or more of image interpolation, gamma mapping, color conversion, and image noise reduction on the first image signal to obtain a grayscale image, and uses second processing parameters to perform one or more of white balance, image interpolation, gamma mapping, and image noise reduction on the second image signal to obtain a color image.
  • the processing component in this embodiment can flexibly select appropriate processing parameters and image signal combinations, so that the image quality of the final output face image is better.
  • The first image signal and the second image signal are relative concepts, and their names can be interchanged.
  • Figure 3 illustrates, as an example, performing image interpolation, gamma mapping, color conversion, and image noise reduction on the first image signal to obtain a grayscale image, and performing white balance, image interpolation, gamma mapping, and image noise reduction on the second image signal to obtain a color image; the embodiment of the present application is not limited to this.
  • the image processor 05 may include: a fusion component 053 in addition to a processing component 051 and a detection component 052.
  • the information of different images can be extracted through the fusion component 053, and the information between the different images can be merged, so as to maximize the amount of information and improve the image quality.
  • FIG. 4 is a schematic structural diagram of another image processor in an embodiment of this application.
  • the fusion component 053 may be located between the processing component 051 and the detection component 052.
  • the fusion component 053 is configured to perform fusion processing on the first image and the second image generated by the processing component 051 to generate a fusion image.
  • the detection component 052 is specifically configured to perform face detection processing on the fused image generated by the fusion component to obtain a face image.
  • the processing component 051 performs image preprocessing on the collected first image signal and second image signal, generates the first image and the second image, and sends the first image and the second image to the fusion component
  • the fusion component 053 is used to fuse the received first image and the second image to generate a fusion image
  • the detection component performs content detection and analysis on the received fusion image, and outputs a face image.
  • the fusion component 053 separately extracts the information of the first image and the second image for fusion, so as to maximize the amount of information and output the fused image.
  • The fusion component 053 is specifically used to extract the brightness information of the color image to obtain a brightness image, extract the color information of the color image to obtain a chroma image, and perform fusion processing on the brightness image, the chroma image, and the grayscale image to obtain a face image.
  • The fusion processing includes at least one of the following operations: point-to-point pixel fusion and pyramid multi-scale fusion.
  • Fig. 5 is a schematic structural diagram of a fusion component performing fusion processing on a color image and a grayscale image.
  • The fusion component 053 can extract the brightness image and the chroma image of the color image and merge them with the grayscale image, for example by point-to-point pixel fusion or pyramid multi-scale fusion. The fusion weight of each image can be configured by the user, or calculated from image brightness, texture, and other information, so as to output a color fused image with improved signal-to-noise ratio.
  • After the fusion component 053 extracts the brightness image and the chroma image of the color image, it merges the brightness image with the grayscale image to obtain a fused brightness image, and then merges the fused brightness image with the chroma image to output a color fused image.
  • It can be determined by the following formula:
  • y_FUS = w * y_NIR + (1 - w) * y_VIS
  • where y_FUS represents the fused image, y_VIS represents the brightness image, y_NIR represents the grayscale image, and w represents the fusion weight.
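A minimal sketch of this fusion, assuming the common per-pixel alpha-blend form y_FUS = w * y_NIR + (1 - w) * y_VIS; the function names and the pass-through handling of chroma are illustrative assumptions, not from the patent.

```python
def fuse_luminance(y_vis, y_nir, w):
    """Point-to-point weighted fusion of the color image's brightness
    channel (y_vis) with the grayscale image (y_nir):
    y_fus = w * y_nir + (1 - w) * y_vis, with fusion weight w in [0, 1]."""
    return [w * n + (1.0 - w) * v for v, n in zip(y_vis, y_nir)]

def fuse(color_luma, color_chroma, gray, w=0.5):
    """Two-stage fusion as described in the text: fuse the brightness
    image with the grayscale image first, then recombine the fused
    brightness with the chroma information to form a color fused image."""
    fused_luma = fuse_luminance(color_luma, gray, w)
    return fused_luma, color_chroma  # chroma is carried through unchanged
```

With w = 0 the output reduces to the visible-light brightness; with w = 1 it reduces to the NIR-assisted grayscale image, matching the formula's endpoints.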
  • FIG. 6 is a schematic structural diagram of another image processor in an embodiment of this application. As shown in FIG. 6, the fusion component 053 may be located behind the detection component 052.
  • The detection component 052 is specifically configured to perform face detection processing on the first image and the second image generated by the processing component 051 to obtain the first face image and the second face image.
  • the fusion component 053 is specifically used to perform fusion processing on the first face image and the second face image obtained by the detection component 052 to obtain a face image.
  • the first image signal and the second image signal collected by the image sensor are subjected to the image preprocessing of the processing component 051 to generate a color image and a grayscale image.
  • The detection component 052 performs detection processing on the received color image and grayscale image, and outputs a color face image and a grayscale face image.
  • the fusion component performs fusion processing on the color face image and the grayscale face image to generate a fused face image.
  • The detection component 052 is specifically configured to calibrate the position and size of the face region according to the facial features detected in the target image and output the target face image, the target image being any one of the following: the first image, the second image, a fused image of the first image and the second image, or a combination of the first image and the second image.
  • FIG. 7 is a flow diagram of the face detection processing performed by the detection component in an embodiment of the application. As shown in FIG. 7, the detection component 052 is specifically used to extract multiple facial feature points in the target image, determine, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points, determine the face position coordinates based on the multiple positioning feature points, and determine the target face image in the target image.
  • When the target image is the first image, the target face image is the first face image;
  • when the target image is the second image, the target face image is the second face image;
  • when the target image is the fused image of the first image and the second image, the target face image is a fused face image of the first image and the second image.
  • Extracting multiple facial feature points in the target image in this embodiment is usually called feature point extraction; determining, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points actually refers to feature point comparison and feature point positioning.
  • a typical implementation of feature point extraction and feature point comparison is to obtain feature data helpful for face classification according to the shape description of the face organs and the distance characteristics between them.
  • The feature data usually include the Euclidean distances, curvatures, and angles between feature points. Since a human face is composed of parts such as the eyes, nose, mouth, and chin, geometric descriptions of these parts and the structural relationships between them can be used as important features for detecting the face area. If feature points satisfying the facial rules are detected, feature point positioning is performed, the face position coordinates are obtained, and the face image is extracted.
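The distance-and-proportion rules can be illustrated with a toy check. The landmark names, the bounding-box margin, and the ratio thresholds below are hypothetical stand-ins for the patent's preset facial feature information.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def satisfies_face_rules(landmarks):
    """Toy geometric rule check: compare distances between candidate
    feature points against loose facial proportions. `landmarks` maps
    names to (x, y) points; the thresholds are illustrative only."""
    eye_span = dist(landmarks["left_eye"], landmarks["right_eye"])
    if eye_span == 0:
        return False
    eye_mid = ((landmarks["left_eye"][0] + landmarks["right_eye"][0]) / 2,
               (landmarks["left_eye"][1] + landmarks["right_eye"][1]) / 2)
    ratio = dist(eye_mid, landmarks["mouth"]) / eye_span
    return 0.6 < ratio < 1.6  # plausible eye-to-mouth / eye-span proportion

def face_bbox(landmarks, margin=0.5):
    """Derive face position coordinates (a bounding box) from the
    located feature points, padded by a relative margin."""
    xs = [p[0] for p in landmarks.values()]
    ys = [p[1] for p in landmarks.values()]
    dx = (max(xs) - min(xs)) * margin
    dy = (max(ys) - min(ys)) * margin
    return (min(xs) - dx, min(ys) - dy, max(xs) + dx, max(ys) + dy)
```

A production system would use many more feature points and learned rules, but the flow (extract points, test geometric rules, derive face coordinates) matches the description above.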
  • the processing procedures for the first image, the second image, and the fusion image are similar, and they can all be implemented based on the foregoing processing procedures for the target image, which will not be repeated here.
  • The detection component 052 is not only used to determine the target face image in the target image, but can also be used to detect, based on the principle of liveness detection, whether the target face image was obtained by photographing a real face, and to output the target face image when it is determined that it was.
  • the detection component 052 can detect the target face image to verify whether the target face image is obtained by photographing a real face.
  • The detection component 052 can distinguish the source of the target face image by exploiting the different infrared reflection characteristics of a real human face and of fake faces such as a piece of paper, a screen, or a stereo mask.
  • the detection component in this embodiment can process two images at the same time, and can output one face image or two face images as needed.
  • the foregoing target image is a combination of the first image and the second image.
  • The detection component 052 is specifically configured to extract multiple facial feature points in the first image, determine, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points, determine the first face position coordinates from the multiple positioning feature points, perform face extraction on the first image according to the first face position coordinates to obtain the first face image, and at the same time perform face extraction on the second image according to the first face position coordinates to obtain the second face image.
  • the first image is a grayscale image
  • the second image is a color image
  • the first face image is a grayscale face image
  • the second face image is a color face image
  • The detection component 052 is also used to detect, based on the principle of liveness detection, whether the grayscale face image was obtained by photographing a real face; when it is determined that it was, the detection component outputs the grayscale face image and outputs a color face image based on the extracted second face image.
  • FIG. 8 is a schematic diagram of processing the color image and the grayscale image by the detection component in this embodiment.
  • The detection component 052 first performs face calibration, such as feature point extraction, feature point comparison, and feature point positioning, on the grayscale image, which has a higher signal-to-noise ratio, to obtain the face position coordinates and extract the grayscale face image from the grayscale image. It then performs liveness detection to determine whether the grayscale face image was obtained by photographing a real face; if so, it extracts the color face image from the color image, thereby outputting the grayscale face image and the color face image, or only the color face image.
  • Whether the detection component outputs one face image, two face images, or more can be determined according to actual needs, and will not be repeated here.
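The grayscale-first flow of FIG. 8 (locate the face on the higher-SNR grayscale image, run liveness detection, then reuse the same coordinates on the color image) can be sketched as follows. The injected `detect`, `is_live`, and `crop` callables are hypothetical stand-ins for the patent's detection sub-steps.

```python
def capture_faces(gray_img, color_img, detect, is_live, crop):
    """Locate the face on the grayscale image, run liveness detection on
    the grayscale crop, and only then extract the color face image at
    the same coordinates. Returns (gray_face, color_face) on success,
    or None when no face is found or the face is judged fake."""
    coords = detect(gray_img)          # feature point extraction/positioning
    if coords is None:
        return None
    gray_face = crop(gray_img, coords)
    if not is_live(gray_face):
        return None                    # fake face (paper, screen, mask)
    color_face = crop(color_img, coords)
    return gray_face, color_face
```

Because both images come from the same sensor, the coordinates found on the grayscale image can be reused directly on the color image without registration.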
  • in another implementation, the first image is a color image and the second image is a grayscale image; correspondingly, the first face image is a color face image and the second face image is a grayscale face image.
  • the detection component 052 is also used to detect, based on the principle of living-body detection, whether the grayscale face image was obtained by photographing a real face, and when it is determined that it was, to output a color face image based on the extracted first face image.
  • FIG. 9 is another schematic diagram of the detection component processing the color image and the grayscale image in this embodiment.
  • the detection component 052 first performs feature point extraction, feature point comparison, and feature point positioning on the color image to obtain face position coordinates, then extracts a grayscale face image from the grayscale image according to the face position coordinates and performs living-body detection to determine whether the grayscale face image was obtained by photographing a real face; if so, a color face image is extracted from the color image and output.
  • this implementation is illustrated with the output of a color face image as an example.
  • this implementation can also output two images, such as a grayscale face image and a color face image.
  • the embodiment of the present application does not limit the number of image frames that are specifically output by each implementation manner, and it can be determined according to implementation needs, and will not be repeated here.
  • the image output by the face capture machine can be one or more of: a grayscale face image, a color face image, and a face image obtained by fusing the grayscale face image and the color face image.
  • the foregoing image processor further includes: a cache component 054.
  • the cache component 054 may be located before the processing component 051 or after the processing component 051.
  • the caching component 054 is used to cache temporary content, where the temporary content includes the first image signal and/or the second image signal output by the image sensor 01, or the first image and/or the second image obtained by the image processor 05 during processing.
  • FIG. 10 is a schematic structural diagram of another image processor in an embodiment of this application.
  • the image processor 05 in this embodiment has an image synchronization function. Specifically, if a subsequent module (for example, the processing component 051) needs to process the first image signal and the second image signal at the same time, the buffer component 054 can be located before the processing component 051; the buffer component 054 stores whichever of the first image signal and the second image signal is collected first, and processing proceeds only after the other signal has been received, thereby achieving synchronization between the first image signal and the second image signal. That is, the cache component 054 in this embodiment can realize synchronization between images with different exposure time periods by caching images.
  • the images that the buffer component 054 can store may be the original image signals (the first image signal or the second image signal) collected by the image sensor, or the first image and/or second image, the first face image and/or second face image, etc., obtained by the image processor during processing.
  • the embodiment of the present application does not limit the content cached by the cache component 054, which can be determined according to actual conditions, and will not be repeated here.
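One possible (assumed) design for the synchronization role of cache component 054 can be sketched as follows: frames from the two exposures arrive interleaved, the earlier frame is buffered until its partner from the other exposure arrives, and downstream components always receive a synchronized pair.

```python
from collections import deque

# Assumed sketch of the synchronization role of cache component 054:
# frames from the two exposures arrive interleaved; the earlier frame is
# buffered until its partner arrives, so downstream processing always
# receives a synchronized (first image signal, second image signal) pair.

class FrameSynchronizer:
    def __init__(self):
        self._first = deque()   # buffered first-exposure frames
        self._second = deque()  # buffered second-exposure frames

    def push(self, exposure, frame):
        """exposure is 'first' or 'second'; returns a pair when both
        signals of a group are available, otherwise None."""
        (self._first if exposure == 'first' else self._second).append(frame)
        if self._first and self._second:
            return self._first.popleft(), self._second.popleft()
        return None

sync = FrameSynchronizer()
first_ready = sync.push('first', 'F0')   # no partner yet -> buffered
pair = sync.push('second', 'S0')         # partner arrived -> pair emitted
```

The same buffering scheme works regardless of which exposure is collected first within a frame period.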
  • the image processor 05 may also have a noise reduction function.
  • the image processor 05 may use the grayscale image, which has a high signal-to-noise ratio, as a guide to perform joint noise reduction on the color image and the grayscale image, for example by guided filtering or joint bilateral filtering, to obtain a color image and a grayscale image with improved signal-to-noise ratios.
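The idea of guide-based joint noise reduction can be shown with a toy 1-D joint bilateral filter (a simplification; real implementations work on 2-D images and may use guided filtering instead): range weights computed from the high-SNR grayscale guide preserve edges while smoothing the noisy signal.

```python
import math

# Toy 1-D joint bilateral filter: the high-SNR grayscale signal acts as
# the guide whose range weights steer the smoothing of the noisy signal,
# so edges present in the guide survive while flat regions are averaged.
# Real systems filter 2-D images and may use guided filtering instead.

def joint_bilateral_1d(noisy, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    out = []
    n = len(noisy)
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * noisy[j]
        out.append(vsum / wsum)
    return out

guide = [0, 0, 0, 100, 100, 100]             # clean edge from the guide image
noisy = [1.0, -1.0, 2.0, 99.0, 101.0, 98.0]  # noisy signal to denoise
denoised = joint_bilateral_1d(noisy, guide)
```

Note how samples on opposite sides of the guide's edge receive near-zero weight, so the edge is not blurred.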
  • the foregoing image processor further includes: an image enhancement component 055.
  • the image enhancement component 055 may be located after the processing component 051, before the detection component 052, or after the detection component 052.
  • the image processor 05 includes a fusion component 053
  • the image enhancement component 055 can also be located before the fusion component 053.
  • the specific position of the image enhancement component 055 can be flexibly configured according to application requirements or resource conditions, which is not limited in this embodiment.
  • FIG. 11 is a schematic structural diagram of another image processor in an embodiment of this application.
  • in FIG. 11, the image enhancement component 055 is located after the detection component 052 for illustration.
  • the image enhancement component 055 is used to perform enhancement processing on a target image to obtain an enhanced target image.
  • the enhancement processing includes at least one of contrast enhancement and super-resolution reconstruction, and the target image is any one of the following: the first image, the second image, a fused image of the first image and the second image, or the face image.
  • here, the target image is a face image for schematic illustration.
  • the image processor 05 has an image enhancement function: it performs enhancement processing such as contrast enhancement and super-resolution on the received first image, second image, or face image, and outputs a face image of improved quality.
  • the image processor 05 processes the low-resolution small face image through super-resolution reconstruction to generate a high-resolution large face image to improve image quality.
  • the super-resolution reconstruction processing can adopt interpolation-based, reconstruction-based, and learning-based methods, which will not be repeated here.
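As a minimal sketch of the interpolation-based approach mentioned above (bilinear interpolation on a nested-list grayscale crop; real systems would call an optimized library routine), a low-resolution face crop is upscaled by an integer factor:

```python
# Minimal interpolation-based super-resolution: bilinear upscaling of a
# small grayscale crop stored as nested lists. This only illustrates the
# low-resolution-in, high-resolution-out interface; reconstruction- and
# learning-based methods share the same interface but go further.

def bilinear_upscale(img, factor):
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = []
    for Y in range(H):
        y = Y * (h - 1) / (H - 1) if H > 1 else 0.0
        y0 = min(int(y), h - 2) if h > 1 else 0
        fy = y - y0
        row = []
        for X in range(W):
            x = X * (w - 1) / (W - 1) if W > 1 else 0.0
            x0 = min(int(x), w - 2) if w > 1 else 0
            fx = x - x0
            # weighted sum of the four surrounding source pixels
            row.append(img[y0][x0] * (1 - fy) * (1 - fx)
                       + img[y0][x0 + 1] * (1 - fy) * fx
                       + img[y0 + 1][x0] * fy * (1 - fx)
                       + img[y0 + 1][x0 + 1] * fy * fx)
        out.append(row)
    return out

small = [[0.0, 10.0], [20.0, 30.0]]   # 2x2 low-resolution crop
big = bilinear_upscale(small, 2)      # 4x4 upscaled result
```

Corner pixels of the output coincide with corner pixels of the input, which is a quick sanity check for any interpolation kernel.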
  • the first light supplement device 02 can perform stroboscopic supplementary light, that is, it can switch between different supplementary-light states at high frequency: the first-state fill light is used during image acquisition under the first preset exposure, and the second-state fill light is used during image acquisition under the second preset exposure. The first-state fill light and the second-state fill light can adopt different fill-light parameters, including but not limited to fill-light type, fill-light intensity (including the off state), and fill-light duration, so as to expand the spectral range that the image sensor 01 can receive.
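The two supplementary-light states can be sketched as a simple lookup keyed by exposure type (the parameter names and values below are illustrative assumptions, not an interface defined by the text):

```python
# The two supplementary-light states as a lookup keyed by exposure type.
# The parameter names ('type', 'intensity') and values are illustrative
# assumptions, not an interface defined by the text.

FILL_STATES = {
    'first':  {'type': 'near_infrared', 'intensity': 1.0},  # first-state fill light
    'second': {'type': 'none',          'intensity': 0.0},  # second-state fill light
}

def fill_light_for(exposure_kind):
    """Select the fill-light configuration for the upcoming exposure."""
    return FILL_STATES[exposure_kind]

# Alternating exposures within one frame period:
schedule = ['first', 'second', 'first', 'second']
states = [fill_light_for(kind) for kind in schedule]
```

Because the state switches with the exposure, a single sensor alternately sees IR-lit and ambient-only scenes.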
  • the first supplementary light device 021 is a device that can emit near-infrared light, such as a near-infrared fill lamp; the first supplementary light device 021 can perform near-infrared supplementary light in a stroboscopic manner or in other similar manners, which is not limited in the embodiment of the present application.
  • when the first light supplement device 021 performs near-infrared supplementary light in a stroboscopic manner, it can be controlled manually, or through a software program or a specific device, to do so, which is not limited in the embodiment of the present application.
  • the time period during which the first light supplement device 021 performs near-infrared supplementary light may coincide with the exposure time period of the first preset exposure, or may be longer or shorter than it, as long as near-infrared supplementary light is present during the entire exposure time period, or part of the exposure time period, of the first preset exposure, and absent during the exposure time period of the second preset exposure.
  • the exposure time of the image sensor and the fill-light duration of the first light supplement device must satisfy certain constraints: if the first-state fill light turns on the infrared supplementary light, its fill-light time period cannot overlap the exposure time period of the second image signal; similarly, if the second-state fill light turns on the infrared supplementary light, its fill-light time period cannot overlap the exposure time period of the first image signal. These constraints realize multi-spectral image acquisition.
  • the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of the effective image of the second image signal and the end exposure time of the last row of the effective image, but it is not limited to this. The exposure time period of the second preset exposure may also be the exposure time period corresponding to a target image in the second image signal, where the target image is the several rows of the effective image corresponding to a target object or target area in the second image signal; the time period between the start exposure time and the end exposure time of these rows can be regarded as the exposure time period of the second preset exposure.
  • the image sensor may generate a first image signal according to a first preset exposure, and a second image signal according to a second preset exposure.
  • the first preset exposure and the second preset exposure may use the same or different exposure parameters, including but not limited to exposure time, gain, and aperture size, and these parameters can be matched with the fill-light state to achieve multi-spectral image acquisition.
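The non-overlap constraint between fill-light periods and exposure periods can be stated directly as an interval check (all numeric time values below are illustrative):

```python
# The timing constraint stated as an interval check: the infrared
# fill-light interval must not overlap the exposure interval of the image
# captured without fill light. Intervals are half-open (start_ms, end_ms)
# tuples; all numeric values below are illustrative.

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def schedule_is_valid(ir_fill, second_exposure):
    """IR fill light may cover the first preset exposure, but must not
    overlap the second preset exposure."""
    return not overlaps(ir_fill, second_exposure)

first_exp = (0.0, 4.0)     # first preset exposure window
second_exp = (10.0, 20.0)  # second preset exposure window
ir_ok = (0.0, 5.0)         # IR on around the first exposure only
ir_bad = (8.0, 12.0)       # IR spills into the second exposure
```

The same check applies symmetrically if the second-state fill light were the one emitting infrared.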
  • the near-infrared light incident on the surface of the object may be reflected by the object and enter the first filter 031.
  • the ambient light may include visible light and near-infrared light, and near-infrared light in the ambient light is also reflected by the object when it is incident on the surface of the object, thereby entering the first filter 031.
  • the near-infrared light that passes through the first filter 031 when there is near-infrared supplementary light may include the near-infrared light reflected by the object into the first filter 031 when the first supplementary light device 021 performs near-infrared supplementary light.
  • the near-infrared light passing through the first filter 031 when there is no near-infrared supplementary light may include the near-infrared light reflected by the object and entering the first filter 031 when the first supplementary light device 021 is not performing near-infrared supplementary light.
  • that is, the near-infrared light passing through the first filter 031 when there is near-infrared supplementary light includes the near-infrared light emitted by the first supplementary light device 021 and reflected by the object, together with the near-infrared light in the ambient light reflected by the object, while the near-infrared light passing through the first filter 031 when there is no near-infrared supplementary light includes only the near-infrared light in the ambient light reflected by the object.
  • the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light-emitting side of the filter assembly 03 as an example.
  • the process of generating the first image signal and the second image signal is as follows: when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplementary light, and after the ambient light in the shooting scene and the near-infrared light reflected by objects in the scene during the supplementary light pass through the lens 04 and the first filter 031, the image sensor 01 generates the first image signal through the first preset exposure; when the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not perform near-infrared supplementary light, and after the ambient light in the shooting scene passes through the lens 04 and the first filter 031, the image sensor 01 generates the second image signal through the second preset exposure.
  • there can be M first preset exposures and N second preset exposures within one frame period of image acquisition, and the first preset exposures and the second preset exposures can be ordered in multiple combinations.
  • the values of M and N and the size relationship between M and N can be set according to actual requirements. For example, the values of M and N can be equal or different.
  • since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared supplementary light is higher than the intensity of the near-infrared light passing through the first filter 031 when it does not.
  • the wavelength range of the near-infrared light incident on the first filter 031 may be the first reference wavelength range, and the first reference wavelength range is 650 nm to 1100 nm.
  • the wavelength range of the near-infrared supplementary light performed by the first light supplement device 021 may be the second reference wavelength range, and the second reference wavelength range may be 700 nanometers to 800 nanometers, or 900 nanometers to 1000 nanometers, which is not limited in the embodiment of the present application.
  • the light emitted by the fill-light device can be visible light, infrared light, or a combination of the two, and the energy of the near-infrared supplementary light is concentrated in the range of 650 nm to 1000 nm; preferably, the energy is concentrated in the range of 700 nm to 800 nm, or in the range of 900 nm to 1000 nm, so as to avoid interference from common 850 nm infrared lamps in the 800 nm to 900 nm band and to avoid confusion with signal lights.
  • the near-infrared light passing through the first filter 031 when there is near-infrared supplementary light may include the near-infrared light reflected by the object when the first light supplement device 021 performs near-infrared supplementary light, and the near-infrared light in the ambient light reflected by the object; the intensity of the near-infrared light entering the filter assembly 03 is therefore relatively strong at this time. However, when there is no near-infrared supplementary light, the near-infrared light passing through the first filter 031 includes only the near-infrared light in the ambient light reflected by the object into the filter assembly 03.
  • the intensity of the near-infrared light passing through the first filter 031 is weak at this time. Therefore, the intensity of the near infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near infrared light included in the second image signal generated and output according to the second preset exposure.
  • there are multiple choices for the center wavelength and/or wavelength range of the near-infrared supplementary light performed by the first light supplement device 021.
  • in the embodiment of the present application, the center wavelength of the near-infrared supplementary light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 can be selected, so that the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meet certain constraint conditions.
  • this constraint is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible and that the band width of the near-infrared light passing through the first filter 031 is as narrow as possible, so as to avoid wavelength interference introduced by an overly wide near-infrared band.
  • the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 may be the average value within the wavelength range of highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021, or may be understood as the wavelength at the middle position of the wavelength range whose energy exceeds a certain threshold in that spectrum. The set characteristic wavelength or the set characteristic wavelength range can be preset.
  • for example, the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 may be any wavelength within the range of 750±10 nanometers; or any wavelength within the range of 780±10 nanometers; or any wavelength within the range of 810±10 nanometers; or any wavelength within the range of 940±10 nanometers.
  • the set characteristic wavelength range may be a wavelength range of 750 ⁇ 10 nanometers, or a wavelength range of 780 ⁇ 10 nanometers, or a wavelength range of 810 ⁇ 10 nanometers, or a wavelength range of 940 ⁇ 10 nanometers.
  • FIG. 12 is a schematic diagram of the relationship between the wavelength and the relative intensity of the near-infrared supplement light performed by a first light supplement device provided in an embodiment of the present application.
  • as shown in FIG. 12, the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 is 940 nanometers, its wavelength range is 900 nanometers to 1000 nanometers, and the relative intensity of the near-infrared light is highest near 940 nanometers.
  • in a possible implementation, the above constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplementary light of the first light supplement device 021 lies within a wavelength fluctuation range; as an example, the wavelength fluctuation range may be 0 to 20 nanometers.
  • the center wavelength of the near-infrared light passing through the first filter 031 can be the wavelength at the peak position in the near-infrared band of the near-infrared light pass-rate curve of the first filter 031, or can be understood as the wavelength at the middle position of the near-infrared band whose pass rate exceeds a certain threshold in that curve.
  • the above constraint conditions may include: the first band width may be smaller than the second band width.
  • the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 031
  • the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 031.
  • the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
  • for example, if the near-infrared light passing through the first filter 031 has a wavelength range of 700 nanometers to 800 nanometers, the first band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
  • the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
  • FIG. 13 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter and the pass rate.
  • the near-infrared light incident on the first filter 031 has a wavelength range of 650 nm to 1100 nm.
  • as shown in FIG. 13, the first filter 031 passes visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength of 650 nanometers to 900 nanometers and of 1000 nanometers to 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, i.e. 100 nanometers.
  • the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, i.e. 350 nanometers. 100 nanometers is smaller than 350 nanometers, that is, the band width of the near-infrared light passing through the first filter 031 is smaller than the band width of the near-infrared light blocked by the first filter 031.
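The band-width arithmetic above can be checked mechanically (the wavelength bounds are the ones from the FIG. 13 example):

```python
# Checking the FIG. 13 band-width arithmetic: within the incident
# 650-1100 nm range, the filter passes 900-1000 nm near-infrared light
# and blocks 650-900 nm and 1000-1100 nm.

def width(lo_nm, hi_nm):
    return hi_nm - lo_nm

first_band_width = width(900, 1000)                      # passed NIR band
second_band_width = width(650, 900) + width(1000, 1100)  # blocked NIR bands
```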
  • the above relationship curve is just an example.
  • for different filters, the wavelength range of the near-infrared light that can pass through may differ, and the wavelength range of the near-infrared light that is blocked may also differ.
  • in a possible implementation, the above constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers.
  • the half bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
  • the above constraint conditions may include: the third band width may be smaller than the reference band width.
  • the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
  • the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
  • the set ratio can be any ratio from 30% to 50%.
  • the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the application.
  • the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
  • for example, the wavelength band of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers, the set ratio is 30%, and the reference band width is 100 nanometers. It can be seen from FIG. 13 that, within the 650 to 1100 nanometer near-infrared band, the band width of near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
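The half-bandwidth and set-ratio constraints can be evaluated on a sampled pass-rate curve; the sample points below are invented for illustration and are not the actual curve of the first filter 031:

```python
# Evaluating the half-bandwidth and set-ratio constraints on a sampled
# pass-rate curve. The (wavelength_nm, pass_rate) samples are invented
# for illustration; they are not the actual curve of the first filter 031.

def band_width_above(curve, threshold):
    """Band width (max - min wavelength) where pass rate exceeds threshold."""
    passing = [wavelength for wavelength, rate in curve if rate > threshold]
    return (max(passing) - min(passing)) if passing else 0

curve = [(650, 0.02), (800, 0.05), (900, 0.40), (920, 0.85),
         (940, 0.95), (960, 0.85), (980, 0.40), (1100, 0.03)]

half_bandwidth = band_width_above(curve, 0.5)    # pass rate > 50%
third_band_width = band_width_above(curve, 0.3)  # pass rate > set ratio (30%)
```

For this sample curve both constraints hold: the half bandwidth is under 50 nm and the 30%-pass band is under the 100 nm reference width.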
  • since the first light supplement device 021 provides near-infrared supplementary light during at least part of the exposure time period of the first preset exposure and provides no near-infrared supplementary light during the entire exposure time period of the second preset exposure, and since the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplementary light during the exposure time periods of some of the exposures of the image sensor 01 and not during the exposure time periods of the others.
  • the number of supplementary-light operations of the first supplementary light device 021 per unit time may be lower than the number of exposures of the image sensor 01 per unit time, wherein there are one or more exposures within the interval between two adjacent supplementary-light operations.
  • the light supplement device 02 may further include a second light supplement device 022, and the second light supplement device 022 is used for visible light supplement light.
  • the second light supplement device 022 provides visible supplementary light during at least part of the exposure time period of the first preset exposure, that is, near-infrared supplementary light and visible supplementary light are both present during at least part of the exposure time period of the first preset exposure. The mixed color of the two lights can be distinguished from the color of the red light in a traffic light, thereby preventing the human eye from confusing the color of the near-infrared supplementary light of the light supplement device 02 with the color of the red light in a traffic light.
  • if the second light supplement device 022 provides visible supplementary light during the exposure time period of the second preset exposure, then when the intensity of visible light in the scene is not particularly high during that time period, the brightness of visible light in the second image signal can also be increased, thereby ensuring the quality of image collection.
  • the second light supplement device 022 may perform visible supplementary light in a constant-light mode; or it may perform visible supplementary light in a stroboscopic manner, in which visible supplementary light is present during at least part of the exposure time period of the first preset exposure and absent during the entire exposure time period of the second preset exposure; or it may perform visible supplementary light in a stroboscopic manner in which visible supplementary light is absent during at least the entire exposure time period of the first preset exposure and present during part of the exposure time period of the second preset exposure.
  • when the second light supplement device 022 performs visible supplementary light in the constant-light mode, it not only prevents the human eye from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, but also improves the brightness of visible light in the second image signal, thereby ensuring the quality of image collection.
  • when the second light supplement device 022 performs visible supplementary light in a stroboscopic manner, it can either prevent the human eye from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, or improve the brightness of visible light in the second image signal to ensure the quality of image collection, and it can also reduce the number of supplementary-light operations of the second light supplement device 022, thereby prolonging its service life.
  • the filter assembly 03 may further include a second filter 032 and a switching component 033.
  • the second filter 032 can be switched to the light incident side of the image sensor 01 by the switching component 033. After the second filter 032 is switched to the light incident side of the image sensor 01, it passes visible light and blocks near-infrared light, and the image sensor 01 performs exposure to generate and output a third image signal. The face image acquisition device of this embodiment is therefore compatible with the existing image acquisition function, which improves flexibility.
  • the switching component 033 being used to switch the second filter 032 to the light incident side of the image sensor 01 can also be understood as the second filter 032 replacing the first filter 031 at its position on the light incident side of the image sensor 01.
  • the first light supplement device 021 may be in a closed state or an open state.
  • the aforementioned multiple exposure refers to multiple exposures within one frame period, that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
  • 1 second includes 25 frame periods, and the image sensor 01 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal, and the The first image signal and the second image signal are called a group of image signals, so that 25 groups of image signals are generated within 25 frame periods.
  • the first preset exposure and the second preset exposure can be two adjacent exposures in multiple exposures in one frame period, or two non-adjacent exposures in multiple exposures in one frame period. The application embodiment does not limit this.
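The multi-exposure timing can be sketched as follows (the interleaving shown is only one of the many orderings the text allows):

```python
# One frame period containing M first preset exposures and N second
# preset exposures, at 25 frame periods per second. The interleaving
# below is only one of the many orderings the text allows.

def frame_schedule(m, n):
    """Interleave m 'first' and n 'second' exposures in one frame period."""
    slots = []
    for i in range(max(m, n)):
        if i < m:
            slots.append('first')
        if i < n:
            slots.append('second')
    return slots

fps = 25                         # frame periods per second
schedule = frame_schedule(1, 1)  # e.g. M = N = 1
groups_per_second = fps          # one group of image signals per frame period
```

With M = N = 1, each frame period yields one first image signal and one second image signal, i.e. 25 groups per second.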
  • the first image signal is generated and output by the first preset exposure, and the second image signal is generated and output by the second preset exposure; after the first image signal and the second image signal are generated and output, they can be processed.
  • the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
  • the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
  • when the first light supplement device 021 performs near-infrared supplementary light, the intensity of the near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light contained in the first image signal generated and output accordingly will also be higher. However, near-infrared light with too high a brightness is not conducive to acquiring external scene information. Therefore, in some embodiments, the exposure gain of the first preset exposure may be smaller than the exposure gain of the second preset exposure, so that when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 is not too high.
  • the longer the exposure time, the higher the brightness of the image signal obtained by the image sensor 01 and the longer the motion trails of moving objects in the external scene in the image signal; the shorter the exposure time, the lower the brightness and the shorter the motion trails. Therefore, in order to ensure that the brightness of the near-infrared light contained in the first image signal is within an appropriate range and that moving objects in the external scene leave short motion trails in the first image signal, the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
  • when the first light supplement device 021 performs near-infrared supplement light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high.
  • the shorter exposure time makes the motion trailing of the moving object in the external scene appear shorter in the first image signal, thereby facilitating the recognition of the moving object.
  • the exposure time of the first preset exposure is 40 milliseconds
  • the exposure time of the second preset exposure is 60 milliseconds, and so on.
  • the exposure time of the first preset exposure may be less than, or equal to, the exposure time of the second preset exposure.
  • the exposure gain of the first preset exposure may be less than, or equal to, the exposure gain of the second preset exposure.
  • the purposes of the first image signal and the second image signal may be the same.
  • the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure. If the two exposure times differ, the image signal with the longer exposure time will contain motion trailing, resulting in different definitions of the two image signals.
  • the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
  • the exposure gain of the first preset exposure may be less than, or equal to, the exposure gain of the second preset exposure.
  • the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure, or may be equal to the second preset exposure The exposure time.
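The alternation between the two preset exposures described above can be sketched as two parameter sets that the sensor cycles through. This is only an illustration; the class, field names, and numeric values below are hypothetical and not part of this application:

```python
from dataclasses import dataclass

@dataclass
class ExposurePreset:
    """Hypothetical parameter set for one preset exposure."""
    exposure_time_ms: float
    analog_gain: float
    digital_gain: float

# First preset exposure: shorter time and lower gain, since the
# near-infrared supplement light raises the sensed brightness.
first_preset = ExposurePreset(exposure_time_ms=40.0, analog_gain=1.0, digital_gain=1.0)
# Second preset exposure: longer time, no near-infrared supplement light.
second_preset = ExposurePreset(exposure_time_ms=60.0, analog_gain=2.0, digital_gain=1.0)

def preset_for_exposure(index: int) -> ExposurePreset:
    """Alternate the two presets across the sensor's multiple exposures."""
    return first_preset if index % 2 == 0 else second_preset
```

The 40 ms / 60 ms values mirror the example exposure times given above; in practice any of the exposure parameters may differ or be equal between the two presets.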
  • the image sensor 01 may include multiple photosensitive channels, and each photosensitive channel may be used to sense at least one kind of light in the visible light waveband and to sense light in the near-infrared waveband. That is, each photosensitive channel can sense at least one light in the visible light band, such as red light, green light, blue light, and yellow light, as well as light in the near-infrared band.
  • the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
  • each pixel of the image sensor 01 can sense the fill light generated by the light supplement 02, ensuring that the collected infrared light image has complete resolution with no missing pixels.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
  • the R photosensitive channel is used to sense the light in the red and near-infrared bands
  • the G photosensitive channel is used to sense the light in the green and near-infrared bands
  • the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
  • the Y photosensitive channel is used to sense light in the yellow band and the near-infrared band.
  • W can be used to represent the photosensitive channel for sensing full-waveband light, and C can likewise be used to represent the photosensitive channel for sensing full-waveband light; so when the multiple photosensitive channels include a photosensitive channel for sensing full-waveband light, this photosensitive channel may be a W photosensitive channel or a C photosensitive channel. That is, in practical applications, the photosensitive channel used for sensing full-waveband light can be selected according to the use requirements.
  • the image sensor 01 may be an RGB sensor, RGBW sensor, or RCCB sensor, or RYYB sensor.
  • FIG. 15 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
  • Fig. 17 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
  • Fig. 18 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
  • the distribution of the R photosensitive channel, G photosensitive channel and B photosensitive channel in the RGB sensor can be seen in Figure 15.
  • refer to Figure 16 for the distribution of the R, G, B, and W photosensitive channels in the RGBW sensor; refer to Figure 17 for the distribution of the R, C, and B photosensitive channels in the RCCB sensor; and refer to Figure 18 for the distribution of the R, Y, and B photosensitive channels in the RYYB sensor.
  • some photosensitive channels may only sense light in the near-infrared waveband, but not light in the visible light waveband.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels.
  • the R photosensitive channel is used to sense red light and near-infrared light
  • the G photosensitive channel is used to sense green light and near-infrared light
  • the B photosensitive channel is used to sense blue light and near-infrared light.
  • the IR photosensitive channel is used to sense light in the near-infrared band.
  • the image sensor 01 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
  • when the image sensor 01 is an RGB sensor, compared with other image sensors such as RGBIR sensors, the RGB information it collects is more complete. Some of the photosensitive channels of an RGBIR sensor cannot collect visible light, so the color details of the image collected by the RGB sensor are more accurate.
  • FIG. 19 is a schematic diagram of a sensing curve of an image sensor according to an embodiment of the present application.
  • the R curve in Figure 19 represents the sensing curve of image sensor 01 to light in the red light band
  • the G curve represents the sensing curve of image sensor 01 to light in the green light band
  • the B curve represents the sensing curve of image sensor 01 to light in the blue light band, the W (or C) curve represents the sensing curve of image sensor 01 to light in the full band,
  • the NIR (Near infrared) curve represents the sensing curve of the image sensor 01 sensing light in the near infrared band.
  • the image sensor 01 may adopt a global exposure method or a rolling shutter exposure method.
  • the global exposure mode means that the exposure start time of each row of effective images is the same, and the exposure end time of each row of effective images is the same.
  • the global exposure mode is an exposure mode in which all rows of effective images are exposed at the same time and the exposure ends at the same time.
  • rolling shutter exposure mode means that the exposure time periods of different lines of effective images do not completely coincide, that is, the exposure start time of a line of effective images is later than the exposure start time of the previous line, and the exposure end time of a line of effective images is later than the exposure end time of the previous line.
  • in rolling shutter exposure mode, data can be output after each line of effective images finishes exposure; therefore, the time from the start of output of the first line of effective images to the end of output of the last line of effective images can be expressed as the readout time.
  • FIG. 20 is a schematic diagram of a rolling shutter exposure method. As can be seen from Figure 20, the effective image of the first line begins exposure at time T1 and ends exposure at time T3; the effective image of the second line begins exposure at time T2 and ends at time T4, where T2 is shifted backward by a period of time relative to T1 and T4 is shifted backward by a period of time relative to T3. In addition, the effective image of the first line ends exposure at time T3 and begins to output data, with the output ending at time T5; the effective image of line n ends exposure at time T6 and begins to output data, with the output ending at time T7. The time between T3 and T7 is then the readout time.
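The timing relations in Figure 20 can be sketched numerically. The line offset, exposure time, and per-line output time below are made-up values chosen only for illustration:

```python
def rolling_shutter_times(num_lines: int, exposure_ms: float, line_offset_ms: float):
    """Per-line (exposure start, exposure end) times: under rolling shutter,
    each line starts exposing a fixed offset after the previous line."""
    return [(r * line_offset_ms, r * line_offset_ms + exposure_ms)
            for r in range(num_lines)]

def readout_time(lines, line_output_ms: float) -> float:
    """Readout time (T3..T7 in Figure 20): from the first line's exposure
    end, when it starts outputting data, to the end of the last line's
    data output."""
    t3 = lines[0][1]
    t7 = lines[-1][1] + line_output_ms
    return t7 - t3

# A hypothetical 4-line frame with a 10 ms exposure and a 1 ms line offset.
lines = rolling_shutter_times(num_lines=4, exposure_ms=10.0, line_offset_ms=1.0)
```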
  • when the image sensor 01 performs multiple exposures in the global exposure mode, for any one near-infrared fill light, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; and the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
  • FIG. 21 is a schematic diagram of the first type of first preset exposure and the second preset exposure provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the second first preset exposure and the second preset exposure provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a third type of first preset exposure and a second preset exposure provided by an embodiment of the present application. Referring to Figure 21, for any one near-infrared fill light, the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of near-infrared fill light is that of the first preset exposure A subset of the exposure time period.
  • referring to Figure 22, the time period of near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure, and the time period of near-infrared fill light intersects the exposure time period of the first preset exposure.
  • referring to Figure 23, the time period of near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
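The global-exposure constraint above can be expressed as a simple interval predicate. The (start, end) interval representation and the function names are assumptions made here for illustration; note that "subset of", "intersects", and "superset of" all reduce to the two intervals having a non-empty intersection:

```python
def disjoint(a, b):
    """True if half-open time intervals (start, end) do not intersect."""
    return a[1] <= b[0] or b[1] <= a[0]

def fill_light_ok_global(fill, first_exposure, second_exposure):
    """Global-exposure constraint: the fill-light period must not intersect
    the nearest second preset exposure, and must be a subset of, intersect,
    or contain the first preset exposure's period -- all three cases mean
    the fill light and the first preset exposure intersect."""
    return disjoint(fill, second_exposure) and not disjoint(fill, first_exposure)
```

For example, a fill light over (10, 20) satisfies the constraint against a first preset exposure of (5, 25) and a second preset exposure of (30, 40), while one over (28, 35) does not, since it intersects the second preset exposure.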
  • when the image sensor 01 performs multiple exposures in the rolling shutter exposure mode, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure, and no later than the exposure end time of the first line of the effective image in the first preset exposure; and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure, and no later than the exposure start time of the first line of the effective image in the first preset exposure; and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure, and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • FIG. 24 is a schematic diagram of the first rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of a second rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of a third rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
  • referring to Figure 24, the time period of near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • referring to Figure 25, the time period of near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • referring to Figure 26, the time period of near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
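The Figure-24 style rolling-shutter condition, where the fill light must be on while every line of the first-preset frame is exposing, can be sketched as a check over per-line exposure intervals. The interval representation and the numeric values are assumptions for illustration only:

```python
def fill_light_ok_rolling_fig24(fill_start: float, fill_end: float,
                                line_exposures) -> bool:
    """Figure-24 style rolling-shutter constraint: the fill light starts no
    earlier than the exposure start of the LAST line of the first preset
    exposure and ends no later than the exposure end of its FIRST line, so
    every line is exposing during the whole fill-light period."""
    last_line_start = line_exposures[-1][0]
    first_line_end = line_exposures[0][1]
    return fill_start >= last_line_start and fill_end <= first_line_end

# Hypothetical first-preset frame: 4 lines, 10 ms exposure, 1 ms line offset.
lines = [(r * 1.0, r * 1.0 + 10.0) for r in range(4)]
```

With these values, a fill light from 4.0 ms to 9.0 ms satisfies the condition (all lines are exposing), while one starting at 2.0 ms does not, because the last line has not yet begun its exposure.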
  • FIGS. 24 to 26 are only an example, and the sorting of the first preset exposure and the second preset exposure may not be limited to these examples.
  • the first light supplement device 021 can be used to perform stroboscopic supplement light so that the image sensor 01 generates and outputs the first image signal containing near-infrared brightness information and the second image signal containing visible light brightness information. Since the first image signal and the second image signal are both acquired by the same image sensor 01, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so complete information of the external scene can be obtained through the first image signal and the second image signal.
  • when the intensity of visible light is strong, such as during the day, the proportion of near-infrared light passing through is relatively high, and the color reproduction of the collected image is poor.
  • the third image signal containing visible light brightness information can be generated and output by the image sensor 01, so even during the day, images with better color reproduction can be collected, and the true color information of the external scene can be obtained efficiently and simply regardless of the intensity of visible light, or whether it is day or night.
  • the present application uses the exposure timing of the image sensor to control the near-infrared supplementary light timing of the supplementary light device, so that the near-infrared supplementary light is performed during the first preset exposure and the first image signal is generated, and during the second preset exposure It does not perform near-infrared supplementary light and generates a second image signal.
  • this data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost; that is, two different image signals can be acquired through one image sensor, which makes it easier and more efficient for the image acquisition device to acquire the first image signal and the second image signal.
  • the first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, without the problem of misalignment between the image generated from the first image signal and the image generated from the second image signal.
  • the face image acquisition device can use the first image signal and the second image signal generated and output by multiple exposures to perform image processing and face detection to obtain a face image.
  • the face image acquisition method will be described with the face image acquisition device provided based on the embodiment shown in Figs. 1-26.
  • FIG. 27 is a schematic flowchart of an embodiment of a method for acquiring a face image according to an embodiment of the application.
  • the method is applied to a face image acquisition device; the face image acquisition device includes an image sensor, a light supplement, a filter assembly, and an image processor.
  • the light supplement includes a first light supplement device, and the filter assembly includes a first filter.
  • the image sensor is located on the light emitting side of the filter assembly.
  • the method may include:
  • Step 2701 Perform near-infrared light supplementation through the first light-filling device, wherein at least the near-infrared light-filling is performed during the exposure time period of the first preset exposure, and the near-infrared light-filling is not performed during the exposure time period of the second preset exposure Fill light, the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor.
  • Step 2702 Pass the visible light and part of the near-infrared light through the first filter.
  • Step 2703 Perform multiple exposures through the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image signal generated according to the first preset exposure, and the first The second image signal is an image signal generated according to the second preset exposure.
  • Step 2704 Perform image processing and face detection on the first image signal and the second image signal by an image processor to obtain a face image.
  • the image processor includes a processing component and a detection component, and the foregoing step 2704 may specifically include the following steps:
  • the detection component performs face detection processing on the first image and the second image generated by the processing component to obtain the face image.
  • the first image is a grayscale image
  • the first preprocessing includes any one or a combination of the following operations: image interpolation, gamma mapping, color conversion, and image noise reduction;
  • the second image is a color image
  • the second preprocessing includes any one or a combination of the following: white balance, image interpolation, gamma mapping, and image noise reduction.
  • the image processor further includes: a fusion component.
  • the above step 2704 may also include the following steps:
  • the detection component performs face detection processing on the fusion image generated by the fusion component to obtain the face image.
  • the first image is a grayscale image
  • the second image is a color image
  • the above step 2704 may further include the following steps:
  • the brightness information of the color image is extracted by the fusion component to obtain a brightness image;
  • the color information of the color image is extracted to obtain a chrominance image;
  • the brightness image, the chrominance image, and the grayscale image are fused to obtain the face image.
  • the fusion processing includes at least one of the following operations: pixel point-to-point fusion and pyramid multi-scale fusion.
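A minimal sketch of the point-to-point fusion idea (not the exact algorithm of this application): blend the color image's luminance with the near-infrared grayscale image, then reattach the original chroma. The BT.601 luma weights and the 50/50 blend weight are assumptions chosen for illustration:

```python
import numpy as np

def fuse_color_and_gray(color_rgb: np.ndarray, gray: np.ndarray,
                        w: float = 0.5) -> np.ndarray:
    """Pixel point-to-point fusion sketch: blend the color image's luminance
    with the near-infrared grayscale image, keeping the original chroma."""
    color = color_rgb.astype(np.float32)
    # Luminance via the usual BT.601 weights (weights sum to 1.0).
    luma = 0.299 * color[..., 0] + 0.587 * color[..., 1] + 0.114 * color[..., 2]
    fused_luma = w * luma + (1.0 - w) * gray.astype(np.float32)
    # Chroma = per-channel offset from luminance; add back the fused luminance.
    chroma = color - luma[..., None]
    fused = np.clip(chroma + fused_luma[..., None], 0, 255)
    return fused.round().astype(np.uint8)
```

Pyramid multi-scale fusion would instead decompose both images into Laplacian pyramids and blend per level; the point-to-point form above is the simplest case.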
  • the image processor further includes: a fusion component.
  • the above step 2704 may also include the following steps:
  • the first face image and the second face image obtained by the detection component are fused by a fusion component to obtain the face image.
  • the foregoing step 2704 may specifically include the following steps:
  • the detection component is used to calibrate the position and size of the face area according to the facial features detected in the target image, and output the target face image.
  • the target image is any one of the following images: the first image, the second image, the fused image, the combination of the first image and the second image.
  • the target image is any one of the following images: the first image, the second image, and the fused image; then the above step 2704 may specifically include the following steps:
  • the detection component extracts multiple facial feature points in the target image, determines multiple positioning feature points from the multiple facial feature points based on preset facial feature information, determines the face position coordinates based on the multiple positioning feature points, and thereby determines the target face image in the target image.
  • step 2704 may specifically further include the following steps:
  • the detection component detects whether the target face image is obtained by shooting a real face based on the principle of living body detection, and outputs the target face image when it is determined that the target face image is obtained by shooting a real face.
  • the target image is a combination of the first image and the second image
  • the detection component extracts multiple facial feature points in the first image, determines multiple positioning feature points from the multiple facial feature points based on preset facial feature information, determines the first face position coordinates based on the multiple positioning feature points, performs face extraction on the first image according to the first face position coordinates to obtain the first face image, and at the same time performs face extraction on the second image according to the first face position coordinates to obtain the second face image.
  • when the first image is a grayscale image and the second image is a color image, the first face image is a grayscale face image and the second face image is a color face image;
  • when the first image is a color image and the second image is a grayscale image, the first face image is a color face image and the second face image is a grayscale face image. The above step 2704 may specifically include the following steps:
  • the detection component is used to detect whether the grayscale face image is obtained by shooting a real face based on the principle of living body detection, and when it is determined that the grayscale face image is obtained by shooting a real face, to output the extracted color face image.
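Because both frames come from the same sensor and viewpoint, the face position coordinates detected in one image can be reused directly to crop the other. A sketch with a hypothetical (x, y, width, height) box convention and dummy frames:

```python
import numpy as np

def extract_face(image: np.ndarray, box) -> np.ndarray:
    """Crop a face region; box is (x, y, width, height) in pixels."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

# Dummy frames standing in for the grayscale and color images.
gray_frame = np.zeros((480, 640), dtype=np.uint8)
color_frame = np.zeros((480, 640, 3), dtype=np.uint8)

box = (100, 50, 64, 80)  # detected once, in the first image
gray_face = extract_face(gray_frame, box)    # used for living body detection
color_face = extract_face(color_frame, box)  # output if the face is real
```

No registration step is needed between the two crops, which is exactly the advantage of acquiring both image signals with a single sensor.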
  • the image processor further includes: a cache component; the face image acquisition method may further include the following steps:
  • the temporary content is cached by the cache component, and the temporary content includes any one of the following: the first image signal and/or the second image signal output by the image sensor, and the first image and/or the second image obtained by the image processor during processing.
  • the image processor further includes: an image enhancement component; the face image acquisition method may further include the following steps:
  • the target image is enhanced by the image enhancement component to obtain an enhanced target image
  • the enhancement processing includes at least one of the following: contrast enhancement and super-resolution reconstruction
  • the target image is any one of the following images: The first image, the second image, and the face image.
  • the intensity of the near-infrared light passing through the first filter when the first light supplement device performs near-infrared supplement light is higher than the intensity of the near-infrared light passing through the first filter when the first light supplement device does not perform near-infrared supplement light.
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, and the center wavelength and/or the waveband width of the near-infrared light passing through the first filter reach the constraint condition.
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within a wavelength range of 750 ⁇ 10 nanometers;
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 780 ⁇ 10 nanometers;
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 810 ⁇ 10 nanometers;
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 940 ⁇ 10 nanometers.
  • restriction conditions include any one of the following:
  • the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared light supplemented by the first light-filling device lies within the wavelength fluctuation range, and the wavelength fluctuation range is 0 to 20 nanometers;
  • the half bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers
  • the first waveband width is smaller than the second waveband width; wherein, the first waveband width refers to the waveband width of the near-infrared light passing through the first filter, and the second waveband width refers to the waveband width of the near-infrared light passing through the first filter.
  • or, the third waveband width is smaller than the reference waveband width, where the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio, the reference waveband width is any waveband width within the range of 50 nm to 150 nm, and the set ratio is any ratio within the range of 30% to 50%.
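The listed constraint alternatives can be expressed as simple predicates. The thresholds follow the values stated above; the function names and argument conventions are illustrative assumptions:

```python
def center_wavelength_ok(filter_center_nm: float, fill_center_nm: float,
                         fluctuation_nm: float = 20.0) -> bool:
    """First condition: the difference between the filter's and the fill
    light's center wavelengths stays within the 0-20 nm fluctuation range."""
    return abs(filter_center_nm - fill_center_nm) <= fluctuation_nm

def half_bandwidth_ok(half_bandwidth_nm: float) -> bool:
    """Second condition: the half bandwidth of the near-infrared light
    passing through the first filter is less than or equal to 50 nm."""
    return half_bandwidth_nm <= 50.0

def third_band_ok(third_band_nm: float, reference_band_nm: float) -> bool:
    """Third condition: the waveband whose pass rate exceeds the set ratio
    is narrower than the reference waveband width (50-150 nm)."""
    return 50.0 <= reference_band_nm <= 150.0 and third_band_nm < reference_band_nm
```

For example, a 940 nm fill light paired with a 945 nm filter center satisfies the first condition, whereas a 980 nm filter center would not.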
  • at least one exposure parameter of the first preset exposure and the second preset exposure is different, and the at least one exposure parameter is one of exposure time, exposure gain, and aperture size, wherein the exposure gain includes analog gain and/or digital gain.
  • at least one exposure parameter of the first preset exposure and the second preset exposure is the same, and the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, wherein the exposure gain includes analog gain and/or digital gain.
  • the image sensor includes a plurality of light-sensing channels, and each light-sensing channel is used to sense at least one light in the visible light waveband and to sense light in the near-infrared waveband.
  • the image sensor adopts a global exposure mode to perform multiple exposures.
  • the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; and the time period of the near-infrared supplement light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplement light intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplement light.
  • the image sensor adopts a rolling shutter exposure method for multiple exposures.
  • the time period of the near-infrared supplement light does not intersect the exposure time period of the nearest second preset exposure;
  • the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the first line of the effective image in the first preset exposure The end of the exposure;
  • or, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure;
  • or, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the light fill device further includes a second light fill device, and the second light fill device is used to fill light with visible light.
  • the filter assembly further includes a second filter and a switching component, and both the first filter and the second filter are connected to the switching component;
  • the switching component is configured to switch the second filter to the light incident side of the image sensor; after the second filter is switched to the light incident side of the image sensor, the second filter passes visible light and blocks near-infrared light, and the image sensor generates and outputs a third image signal through exposure.
  • the image sensor generates and outputs a first image signal and a second image signal through multiple exposures, where the first image signal is an image signal generated according to a first preset exposure, and the second image signal is According to the image signal generated by the second preset exposure, the first preset exposure and the second preset exposure are two exposures in the multiple exposures.
  • the light supplementer includes a first light supplement device, and the first light supplement device performs near-infrared supplementary light, wherein the near-infrared supplementary light is performed at least during the exposure time period of the first preset exposure, and is not performed during the exposure time period of the second preset exposure.
  • image processing and face detection are performed on the first image signal and the second image signal to obtain a face image.
  • only one image sensor is needed to obtain both visible light images and infrared light images, which reduces cost, and avoids the poor face image quality caused by unsynchronized images from two image sensors due to their process structure and registration and synchronization problems.
  • “At least one” refers to one or more, and “multiple” refers to two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, both A and B exist, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship; in the formula, the character “/” indicates that the associated objects before and after are in a “division” relationship.
  • “The following at least one item (a)” or similar expressions refers to any combination of these items, including any combination of a single item (a) or plural items (a).
  • at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be single or multiple.


Abstract

本申请提供一种人脸图像采集装置及方法，其中，该装置包括：图像传感器、补光器、滤光组件和图像处理器。该图像传感器通过多次曝光产生并输出第一图像信号和第二图像信号，补光器包括的第一补光装置进行近红外补光，至少在第一预设曝光的曝光时间段内进行近红外补光，在第二预设曝光的曝光时间段内不进行近红外补光；滤光组件包括的第一滤光片使可见光和部分近红外光通过；图像处理器用于对第一图像信号和第二图像信号进行图像处理和人脸检测，得到人脸图像。该技术方案中，只需要一个图像传感器即可得到可见光图像和红外光图像，不仅降低了成本，而且提高了人脸图像质量。

Description

人脸图像采集装置及方法
本申请要求于2019年05月31日提交中国专利局、申请号为201910472685.2、申请名称为“人脸图像采集装置及方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及信息处理技术领域,尤其涉及一种人脸图像采集装置及方法。
背景技术
随着科学技术的迅速发展,安全防护产品的应用已经应用到各个领域,例如,政府部门、大型企业、社区和家庭。其中,监控系统属于安全防护产品的重要组成部分,而图像采集可以为后续数据分析提供来源,以实现各种应用场景的不同需求。
相关技术中，可以采用人脸识别摄像机得到监控系统检测到的人脸图像。具体的，该人脸识别摄像机中的图像采集电路首先通过两个图像传感器采集到可见光图像和红外光图像，其次对可见光图像和红外光图像进行融合处理，最后对融合图像进行编码和分析得到人脸图像。
然而，上述摄像机对两个图像传感器的工艺结构以及两者之间的配准和同步要求极高，不仅成本高，而且若配准未达标，会出现得到的人脸图像质量较差的问题。
发明内容
本申请提供一种人脸图像采集装置及方法，以降低人脸图像采集的成本，提升采集到的人脸图像质量。
本申请第一方面提供的一种人脸图像采集装置,包括:图像传感器、补光器、滤光组件和图像处理器;
所述图像传感器用于通过多次曝光产生并输出第一图像信号和第二图像信号，其中，所述第一图像信号是根据第一预设曝光产生的图像信号，所述第二图像信号是根据第二预设曝光产生的图像信号，所述第一预设曝光和所述第二预设曝光为所述多次曝光中的其中两次曝光；
所述补光器包括第一补光装置,所述第一补光装置用于进行近红外补光,其中,至少在所述第一预设曝光的曝光时间段内进行近红外补光,在所述第二预设曝光的曝光时间段内不进行近红外补光;
所述滤光组件包括第一滤光片,所述第一滤光片用于使可见光和部分近红外光通过;
所述图像处理器用于对所述第一图像信号和第二图像信号进行图像处理和人脸检测,得到人脸图像。
本申请第二方面提供一种人脸图像采集方法,应用于人脸图像采集装置,所述人脸图像采集装置包括图像传感器、补光器、滤光组件和图像处理器,所述补光器包括第一补光装置,所述滤光组件包括第一滤光片,所述图像传感器位于所述滤光组件的出光侧,所述方法包括:
通过所述第一补光装置进行近红外补光,其中,至少在第一预设曝光的部分曝光时间段内进行近红外补光,在第二预设曝光的曝光时间段内不进行近红外补光,所述第一预设曝光和所述第二预设曝光为所述图像传感器的多次曝光中的其中两次曝光;
通过所述第一滤光片使可见光和部分近红外光通过;
通过所述图像传感器进行多次曝光,以产生并输出第一图像信号和第二图像信号,所述第一图像信号是根据所述第一预设曝光产生的图像信号,所述第二图像信号是根据所述第二预设曝光产生的图像信号;
通过所述图像处理器对所述第一图像信号和第二图像信号进行图像处理和人脸检测,得到人脸图像。
本申请实施例提供的人脸图像采集装置及方法，图像传感器通过多次曝光产生并输出第一图像信号和第二图像信号，其中，该第一图像信号是根据第一预设曝光产生的图像信号，第二图像信号是根据第二预设曝光产生的图像信号，第一预设曝光和第二预设曝光为多次曝光中的其中两次曝光，补光器包括第一补光装置，该第一补光装置进行近红外补光，其中，至少在第一预设曝光的曝光时间段内进行近红外补光，在第二预设曝光的曝光时间段内不进行近红外补光，图像处理器用于对第一图像信号和第二图像信号进行图像处理和人脸检测，得到人脸图像。该技术方案中，只需要一个图像传感器即可得到可见光图像和红外光图像，降低了成本，而且避免了两个图像传感器由于工艺结构以及两者配准和同步问题得到的图像不同步，导致人脸图像质量较差的问题。
附图说明
图1为本申请实施例提供的一种人脸图像采集装置结构示意图;
图2为本申请实施例中一种图像处理器的结构示意图;
图3为处理组件对第一图像信号和第二图像信号进行处理的流程示意图;
图4为本申请实施例中另一种图像处理器的结构示意图;
图5为融合组件对彩色图像和灰度图像进行融合处理的结构示意图;
图6为本申请实施例中再一种图像处理器的结构示意图;
图7为本申请实施例中检测组件进行人脸检测处理的流向图;
图8为本实施例中检测组件对彩色图像和灰度图像进行处理的一种示意图;
图9为本实施例中检测组件对彩色图像和灰度图像进行处理的另一种示意图;
图10为本申请实施例中又一种图像处理器的结构示意图;
图11为本申请实施例中又一种图像处理器的结构示意图;
图12是本申请实施例提供的一种第一补光装置进行近红外补光的波长和相对强度之间的关系示意图;
图13为第一滤光片可以通过的光的波长与通过率之间的关系的一种示意图;
图14为本申请实施例提供的另一种人脸图像采集装置结构示意图;
图15是本申请实施例提供的一种RGB传感器的示意图;
图16是本申请实施例提供的一种RGBW传感器的示意图;
图17是本申请实施例提供的一种RCCB传感器的示意图;
图18是本申请实施例提供的一种RYYB传感器的示意图;
图19是本申请实施例提供的一种图像传感器的感应曲线示意图;
图20为一种卷帘曝光方式的示意图;
图21是本申请实施例提供的第一种第一预设曝光和第二预设曝光的示意图;
图22是本申请实施例提供的第二种第一预设曝光和第二预设曝光的示意图;
图23是本申请实施例提供的第三种第一预设曝光和第二预设曝光的示意图;
图24是本申请实施例提供的第一种卷帘曝光方式和近红外补光的示意图;
图25是本申请实施例提供的第二种卷帘曝光方式和近红外补光的示意图;
图26是本申请实施例提供的第三种卷帘曝光方式和近红外补光的示意图;
图27为本申请实施例提供的人脸图像采集方法实施例的流程示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提出了一种人脸图像采集装置及方法，至少可以降低摄像机成本、提升人脸图像质量，图像传感器通过多次曝光产生并输出第一图像信号和第二图像信号，其中，该第一图像信号是根据第一预设曝光产生的图像信号，第二图像信号是根据第二预设曝光产生的图像信号，第一预设曝光和第二预设曝光为多次曝光中的其中两次曝光，补光器包括第一补光装置，该第一补光装置进行近红外补光，其中，至少在第一预设曝光的曝光时间段内存在近红外补光，在第二预设曝光的曝光时间段内不存在近红外补光，图像处理器用于对第一图像信号和第二图像信号进行图像处理和人脸检测，得到人脸图像。该技术方案中，只需要一个图像传感器即可得到可见光图像和红外光图像，降低了成本，而且避免了两个图像传感器由于工艺结构以及两者配准和同步问题得到的图像不同步，导致人脸图像质量较差的问题。
下面,通过具体实施例对本申请的技术方案进行详细说明。需要说明的是,下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图1为本申请实施例提供的一种人脸图像采集装置结构示意图。如图1所示，该人脸图像采集装置可以包括：图像传感器01、补光器02、滤光组件03、镜头组件04和图像处理器05，该图像传感器01位于滤光组件03的出光侧，该图像处理器05位于该图像传感器01之后。
在本申请的实施例中,图像传感器01用于通过多次曝光产生并输出第一图像信号和第二图像信号。其中,第一图像信号是根据第一预设曝光产生的图像信号,第二图像信号是根据第二预设曝光产生的图像信号,第一预设曝光和第二预设曝光为该多次曝光中的其中两次曝光。
值得说明的是，该第一图像信号和第二图像信号是通过拍摄人获取到的，即第一图像信号和第二图像信号中均包括人脸区域。
补光器02包括第一补光装置021,第一补光装置021用于进行近红外补光,其中,至少在第一预设曝光的部分曝光时间段内存在近红外补光,在第二预设曝光的曝光时间段内不存在近红外补光。通过第一补光装置021的上述补光提升了信号采集能力,有利于提升图像质量。
滤光组件03包括第一滤光片031，第一滤光片031使可见光和部分近红外光通过，其中，第一补光装置021进行近红外光补光时通过第一滤光片031的近红外光的强度高于第一补光装置021未进行近红外补光时通过第一滤光片031的近红外光的强度。通过第一滤光片031的滤光可以使得在近红外补光时间段内可获得近红外图像信号，在非补光时间段内获得色彩准确的可见光图像信号。
在本申请实施例中,滤光组件03可以位于镜头04和图像传感器01之间,且图像传感器01位于滤光组件03的出光侧。或者,镜头04位于滤光组件03与图像传感器01之间,且图像传感器01位于镜头04的出光侧。作为一种示例,第一滤光片031可以是滤光薄膜,这样,当滤光组件03位于镜头04和图像传感器01之间时,第一滤光片031可以贴在镜头04的出光侧的表面,或者,当镜头04位于滤光组件03与图像传感器01之间时,第一滤光片031可以贴在镜头04的入光侧的表面。
在本实施例中,该滤光组件03可以控制图像传感器接收到的光谱范围,例如,第一补光装置产生的补光和可见光能够通过,而阻止其他光谱波段的光通过,保证有效利用补光的前提下,尽量减少其他光源的影响。
该图像处理器05用于对第一图像信号和第二图像信号进行图像处理和人脸检测,得到人脸图像。
在本实施例中,图像处理器05可以接收该图像传感器01传输的第一图像信号和第二图像信号,并通过对该第一图像信号和第二图像信号进行人脸分析和处理后,得到第一图像信号和第二图像信号中的人脸图像,从而实现人脸抓拍的功能。
本申请实施例提供的人脸图像采集装置,通过包括图像传感器、补光器、滤光组件和图像处理器,通过图像传感器的多次曝光、补光器的补光、滤光组件的滤光作用,利用单个图像传感器也可以得到多幅具有不同光谱范围的第一图像信号和第二图像信号,扩展了单传感器的图像采集能力,提升不同场景下的图像质量,利用图像处理器对获取到的第一图像信号和第二图像信号进行处理和分析,能够输出人脸图像,从而实现了该装置的人脸抓拍或采集功能。
作为一种示例,人脸图像采集装置可以包括图像采集单元和图像处理单元,该图像采集单元可以包括上述图像传感器、补光器、滤光组件和镜头组件。作为一种示例,该图像采集单元可以是包括上述各组件的图像采集设备,其中的补光器作为图像采集设备的一部分实现补光的功能,例如,摄像机、抓拍机、人脸识别相机、读码相机、车载相机、全景细节相机等;作为另一种示例,该图像采集单元还可以通过图像采集设备和补光器02连接形式实现,该补光器02位于图像采集设备的外部与图像采集设备进行连接。该图像处理单元可以是图像处理器,其具有数据处理和分析的能力,分析出图像信号中的人脸图像。由于本申请中的第一图像信号和第二图像信号的质量好,相应的,提高了人脸检测的准确度。
可选的,在本申请的实施例中,上述图像传感器01的曝光时序与补光器02包括的第一补光装置021的近红外补光时序存在一定的关系,例如,至少在第一预设曝光的部分曝光时间段内存在近红外补光,在第二预设曝光的曝光时间段内不存在近红外补光。
示例性的,在上述实施例的基础上,图2为本申请实施例中一种图像处理器的结构示意图。如图2所示,上述图像处理器05可以包括:处理组件051和检测组件052。
其中,该处理组件051用于对上述第一图像信号进行第一预处理生成第一图像,以及对第二图像信号进行第二预处理生成第二图像。
该检测组件052用于对处理组件051生成的第一图像和第二图像进行人脸检测处理,得到人脸图像。
在本实施例中,检测组件052可以对图像(例如,第一图像和第二图像)进行内容分析,若检测到图像中有人脸特征信息出现时,能够获取人脸区域所在位置,提取出人脸图像,实现了人脸抓拍的功能。
图像处理器是一个处理图像信号的计算平台,具有多种典型的实现方式。
示例性的,图2所示的图像处理器的实现方式是一种较为节省计算资源的典型实现方式。在本实施例中,图像传感器01采集到的第一图像信号和第二图像信号经过处理组件051的图像预处理后,生成第一图像和第二图像,该检测组件052再对从处理组件051接收到的第一图像和第二图像进行检测处理,从而输出人脸图像。
示例性的,该第一图像可以是灰度图像,该第二图像可以是彩色图像。可选的,该灰度图像可以通过黑白图像的形式体现,下述所述的灰度图像均可以通过黑白图像或者黑白比例不同的灰度图像来体现,其可以通过实际情况设定,此处不再赘述。
可选的,该第一图像为灰度图像时,该第一预处理可以包括如下操作中的任意一种或多种的组合:图像插值、伽马映射、色彩转换和图像降噪。第二图像为彩色图像时,该第二预处理可以包括如下任意一种或多种的组合:白平衡、图像插值、伽马映射和图像降噪。
值得说明的是,本申请实施例并不限定上述第一预处理和第二预处理具体包括的操作,其可以根据实际情况确定,此处不再赘述。
在本实施例中,处理组件可以包含白平衡、图像插值、色彩转换、伽马映射和图像降噪等图像常规处理,对上述第一图像信号和第二图像信号可以采用不同的处理流程和参数,从而得到质量不同或色彩度等不同的第一图像和第二图像。
例如,图3为处理组件对第一图像信号和第二图像信号进行处理的流程示意图。如图3所示,处理组件采用第一处理参数对第一图像信号进行例如图像插值、伽马映射、色彩转换和图像降噪等处理操作中的一种或多种的组合,可以得到灰度图像,采用第二处理参数对第二图像信号进行例如白平衡、图像插值、伽马映射和图像降噪处理等处理操作中的一种或多种的组合,可以得到彩色图像。
本实施例中的处理组件能够灵活的选择合适的处理参数和图像信号组合,使得最后输出的人脸图像的图像质量更优。
值得说明的是,在本实施例中,上述第一图像信号和第二图像信号是相对的概念,名称上可以互换。图3以对第一图像信号进行图像插值、伽马映射、色彩转换和图像降噪等得到灰度图像,以对第二图像信号进行白平衡、图像插值、伽马映射和图像降噪等得到彩色图像进行举例说明,本申请实施例并不对其进行限定。
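上述“两路信号、两套处理参数”的预处理流程可用如下Python草图示意（仅为示意性实现：像素以归一化数值列表表示，函数名、伽马值与白平衡增益均为本示例的假设，图像插值、色彩转换等步骤从略，并非本申请的实际实现）：

```python
def gamma_map(pixels, gamma):
    # 伽马映射：对归一化像素值做幂次变换（示意实现）
    return [p ** gamma for p in pixels]

def denoise_mean(pixels):
    # 图像降噪：用相邻像素均值平滑代替实际降噪算法（示意实现）
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def preprocess_first(signal, gamma=0.8):
    # 第一预处理（第一处理参数）：伽马映射 + 图像降噪，输出灰度图像（示意）
    return denoise_mean(gamma_map(signal, gamma))

def preprocess_second(signal, gamma=1.0, wb_gain=1.1):
    # 第二预处理（第二处理参数）：白平衡增益 + 伽马映射 + 图像降噪，输出彩色图像（示意）
    balanced = [min(1.0, p * wb_gain) for p in signal]
    return denoise_mean(gamma_map(balanced, gamma))
```

实际装置中两路处理参数可按第一、第二图像信号的光谱特性分别配置，此处仅示意其结构。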
示例性的，该图像处理器05除了可以包括处理组件051和检测组件052之外，还可以包括：融合组件053。本实施例中，通过融合组件053能够提取出不同图像的信息，并将不同图像之间的信息进行合并，实现信息量最大化，提升了图像质量。
作为一种示例,图4为本申请实施例中另一种图像处理器的结构示意图。如图4所示,该融合组件053可以位于处理组件051和检测组件052之间。
可选的,该融合组件053用于对处理组件051生成的第一图像和第二图像进行融合处理生成融合图像。
该检测组件052具体用于对该融合组件生成的融合图像进行人脸检测处理,得到人脸图像。
在本实施例中,处理组件051对采集到的第一图像信号和第二图像信号进行图像预处理,生成第一图像和第二图像,并将该第一图像和第二图像发送给融合组件,利用融合组件053对接收到的第一图像和第二图像进行融合,生成融合图像,该检测组件对接收到的融合图像进行内容检测与分析,输出人脸图像。
在本实施例中,该融合组件053分别提取出第一图像和第二图像的信息进行融合,实现信息量最大化,输出融合图像。
示例性的,在本实施例中,当第一图像为灰度图像,第二图像为彩色图像时,该融合组件053具体用于提取彩色图像的亮度信息得到亮度图像、提取彩色图像的色彩信息得到色彩图像,以及对该亮度图像、该色彩图像以及上述灰度图像进行融合处理,得到人脸图像。
其中,该融合处理包括如下操作中的至少一种:像素点对点融合、金字塔多尺度融合。
在本实施例中，例如，图5为融合组件对彩色图像和灰度图像进行融合处理的结构示意图。如图5所示，该融合组件053可以提取出彩色图像的亮度图像和色彩图像，并与灰度图像进行融合，例如，像素点对点融合、金字塔多尺度融合等，关于各图像的融合权重可由用户进行配置，也可以通过图像亮度、纹理等信息计算得到，从而输出信噪比提升的彩色融合图像。
可选的,该融合组件053提取出彩色图像的亮度图像和色彩图像之后,将亮度图像与灰度图像进行融合得到融合后的亮度图像,再将融合后的亮度图像与色彩图像进行合并,输出彩色的融合图像。示例性的,可以通过如下公式确定:
y_FUS = ω×y_VIS + (1-ω)×y_NIR
其中，y_FUS表示融合图像，y_VIS表示亮度图像，y_NIR表示灰度图像，ω表示融合权重。
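上述亮度融合公式及其与色彩图像的合并可用如下Python草图示意（仅为示意性实现：图像以归一化亮度值的一维列表表示，函数名与色度的表示方式均为本示例的假设）：

```python
def fuse_luminance(y_vis, y_nir, w):
    # 像素点对点融合：y_FUS = ω×y_VIS + (1-ω)×y_NIR
    assert len(y_vis) == len(y_nir) and 0.0 <= w <= 1.0
    return [w * v + (1.0 - w) * n for v, n in zip(y_vis, y_nir)]

def merge_color(y_fus, chroma):
    # 将融合后的亮度图像与色彩图像逐像素合并，输出彩色融合图像（示意：打包亮度与色度）
    return list(zip(y_fus, chroma))
```

其中融合权重 w 可由用户配置，也可如正文所述由图像亮度、纹理等信息计算得到。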
作为另一种示例,图6为本申请实施例中再一种图像处理器的结构示意图。如图6所示,该融合组件053可以位于检测组件052之后。
可选的,在本实施例中,如图6所示,该检测组件052具体用于对处理组件051生成的第一图像和第二图像进行人脸检测处理,分别得到第一人脸图像和第二人脸图像。
该融合组件053具体用于对该检测组件052得到的第一人脸图像和第二人脸图像进行融合处理,得到人脸图像。
例如,在本实施例中,图像传感器采集到的第一图像信号和第二图像信号经过处理组件051的图像预处理后,生成彩色图像和灰度图像,该检测组件052对接收到的彩色图像、灰度图像进行检测处理,输出彩色人脸图像和灰度人脸图像,该融合组件对彩色人脸图像、灰度人脸图像进行融合处理,生成融合后的人脸图像。
示例性的,在本实施例中,该检测组件052具体用于根据在目标图像中检测到的面部特征进行人脸区域位置和大小标定,输出目标人脸图像,该目标图像为如下图像中的任意一种:第一图像、第二图像、第一图像和第二图像的融合图像、第一图像和所述第二图像的组合。
示例性的，当目标图像为单个图像时，例如，第一图像、第二图像、第一图像和第二图像的融合图像等，检测组件052的检测原理均可以如图7所示。具体的，图7为本申请实施例中检测组件进行人脸检测处理的流向图。如图7所示，当本实施例中的目标图像为如下图像中的任意一种：第一图像、第二图像、融合图像时，该检测组件052具体用于提取目标图像中的多个面部特征点，基于预设的面部特征信息从上述多个面部特征点中确定出满足面部规则的多个定位特征点，基于该多个定位特征点确定人脸位置坐标，确定该目标图像中的目标人脸图像。
例如,当目标图像为第一图像时,该目标人脸图像为第一人脸图像;当目标图像为第二图像时,该目标人脸图像为第二人脸图像;当目标图像为第一图像和第二图像的融合图像时,该目标人脸图像为第一图像和第二图像的融合后的融合人脸图像等。
可选的,在图7所示的示意图中,本实施例中的提取目标图像中的多个面部特征点通常也称为特征点提取,基于预设的面部特征信息从上述多个面部特征点中确定出满足面部规则的多个定位特征点实际上指的是特征点对比和特征点定位。示例性的,特征点提取与特征点比对的一种典型实现方式是根据人脸器官的形状描述以及他们之间的距离特性来获得有助于人脸分类的特征数据,该特征数据通常包括特征点间的欧氏距离、曲率和角度等。由于人脸由眼睛、鼻子、嘴、下巴等局部构成,对这些局部和它们之间结构关系的几何描述,可作为检测出人脸区域的重要特征。若检测到满足面部规则的特征点,则进行特征点定位,获取人脸位置坐标,提取出人脸图像。
关于对第一图像、第二图像以及融合图像的处理过程类似,均可以基于上述对目标图像的处理过程实现,此处不再一一赘述。
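上述“特征点提取—面部规则判断—特征点定位—确定人脸位置坐标”的过程可用如下草图示意（面部规则简化为两眼间距的欧氏距离阈值判断，特征点顺序与阈值均为本示例的假设，实际检测算法远比此复杂）：

```python
import math

def euclidean(p, q):
    # 特征点间的欧氏距离
    return math.hypot(p[0] - q[0], p[1] - q[1])

def face_box(points, min_eye_dist=10.0):
    # points 依次为左眼、右眼、鼻尖、嘴角等面部特征点坐标（本示例假设的点位顺序）
    # 面部规则（示意）：两眼间距需超过阈值，否则视为不满足面部规则
    if euclidean(points[0], points[1]) < min_eye_dist:
        return None
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # 人脸位置坐标：以定位特征点的外接矩形 (x, y, w, h) 给出
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```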
进一步的,在本申请的实施例中,如图7所示,该检测组件052不仅用于确定该目标图像中的目标人脸图像,还可以用于基于活体检测原理检测该目标人脸图像是否由拍摄真实人脸获得,并在确定该目标人脸图像由拍摄真实人脸获得时输出该目标人脸图像。
具体的,检测组件052在确定出目标图像中的目标人脸图像之后,可以对该目标人脸图像进行检测,以验证该目标人脸图像是否由拍摄真实人脸获得。示例性的,检测组件052可以利用真实的人脸和纸片、屏幕、立体面具等伪造人脸具有不同的红外反射特性的特点来区分目标人脸图像的来源。
在本实施例中,通过对目标人脸图像进行活体检测能够保证获取到的人脸图像由拍摄真实人脸获得,其保证了抓拍到的人脸的真实性。
在本申请的另一种可能设计中,本实施例中的检测组件可以同时处理两幅图像,并可以根据需要输出一幅人脸图像或两幅人脸图像。
示例性的,上述目标图像为第一图像和第二图像的组合。
此时,该检测组件052具体用于提取该第一图像中的多个面部特征点,基于预设的面部特征信息从多个面部特征点中确定出满足面部规则的多个定位特征点,基于该多个定位特征点确定第一人脸位置坐标,根据该第一人脸位置坐标和第一图像进行人脸提取得到第一人脸图像,同时根据该第一人脸位置坐标和该第二图像进行人脸提取得到第二人脸图像。
作为一种实现方式,该第一图像为灰度图像,该第二图像为彩色图像时,则第一人脸图像为灰度人脸图像,第二人脸图像为彩色人脸图像。
因而,在该实现方式中,该检测组件052还用于基于活体检测原理检测该灰度人脸图像是否由拍摄真实人脸获得,并在确定该灰度人脸图像由拍摄真实人脸获得时输出灰度人脸图像,以及基于提取得到的第二人脸图像输出彩色人脸图像。
作为一种示例，当输入的图像为彩色图像和灰度图像时，图8为本实施例中检测组件对彩色图像和灰度图像进行处理的一种示意图。如图8所示，本实施例中，检测组件052首先对具有更高信噪比的灰度图像进行特征点提取、特征点对比和特征点定位等步骤的人脸标定，获取人脸位置坐标，并在灰度图像中提取出灰度人脸图像，并进行活体检测处理，判断该灰度人脸图像是否由拍摄真实人脸获得，若是，则在彩色图像中提取出彩色人脸图像，从而输出灰度人脸图像、彩色人脸图像，或者，输出彩色人脸图像。
关于检测组件具有输出一幅人脸图像、两幅人脸图像还是更多的人脸图像可以根据实际需要确定,此处不再赘述。
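上述“先在一路图像上标定人脸位置、活体检测通过后再按同一坐标在另一路图像中提取人脸”的流程可示意如下（图像以二维列表表示，活体检测以占位判断函数 is_live 代替，函数名均为本示例的假设）：

```python
def crop(image, box):
    # 按人脸位置坐标 (x, y, w, h) 从二维图像中裁出人脸区域
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def extract_faces(gray_img, color_img, box, is_live):
    # 先在灰度图像上按 box 提取灰度人脸图并做活体检测（is_live 为占位函数），
    # 通过后再按同一人脸位置坐标在彩色图像中提取彩色人脸图
    gray_face = crop(gray_img, box)
    if not is_live(gray_face):
        return None, None
    return gray_face, crop(color_img, box)
```

由于两路图像由同一图像传感器产生、视点相同，同一组人脸位置坐标可直接复用于两幅图像。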
作为另一种实现方式,该第一图像为彩色图像,第二图像为灰度图像时,该第一人脸图像为彩色人脸图像,第二人脸图像为灰度人脸图像。
这时,检测组件052还用于基于活体检测原理检测该灰度人脸图像是否由拍摄真实人脸获得,并在确定该灰度人脸图像由拍摄真实人脸获得时,基于提取得到的第一人脸图像输出彩色人脸图像。
作为另一种示例，当输入的图像为彩色图像和灰度图像时，图9为本实施例中检测组件对彩色图像和灰度图像进行处理的另一种示意图。如图9所示，本实施例中，检测组件052首先对彩色图像进行特征点提取、特征点对比和特征点定位，获取人脸位置坐标，再根据该人脸位置坐标从灰度图像中提取出灰度人脸图像，并进行活体检测处理，判断该灰度人脸图像是否由拍摄真实人脸获得，若是，则在彩色图像中提取出彩色人脸图像，从而输出彩色人脸图像。
值得说明的是,该种实现方式以输出一幅彩色人脸图像进行举例说明。实际上,该种实现方式也可以输出灰度人脸图像和彩色人脸图像等两幅图像。本申请实施例不对每种实现方式具体输出的图像幅数进行限定,其可以根据实现需要确定,此处不再赘述。
假设本申请实施例中的人脸图像采集装置为人脸抓拍机，那么该人脸抓拍机输出的图像可以是灰度人脸图像、彩色人脸图像、灰度人脸图像和彩色人脸图像融合后的人脸图像当中的一个或多个。
示例性的,在本申请的实施例中,上述图像处理器还包括:缓存组件054。该缓存组件054可以位于处理组件051之前,也可以位于处理组件051之后。
可选的,在本实施例中,该缓存组件054用于缓存临时内容,该临时内容包括该图像传感器01输出的第一图像信号和/或第二图像信号;或者,该临时内容包括该图像处理器05在处理过程中得到的第一图像和/或第二图像。
示例性的,图10为本申请实施例中又一种图像处理器的结构示意图。如图10所示,本实施例中的图像处理器05具有图像同步的功能,具体的,若后续模块(例如,处理组件051)需要同时对第一图像信号和第二图像信号进行处理,这时,该缓存组件054可以位于处理组件051之前,并利用该缓存组件054来存储先采集到的第一图像信号和/或第二图像信号,待接收到第二图像信号和/或第一图像信号后,再对其进行处理,实现了第一图像信号和第二图像信号之间的同步。也即,本实施例中的缓存组件054能够通过缓存图像,实现了具有不同曝光时间段的图像之间的同步。
值得说明的是,该缓存组件054可以存储的图像可以是图像传感器采集到的原始图像信号(第一图像信号或第二图像信号),也可以是该图像处理器在处理过程中得到的第一图像和/或第二图像以及第一人脸图像和/或第二人脸图像等。本申请实施例并不限定该缓存组件054缓存的内容,其可以根据实际情况确定,此处不再赘述。
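缓存组件实现两路曝光图像同步的思路可用如下草图示意（以“先到先存、凑齐一组后配对输出”的方式，类名与字段名均为本示例的假设）：

```python
class FrameBuffer:
    # 缓存先到达的一路图像信号，待另一路到达后配对输出，
    # 实现具有不同曝光时间段的图像之间的同步（示意实现）
    def __init__(self):
        self.pending = {}

    def push(self, kind, frame):
        # kind: 'first' 或 'second'，分别对应第一/第二图像信号
        other = 'second' if kind == 'first' else 'first'
        if other in self.pending:
            if kind == 'first':
                return (frame, self.pending.pop(other))
            return (self.pending.pop(other), frame)  # (第一图像信号, 第二图像信号)
        self.pending[kind] = frame
        return None
```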
进一步的,本申请的实施例中,该图像处理器05还可以具有降噪功能,示例性的,该图像处理器05可采用信噪比较高的灰度图像作为引导,对彩色图像和灰度图像进行联合降噪,如导向滤波、联合双边滤波等,获取信噪比提升的彩色图像和灰度图像。
示例性的，在本申请的实施例中，上述图像处理器还包括：图像增强组件055。该图像增强组件055可以位于处理组件051之后、检测组件052之前，也可以位于检测组件052之后。当该图像处理器05包括融合组件053时，该图像增强组件055还可以位于融合组件053之前，关于该图像增强组件055的具体设置位置可以根据应用需求或资源情况灵活的配置，本实施例并不对其进行限定。
示例性的，图11为本申请实施例中又一种图像处理器的结构示意图。如图11所示，本实施例以图像增强组件055位于检测组件052之后进行解释说明。具体的，该图像增强组件055用于对目标图像进行增强处理，得到增强处理后的目标图像，该增强处理包括如下至少一种：对比度增强、超分辨率重建，该目标图像为如下图像中的任意一种：第一图像、第二图像、第一图像和第二图像的融合图像、人脸图像。示例性的，在图11中，以目标图像为人脸图像进行示意性说明。
在本实施例中,该图像处理器05具有图像增强处理的功能,其对接收到的第一图像或第二图像或人脸图像等进行如对比度增强、超分辨率等的增强处理,输出质量提升的人脸图像。
具体的,该图像处理器05通过超分辨率重建对低分辨率的人脸小图进行处理,生成高分辨率的人脸大图,提升图像质量。该超分辨率重建处理可以采用基于插值、基于重建、基于学习的方法,此处不再赘述。
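其中基于插值的重建思路可用最简单的近邻插值放大草图示意（仅为示意，实际的超分辨率重建算法远比此复杂，函数名为本示例假设）：

```python
def upscale_nearest(image, factor):
    # 基于插值的放大（近邻插值示意）：将低分辨率人脸小图放大为高分辨率大图
    out = []
    for row in image:
        # 行内每个像素横向复制 factor 次，再将整行纵向复制 factor 次
        wide = [p for p in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out
```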
在本申请的上述任意一种实施例中，第一补光装置021可以进行频闪补光，即可以进行不同补光状态的高频切换，按照第一预设曝光进行图像采集时采用第一状态补光，按照第二预设曝光进行图像采集时采用第二状态补光，该第一状态补光和第二状态补光可以采用不同的补光配置，其参数包含但不限于补光类型、补光强度（包含关闭状态）、补光时长等，从而可以扩展图像传感器01能够接收到的光谱范围。
示例性的,第一补光装置021为可以发出近红外光的装置,例如近红外补光灯等,第一补光装置021可以以频闪方式进行近红外补光,也可以以类似频闪的其他方式进行近红外补光,本申请实施例对此不做限定。
在一些示例中,当第一补光装置021以频闪方式进行近红外补光时,可以通过手动方式来控制第一补光装置021以频闪方式进行近红外补光,也可以通过软件程序或特定设备来控制第一补光装置021以频闪方式进行近红外补光,本申请实施例对此不做限定。其中,第一补光装置021进行近红外补光的时间段可以与第一预设曝光的曝光时间段重合,也可以大于第一预设曝光的曝光时间段或者小于第一预设曝光的曝光时间段,只要在第一预设曝光的整个曝光时间段或者部分曝光时间段内存在近红外补光,而在第二预设曝光的曝光时间段内不存在近红外补光即可。
在本实施例中,图像传感器的曝光时长与第一补光装置的补光时长满足一定约束,若第一补光状态补光开启了红外补光,则其补光时间段不能与第二图像信号的曝光时间段有重合;同样,若第二状态补光开启了红外补光,则其补光时间段不能与第一图像信号的曝光时间段有重合,实现多光谱图像采集。
需要说明的是,第二预设曝光的曝光时间段内不存在近红外补光,对于全局曝光方式来说,第二预设曝光的曝光时间段可以是开始曝光时刻和结束曝光时刻之间的时间段,对于卷帘曝光方式来说,第二预设曝光的曝光时间段可以是第二图像信号第一行有效图像的开始曝光时刻与最后一行有效图像的结束曝光时刻之间的时间段,但并不局限于此。例如,第二预设曝光的曝光时间段也可以是第二图像信号中目标图像对应的曝光时间段,目标图像为第二图像信号中与目标对象或目标区域所对应的若干行有效图像,这若干行有效图像的开始曝光时刻与结束曝光时刻之间的时间段可以看作第二预设曝光的曝光时间段。
在本实施例中,图像传感器可以按照第一预设曝光生成第一图像信号,按照第二预设曝光生成第二图像信号,该第一预设曝光和该第二预设曝光可以采用相同或不同的曝光参数,包含但不限于曝光时长、增益、光圈大小等,能够与补光灯状态匹配,实现多光谱图像采集。
需要说明的另一点是，由于第一补光装置021在对外部场景进行近红外补光时，入射到物体表面的近红外光可能会被物体反射，从而进入到第一滤光片031中。并且由于通常情况下，环境光可以包括可见光和近红外光，且环境光中的近红外光入射到物体表面时也会被物体反射，从而进入到第一滤光片031中。因此，在存在近红外补光时通过第一滤光片031的近红外光可以包括第一补光装置021进行近红外补光时经物体反射进入第一滤光片031的近红外光，在不存在近红外补光时通过第一滤光片031的近红外光可以包括第一补光装置021未进行近红外补光时经物体反射进入第一滤光片031的近红外光。
也即是,在存在近红外补光时通过第一滤光片031的近红外光包括第一补光装置021发出的且经物体反射后的近红外光,以及环境光中经物体反射后的近红外光,在不存在近红外补光时通过第一滤光片031的近红外光包括环境光中经物体反射后的近红外光。
以本实施例的人脸图像采集装置中,滤光组件03可以位于镜头04和图像传感器01之间,且图像传感器01位于滤光组件03的出光侧的结构特征为例,图像采集装置采集第一图像信号和第二图像信号的过程为:在图像传感器01进行第一预设曝光时,第一补光装置021存在近红外补光,此时拍摄场景中的环境光和第一补光装置进行近红外补光时被场景中物体反射的近红外光经由镜头04、第一滤光片031之后,由图像传感器01通过第一预设曝光产生第一图像信号;在图像传感器01进行第二预设曝光时,第一补光装置021不存在近红外补光,此时拍摄场景中的环境光经由镜头04、第一滤光片031之后,由图像传感器01通过第二预设曝光产生第二图像信号,在图像采集的一个帧周期内可以有M个第一预设曝光和N个第二预设曝光,第一预设曝光和第二预设曝光之间可以有多种组合的排序,在图像采集的一个帧周期中,M和N的取值以及M和N的大小关系可以根据实际需求设置,例如,M和N的取值可相等,也可不相同。
另外,由于环境光中的近红外光的强度低于第一补光装置021发出的近红外光的强度,因此,第一补光装置021进行近红外补光时通过第一滤光片031的近红外光的强度高于第一补光装置021未进行近红外补光时通过第一滤光片031的近红外光的强度。
其中,入射到第一滤光片031的近红外光的波段范围可以为第一参考波段范围,第一参考波段范围为650纳米~1100纳米。第一补光装置021进行近红外补光的波段范围可以为第二参考波段范围,第二参考波段范围可以为700纳米~800纳米,或者900纳米~1000纳米等,本申请实施例对此不做限定。
示例性的，在本实施例中，补光器包含的补光装置，其补光灯类型可以是可见光、红外光或二者的组合，其近红外补光的能量集中于650nm~1000nm中的某一段，具体的，能量集中于700nm~800nm范围内，或集中于900nm~1000nm范围内，这样可以避开800nm~900nm波段内常见的850nm红外灯的影响，以避免与交通信号灯造成混淆。
由于在存在近红外补光时,通过第一滤光片031的近红外光可以包括第一补光装置021进行近红外光补光时经物体反射进入第一滤光片031的近红外光,以及环境光中的经物体反射后的近红外光。所以此时进入滤光组件03的近红外光的强度较强。但是,在不存在近红外补光时,通过第一滤光片031的近红外光包括环境光中经物体反射进入滤光组件03的近红外光。由于没有第一补光装置021进行补光的近红外光,所以此时通过第一滤光片031的近红外光的强度较弱。因此,根据第一预设曝光产生并输出的第一图像信号包括的近红外光的强度,要高于根据第二预设曝光产生并输出的第二图像信号包括的近红外光的强度。
第一补光装置021进行近红外补光的中心波长和/或波段范围可以有多种选择,本申请实施例中,为了使第一补光装置021和第一滤光片031有更好的配合,可以对第一补光装置021进行近红外补光的中心波长进行设计,以及对第一滤光片031的特性进行选择,从而使得在第一补光装置021进行近红外补光的中心波长为设定特征波长或者落在设定特征波长范围时,通过第一滤光片031的近红外光的中心波长和/或波段宽度可以达到约束条件。该约束条件主要是用来约束通过第一滤光片031的近红外光的中心波长尽可能准确,以及通过第一滤光片031的近红外光的波段宽度尽可能窄,从而避免出现因近红外光波段宽度过宽而引入波长干扰。
其中,第一补光装置021进行近红外补光的中心波长可以为第一补光装置021发出的近红外光的光谱中能量最大的波长范围内的平均值,也可以理解为第一补光装置021发出的近红外光的光谱中能量超过一定阈值的波长范围内的中间位置处的波长。
其中，设定特征波长或者设定特征波长范围可以预先设置。作为一种示例，第一补光装置021进行近红外补光的中心波长可以为750±10纳米的波长范围内的任一波长；或者，第一补光装置021进行近红外补光的中心波长为780±10纳米的波长范围内的任一波长；或者，第一补光装置021进行近红外补光的中心波长为810±10纳米的波长范围内的任一波长；或者，第一补光装置021进行近红外补光的中心波长为940±10纳米的波长范围内的任一波长。也即是，设定特征波长范围可以为750±10纳米的波长范围、或者780±10纳米的波长范围、或者810±10纳米的波长范围、或者940±10纳米的波长范围。
示例性地,图12是本申请实施例提供的一种第一补光装置进行近红外补光的波长和相对强度之间的关系示意图。如图12所示,第一补光装置021进行近红外补光的中心波长为940纳米,这时,第一补光装置021进行近红外补光的波段范围为900纳米~1000纳米,其中,在940纳米处,近红外光的相对强度最高。
由于在存在近红外补光时,通过第一滤光片031的近红外光大部分为第一补光装置021进行近红外补光时经物体反射进入第一滤光片031的近红外光,因此,在一些实施例中,上述约束条件可以包括:通过第一滤光片031的近红外光的中心波长与第一补光装置021进行近红外补光的中心波长之间的差值位于波长波动范围内,作为一种示例,波长波动范围可以为0~20纳米。
其中,通过第一滤光片031的近红外补光的中心波长可以为第一滤光片031的近红外光通过率曲线中的近红外波段范围内波峰位置处的波长,也可以理解为第一滤光片031的近红外光通过率曲线中通过率超过一定阈值的近红外波段范围内的中间位置处的波长。
为了避免通过第一滤光片031的近红外光的波段宽度过宽而引入波长干扰,在一些实施例中,上述约束条件可以包括:第一波段宽度可以小于第二波段宽度。其中,第一波段宽度是指通过第一滤光片031的近红外光的波段宽度,第二波段宽度是指被第一滤光片031阻挡的近红外光的波段宽度。应当理解的是,波段宽度是指光线的波长所处的波长范围的宽度。例如,通过第一滤光片031的近红外光的波长所处的波长范围为700纳米~800纳米,那么第一波段宽度为800纳米减去700纳米,即100纳米。换句话说,通过第一滤光片031的近红外光的波段宽度小于第一滤光片031阻挡的近红外光的波段宽度。
例如，参见图13，图13为第一滤光片可以通过的光的波长与通过率之间的关系的一种示意图。如图13所示，入射到第一滤光片031的近红外光的波段为650纳米~1100纳米，第一滤光片031可以使波长位于380纳米~650纳米的可见光通过，以及波长位于900纳米~1000纳米的近红外光通过，阻挡波长位于650纳米~900纳米以及1000纳米~1100纳米的近红外光。也即是，第一波段宽度为1000纳米减去900纳米，即100纳米。第二波段宽度为900纳米减去650纳米，加上1100纳米减去1000纳米，即350纳米。100纳米小于350纳米，即通过第一滤光片031的近红外光的波段宽度小于第一滤光片031阻挡的近红外光的波段宽度。以上关系曲线仅是一种示例，对于不同的滤光片，能够通过滤光片的近红外光波段的波段范围可以有所不同，被滤光片阻挡的近红外光的波段范围也可以有所不同。
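上述第一波段宽度与第二波段宽度的计算及其约束判断可示意如下（波段以（起点，终点）纳米区间表示，数值取自图13示例中的计算结果，函数名为本示例假设）：

```python
def band_width(intervals):
    # 波段宽度：各波段区间 (起点, 终点) 长度之和，单位为纳米
    return sum(hi - lo for lo, hi in intervals)

def first_vs_second(incident, passband):
    # 第一波段宽度：通过滤光片的近红外光波段宽度
    # 第二波段宽度：被滤光片阻挡的近红外光波段宽度（入射近红外宽度减去通过部分）
    first = band_width(passband)
    second = band_width(incident) - first
    return first, second
```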
为了避免在非近红外补光的时间段内,通过第一滤光片031的近红外光的波段宽度过宽而引入波长干扰,在一些实施例中,上述约束条件可以包括:通过第一滤光片031的近红外光的半带宽小于或等于50纳米。其中,半带宽是指通过率大于50%的近红外光的波段宽度。
为了避免通过第一滤光片031的近红外光的波段宽度过宽而引入波长干扰,在一些实施例中,上述约束条件可以包括:第三波段宽度可以小于参考波段宽度。其中,第三波段宽度是指通过率大于设定比例的近红外光的波段宽度,作为一种示例,参考波段宽度可以为50纳米~100纳米的波段范围内的任一波段宽度。设定比例可以为30%~50%中的任一比例,当然设定比例还可以根据使用需求设置为其他比例,本申请实施例对此不做限定。换句话说,通过率大于设定比例的近红外光的波段宽度可以小于参考波段宽度。
例如，参见图13，入射到第一滤光片031的近红外光的波段为650纳米~1100纳米，设定比例为30%，参考波段宽度为100纳米。从图13可以看出，在650纳米~1100纳米的近红外光的波段中，通过率大于30%的近红外光的波段宽度明显小于100纳米。
由于第一补光装置021至少在第一预设曝光的部分曝光时间段内提供近红外补光,在第二预设曝光的整个曝光时间段内不提供近红外补光,而第一预设曝光和第二预设曝光为图像传感器01的多次曝光中的其中两次曝光,也即是,第一补光装置021在图像传感器01的部分曝光的曝光时间段内提供近红外补光,在图像传感器01的另外一部分曝光的曝光时间段内不提供近红外补光。所以,第一补光装置021在单位时间长度内的补光次数可以低于图像传感器01在该单位时间长度内的曝光次数,其中,每相邻两次补光的间隔时间段内,间隔一次或多次曝光。
可选地，由于人眼容易将第一补光装置021进行近红外光补光的颜色与交通灯中的红灯的颜色混淆，所以，参见图14，图14为本申请实施例提供的另一种人脸图像采集装置结构示意图。如图14所示，补光器02还可以包括第二补光装置022，第二补光装置022用于进行可见光补光。这样，如果第二补光装置022至少在第一预设曝光的部分曝光时间提供可见光补光，也即是，至少在第一预设曝光的部分曝光时间段内存在近红外补光和可见光补光，这两种光的混合颜色可以区别于交通灯中的红灯的颜色，从而避免了人眼将补光器02进行近红外补光的颜色与交通灯中的红灯的颜色混淆。
另外,如果第二补光装置022在第二预设曝光的曝光时间段内提供可见光补光,由于第二预设曝光的曝光时间段内可见光的强度不是特别高,因此,在第二预设曝光的曝光时间段内进行可见光补光时,还可以提高第二图像信号中的可见光的亮度,进而保证图像采集的质量。
在一些实施例中,第二补光装置022可以用于以常亮方式进行可见光补光;或者,第二补光装置022可以用于以频闪方式进行可见光补光,其中,至少在第一预设曝光的部分曝光时间段内存在可见光补光,在第二预设曝光的整个曝光时间段内不存在可见光补光;或者,第二补光装置022可以用于以频闪方式进行可见光补光,其中,至少在第一预设曝光的整个曝光时间段内不存在可见光补光,在第二预设曝光的部分曝光时间段内存在可见光补光。
当第二补光装置022常亮方式进行可见光补光时,不仅可以避免人眼将第一补光装置021进行近红外补光的颜色与交通灯中的红灯的颜色混淆,还可以提高第二图像信号中的可见光的亮度,进而保证图像采集的质量。当第二补光装置022以频闪方式进行可见光补光时,可以避免人眼将第一补光装置021进行近红外补光的颜色与交通灯中的红灯的颜色混淆,或者,可以提高第二图像信号中的可见光的亮度,进而保证图像采集的质量,而且还可以减少第二补光装置022的补光次数,从而延长第二补光装置022的使用寿命。
进一步的,如图14所示,在本申请的实施例中,滤光组件03还可以包括第二滤光片032和切换部件033,此时,通过切换部件033可以将第二滤光片032切换到图像传感器01的入光侧。在第二滤光片032切换到图像传感器01的入光侧之后,通过第二滤光片032使可见光通过,阻挡近红外光,在第二滤光片032通过可见光且阻挡近红外光之后,通过图像传感器01进行曝光,以产生并输出第三图像信号。因而,本实施例的人脸图像采集装置兼容现有图像采集功能,提高灵活性。
需要说明的是，切换部件033用于将第二滤光片032切换到图像传感器01的入光侧，也可以理解为第二滤光片032替换第一滤光片031在图像传感器01的入光侧的位置。在第二滤光片032切换到图像传感器01的入光侧之后，第一补光装置021可以处于关闭状态也可以处于开启状态。
在一些实施例中,上述多次曝光是指一个帧周期内的多次曝光,也即是,图像传感器01在一个帧周期内进行多次曝光,从而产生并输出至少一帧第一图像信号和至少一帧第二图像信号。
例如,1秒内包括25个帧周期,图像传感器01在每个帧周期内进行多次曝光,从而产生至少一帧第一图像信号和至少一帧第二图像信号,将一个帧周期内产生的第一图像信号和第二图像信号称为一组图像信号,这样,25个帧周期内就会产生25组图像信号。其中,第一预设曝光和第二预设曝光可以是一个帧周期内多次曝光中相邻的两次曝光,也可以是一个帧周期内多次曝光中不相邻的两次曝光,本申请实施例对此不做限定。
第一图像信号是第一预设曝光产生并输出的,第二图像信号是第二预设曝光产生并输出的,在产生并输出第一图像信号和第二图像信号之后,可以对第一图像信号和第二图像信号进行处理。在某些情况下,第一图像信号和第二图像信号的用途可能不同,所以在一些实施例中,第一预设曝光与第二预设曝光的至少一个曝光参数可以不同。作为一种示例,该至少一个曝光参数可以包括但不限于曝光时间、模拟增益、数字增益、光圈大小中的一种或多种。其中,曝光增益包括模拟增益和/或数字增益。
在一些实施例中，可以理解的是，与第二预设曝光相比，在存在近红外补光时，图像传感器01感应到的近红外光的强度较强，相应地产生并输出的第一图像信号包括的近红外光的亮度也会较高。但是较高亮度的近红外光不利于外部场景信息的获取。而且在一些实施例中，曝光增益越大，图像传感器01输出的图像信号的亮度越高，曝光增益越小，图像传感器01输出的图像信号的亮度越低，因此，为了保证第一图像信号包含的近红外光的亮度在合适的范围内，在第一预设曝光和第二预设曝光的至少一个曝光参数不同的情况下，作为一种示例，第一预设曝光的曝光增益可以小于第二预设曝光的曝光增益。这样，在第一补光装置021进行近红外补光时，图像传感器01产生并输出的第一图像信号包含的近红外光的亮度，不会因第一补光装置021进行近红外补光而过高。
在另一些实施例中，曝光时间越长，图像传感器01得到的图像信号包括的亮度越高，并且外部场景中的运动的对象在图像信号中的运动拖尾越长；曝光时间越短，图像传感器01得到的图像信号包括的亮度越低，并且外部场景中的运动的对象在图像信号中的运动拖尾越短。因此，为了保证第一图像信号包含的近红外光的亮度在合适的范围内，且外部场景中的运动的对象在第一图像信号中的运动拖尾较短，在第一预设曝光和第二预设曝光的至少一个曝光参数不同的情况下，作为一种示例，第一预设曝光的曝光时间可以小于第二预设曝光的曝光时间。这样，在第一补光装置021进行近红外补光时，图像传感器01产生并输出的第一图像信号包含的近红外光的亮度，不会因第一补光装置021进行近红外补光而过高。并且较短的曝光时间使外部场景中的运动的对象在第一图像信号中出现的运动拖尾较短，从而有利于对运动对象的识别。示例性地，第一预设曝光的曝光时间为40毫秒，第二预设曝光的曝光时间为60毫秒等。
值得注意的是,在一些实施例中,当第一预设曝光的曝光增益小于第二预设曝光的曝光增益时,第一预设曝光的曝光时间不仅可以小于第二预设曝光的曝光时间,还可以等于第二预设曝光的曝光时间。同理,当第一预设曝光的曝光时间小于第二预设曝光的曝光时间时,第一预设曝光的曝光增益可以小于第二预设曝光的曝光增益,也可以等于第二预设曝光的曝光增益。
在另一些实施例中,第一图像信号和第二图像信号的用途可以相同,例如第一图像信号和第二图像信号都用于智能分析时,为了能使进行智能分析的人脸或目标在运动时能够有同样的清晰度,第一预设曝光与第二预设曝光的至少一个曝光参数可以相同。作为一种示例,第一预设曝光的曝光时间可以等于第二预设曝光的曝光时间,如果第一预设曝光的曝光时间和第二预设曝光的曝光时间不同,会出现曝光时间较长的一路图像信号存在运动拖尾,导致两路图像信号的清晰度不同。同理,作为另一种示例,第一预设曝光的曝光增益可以等于第二预设曝光的曝光增益。
值得注意的是,在一些实施例中,当第一预设曝光的曝光时间等于第二预设曝光的曝光时间时,第一预设曝光的曝光增益可以小于第二预设曝光的曝光增益,也可以等于第二预设曝光的曝光增益。同理,当第一预设曝光的曝光增益等于第二预设曝光的曝光增益时,第一预设曝光的曝光时间可以小于第二预设曝光的曝光时间,也可以等于第二预设曝光的曝光时间。
其中，图像传感器01可以包括多个感光通道，每个感光通道可以用于感应至少一种可见光波段的光，以及感应近红外波段的光。也即是，每个感光通道既能感应红光、绿光、蓝光、黄光等至少一种可见光波段的光，又能感应近红外波段的光。可选地，该多个感光通道可以用于感应至少两种不同的可见光波段的光。
在本实施例中,该图像传感器01的每个像素点均能感应到补光器02产生的补光,保证采集到的红外光图像具有完整的分辨率,不缺失像素。
在一些实施例中,该多个感光通道可以包括R感光通道、G感光通道、B感光通道、Y感光通道、W感光通道和C感光通道中的至少两种。其中,R感光通道用于感应红光波段和近红外波段的光,G感光通道用于感应绿光波段和近红外波段的光,B感光通道用于感应蓝光波段和近红外波段的光,Y感光通道用于感应黄光波段和近红外波段的光。由于在一些实施例中,可以用W来表示用于感应全波段的光的感光通道,在另一些实施例中,可以用C来表示用于感应全波段的光的感光通道,所以当该多个感光通道包括用于感应全波段的光的感光通道时,这个感光通道可以是W感光通道,也可以是C感光通道。也即是,在实际应用中,可以根据使用需求来选择用于感应全波段的光的感光通道。
示例性地,图像传感器01可以为RGB传感器、RGBW传感器,或RCCB传感器,或RYYB传感器。示例性的,图15是本申请实施例提供的一种RGB传感器的示意图。图16是本申请实施例提供的一种RGBW传感器的示意图。图17是本申请实施例提供的一种RCCB传感器的示意图。图18是本申请实施例提供的一种RYYB传感器的示意图。如图15-图18所示,RGB传感器中的R感光通道、G感光通道和B感光通道的分布方式可以参见图15,RGBW传感器中的R感光通道、G感光通道、B感光通道和W感光通道的分布方式可以参见图16,RCCB传感器中的R感光通道、C感光通道和B感光通道分布方式可以参见图17,RYYB传感器中的R感光通道、Y感光通道和B感光通道分布方式可以参见图18。
在另一些实施例中，有些感光通道也可以仅感应近红外波段的光，而不感应可见光波段的光。作为一种示例，该多个感光通道可以包括R感光通道、G感光通道、B感光通道、IR感光通道中的至少两种。其中，R感光通道用于感应红光波段和近红外波段的光，G感光通道用于感应绿光波段和近红外波段的光，B感光通道用于感应蓝光波段和近红外波段的光，IR感光通道用于感应近红外波段的光。
示例地,图像传感器01可以为RGBIR传感器,其中,RGBIR传感器中的每个IR感光通道都可以感应近红外波段的光,而不感应可见光波段的光。
其中,当图像传感器01为RGB传感器时,相比于其他图像传感器,如RGBIR传感器等,RGB传感器采集的RGB信息更完整,RGBIR传感器有一部分的感光通道采集不到可见光,所以RGB传感器采集的图像的色彩细节更准确。
值得注意的是,图像传感器01包括的多个感光通道可以对应多条感应曲线。示例性地,图19是本申请实施例提供的一种图像传感器的感应曲线示意图。参见图19,图19中的R曲线代表图像传感器01对红光波段的光的感应曲线,G曲线代表图像传感器01对绿光波段的光的感应曲线,B曲线代表图像传感器01对蓝光波段的光的感应曲线,W(或者C)曲线代表图像传感器01感应全波段的光的感应曲线,NIR(Near infrared,近红外光)曲线代表图像传感器01感应近红外波段的光的感应曲线。
作为一种示例,图像传感器01可以采用全局曝光方式,也可以采用卷帘曝光方式。其中,全局曝光方式是指每一行有效图像的曝光开始时刻均相同,且每一行有效图像的曝光结束时刻均相同。换句话说,全局曝光方式是所有行有效图像同时进行曝光并且同时结束曝光的一种曝光方式。卷帘曝光方式是指不同行有效图像的曝光时间不完全重合,也即是,一行有效图像的曝光开始时刻都晚于上一行有效图像的曝光开始时刻,且一行有效图像的曝光结束时刻都晚于上一行有效图像的曝光结束时刻。另外,卷帘曝光方式中每一行有效图像结束曝光后可以进行数据输出,因此,从第一行有效图像的数据开始输出时刻到最后一行有效图像的数据结束输出时刻之间的时间可以表示为读出时间。
示例性地，参见图20，图20为一种卷帘曝光方式的示意图。从图20可以看出，第1行有效图像在T1时刻开始曝光，在T3时刻结束曝光，第2行有效图像在T2时刻开始曝光，在T4时刻结束曝光，T2时刻相比于T1时刻向后推移了一个时间段，T4时刻相比于T3时刻向后推移了一个时间段。另外，第1行有效图像在T3时刻结束曝光并开始输出数据，在T5时刻结束数据的输出，第n行有效图像在T6时刻结束曝光并开始输出数据，在T7时刻结束数据的输出，则T3~T7时刻之间的时间即为读出时间。
在一些实施例中,当图像传感器01采用全局曝光方式进行多次曝光时,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与第一预设曝光的曝光时间段存在交集,或者第一预设曝光的曝光时间段是近红外补光的子集。这样,即可实现至少在第一预设曝光的部分曝光时间段内存在近红外补光,在第二预设曝光的整个曝光时间段内不存在近红外补光,从而不会对第二预设曝光造成影响。
例如,图21是本申请实施例提供的第一种第一预设曝光和第二预设曝光的示意图。图22是本申请实施例提供的第二种第一预设曝光和第二预设曝光的示意图。图23是本申请实施例提供的第三种第一预设曝光和第二预设曝光的示意图。参见图21,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集。参见图22,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段与第一预设曝光的曝光时间段存在交集。参见图23,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,第一预设曝光的曝光时间段是近红外补光的子集。
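全局曝光方式下近红外补光时间段须满足的上述条件，可用区间关系检查草图示意（时间以数值区间表示，函数名为本示例假设；补光时间段与第一预设曝光互为子集或存在交集均归结为“存在交集”，与最邻近第二预设曝光则不得有交集）：

```python
def overlaps(a, b):
    # 两个时间区间是否存在交集（端点相接不计，仅为示意）
    return a[0] < b[1] and b[0] < a[1]

def fill_light_valid(nir, first_exp, second_exp):
    # 合法条件：近红外补光时间段与最邻近的第二预设曝光的曝光时间段不存在交集，
    # 且与第一预设曝光的曝光时间段存在交集（含互为子集的情形）
    return (not overlaps(nir, second_exp)) and overlaps(nir, first_exp)
```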
在另一些实施例中，当图像传感器01采用卷帘曝光方式进行多次曝光时，对于任意一次近红外补光，近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集。并且，近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻，近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。或者，近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻，近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。或者，近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻，近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
例如,图24是本申请实施例提供的第一种卷帘曝光方式和近红外补光的示意图。图25是本申请实施例提供的第二种卷帘曝光方式和近红外补光的示意图。图26是本申请实施例提供的第三种卷帘曝光方式和近红外补光的示意图。参见图24,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。参见图25,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。参见图26,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。图24至图26仅是一种示例,第一预设曝光和第二预设曝光的排序可以不限于这些示例。
综上，当环境光中的可见光强度较弱时，例如夜晚，可以通过第一补光装置021频闪式的补光，使图像传感器01产生并输出包含近红外亮度信息的第一图像信号，以及包含可见光亮度信息的第二图像信号，且由于第一图像信号和第二图像信号均由同一个图像传感器01获取，所以第一图像信号的视点与第二图像信号的视点相同，从而通过第一图像信号和第二图像信号可以获取完整的外部场景的信息。在可见光强度较强时，例如白天，白天近红外光的占比比较高，采集的图像的色彩还原度不佳，可以通过图像传感器01产生并输出包含可见光亮度信息的第三图像信号，这样即使白天，也可以采集到色彩还原度比较好的图像，也可达到不论可见光强度的强弱，或者说不论白天还是夜晚，均能高效、简便地获取外部场景的真实色彩信息。
本申请利用图像传感器的曝光时序来控制补光装置的近红外补光时序,以便在第一预设曝光的过程中进行近红外补光并产生第一图像信号,在第二预设曝光的过程中不进行近红外补光并产生第二图像信号,这样的数据采集方式,可以在结构简单、降低成本的同时直接采集到亮度信息不同的第一图像信号和第二图像信号,也即通过一个图像传感器就可以获取两种不同的图像信号,使得该图像采集装置更加简便,进而使得获取第一图像信号和第二图像信号也更加高效。并且,第一图像信号和第二图像信号均由同一个图像传感器产生并输出,所以第一图像信号对应的视点与第二图像信号对应的视点相同。因此,通过第一图像信号和第二图像信号可以共同获取外部场景的信息,且不会存在因第一图像信号对应的视点与第二图像信号对应的视点不相同,而导致根据第一图像信号和第二图像信号生成的图像不对齐的问题。
基于上述对人脸图像采集装置的描述,该人脸图像采集装置可以利用多次曝光产生并输出的第一图像信号和第二图像信号进行图像处理和人脸检测,得到人脸图像。接下来,以基于上述图1-26所示的实施例提供的人脸图像采集装置来对人脸图像采集方法进行说明。
示例性的,图27为本申请实施例提供的人脸图像采集方法实施例的流程示意图。该方法应用于人脸图像采集装置,该人脸图像采集装置包括图像传感器、补光器、滤光组件和图像处理器,该补光器包括第一补光装置,该滤光组件包括第一滤光片,该图像传感器位于滤光组件的出光侧。参见图27,该方法可以包括:
步骤2701:通过第一补光装置进行近红外补光,其中,至少在第一预设曝光的曝光时间段内进行近红外补光,在第二预设曝光的曝光时间段内不进行近红外补光,该第一预设曝光和第二预设曝光为图像传感器的多次曝光中的其中两次曝光。
步骤2702:通过第一滤光片使可见光和部分近红外光通过。
步骤2703:通过图像传感器进行多次曝光,以产生并输出第一图像信号和第二图像信号,其中,所述第一图像信号是根据所述第一预设曝光产生的图像信号,所述第二图像信号是根据所述第二预设曝光产生的图像信号。
步骤2704：通过图像处理器对所述第一图像信号和第二图像信号进行图像处理和人脸检测，得到人脸图像。
可选的,在本实施例中,图像处理器包括:处理组件和检测组件,则上述步骤2704具体可以包括如下步骤:
通过处理组件对所述第一图像信号进行第一预处理生成第一图像,以及对所述第二图像信号进行第二预处理生成第二图像;
通过检测组件对所述处理组件生成的所述第一图像和所述第二图像进行人脸检测处理,得到所述人脸图像。
可选的,所述第一图像为灰度图像,所述第一预处理包括如下操作中的任意一种或多种的组合:图像插值、伽马映射、色彩转换和图像降噪;
所述第二图像为彩色图像,所述第二预处理包括如下任意一种或多种的组合:白平衡、图像插值、伽马映射和图像降噪。
示例性的,在本实施例的一种可能设计中,该图像处理器还包括:融合组件。则上述步骤2704还可以包括如下步骤:
通过融合组件对所述处理组件生成的所述第一图像和所述第二图像进行融合处理生成融合图像;
通过检测组件对所述融合组件生成的所述融合图像进行人脸检测处理,得到所述人脸图像。
可选的,所述第一图像为灰度图像,所述第二图像为彩色图像;则上述步骤2704还可以包括如下步骤:
通过融合组件提取所述彩色图像的亮度信息得到亮度图像、提取所述彩色图像的色彩信息得到色彩图像,以及对所述亮度图像、所述色彩图像以及所述灰度图像进行融合处理,得到所述人脸图像。
其中,所述融合处理包括如下操作中的至少一种:像素点对点融合、金字塔多尺度融合。
示例性的,在本实施例的另一种可能设计中,该图像处理器还包括:融合组件。则上述步骤2704还可以包括如下步骤:
通过检测组件对所述处理组件生成的所述第一图像和所述第二图像进行人脸检测处理,分别得到第一人脸图像和第二人脸图像;
通过融合组件对所述检测组件得到的所述第一人脸图像和所述第二人脸图像进行融合处理,得到所述人脸图像。
作为一种示例,上述步骤2704具体可以包括如下步骤:
通过检测组件根据在目标图像中检测到的面部特征进行人脸区域位置和大小标定,输出目标人脸图像。
其中,目标图像为如下图像中的任意一种:所述第一图像、所述第二图像、所述融合图像、所述第一图像和所述第二图像的组合。
在一种可能设计中,所述目标图像为如下图像中的任意一种:所述第一图像、所述第二图像、所述融合图像;则上述步骤2704具体可以包括如下步骤:
通过检测组件提取所述目标图像中的多个面部特征点,基于预设的面部特征信息从所述多个面部特征点中确定出满足面部规则的多个定位特征点,基于所述多个定位特征点确定人脸位置坐标,确定所述目标图像中的目标人脸图像。
示例性的,则上述步骤2704具体还可以包括如下步骤:
通过检测组件基于活体检测原理检测所述目标人脸图像是否由拍摄真实人脸获得,并在确定所述目标人脸图像由拍摄真实人脸获得时输出所述目标人脸图像。
在另一种可能设计中,所述目标图像为所述第一图像和所述第二图像的组合;
通过检测组件提取所述第一图像中的多个面部特征点,基于预设的面部特征信息从所述多个面部特征点中确定出满足面部规则的多个定位特征点,基于所述多个定位特征点确定第一人脸位置坐标,根据所述第一人脸位置坐标和所述第一图像进行人脸提取得到第一人脸图像,同时根据所述第一人脸位置坐标和所述第二图像进行人脸提取得到第二人脸图像。
作为一种示例,所述第一图像为灰度图像,所述第二图像为彩色图像时,所述第一人脸图像为灰度人脸图像,所述第二人脸图像为彩色人脸图像;则上述步骤2704具体还可以包括如下步骤:
通过检测组件基于活体检测原理检测所述灰度人脸图像是否由拍摄真实人脸获得,并在确定所述灰度人脸图像由拍摄真实人脸获得时输出所述灰度人脸图像,以及基于提取得到的第二人脸图像输出所述彩色人脸图像。
作为另一种示例,所述第一图像为彩色图像,所述第二图像为灰度图像时,所述第一人脸图像为彩色人脸图像,所述第二人脸图像为灰度人脸图像;则上述步骤2704具体还可以包括如下步骤:
通过检测组件基于活体检测原理检测所述灰度人脸图像是否由拍摄真实人脸获得,并在确定所述灰度人脸图像由拍摄真实人脸获得时,基于提取得到的第一人脸图像输出所述彩色人脸图像。
在再一种可能设计中,所述图像处理器还包括:缓存组件;该人脸图像采集方法还可以包括如下步骤:
通过缓存组件缓存临时内容,所述临时内容包括如下内容中的任意一种:所述图像传感器输出的第一图像信号和/或第二图像信号、所述图像处理器在处理过程中得到的第一图像和/或第二图像。
在又一种可能设计中,图像处理器还包括:图像增强组件;该人脸图像采集方法还可以包括如下步骤:
通过图像增强组件对目标图像进行增强处理,得到增强处理后的目标图像,所述增强处理包括如下至少一种:对比度增强、超分辨率重建,所述目标图像为如下图像中的任意一种:所述第一图像、所述第二图像、所述人脸图像。
可选的,所述第一补光装置进行近红外补光时通过所述第一滤光片的近红外光的强度高于所述第一补光装置未进行近红外补光时通过所述第一滤光片的近红外光的强度。
可选的,所述第一补光装置进行近红外补光的中心波长为设定特征波长或者落在设定特征波长范围时,通过所述第一滤光片的近红外光的中心波长和/或波段宽度达到约束条件。
可选的,所述第一补光装置进行近红外补光的中心波长为750±10纳米的波长范围内的任一波长;
或者
所述第一补光装置进行近红外补光的中心波长为780±10纳米的波长范围内的任一波长;
或者
所述第一补光装置进行近红外补光的中心波长为810±10纳米的波长范围内的任一波长;
或者
所述第一补光装置进行近红外补光的中心波长为940±10纳米的波长范围内的任一波长。
可选的,所述约束条件包括如下任意一种:
通过所述第一滤光片的近红外光的中心波长与所述第一补光装置进行近红外补光的中心波长之间的差值位于波长波动范围内,所述波长波动范围为0~20纳米;
或者
通过所述第一滤光片的近红外光的半带宽小于或等于50纳米;
或者
第一波段宽度小于第二波段宽度;其中,所述第一波段宽度是指通过所述第一滤光片的近红外光的波段宽度,所述第二波段宽度是指被所述第一滤光片阻挡的近红外光的波段宽度;
或者
第三波段宽度小于参考波段宽度,所述第三波段宽度是指通过率大于设定比例的近红外光的波段宽度,所述参考波段宽度为50纳米~150纳米的波段范围内的任一波段宽度,所述设定比例为30%~50%的比例范围内的任一比例。
在本实施例中,作为一种示例,所述第一预设曝光与所述第二预设曝光的至少一个曝光参数不同,所述至少一个曝光参数为曝光时间、曝光增益、光圈大小中的一种或多种,所述曝光增益包括模拟增益,和/或,数字增益。
作为另一种示例,所述第一预设曝光和所述第二预设曝光的至少一个曝光参数相同,所述至少一个曝光参数包括曝光时间、曝光增益、光圈大小中的一种或多种,所述曝光增益包括模拟增益,和/或,数字增益。
可选的,所述图像传感器包括多个感光通道,每个感光通道用于感应至少一种可见光波段的光,以及感应近红外波段的光。
在本实施例的一种实施例中,所述图像传感器采用全局曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的所述第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是所述第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与所述第一预设曝光的曝光时间段存在交集,或者所述第一预设曝光的曝光时间段是近红外补光的子集。
在本实施例的另一种实施例中,所述图像传感器采用卷帘曝光方式进行多次曝光,对于任意一次近红外补光,近红外补光的时间段与最邻近的所述第二预设曝光的曝光时间段不存在交集;
近红外补光的开始时刻不早于所述第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于所述第一预设曝光中第一行有效图像的曝光结束时刻;
或者,
近红外补光的开始时刻不早于所述第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于所述第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于所述第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻;或者
近红外补光的开始时刻不早于所述第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于所述第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于所述第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
可选的,所述补光器还包括第二补光装置,所述第二补光装置用于进行可见光补光。
可选的,所述滤光组件还包括第二滤光片和切换部件,所述第一滤光片和所述第二滤光片均与所述切换部件连接;
所述切换部件,用于将所述第二滤光片切换到所述图像传感器的入光侧;
在所述第二滤光片切换到所述图像传感器的入光侧之后，所述第二滤光片用于使可见光通过，阻挡近红外光，所述图像传感器，用于通过曝光产生并输出第三图像信号。
需要说明的是,由于本实施例与上述图1-26所示的实施例可以采用同样的发明构思,因此,关于本实施例内容的解释可以参考上述图1-26所示实施例中相关内容的解释,此处不再赘述。
在本申请实施例中，图像传感器通过多次曝光产生并输出第一图像信号和第二图像信号，其中，该第一图像信号是根据第一预设曝光产生的图像信号，第二图像信号是根据第二预设曝光产生的图像信号，第一预设曝光和第二预设曝光为多次曝光中的其中两次曝光，补光器包括第一补光装置，该第一补光装置进行近红外补光，其中，至少在第一预设曝光的曝光时间段内进行近红外补光，在第二预设曝光的曝光时间段内不进行近红外补光，图像处理器用于对第一图像信号和第二图像信号进行图像处理和人脸检测，得到人脸图像。该技术方案中，只需要一个图像传感器即可得到可见光图像和红外光图像，降低了成本，而且避免了两个图像传感器由于工艺结构以及两者配准和同步问题得到的图像不同步，导致人脸图像质量较差的问题。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系;在公式中,字符“/”,表示前后关联对象是一种“相除”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中,a,b,c可以是单个,也可以是多个。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (23)

  1. 一种人脸图像采集装置,其特征在于,包括:图像传感器、补光器、滤光组件和图像处理器;
    所述图像传感器用于通过多次曝光产生并输出第一图像信号和第二图像信号,其中,所述第一图像信号是根据第一预设曝光产生的图像信号,所述第二图像信号是根据第二预设曝光产生的图像信号,所述第一预设曝光和所述第二预设曝光为所述多次曝光中的其中两次曝光;
    所述补光器包括第一补光装置,所述第一补光装置用于进行近红外补光,其中,至少在所述第一预设曝光的曝光时间段内进行近红外补光,在所述第二预设曝光的曝光时间段内不进行近红外补光;
    所述滤光组件包括第一滤光片,所述第一滤光片用于使可见光和部分近红外光通过;
    所述图像处理器用于对所述第一图像信号和第二图像信号进行图像处理和人脸检测,得到人脸图像。
  2. 根据权利要求1所述的装置,其特征在于,所述图像处理器包括:处理组件和检测组件;
    所述处理组件用于对所述第一图像信号进行第一预处理生成第一图像,以及对所述第二图像信号进行第二预处理生成第二图像;
    所述检测组件用于对所述处理组件生成的所述第一图像和所述第二图像进行人脸检测处理,得到所述人脸图像。
  3. 根据权利要求2所述的装置,其特征在于,所述第一图像为灰度图像,所述第一预处理包括如下操作中的任意一种或多种的组合:图像插值、伽马映射、色彩转换和图像降噪;
    所述第二图像为彩色图像,所述第二预处理包括如下任意一种或多种的组合:白平衡、图像插值、伽马映射和图像降噪。
  4. 根据权利要求2所述的装置,其特征在于,所述图像处理器还包括:融合组件;
    所述融合组件用于对所述处理组件生成的所述第一图像和所述第二图像进行融合处理生成融合图像;
    所述检测组件具体用于对所述融合组件生成的所述融合图像进行人脸检测处理，得到所述人脸图像。
  5. 根据权利要求4所述的装置,其特征在于,所述第一图像为灰度图像,所述第二图像为彩色图像;
    所述融合组件具体用于提取所述彩色图像的亮度信息得到亮度图像、提取所述彩色图像的色彩信息得到色彩图像,以及对所述亮度图像、所述色彩图像以及所述灰度图像进行融合处理,得到所述人脸图像,所述融合处理包括如下操作中的至少一种:像素点对点融合、金字塔多尺度融合。
  6. 根据权利要求2所述的装置,其特征在于,所述图像处理器还包括:融合组件;
    所述检测组件具体用于对所述处理组件生成的所述第一图像和所述第二图像进行人脸检测处理,分别得到第一人脸图像和第二人脸图像;
    所述融合组件具体用于对所述检测组件得到的所述第一人脸图像和所述第二人脸图像进行融合处理,得到所述人脸图像。
  7. The device according to claim 2, wherein the detection component is specifically configured to calibrate the position and size of a face region according to facial features detected in a target image and output a target face image, the target image being any one of the following: the first image, the second image, a fused image of the first image and the second image, or a combination of the first image and the second image.
  8. The device according to claim 7, wherein the target image is any one of the following: the first image, the second image, or the fused image;
    the detection component is specifically configured to extract multiple facial feature points in the target image, determine, from the multiple facial feature points and based on preset facial feature information, multiple positioning feature points that satisfy facial rules, determine face position coordinates based on the multiple positioning feature points, and determine the target face image in the target image.
  9. The device according to claim 8, wherein the detection component is further configured to detect, based on a living-body detection principle, whether the target face image was obtained by photographing a real face, and to output the target face image when it is determined that the target face image was obtained by photographing a real face.
  10. The device according to claim 7, wherein the target image is a combination of the first image and the second image;
    the detection component is specifically configured to extract multiple facial feature points in the first image, determine, from the multiple facial feature points and based on preset facial feature information, multiple positioning feature points that satisfy facial rules, determine first face position coordinates based on the multiple positioning feature points, perform face extraction according to the first face position coordinates and the first image to obtain a first face image, and at the same time perform face extraction according to the first face position coordinates and the second image to obtain a second face image.
  11. The device according to claim 10, wherein, when the first image is a grayscale image and the second image is a color image, the first face image is a grayscale face image and the second face image is a color face image;
    the detection component is further configured to detect, based on a living-body detection principle, whether the grayscale face image was obtained by photographing a real face, to output the grayscale face image when it is determined that the grayscale face image was obtained by photographing a real face, and to output the color face image based on the extracted second face image.
  12. The device according to claim 10, wherein, when the first image is a color image and the second image is a grayscale image, the first face image is a color face image and the second face image is a grayscale face image;
    the detection component is further configured to detect, based on a living-body detection principle, whether the grayscale face image was obtained by photographing a real face, and, when it is determined that the grayscale face image was obtained by photographing a real face, to output the color face image based on the extracted first face image.
  13. The device according to claim 2, wherein the image processor further comprises: a cache component;
    the cache component is configured to cache temporary content, the temporary content comprising: the first image signal and/or the second image signal output by the image sensor; or
    the temporary content comprising: the first image and/or the second image obtained by the image processor during processing.
  14. The device according to claim 2, wherein the image processor further comprises: an image enhancement component;
    the image enhancement component is configured to perform enhancement processing on a target image to obtain an enhanced target image, the enhancement processing comprising at least one of the following: contrast enhancement and super-resolution reconstruction, and the target image being any one of the following: the first image, the second image, a fused image of the first image and the second image, or the face image.
  15. The device according to any one of claims 1-14, wherein, when the center wavelength of the near-infrared supplemental lighting performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the band width of the near-infrared light passing through the first filter satisfies a constraint condition.
  16. The device according to any one of claims 1-14, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, and the exposure gain comprising an analog gain and/or a digital gain.
  17. The device according to any one of claims 1-14, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter comprising one or more of exposure time, exposure gain, and aperture size, and the exposure gain comprising an analog gain and/or a digital gain.
  18. The device according to any one of claims 1-14, wherein the image sensor comprises multiple photosensitive channels, each photosensitive channel being configured to sense light in at least one visible-light band and to sense light in the near-infrared band.
  19. The device according to any one of claims 1-14, wherein the image sensor performs the multiple exposures in a global exposure mode, and, for any one instance of near-infrared supplemental lighting, the period of near-infrared supplemental lighting does not intersect the exposure period of the nearest second preset exposure, and the period of near-infrared supplemental lighting is a subset of the exposure period of the first preset exposure, or the period of near-infrared supplemental lighting intersects the exposure period of the first preset exposure, or the exposure period of the first preset exposure is a subset of the period of near-infrared supplemental lighting.
  20. The device according to any one of claims 1-14, wherein the image sensor performs the multiple exposures in a rolling shutter exposure mode, and, for any one instance of near-infrared supplemental lighting, the period of near-infrared supplemental lighting does not intersect the exposure period of the nearest second preset exposure;
    the start time of the near-infrared supplemental lighting is not earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared supplemental lighting is not later than the exposure end time of the first row of the effective image in the first preset exposure;
    or,
    the start time of the near-infrared supplemental lighting is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplemental lighting is not earlier than the exposure start time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or
    the start time of the near-infrared supplemental lighting is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplemental lighting is not earlier than the exposure end time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
  21. The device according to any one of claims 1-14, wherein the light supplementer further comprises a second light supplement device configured to perform visible-light supplemental lighting.
  22. The device according to any one of claims 1-14, wherein the filter assembly further comprises a second filter and a switching component, and both the first filter and the second filter are connected to the switching component;
    the switching component is configured to switch the second filter to the light incident side of the image sensor;
    after the second filter is switched to the light incident side of the image sensor, the second filter is configured to pass visible light and block near-infrared light, and the image sensor is configured to generate and output a third image signal through exposure.
  23. A face image acquisition method, applied to a face image acquisition device, the face image acquisition device comprising an image sensor, a light supplementer, a filter assembly, and an image processor, the light supplementer comprising a first light supplement device, the filter assembly comprising a first filter, and the image sensor being located on the light exit side of the filter assembly, characterized in that the method comprises:
    performing near-infrared supplemental lighting through the first light supplement device, wherein near-infrared supplemental lighting is performed at least during part of the exposure period of a first preset exposure and is not performed during the exposure period of a second preset exposure, the first preset exposure and the second preset exposure being two of the multiple exposures of the image sensor;
    passing visible light and part of the near-infrared light through the first filter;
    performing multiple exposures through the image sensor to generate and output a first image signal and a second image signal, the first image signal being an image signal generated according to the first preset exposure, and the second image signal being an image signal generated according to the second preset exposure;
    performing image processing and face detection on the first image signal and the second image signal through the image processor to obtain a face image.
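As an illustrative aside (not part of the claims), the luminance/chrominance fusion named in claim 5 can be sketched as follows. This assumes a YCbCr-style decomposition and simple pixel point-to-point weighted blending; the weight `w`, array shapes, and function name are hypothetical choices, not the patent's specified implementation.

```python
import numpy as np

def fuse_gray_and_color(gray, color_ycbcr, w=0.5):
    """Pixel point-to-point fusion in the spirit of claim 5 (illustrative only).

    gray:        HxW array, the grayscale (near-IR enhanced) image.
    color_ycbcr: HxWx3 array in a luma/chroma space (Y, Cb, Cr).
    The luminance plane of the color image is blended with the grayscale
    image; the chroma planes are kept unchanged from the color image.
    """
    luma = color_ycbcr[..., 0]                  # luminance information
    chroma = color_ycbcr[..., 1:]               # color information
    fused_luma = w * luma + (1.0 - w) * gray    # point-to-point blend
    return np.dstack([fused_luma, chroma])      # HxWx3 fused image

# Tiny synthetic example: a flat gray image and a flat color image.
gray = np.full((2, 2), 100.0)
color = np.zeros((2, 2, 3))
color[..., 0] = 50.0
fused = fuse_gray_and_color(gray, color)
```

A production pipeline would typically replace the fixed weight with spatially varying weights or the pyramid multi-scale fusion also listed in claim 5.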
PCT/CN2020/092357 2019-05-31 2020-05-26 Face image acquisition device and method WO2020238903A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910472685.2A CN110490041B (zh) 2019-05-31 2019-05-31 Face image acquisition device and method
CN201910472685.2 2019-05-31

Publications (1)

Publication Number Publication Date
WO2020238903A1 true WO2020238903A1 (zh) 2020-12-03

Family

ID=68546284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092357 WO2020238903A1 (zh) 2019-05-31 2020-05-26 Face image acquisition device and method

Country Status (2)

Country Link
CN (1) CN110490041B (zh)
WO (1) WO2020238903A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669438A (zh) * 2020-12-31 2021-04-16 杭州海康机器人技术有限公司 Image reconstruction method, apparatus and device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490041B (zh) * 2019-05-31 2022-03-15 杭州海康威视数字技术股份有限公司 Face image acquisition device and method
CN110493492B (zh) 2019-05-31 2021-02-26 杭州海康威视数字技术股份有限公司 Image acquisition device and image acquisition method
CN110493491B (zh) * 2019-05-31 2021-02-26 杭州海康威视数字技术股份有限公司 Image acquisition device and camera shooting method
CN110490042B (zh) * 2019-05-31 2022-02-11 杭州海康威视数字技术股份有限公司 Face recognition device and access control equipment
CN110493494B (zh) * 2019-05-31 2021-02-26 杭州海康威视数字技术股份有限公司 Image fusion device and image fusion method
CN113259546B (zh) * 2020-02-11 2023-05-12 华为技术有限公司 Image acquisition device and image acquisition method
CN111462125B (zh) * 2020-04-03 2021-08-20 杭州恒生数字设备科技有限公司 Image processing system for enhanced living-body detection
CN111524088A (zh) * 2020-05-06 2020-08-11 北京未动科技有限公司 Method, apparatus, device, and computer-readable storage medium for image acquisition
CN111597938B (zh) * 2020-05-07 2022-02-22 马上消费金融股份有限公司 Living-body detection and model training method and apparatus
CN113538926B (zh) * 2021-05-31 2023-01-17 浙江大华技术股份有限公司 Face capture method, face capture system, and computer-readable storage medium
CN113452903B (zh) * 2021-06-17 2023-07-11 浙江大华技术股份有限公司 Snapshot device, snapshot method, and main control chip
CN115995103A (zh) * 2021-10-15 2023-04-21 北京眼神科技有限公司 Face living-body detection method, apparatus, computer-readable storage medium, and device
CN114640795A (zh) * 2022-03-22 2022-06-17 深圳市商汤科技有限公司 Image processing method and apparatus, device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080002060A1 (en) * 1997-10-09 2008-01-03 Fotonation Vision Limited Optimized Performance and Performance for Red-Eye Filter Method and Apparatus
CN107809601A (zh) * 2017-11-24 2018-03-16 深圳先牛信息技术有限公司 Image sensor
CN109194873A (zh) * 2018-10-29 2019-01-11 浙江大华技术股份有限公司 Image processing method and apparatus
CN109429001A (zh) * 2017-08-25 2019-03-05 杭州海康威视数字技术股份有限公司 Image acquisition method and apparatus, electronic device, and computer-readable storage medium
CN110490041A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 Face image acquisition device and method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10462390B2 (en) * 2014-12-10 2019-10-29 Sony Corporation Image pickup apparatus, image pickup method, program, and image processing apparatus
CN106488201B (zh) * 2015-08-28 2020-05-01 杭州海康威视数字技术股份有限公司 Image signal processing method and system
CN111988587B (zh) * 2017-02-10 2023-02-07 杭州海康威视数字技术股份有限公司 Image fusion device and image fusion method
CN107566747B (zh) * 2017-09-22 2020-02-14 浙江大华技术股份有限公司 Image brightness enhancement method and apparatus
CN208691387U (zh) * 2018-08-28 2019-04-02 杭州萤石软件有限公司 Full-color network camera
CN208819221U (zh) * 2018-09-10 2019-05-03 杭州海康威视数字技术股份有限公司 Face living-body detection device
CN109635760A (zh) * 2018-12-18 2019-04-16 深圳市捷顺科技实业股份有限公司 Face recognition method and related device



Also Published As

Publication number Publication date
CN110490041A (zh) 2019-11-22
CN110490041B (zh) 2022-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20815461

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20815461

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.09.2022)
