WO2020238903A1 - Face image acquisition device and method - Google Patents
Face image acquisition device and method
- Publication number
- WO2020238903A1 (application PCT/CN2020/092357)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- light
- exposure
- face
- infrared
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
Definitions
- This application relates to the field of information processing technology, and in particular to a face image acquisition device and method.
- in the related art, the image acquisition circuit in a face recognition camera first collects visible light images and infrared light images through two image sensors, then performs fusion processing on the visible light images and infrared light images, and finally encodes and analyzes the fused images to obtain the face image.
- the aforementioned cameras place extremely high requirements on the process structure of the two image sensors and on the registration and synchronization between them; this is not only costly, but if the registration is not up to standard, the quality of the obtained face image will also be poor.
- the present application provides a face image acquisition device and method, in order to reduce the cost of face image acquisition and improve the quality of the acquired face image.
- a face image acquisition device provided by the first aspect of the present application includes: an image sensor, a light supplementer, a filter assembly, and an image processor;
- the image sensor is used to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two exposures among the multiple exposures;
- the light supplementer includes a first light supplement device, and the first light supplement device is used to perform near-infrared supplement light, wherein the near-infrared supplement light is performed at least within the exposure time period of the first preset exposure, and is not performed within the exposure time period of the second preset exposure;
- the filter assembly includes a first filter, and the first filter is used to pass visible light and part of near-infrared light;
- the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
- a second aspect of the application provides a face image acquisition method, which is applied to a face image acquisition device.
- the face image acquisition device includes an image sensor, a light supplementer, a filter assembly, and an image processor.
- the light supplementer includes a first light supplement device
- the filter assembly includes a first filter
- the image sensor is located on the light exit side of the filter assembly, and the method includes:
- near-infrared supplement light is performed by the first light supplement device, wherein the near-infrared supplement light is performed at least during a partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor;
- multiple exposures are performed by the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image signal generated according to the first preset exposure, and the second image signal is an image signal generated according to the second preset exposure;
- the image processor performs image processing and face detection on the first image signal and the second image signal to obtain a face image.
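As a non-authoritative sketch of the acquisition steps above, the exposure cycle can be modeled with stub `Sensor` and `FillLight` classes (hypothetical names and interfaces; the real device drives sensor and fill-light hardware):

```python
class FillLight:
    """Stub for the first light supplement device."""
    def __init__(self):
        self.active = False
    def on(self):
        self.active = True
    def off(self):
        self.active = False

class Sensor:
    """Stub image sensor: records whether NIR fill light was on per exposure."""
    def __init__(self, fill_light):
        self.fill_light = fill_light
    def expose(self, preset):
        return {"preset": preset, "nir_fill": self.fill_light.active}

def acquire_frame_pair(sensor, fill_light):
    """One cycle of the multiple-exposure scheme described above."""
    fill_light.on()                  # NIR fill light only for the first preset exposure
    first = sensor.expose(preset=1)
    fill_light.off()                 # no fill light for the second preset exposure
    second = sensor.expose(preset=2)
    return first, second

light = FillLight()
cam = Sensor(light)
first, second = acquire_frame_pair(cam, light)
```

A single sensor thus alternately yields an NIR-assisted frame (`first`) and a visible-light-only frame (`second`), which is the premise for the downstream processing and fusion.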
- the image sensor generates and outputs a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the light supplementer includes a first light supplement device,
- the first light supplement device performs near-infrared supplement light, which is performed at least during the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
- the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
- only one image sensor is needed to obtain visible light images and infrared light images, which reduces cost and avoids the poor face image quality caused when the images obtained by two image sensors are out of sync due to their process structures and their registration and synchronization problems.
- FIG. 1 is a schematic structural diagram of a face image acquisition device provided by an embodiment of the application.
- FIG. 2 is a schematic structural diagram of an image processor in an embodiment of the application
- FIG. 3 is a schematic flowchart of processing the first image signal and the second image signal by the processing component
- FIG. 4 is a schematic structural diagram of another image processor in an embodiment of the application.
- Figure 5 is a schematic structural diagram of the fusion component for fusion processing of color images and grayscale images
- FIG. 6 is a schematic structural diagram of still another image processor in an embodiment of the application.
- FIG. 7 is a flow diagram of face detection processing performed by a detection component in an embodiment of the application.
- FIG. 8 is a schematic diagram of processing color images and grayscale images by the detection component in this embodiment.
- FIG. 9 is another schematic diagram of processing color images and grayscale images by the detection component in this embodiment.
- FIG. 10 is a schematic structural diagram of another image processor in an embodiment of the application.
- FIG. 11 is a schematic structural diagram of another image processor in an embodiment of this application.
- FIG. 12 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplement light performed by the first light supplement device provided by an embodiment of the present application;
- FIG. 13 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter and the pass rate
- FIG. 14 is a schematic structural diagram of another face image acquisition device provided by an embodiment of the application.
- FIG. 15 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
- FIG. 16 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
- FIG. 17 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
- FIG. 18 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
- FIG. 19 is a schematic diagram of a sensing curve of an image sensor according to an embodiment of the present application.
- Figure 20 is a schematic diagram of a rolling shutter exposure method
- FIG. 21 is a schematic diagram of a first preset exposure and a second preset exposure provided by an embodiment of the present application.
- FIG. 22 is a schematic diagram of a second type of first preset exposure and a second preset exposure provided by an embodiment of the present application;
- FIG. 23 is a schematic diagram of a third type of first preset exposure and a second preset exposure provided by an embodiment of the present application.
- FIG. 24 is a schematic diagram of the first rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- FIG. 25 is a schematic diagram of a second rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- FIG. 26 is a schematic diagram of a third rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- FIG. 27 is a schematic flowchart of an embodiment of a method for acquiring a face image provided by an embodiment of the application.
- the embodiment of the application proposes a face image acquisition device and method, which can at least reduce the cost of the camera and improve the face image quality.
- the image sensor generates and outputs a first image signal and a second image signal through multiple exposures.
- the first image signal is an image signal generated according to a first preset exposure
- a second image signal is an image signal generated according to a second preset exposure.
- the first preset exposure and the second preset exposure are two of the multiple exposures.
- the light supplementer includes a first light supplement device that performs near-infrared supplement light, wherein the near-infrared supplement light is present at least during the exposure time period of the first preset exposure and is absent during the exposure time period of the second preset exposure, and the image processor is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
- only one image sensor is needed to obtain visible light images and infrared light images, which reduces cost and avoids the poor face image quality caused when the images obtained by two image sensors are out of sync due to their process structures and their registration and synchronization problems.
- FIG. 1 is a schematic structural diagram of a face image acquisition device provided by an embodiment of the application.
- the face image acquisition device may include: an image sensor 01, a light supplement 02, a filter assembly 03, a lens assembly 04, and an image processor 05.
- the image sensor 01 is located on the light exit side of the filter assembly 03
- the image processor 05 is located behind the image sensor 01.
- the image sensor 01 is used to generate and output the first image signal and the second image signal through multiple exposures.
- the first image signal is an image signal generated according to a first preset exposure
- the second image signal is an image signal generated according to a second preset exposure
- the first preset exposure and the second preset exposure are the multiple exposures Two of the exposures.
- the first image signal and the second image signal are obtained by photographing a person, that is, both the first image signal and the second image signal include a face area.
- the light supplement 02 includes a first light supplement device 021.
- the first light supplement device 021 is used to perform near-infrared supplement light, wherein near-infrared supplement light is present at least during a partial exposure time period of the first preset exposure, and no near-infrared supplement light is present during the exposure time period of the second preset exposure.
- supplementing light through the first light supplement device 021 improves the signal collection capability, which is beneficial to improving image quality.
- the filter assembly 03 includes a first filter 031.
- the first filter 031 allows visible light and part of the near-infrared light to pass.
- the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 performs near-infrared supplement light is higher than the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 does not perform near-infrared supplement light.
- the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light exit side of the filter assembly 03.
- alternatively, the lens 04 is located between the filter assembly 03 and the image sensor 01, and the image sensor 01 is located on the light exit side of the lens 04.
- the first filter 031 can be a filter film. In this way, when the filter assembly 03 is located between the lens 04 and the image sensor 01, the first filter 031 can be attached to the surface of the light exit side of the lens 04; or, when the lens 04 is located between the filter assembly 03 and the image sensor 01, the first filter 031 may be attached to the surface of the light incident side of the lens 04.
- the filter component 03 can control the spectral range received by the image sensor.
- the supplement light generated by the first light supplement device and visible light can pass through, while light of other spectral bands is blocked, ensuring effective use of the supplement light while minimizing the influence of other light sources.
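The passband behavior described above can be sketched with a toy model; the band edges used here (380-650 nm for visible light, 800-900 nm for the passed NIR band) are illustrative assumptions, not values from the application:

```python
def passes_first_filter(wavelength_nm):
    """Toy passband model of the first filter: visible light plus part of
    the near-infrared band. Band edges are illustrative assumptions."""
    visible = 380 <= wavelength_nm <= 650
    near_ir = 800 <= wavelength_nm <= 900
    return visible or near_ir

# Wavelengths outside both bands are blocked, limiting interference from
# other light sources while letting the fill light and visible light through.
blocked = [w for w in (350, 550, 700, 850, 1000) if not passes_first_filter(w)]
```

In a real device this selectivity is a physical property of the filter coating, not a software check; the sketch only illustrates which spectral ranges reach the sensor.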
- the image processor 05 is used to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
- the image processor 05 may receive the first image signal and the second image signal transmitted by the image sensor 01, and after performing face analysis and processing on the first image signal and the second image signal, obtain the face images in the first image signal and the second image signal, thereby realizing the face capture function.
- the face image acquisition device includes an image sensor, a light supplementer, a filter assembly, and an image processor; through the multiple exposures of the image sensor, the supplement light of the light supplementer, and the filtering of the filter assembly, a single image sensor can obtain multiple first image signals and second image signals with different spectral ranges, which expands the image acquisition capability of a single sensor and improves the image quality in different scenarios.
- the image processor is used to process and analyze the first image signal and the second image signal obtained by the device and to output a face image, thereby realizing the face capture or collection function of the device.
- the face image acquisition device may include an image acquisition unit and an image processing unit, and the image acquisition unit may include the above-mentioned image sensor, light supplementer, filter assembly, and lens assembly.
- the image acquisition unit may be an image acquisition device that includes the above-mentioned components, in which the light supplementer is a built-in part realizing the supplement light function, for example a camera, a capture machine, a face recognition camera, a code-reading camera, a vehicle-mounted camera, or a panoramic detail camera; as another example, the image acquisition unit can also be realized by connecting an image acquisition device with the light supplementer 02, which is located outside the image acquisition device and connected to it.
- the image processing unit may be an image processor, which has data processing and analysis capabilities and analyzes the face image in the image signal. Since the quality of the first image signal and the second image signal in this application is good, the accuracy of face detection is correspondingly improved.
- the exposure timing of the above-mentioned image sensor 01 is coordinated with the near-infrared supplement light timing of the first light supplement device 021 included in the light supplementer 02: for example, near-infrared supplement light is present at least during a part of the exposure time period of the first preset exposure, and no near-infrared supplement light is present during the exposure time period of the second preset exposure.
- FIG. 2 is a schematic structural diagram of an image processor in an embodiment of this application.
- the aforementioned image processor 05 may include: a processing component 051 and a detection component 052.
- the processing component 051 is configured to perform first preprocessing on the first image signal to generate a first image, and perform second preprocessing on the second image signal to generate a second image.
- the detection component 052 is used to perform face detection processing on the first image and the second image generated by the processing component 051 to obtain a face image.
- the detection component 052 can perform content analysis on the images (for example, the first image and the second image); if it detects the presence of facial feature information in an image, it can obtain the location of the face area and extract the face image, realizing the face capture function.
- the image processor is a computing platform that processes image signals, and has many typical implementations.
- the implementation of the image processor shown in FIG. 2 is a typical implementation that saves computing resources.
- the first image signal and the second image signal collected by the image sensor 01 undergo the image preprocessing of the processing component 051 to generate the first image and the second image; the detection component 052 then performs detection processing on the first image and the second image received from the processing component 051, thereby outputting a face image.
- the first image may be a grayscale image
- the second image may be a color image
- the grayscale image can be embodied in the form of a black and white image.
- the grayscale images described below can all be embodied as black and white images or as grayscale images with different black and white ratios, which can be set according to actual conditions and will not be repeated here.
- the first preprocessing may include any one or a combination of the following operations: image interpolation, gamma mapping, color conversion, and image noise reduction.
- the second preprocessing may include any one or a combination of the following: white balance, image interpolation, gamma mapping, and image noise reduction.
- the processing component may include conventional image processing such as white balance, image interpolation, color conversion, gamma mapping, and image noise reduction. Different processing procedures and parameters may be used for the above-mentioned first image signal and second image signal, so as to obtain a first image and a second image with different quality or color rendition.
- FIG. 3 is a schematic diagram of a flow of processing the first image signal and the second image signal by the processing component.
- the processing component uses first processing parameters to perform one or a combination of image interpolation, gamma mapping, color conversion, and image noise reduction on the first image signal to obtain a grayscale image, and uses second processing parameters to perform one or a combination of white balance, image interpolation, gamma mapping, and image noise reduction on the second image signal to obtain a color image.
- the processing component in this embodiment can flexibly select appropriate processing parameters and image signal combinations, so that the image quality of the final output face image is better.
- the first image signal and the second image signal are relative concepts, and their names can be interchanged.
- Figure 3 illustrates, as an example, performing image interpolation, gamma mapping, color conversion, and image noise reduction on the first image signal to obtain a grayscale image, and performing white balance, image interpolation, gamma mapping, and image noise reduction on the second image signal to obtain a color image; the embodiment of the present application is not limited thereto.
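The two preprocessing branches described above can be sketched as configurable chains of operations; the `gamma_map` and `noise_reduce` helpers below are toy stand-ins for the actual ISP algorithms (gamma mapping as a power function on normalized values, noise reduction as a 3-tap mean filter):

```python
def gamma_map(pixels, gamma):
    # Gamma mapping on 8-bit pixel values normalized to [0, 1].
    return [round((p / 255.0) ** gamma * 255) for p in pixels]

def noise_reduce(pixels):
    # Toy noise reduction: 3-tap sliding mean filter.
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1):i + 2]
        out.append(round(sum(window) / len(window)))
    return out

def preprocess(signal, ops):
    # Apply the configured chain of operations (first or second preprocessing).
    for op in ops:
        signal = op(signal)
    return signal

# First preprocessing branch (first processing parameters) -> grayscale image.
raw_first = [0, 64, 128, 255]
gray = preprocess(raw_first, [lambda s: gamma_map(s, 0.5), noise_reduce])
```

The second branch would call `preprocess` on the second image signal with a different operation chain (e.g. white balance first), which is how one component yields images of different quality or color rendition from the two signals.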
- the image processor 05 may include: a fusion component 053 in addition to a processing component 051 and a detection component 052.
- the information of different images can be extracted through the fusion component 053, and the information between the different images can be merged, so as to maximize the amount of information and improve the image quality.
- FIG. 4 is a schematic structural diagram of another image processor in an embodiment of this application.
- the fusion component 053 may be located between the processing component 051 and the detection component 052.
- the fusion component 053 is configured to perform fusion processing on the first image and the second image generated by the processing component 051 to generate a fusion image.
- the detection component 052 is specifically configured to perform face detection processing on the fused image generated by the fusion component to obtain a face image.
- the processing component 051 performs image preprocessing on the collected first image signal and second image signal, generates the first image and the second image, and sends the first image and the second image to the fusion component
- the fusion component 053 is used to fuse the received first image and the second image to generate a fusion image
- the detection component performs content detection and analysis on the received fusion image, and outputs a face image.
- the fusion component 053 separately extracts the information of the first image and the second image for fusion, so as to maximize the amount of information and output the fused image.
- the fusion component 053 is specifically used to extract the brightness information of the color image to obtain a brightness image, extract the color information of the color image to obtain a chrominance image, and perform fusion processing on the brightness image, the chrominance image, and the grayscale image to obtain a face image.
- the fusion processing includes at least one of the following operations: pixel-to-point fusion and pyramid multi-scale fusion.
- Fig. 5 is a schematic structural diagram of a fusion component performing fusion processing on a color image and a grayscale image.
- the fusion component 053 can extract the brightness image and the chrominance image from the color image and merge them with the grayscale image, for example by pixel-to-point fusion or pyramid multi-scale fusion.
- the fusion weight of each image can be configured by the user, or calculated from image brightness, texture, and other information, so as to output a color fusion image with improved signal-to-noise ratio.
- after the fusion component 053 extracts the brightness image and the chrominance image from the color image, it merges the brightness image and the grayscale image to obtain a fused brightness image, and then merges the fused brightness image and the chrominance image to output a color fusion image.
- it can be determined by the following formula: y_FUS = ω · y_NIR + (1 - ω) · y_VIS
- y_FUS represents the fusion image
- y_VIS represents the brightness image
- y_NIR represents the grayscale image
- ω represents the fusion weight
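Assuming a pixel-to-point weighted form y_FUS = ω · y_NIR + (1 - ω) · y_VIS, consistent with the symbols defined above, the brightness fusion can be sketched per pixel:

```python
def fuse_brightness(y_vis, y_nir, w):
    # Pixel-to-point weighted fusion: y_FUS = w * y_NIR + (1 - w) * y_VIS.
    # w near 1 favors the NIR-assisted grayscale image (better SNR in low
    # light); w near 0 favors the visible-light brightness image.
    return [round(w * n + (1 - w) * v) for v, n in zip(y_vis, y_nir)]

y_vis = [100, 120, 140]   # brightness channel of the color image
y_nir = [200, 180, 160]   # grayscale (NIR-assisted) image
fused = fuse_brightness(y_vis, y_nir, 0.5)
```

The fused brightness image would then be recombined with the chrominance information to produce the color fusion image; pyramid multi-scale fusion would replace this per-pixel rule with per-band weighted sums.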
- FIG. 6 is a schematic structural diagram of another image processor in an embodiment of this application. As shown in FIG. 6, the fusion component 053 may be located behind the detection component 052.
- the detection component 052 is specifically configured to perform face detection processing on the first image and the second image generated by the processing component 051 to obtain the first face image and the second face image.
- the fusion component 053 is specifically used to perform fusion processing on the first face image and the second face image obtained by the detection component 052 to obtain a face image.
- the first image signal and the second image signal collected by the image sensor are subjected to the image preprocessing of the processing component 051 to generate a color image and a grayscale image.
- the detection component 052 performs detection processing on the received color image and grayscale image, and outputs a color face image and a grayscale face image.
- the fusion component performs fusion processing on the color face image and the grayscale face image to generate a fused face image.
- the detection component 052 is specifically configured to calibrate the position and size of the face region according to the facial features detected in a target image and to output a target face image, the target image being any one of the following: the first image, the second image, the fusion image of the first image and the second image, and the combination of the first image and the second image.
- FIG. 7 is a flow diagram of face detection processing performed by the detection component in an embodiment of the application.
- As shown in FIG. 7, the detection component 052 is specifically used to extract multiple facial feature points in the target image, determine, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points, determine the face position coordinates based on the multiple positioning feature points, and determine the target face image in the target image.
- when the target image is the first image, the target face image is the first face image
- when the target image is the second image, the target face image is the second face image
- when the target image is the fusion image of the first image and the second image, the target face image is a fused face image of the first image and the second image
- in this embodiment, extracting multiple facial feature points in the target image is usually also called feature point extraction, and determining, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points actually refers to feature point comparison and feature point positioning.
- a typical implementation of feature point extraction and feature point comparison is to obtain feature data helpful for face classification according to the shape description of the face organs and the distance characteristics between them.
- the feature data usually include the Euclidean distance, curvature, and angle between feature points. Since a human face is composed of parts such as the eyes, nose, mouth, and chin, the geometric description of these parts and the structural relationships between them can be used as important features for detecting the face area. If feature points satisfying the facial rules are detected, feature point positioning is performed, the face position coordinates are obtained, and the face image is extracted.
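A minimal sketch of such a geometric rule check on labeled feature points; the point labels and thresholds below are illustrative assumptions, not the patented rules:

```python
import math

def dist(a, b):
    # Euclidean distance between two (x, y) feature points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def meets_facial_rules(pts):
    """Toy facial-rule check: pts maps 'left_eye', 'right_eye', 'nose',
    'mouth' to (x, y) coordinates. Thresholds are illustrative."""
    eye_span = dist(pts["left_eye"], pts["right_eye"])
    if eye_span == 0:
        return False
    # Rule 1: the nose should sit horizontally between the eyes.
    nose_x = pts["nose"][0]
    if not (min(pts["left_eye"][0], pts["right_eye"][0]) <= nose_x
            <= max(pts["left_eye"][0], pts["right_eye"][0])):
        return False
    # Rule 2: the mouth lies below the nose at a plausible distance
    # relative to the eye span.
    mouth_drop = pts["mouth"][1] - pts["nose"][1]
    return 0.2 * eye_span <= mouth_drop <= 1.5 * eye_span

face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "mouth": (50, 80)}
ok = meets_facial_rules(face)   # geometric rules satisfied for this layout
```

Points passing such rules would serve as the positioning feature points from which the face position coordinates are derived.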
- the processing procedures for the first image, the second image, and the fusion image are similar, and they can all be implemented based on the foregoing processing procedures for the target image, which will not be repeated here.
- the detection component 052 is not only used to determine the target face image in the target image, but can also be used to detect, based on the principle of living body detection, whether the target face image is obtained by shooting a real face, and to output the target face image when it is determined that the target face image is obtained by shooting a real face.
- the detection component 052 can detect the target face image to verify whether the target face image is obtained by photographing a real face.
- the detection component 052 can distinguish the source of the target face image by using the different infrared reflection characteristics of a real human face and of fake faces such as a piece of paper, a screen, or a stereo mask.
- the detection component in this embodiment can process two images at the same time, and can output one face image or two face images as needed.
- the foregoing target image is a combination of the first image and the second image.
- the detection component 052 is specifically configured to extract multiple facial feature points in the first image, determine, based on preset facial feature information, multiple positioning feature points that meet the facial rules from the multiple facial feature points, determine the first face position coordinates from the multiple positioning feature points, perform face extraction on the first image according to the first face position coordinates to obtain the first face image, and at the same time perform face extraction on the second image according to the first face position coordinates to obtain a second face image.
- the first image is a grayscale image
- the second image is a color image
- the first face image is a grayscale face image
- the second face image is a color face image
- the detection component 052 is also used to detect, based on the principle of living body detection, whether the grayscale face image is obtained by shooting a real face, and, when it is determined that the grayscale face image is obtained by shooting a real face, to output the grayscale face image and to output a color face image based on the extracted second face image.
- FIG. 8 is a schematic diagram of processing the color image and the grayscale image by the detection component in this embodiment.
- the detection component 052 first performs face calibration on the grayscale image, which has a higher signal-to-noise ratio, through feature point extraction, feature point comparison, and feature point positioning, to obtain the face position coordinates; it then extracts the grayscale face image from the grayscale image and performs living body detection to determine whether the grayscale face image is obtained by shooting a real face; if so, it extracts the color face image from the color image, thereby outputting the grayscale face image and the color face image, or outputting only the color face image.
- whether the detection component outputs one face image, two face images, or more face images can be determined according to actual needs, and will not be repeated here.
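The grayscale-first flow can be sketched as follows, with `calibrate` and `is_live` as assumed callbacks standing in for the feature-point pipeline and the living-body check:

```python
def detect_face_pair(gray_img, color_img, calibrate, is_live):
    """Calibrate the face on the grayscale image, liveness-check the
    grayscale face, then crop the color image at the same coordinates."""
    coords = calibrate(gray_img)          # face position from grayscale image
    if coords is None:
        return None                       # no face found
    x0, y0, x1, y1 = coords
    gray_face = [row[x0:x1] for row in gray_img[y0:y1]]
    if not is_live(gray_face):
        return None                       # reject paper/screen/mask replays
    color_face = [row[x0:x1] for row in color_img[y0:y1]]
    return gray_face, color_face

# Toy 3x3 "images" and stub callbacks for illustration.
gray = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
color = [["r0", "r1", "r2"], ["r3", "r4", "r5"], ["r6", "r7", "r8"]]
result = detect_face_pair(gray, color,
                          calibrate=lambda img: (1, 1, 3, 3),
                          is_live=lambda face: True)
```

Reusing the grayscale-derived coordinates for the color crop is what keeps the two output face images spatially aligned without a second detection pass.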
- the first image is a color image and the second image is a grayscale image
- the first facial image is a color facial image
- the second facial image is a grayscale facial image
- the detection component 052 is also used to detect, based on the principle of living body detection, whether the grayscale face image is obtained by shooting a real face, and, when it is determined that the grayscale face image is obtained by shooting a real face, to output a color face image based on the extracted first face image.
- FIG. 9 is another schematic diagram of the detection component processing the color image and the grayscale image in this embodiment.
- the detection component 052 first performs feature point extraction, feature point comparison, and feature point positioning on the color image to obtain the face position coordinates, then extracts a grayscale face image from the grayscale image according to the face position coordinates, and performs living body detection to determine whether the grayscale face image is obtained by photographing a real face; if so, a color face image is extracted from the color image and output.
- this implementation is illustrated by outputting a color face image.
- this implementation can also output two images, such as a grayscale face image and a color face image.
- the embodiment of the present application does not limit the number of image frames that are specifically output by each implementation manner, and it can be determined according to implementation needs, and will not be repeated here.
- the image output by the face capture machine can be one or more of: a grayscale face image, a color face image, or a face image obtained by fusing the grayscale face image and the color face image.
- the foregoing image processor further includes: a cache component 054.
- the cache component 054 may be located before the processing component 051 or after the processing component 051.
- the cache component 054 is used to cache temporary content, where the temporary content includes the first image signal and/or the second image signal output by the image sensor 01; or, the temporary content includes the first image and/or the second image obtained by the image processor 05 during processing.
- FIG. 10 is a schematic structural diagram of another image processor in an embodiment of this application.
- the image processor 05 in this embodiment has an image synchronization function. Specifically, if a subsequent module (for example, the processing component 051) needs to process the first image signal and the second image signal at the same time, the buffer component 054 can be located before the processing component 051; the buffer component 054 stores whichever of the first image signal and the second image signal is collected first, and processing proceeds only after the other signal is received, thereby achieving synchronization between the first image signal and the second image signal. That is, the cache component 054 in this embodiment can realize synchronization between images with different exposure time periods by caching images.
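As a sketch of this synchronization role (the class and method names are illustrative, not from the patent), the cache can be modeled as two queues that release a (first, second) pair only once both signals have arrived:

```python
from collections import deque

class FrameBuffer:
    """Minimal model of the cache component 054's synchronization role:
    hold first/second image signals until a matching pair exists."""
    def __init__(self):
        self.first = deque()   # first image signals (NIR-supplemented exposure)
        self.second = deque()  # second image signals (ambient exposure)

    def push(self, kind, frame):
        """Store one signal; return a synchronized (first, second) pair
        as soon as both queues are non-empty, else None."""
        (self.first if kind == "first" else self.second).append(frame)
        if self.first and self.second:
            return self.first.popleft(), self.second.popleft()
        return None
```

Whichever signal arrives first simply waits in its queue, which is exactly the buffering behavior described above.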
- the image that the buffer component 054 can store can be the original image signal (first image signal or second image signal) collected by the image sensor, the first image and/or second image obtained by the image processor during processing, the first face image and/or second face image, etc.
- the embodiment of the present application does not limit the content cached by the cache component 054, which can be determined according to actual conditions, and will not be repeated here.
- the image processor 05 may also have a noise reduction function.
- the image processor 05 may use the grayscale image, which has a high signal-to-noise ratio, as a guide image to perform joint noise reduction on the color image and the grayscale image, for example by guided filtering or joint bilateral filtering, to obtain a color image and a grayscale image with an improved signal-to-noise ratio.
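Joint noise reduction of this kind can be sketched with the classic guided filter, using the grayscale image as the guide. The NumPy implementation below is illustrative only, not the patent's actual algorithm:

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window via a summed-area table, edge-padded."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge").astype(np.float64)
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/column so window sums are differences
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=3, eps=1e-2):
    """Smooth `src` while following the edges of `guide` (here, the
    higher-SNR grayscale image), after He et al.'s guided filter."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    var_I = box_filter(guide * guide, r) - mean_I ** 2
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)       # per-window linear model p ~ a * I + b
    b = mean_p - a * mean_I
    return box_filter(a, r) * guide + box_filter(b, r)
```

For a color image, the same `guided_filter` would be applied per channel with the grayscale image as `guide`.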
- the foregoing image processor further includes: an image enhancement component 055.
- the image enhancement component 055 may be located after the processing component 051, before the detection component 052, or after the detection component 052.
- the image processor 05 includes a fusion component 053
- the image enhancement component 055 can also be located before the fusion component 053.
- the specific setting position of the image enhancement component 055 can be flexibly configured according to application requirements or resource conditions, and this embodiment does not limit it.
- FIG. 11 is a schematic structural diagram of another image processor in an embodiment of this application.
- the following takes the case where the image enhancement component 055 is located after the detection component 052 for explanation.
- the image enhancement component 055 is used to perform enhancement processing on a target image to obtain an enhanced target image.
- the enhancement processing includes at least one of the following: contrast enhancement and super-resolution reconstruction, and the target image is any one of the following images: the first image, the second image, the fusion image of the first image and the second image, and the face image.
- the case where the target image is a face image is taken as a schematic illustration.
- the image processor 05 has an image enhancement function: it performs enhancement processing, such as contrast enhancement and super-resolution, on the received first image, second image, or face image, and outputs a face image of elevated quality.
- the image processor 05 processes the low-resolution small face image through super-resolution reconstruction to generate a high-resolution large face image to improve image quality.
- the super-resolution reconstruction processing can adopt interpolation-based, reconstruction-based, and learning-based methods, which will not be repeated here.
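As an illustration of the interpolation-based route (a sketch only, not the patent's reconstruction method), a low-resolution face crop can be upscaled by bilinear interpolation:

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Interpolation-based super-resolution sketch: bilinear upscaling of a
    small grayscale image (e.g. a low-resolution face crop) by an integer factor."""
    h, w = img.shape[:2]
    H, W = h * factor, w * factor
    # Map each output pixel center back to fractional source coordinates.
    ys = (np.arange(H) + 0.5) / factor - 0.5
    xs = (np.arange(W) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[y0][:, x0]          # four neighbors around each output pixel
    tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]
    br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy
```

Reconstruction-based and learning-based methods would replace this interpolation step with a model of the imaging process or a trained network, respectively.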
- the light supplement device 02 can perform stroboscopic light supplementation, that is, it can switch between different fill-light states at high frequency: the first-state fill light is used when performing image acquisition according to the first preset exposure, and the second-state fill light is used when performing image acquisition according to the second preset exposure.
- the first-state fill light and the second-state fill light can adopt different fill-light configurations, whose parameters include but are not limited to fill-light type, fill-light intensity (including the off state), and fill-light duration, so as to expand the spectral range that the image sensor 01 can receive.
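The state switching above can be sketched as follows; the parameter names and values are purely illustrative, not from the patent:

```python
FIRST, SECOND = "first_preset_exposure", "second_preset_exposure"

def fill_state(exposure):
    """Fill-light configuration used for a given exposure type; the
    concrete parameters here are hypothetical."""
    if exposure == FIRST:
        return {"light_type": "near_infrared", "intensity": "on"}
    return {"light_type": None, "intensity": "off"}

# Alternating exposures within one frame period; the fill light switches
# state at the same high frequency as the exposure sequence.
schedule = [FIRST, SECOND, FIRST, SECOND]
states = [fill_state(e) for e in schedule]
```

The point of the sketch is the lockstep: every exposure type deterministically selects its own fill-light state.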
- the first light supplement device 021 is a device that can emit near-infrared light, such as a near-infrared fill light; the first light supplement device 021 can perform near-infrared supplementation in a stroboscopic manner or in other manners similar to stroboscopic, which is not limited in the embodiment of the present application.
- when the first light supplement device 021 performs near-infrared supplementation in a stroboscopic manner, it can be controlled manually, or by a software program or a specific device, which is not limited in the embodiment of the present application.
- the time period during which the first light supplement device 021 performs near-infrared light supplementation may coincide with, be longer than, or be shorter than the exposure time period of the first preset exposure, as long as near-infrared supplementary light is present during the entire exposure time period or part of the exposure time period of the first preset exposure, and no near-infrared supplementary light is present during the exposure time period of the second preset exposure.
- the exposure time of the image sensor and the fill-light duration of the first light supplement device meet certain constraints: if the infrared fill light is turned on in the first fill-light state, its fill-light time period cannot overlap with the exposure time period of the second image signal; similarly, if the infrared fill light is turned on in the second fill-light state, its fill-light time period cannot overlap with the exposure time period of the first image signal. Multi-spectral image acquisition is thereby realized.
- the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of the effective image of the second image signal and the end exposure time of the last row of the effective image, but it is not limited to this.
- the exposure time period of the second preset exposure may also be the exposure time period corresponding to the target image in the second image signal, and the target image is a number of rows of effective images corresponding to the target object or target area in the second image signal.
- the time period between the start exposure time and the end exposure time of several rows of effective images can be regarded as the exposure time period of the second preset exposure.
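The timing constraint above amounts to an interval-overlap test. It can be sketched with half-open (start, end) intervals; the function names are illustrative:

```python
def overlaps(a, b):
    """True if half-open time intervals a = (start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def fill_light_valid(fill_period, first_exposure, second_exposure, ir_on):
    """Check the constraint above: an infrared fill-light period may overlap
    the first preset exposure but must not overlap the second preset exposure."""
    if not ir_on:
        return True
    return overlaps(fill_period, first_exposure) and not overlaps(fill_period, second_exposure)
```

The same check, with the exposure arguments swapped, covers the symmetric case of infrared fill light in the second fill-light state.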
- the image sensor may generate a first image signal according to a first preset exposure, and a second image signal according to a second preset exposure.
- the first preset exposure and the second preset exposure may use the same or different exposure parameters, including but not limited to exposure time, gain, and aperture size, and can be matched with the fill-light state to achieve multi-spectral image acquisition.
- the near-infrared light incident on the surface of the object may be reflected by the object and enter the first filter 031.
- the ambient light may include visible light and near-infrared light, and near-infrared light in the ambient light is also reflected by the object when it is incident on the surface of the object, thereby entering the first filter 031.
- the near-infrared light that passes through the first filter 031 when there is near-infrared supplementary light may include the near-infrared light that enters the first filter 031 by the reflection of the object when the first supplementary light device 021 performs near-infrared supplementary light.
- the near-infrared light passing through the first filter 031 when there is no near-infrared supplementary light may include the near-infrared light reflected by the object and entering the first filter 031 when the first supplementary light device 021 is not performing near-infrared supplementary light.
- the near-infrared light passing through the first filter 031 when there is near-infrared supplementary light includes the near-infrared light emitted by the first light supplement device 021 and reflected by the object, and the near-infrared light in the ambient light reflected by the object.
- the near-infrared light passing through the first filter 031 when there is no near-infrared supplementary light includes near-infrared light reflected by an object in the ambient light.
- the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light-emitting side of the filter assembly 03 as an example.
- the process of generating the first image signal and the second image signal is as follows: when the image sensor 01 performs the first preset exposure, the first light supplement device 021 provides near-infrared supplementary light; after the ambient light in the shooting scene and the near-infrared light reflected by objects in the scene during the near-infrared fill light pass through the lens 04 and the first filter 031, the image sensor 01 generates the first image signal through the first preset exposure. When the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not provide near-infrared fill light;
- after the ambient light in the shooting scene passes through the lens 04 and the first filter 031, the image sensor 01 generates the second image signal through the second preset exposure.
- there can be M first preset exposures and N second preset exposures in one frame period of image acquisition, and there can be multiple ordering combinations between the first preset exposures and the second preset exposures.
- the values of M and N and the size relationship between M and N can be set according to actual requirements. For example, the values of M and N can be equal or different.
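For instance, the M and N exposures in a frame period can be laid out as a flat sequence. The helper below is a hypothetical sketch of one such ordering (all first preset exposures followed by all second preset exposures), one of the many combinations mentioned above:

```python
def frame_period_exposures(m, n):
    """One frame period containing M first preset exposures and N second
    preset exposures; this particular ordering is only one possibility."""
    return ["first"] * m + ["second"] * n

# e.g. 25 frame periods per second, each yielding one group of image signals
one_second = [frame_period_exposures(1, 1) for _ in range(25)]
```

With M = N = 1 this reproduces the 25-groups-per-second example given later in the text.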
- since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation is higher than the intensity of the near-infrared light passing through the first filter 031 when it does not.
- the wavelength range of the near-infrared light incident on the first filter 031 may be the first reference wavelength range, and the first reference wavelength range is 650 nm to 1100 nm.
- the wavelength range in which the first light supplement device 021 performs near-infrared supplementation may be the second reference wavelength range, and the second reference wavelength range may be 700 nanometers to 800 nanometers, or 900 nanometers to 1000 nanometers, etc., which is not limited in the embodiment of the present application.
- the fill-light devices included in the light supplement can emit visible light, infrared light, or a combination of the two, and the energy of the near-infrared fill light is concentrated in the range of 650 nm to 1000 nm.
- for example, the energy is concentrated in the range of 700 nm to 800 nm, or in the range of 900 nm to 1000 nm, so as to avoid interference from the common 850 nm infrared lamps in the 800 nm to 900 nm range and to avoid confusion with signal lights.
- the near-infrared light passing through the first filter 031 may include the near-infrared light reflected by the object and entering the first filter 031 when the first light-filling device 021 performs near-infrared light-filling when there is near-infrared supplementary light, And the near-infrared light reflected by the object in the ambient light. Therefore, the intensity of the near-infrared light entering the filter assembly 03 is relatively strong at this time. However, when there is no near-infrared complementary light, the near-infrared light passing through the first filter 031 includes the near-infrared light reflected by the object into the filter assembly 03 in the ambient light.
- the intensity of the near-infrared light passing through the first filter 031 is weak at this time. Therefore, the intensity of the near infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near infrared light included in the second image signal generated and output according to the second preset exposure.
- there are multiple choices for the center wavelength and/or wavelength range of the near-infrared supplementary light of the first light supplement device 021.
- in this embodiment of the application, the center wavelength of the near-infrared supplementary light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 can be selected, so that the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meets the constraint conditions.
- this constraint is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible, and that the band width of the near-infrared light passing through the first filter 031 is as narrow as possible, so as to avoid the wavelength interference introduced by an excessively wide near-infrared band.
- the center wavelength of the near-infrared supplementary light of the first light supplement device 021 may be the average value of the wavelength range with the highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021; it may also be understood as the wavelength at the middle position of the wavelength range whose energy exceeds a certain threshold in that spectrum.
- the set characteristic wavelength or the set characteristic wavelength range can be preset.
- for example, the center wavelength of the near-infrared supplementary light of the first light supplement device 021 may be any wavelength within the range of 750±10 nanometers; or any wavelength within the range of 780±10 nanometers; or any wavelength within the range of 810±10 nanometers; or any wavelength within the range of 940±10 nanometers.
- the set characteristic wavelength range may be a wavelength range of 750 ⁇ 10 nanometers, or a wavelength range of 780 ⁇ 10 nanometers, or a wavelength range of 810 ⁇ 10 nanometers, or a wavelength range of 940 ⁇ 10 nanometers.
- FIG. 12 is a schematic diagram of the relationship between the wavelength and the relative intensity of the near-infrared supplement light performed by a first light supplement device provided in an embodiment of the present application.
- the center wavelength of the first light supplement device 021 for near-infrared supplement light is 940 nanometers.
- the wavelength range of the first light supplement device 021 for near-infrared supplement light is 900 nanometers to 1000 nanometers.
- at a wavelength of 940 nanometers, the relative intensity of the near-infrared light is highest.
- as an example, the above-mentioned constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplementary light of the first light supplement device 021 lies within a wavelength fluctuation range, which may be, for example, 0 to 20 nanometers.
- the center wavelength of the near-infrared light passing through the first filter 031 can be the wavelength at the peak position in the near-infrared band of the near-infrared light pass rate curve of the first filter 031; it can also be understood as the wavelength at the middle position of the near-infrared band whose pass rate exceeds a certain threshold in that curve.
- the above constraint conditions may include: the first band width may be smaller than the second band width.
- the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 031
- the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 031.
- the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
- for example, if the near-infrared light passing through the first filter 031 has a wavelength range of 700 nanometers to 800 nanometers, the first band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
- the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
- FIG. 13 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter and the pass rate.
- the near-infrared light incident on the first filter 031 has a wavelength range of 650 nm to 1100 nm.
- as shown in FIG. 13, the first filter 031 passes visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength of 650 nanometers to 900 nanometers and of 1000 nanometers to 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, that is, 100 nanometers.
- the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, that is, 350 nanometers. Since 100 nanometers is smaller than 350 nanometers, the band width of the near-infrared light passing through the first filter 031 is smaller than the band width of the near-infrared light blocked by the first filter 031.
- the above relationship curve is just an example; for different filters, the wavelength range of the near-infrared light that can pass through the filter can be different, and the wavelength range of the near-infrared light blocked by the filter can also be different.
- as an example, the above constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers.
- the half bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
- the above constraint conditions may include: the third band width may be smaller than the reference band width.
- the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
- the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
- the set ratio can be any ratio from 30% to 50%.
- the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the application.
- the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
- for example, the wavelength band of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers, the set ratio is 30%, and the reference band width is 100 nanometers. It can be seen from FIG. 13 that, in the near-infrared band from 650 nanometers to 1100 nanometers, the band width of the near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
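The band-width quantities above can be computed directly from a sampled pass-rate curve. The sketch below uses a synthetic curve (not the actual FIG. 13 data), with a roughly 40 nm wide passband around 940 nm:

```python
import numpy as np

def band_width(wavelengths, pass_rates, threshold):
    """Width (in nm) of the wavelength region whose pass rate exceeds
    `threshold`, assuming a uniformly sampled pass-rate curve."""
    step = wavelengths[1] - wavelengths[0]
    return np.count_nonzero(pass_rates > threshold) * step

# Synthetic near-infrared passband centered at 940 nm, ~40 nm wide.
wl = np.arange(650.0, 1100.0, 1.0)
rate = np.where(np.abs(wl - 940.0) < 20.0, 0.9, 0.05)

half_bandwidth = band_width(wl, rate, 0.5)   # half bandwidth: pass rate > 50%
third_band = band_width(wl, rate, 0.3)       # third band width: pass rate > set ratio of 30%
```

With these synthetic numbers, both the half-bandwidth constraint (≤ 50 nm) and the third-band-width constraint (less than the 100 nm reference width) are satisfied.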
- since the first light supplement device 021 provides near-infrared supplementary light during at least part of the exposure time period of the first preset exposure and does not provide near-infrared supplementary light during the entire exposure time period of the second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplementary light during the exposure time periods of part of the exposures of the image sensor 01 and not during the exposure time periods of the remaining exposures.
- the number of times of supplementary light per unit time of the first light supplement device 021 may be lower than the number of exposures per unit time of the image sensor 01, wherein one or more exposures occur within the interval between two adjacent times of supplementary light.
- the light supplement device 02 may further include a second light supplement device 022, and the second light supplement device 022 is used for visible light supplement light.
- the second light supplement device 022 provides visible light supplement light for at least part of the exposure time of the first preset exposure, that is, at least the near-infrared supplement light and visible light supplement light are present during the partial exposure time period of the first preset exposure.
- the mixed color of the two lights can be distinguished from the color of the red light in a traffic light, thereby preventing the human eye from confusing the color of the near-infrared supplementary light of the light supplement device 02 with the color of the red light in the traffic light.
- if the second light supplement device 022 provides visible supplementary light during the exposure time period of the second preset exposure, then when the intensity of visible light in the environment is not particularly high, the brightness of visible light in the second image signal can be increased, thereby ensuring the quality of image acquisition.
- the second light supplement device 022 may be used to perform visible light supplementation in a constant light mode; or, the second light supplement device 022 may be used to perform visible light supplementation in a stroboscopic manner, wherein visible supplementary light is present at least during part of the exposure time period of the first preset exposure and absent during the entire exposure time period of the second preset exposure; or, the second light supplement device 022 may be used to perform visible light supplementation in a stroboscopic manner, wherein visible supplementary light is absent at least during the entire exposure time period of the first preset exposure and present during part of the exposure time period of the second preset exposure.
- when the second light supplement device 022 performs visible light supplementation in a constant light mode, it can not only prevent human eyes from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, but also improve the brightness of visible light in the second image signal, thereby ensuring the quality of image acquisition.
- when the second light supplement device 022 performs visible light supplementation in a stroboscopic manner, it can either prevent human eyes from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, or improve the brightness of visible light in the second image signal and thereby ensure the quality of image acquisition; it can also reduce the number of times of supplementary light of the second light supplement device 022, thereby prolonging the service life of the second light supplement device 022.
- the filter assembly 03 may further include a second filter 032 and a switching component 033.
- the second filter 032 can be switched to the light incident side of the image sensor 01 by the switching component 033. After the second filter 032 is switched to the light incident side of the image sensor 01, the second filter 032 allows visible light to pass and blocks near-infrared light, and the image sensor 01 then performs exposure to generate and output a third image signal. Therefore, the face image acquisition device of this embodiment is compatible with the existing image acquisition function, which improves flexibility.
- the switching component 033 being used to switch the second filter 032 to the light incident side of the image sensor 01 can also be understood as the second filter 032 replacing the position of the first filter 031 on the light incident side of the image sensor 01.
- the first light supplement device 021 may be in a closed state or an open state.
- the aforementioned multiple exposure refers to multiple exposures within one frame period, that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
- 1 second includes 25 frame periods, and the image sensor 01 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal, and the The first image signal and the second image signal are called a group of image signals, so that 25 groups of image signals are generated within 25 frame periods.
- the first preset exposure and the second preset exposure can be two adjacent exposures in multiple exposures in one frame period, or two non-adjacent exposures in multiple exposures in one frame period. The application embodiment does not limit this.
- the first image signal is generated and output by the first preset exposure
- the second image signal is generated and output by the second preset exposure.
- after the first image signal and the second image signal are generated and output, they can be processed.
- the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
- the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
- when the first light supplement device 021 performs near-infrared supplementary light, the intensity of the near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light included in the first image signal generated and output accordingly will also be higher.
- near-infrared light with higher brightness is not conducive to the acquisition of external scene information.
- the exposure gain of the first preset exposure may be smaller than the exposure gain of the second preset exposure. In this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high.
- the longer the exposure time, the higher the brightness included in the image signal obtained by the image sensor 01, and the longer the motion trail of moving objects in the external scene appears in the image signal; the shorter the exposure time, the lower the brightness included in the image signal obtained by the image sensor 01, and the shorter the motion trail of moving objects in the external scene appears in the image signal. Therefore, it is necessary to ensure that the brightness of the near-infrared light contained in the first image signal is within an appropriate range, and that moving objects in the external scene have a short motion trail in the first image signal.
- the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
- in this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high.
- the shorter exposure time makes the motion trail of moving objects in the external scene appear shorter in the first image signal, thereby facilitating the recognition of moving objects.
- the exposure time of the first preset exposure is 40 milliseconds
- the exposure time of the second preset exposure is 60 milliseconds, and so on.
- it should be noted that the exposure time of the first preset exposure may be less than, or equal to, the exposure time of the second preset exposure.
- the exposure gain of the first preset exposure may be less than, or equal to, the exposure gain of the second preset exposure.
- the purposes of the first image signal and the second image signal may be the same.
- the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure; if the two exposure times differ, motion trailing will appear in the image signal of the channel with the longer exposure time, resulting in different sharpness between the two image signals.
- the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
- the exposure gain of the first preset exposure may be less than or equal to the exposure gain of the second preset exposure.
- the exposure time of the first preset exposure may be less than or equal to the exposure time of the second preset exposure.
- The image sensor 01 may include multiple photosensitive channels, and each photosensitive channel may be used to sense light in at least one visible light waveband as well as light in the near-infrared waveband. That is, each photosensitive channel can sense at least one kind of visible light, such as red, green, blue, or yellow light, in addition to light in the near-infrared band.
- the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
- Every pixel of the image sensor 01 can sense the fill light generated by the light supplement 02, which ensures that the collected infrared light image has full resolution with no missing pixels.
- the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
- the R photosensitive channel is used to sense the light in the red and near-infrared bands
- the G photosensitive channel is used to sense the light in the green and near-infrared bands
- the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
- the Y photosensitive channel is used to sense light in the yellow and near-infrared bands.
- Both W and C can be used to denote the photosensitive channel that senses full-waveband light; that is, when the multiple photosensitive channels include a channel for sensing full-waveband light, this channel may be a W photosensitive channel or a C photosensitive channel. In practical applications, the photosensitive channel used for sensing full-waveband light can be selected according to the use requirements.
- the image sensor 01 may be an RGB sensor, an RGBW sensor, an RCCB sensor, or an RYYB sensor.
- FIG. 15 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
- FIG. 16 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
- Fig. 17 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
- Fig. 18 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
- the distribution of the R photosensitive channel, G photosensitive channel and B photosensitive channel in the RGB sensor can be seen in Figure 15.
- the distribution of the R, G, B, and W photosensitive channels in the RGBW sensor can be seen in Figure 16; the distribution of the R, C, and B photosensitive channels in the RCCB sensor in Figure 17; and the distribution of the R, Y, and B photosensitive channels in the RYYB sensor in Figure 18.
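The channel layouts named above can be illustrated as repeating 2x2 unit cells. The exact arrangements are defined by Figures 15 to 18; the cells below follow common industry conventions and are assumptions for illustration, not the patent figures themselves.

```python
# Illustrative 2x2 repeating unit cells for the sensor types named above.
# These follow common industry conventions (e.g. Bayer RGGB) and are
# assumptions, not the layouts of Figures 15-18.
UNIT_CELLS = {
    "RGB":  [["R", "G"], ["G", "B"]],   # classic Bayer RGGB
    "RGBW": [["R", "G"], ["W", "B"]],
    "RCCB": [["R", "C"], ["C", "B"]],
    "RYYB": [["R", "Y"], ["Y", "B"]],
}

def mosaic(pattern: str, rows: int, cols: int):
    """Tile the 2x2 unit cell of `pattern` into a rows x cols channel map."""
    cell = UNIT_CELLS[pattern]
    return [[cell[r % 2][c % 2] for c in range(cols)] for r in range(rows)]
```

Tiling `mosaic("RGB", 4, 4)` yields alternating R/G and G/B rows, matching the usual Bayer distribution.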
- some photosensitive channels may only sense light in the near-infrared waveband, but not light in the visible light waveband.
- the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels.
- the R photosensitive channel is used to sense red light and near-infrared light
- the G photosensitive channel is used to sense green light and near-infrared light
- the B photosensitive channel is used to sense blue light and near-infrared light.
- the IR photosensitive channel is used to sense light in the near-infrared band.
- the image sensor 01 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
- When the image sensor 01 is an RGB sensor, compared with other image sensors such as RGBIR sensors, the RGB information it collects is more complete. Because some photosensitive channels of an RGBIR sensor cannot collect visible light, the color details of the image collected by an RGB sensor are more accurate.
- FIG. 19 is a schematic diagram of a sensing curve of an image sensor according to an embodiment of the present application.
- the R curve in Figure 19 represents the sensing curve of image sensor 01 to light in the red light band
- the G curve represents the sensing curve of image sensor 01 to light in the green light band
- the B curve represents the sensing curve of the image sensor 01 for light in the blue light band, the W (or C) curve represents the sensing curve of the image sensor 01 for full-waveband light,
- the NIR (Near infrared) curve represents the sensing curve of the image sensor 01 sensing light in the near infrared band.
- the image sensor 01 may adopt a global exposure method or a rolling shutter exposure method.
- The global exposure mode means that the exposure start time of every row of the effective image is the same and the exposure end time of every row is the same; in other words, it is an exposure mode in which all rows of the effective image begin exposure simultaneously and end exposure simultaneously.
- The rolling shutter exposure mode means that the exposure periods of different rows of the effective image do not completely coincide: the exposure start time of a row is later than the exposure start time of the previous row, and its exposure end time is later than the exposure end time of the previous row. In addition, each row can output data once its exposure ends, so the time from when the first row of the effective image begins to output data to when the last row finishes outputting data can be expressed as the readout time.
- FIG. 20 is a schematic diagram of a rolling shutter exposure method. As shown in Figure 20, the first row of the effective image begins exposure at time T1 and ends at time T3, while the second row begins exposure at time T2 and ends at time T4; T2 is shifted back by a period relative to T1, and T4 is shifted back by a period relative to T3. The first row ends exposure at time T3 and begins to output data, finishing at time T5; the n-th row ends exposure at time T6 and begins to output data, finishing at time T7. The time between T3 and T7 is therefore the readout time.
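The row timing described above can be sketched numerically. This is a minimal model assuming a uniform delay between consecutive row starts and a fixed per-row readout duration; the parameter names are illustrative, not taken from the application.

```python
def rolling_shutter_times(n_rows, exposure, row_delay, readout):
    """Per-row (start, end_of_exposure, end_of_readout) tuples for a rolling
    shutter, assuming each row starts `row_delay` after the previous one and
    a row outputs data for `readout` time once its exposure ends."""
    rows = []
    for r in range(n_rows):
        start = r * row_delay                 # e.g. T1, T2, ... in Figure 20
        end_exposure = start + exposure       # e.g. T3, T4, ...
        end_readout = end_exposure + readout  # e.g. T5 for row 1, T7 for row n
        rows.append((start, end_exposure, end_readout))
    return rows

def frame_readout_window(rows):
    """The frame 'readout time' interval: from the first row starting to
    output data to the last row finishing its output (T3 to T7)."""
    return rows[0][1], rows[-1][2]
```

For three rows with a 10-unit exposure, 2-unit row delay, and 1-unit readout, the frame readout window spans from time 10 to time 15.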
- When the image sensor 01 performs multiple exposures in the global exposure mode, for any near-infrared fill light, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; and the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
- FIG. 21 is a schematic diagram of the first type of first preset exposure and the second preset exposure provided by an embodiment of the present application.
- FIG. 22 is a schematic diagram of a second type of first preset exposure and second preset exposure provided by an embodiment of the present application.
- FIG. 23 is a schematic diagram of a third type of first preset exposure and second preset exposure provided by an embodiment of the present application. Referring to Figure 21, for any near-infrared fill light, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure.
- Referring to Figure 22, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light intersects the exposure time period of the first preset exposure.
- Referring to Figure 23, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
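For the global exposure mode, the constraints above can be checked as a simple interval test. This is a sketch under the assumption that time periods are half-open `(start, end)` intervals; note that "subset of", "intersects", and "superset of" the first preset exposure all reduce to requiring some temporal overlap with it.

```python
def fill_light_valid_global(fill, first_exposure, second_exposures):
    """Check the global-exposure fill-light constraints described above.
    Intervals are (start, end) pairs. The fill-light period must not
    intersect any second-preset-exposure period, and must be a subset of,
    intersect, or be a superset of the first-preset-exposure period
    (i.e. any temporal overlap with the first preset exposure)."""
    def intersects(a, b):
        return a[0] < b[1] and b[0] < a[1]
    if any(intersects(fill, s) for s in second_exposures):
        return False
    return intersects(fill, first_exposure)
```

For example, a fill-light pulse at (10, 20) inside a first preset exposure (5, 25) and away from a second preset exposure (30, 50) is valid; a pulse at (10, 40) that reaches into the second preset exposure is not.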
- When the image sensor 01 performs multiple exposures in the rolling shutter exposure mode, for any near-infrared fill light, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure.
- Further, the start time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first row of the effective image in the first preset exposure.
- Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- FIG. 24 is a schematic diagram of the first rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- FIG. 25 is a schematic diagram of a second rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- FIG. 26 is a schematic diagram of a third rolling shutter exposure method and near-infrared light supplement provided by an embodiment of the present application.
- Referring to Figure 24, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first row of the effective image in the first preset exposure.
- Referring to Figure 25, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- Referring to Figure 26, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- FIGS. 24 to 26 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
- The first light supplement device 021 can be used to perform stroboscopic fill light so that the image sensor 01 generates and outputs a first image signal containing near-infrared brightness information and a second image signal containing visible light brightness information. Since the first image signal and the second image signal are both acquired by the same image sensor 01, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so complete information of the external scene can be obtained through the first image signal and the second image signal.
- When the intensity of visible light is strong, such as during the day, the proportion of near-infrared light is relatively high and the color reproduction of the collected image is poor. In this case, a third image signal containing visible light brightness information can be generated and output by the image sensor 01, so that images with good color reproduction can be collected even during the day. The true color information of the external scene can thus be obtained efficiently and simply, regardless of the intensity of visible light or whether it is day or night.
- In summary, the present application uses the exposure timing of the image sensor to control the near-infrared supplement light timing of the light supplement device, so that near-infrared supplement light is performed during the first preset exposure to generate the first image signal, and no near-infrared supplement light is performed during the second preset exposure, generating the second image signal.
- This data collection method keeps the structure simple and reduces cost while directly collecting the first image signal and the second image signal with different brightness information; that is, two different image signals can be acquired through a single image sensor, which makes acquiring the first image signal and the second image signal simpler and more efficient.
- Moreover, the first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal, and the information of the external scene can be jointly obtained through the two image signals. Because there is no difference between the two viewpoints, there is no misalignment between the image generated from the first image signal and the image generated from the second image signal.
- the face image acquisition device can use the first image signal and the second image signal generated and output by multiple exposures to perform image processing and face detection to obtain a face image.
- the face image acquisition method will be described with the face image acquisition device provided based on the embodiment shown in Figs. 1-26.
- FIG. 27 is a schematic flowchart of an embodiment of a method for acquiring a face image according to an embodiment of the application.
- The method is applied to a face image acquisition device that includes an image sensor, a light supplement, a filter component, and an image processor. The light supplement includes a first light supplement device, the filter component includes a first filter, and the image sensor is located on the light exit side of the filter component.
- the method may include:
- Step 2701: Perform near-infrared supplement light through the first light supplement device, where the near-infrared supplement light is performed at least during the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor.
- Step 2702: Pass visible light and part of the near-infrared light through the first filter.
- Step 2703: Perform multiple exposures through the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image signal generated according to the first preset exposure and the second image signal is an image signal generated according to the second preset exposure.
- Step 2704: Perform image processing and face detection on the first image signal and the second image signal through the image processor to obtain a face image.
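Steps 2701 to 2704 can be sketched as one orchestration loop iteration. The component callables below (fill-light switch, sensor exposure, processing) are hypothetical stand-ins for the hardware described in this application, not a real device API; the sketch only shows the ordering of fill light relative to the two preset exposures.

```python
# Minimal orchestration sketch of steps 2701-2704 with hypothetical
# component callables standing in for the hardware.
def acquire_face_image(sensor_expose, fill_light_on, fill_light_off, process):
    """Run one first-preset / second-preset exposure pair: near-infrared
    fill light is on only during the first preset exposure."""
    fill_light_on()                          # step 2701: NIR fill light on
    first_signal = sensor_expose("first")    # step 2703: first preset exposure
    fill_light_off()                         # step 2701: no fill light now
    second_signal = sensor_expose("second")  # step 2703: second preset exposure
    return process(first_signal, second_signal)  # step 2704: processing + detection

# Demo with recording stubs: confirm the fill light is on only for the
# first preset exposure.
state = {"fill": False, "log": []}
def _on(): state["fill"] = True
def _off(): state["fill"] = False
def _expose(kind):
    state["log"].append((kind, state["fill"]))
    return kind
result = acquire_face_image(_expose, _on, _off, lambda a, b: (a, b))
```

The recorded log shows the first exposure with fill light enabled and the second without, matching the timing requirement of step 2701.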
- the image processor includes a processing component and a detection component, and the foregoing step 2704 may specifically include the following steps:
- The processing component performs first preprocessing on the first image signal to generate a first image and performs second preprocessing on the second image signal to generate a second image; the detection component then performs face detection processing on the first image and the second image generated by the processing component to obtain the face image.
- the first image is a grayscale image
- the first preprocessing includes any one or a combination of the following operations: image interpolation, gamma mapping, color conversion, and image noise reduction;
- the second image is a color image
- the second preprocessing includes any one or a combination of the following: white balance, image interpolation, gamma mapping, and image noise reduction.
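Two of the preprocessing operations named above can be sketched on pixel values normalized to [0, 1]. The gamma value of 2.2 and the gray-world white balance rule are illustrative choices for the sketch, not values mandated by the text.

```python
# Minimal sketches of two preprocessing operations named above, on pixel
# values in [0, 1]. Gamma 2.2 and gray-world balancing are assumptions.
def gamma_map(pixels, gamma=2.2):
    """Gamma mapping: v -> v**(1/gamma), applied per pixel value."""
    return [v ** (1.0 / gamma) for v in pixels]

def gray_world_white_balance(rgb_pixels):
    """White balance by scaling each channel so its mean matches the
    overall mean (assumes every channel mean is nonzero)."""
    n = len(rgb_pixels)
    means = [sum(p[c] for p in rgb_pixels) / n for c in range(3)]
    overall = sum(means) / 3
    return [[min(1.0, p[c] * overall / means[c]) for c in range(3)]
            for p in rgb_pixels]
```

A uniformly color-cast input, e.g. every pixel (0.2, 0.4, 0.6), is pulled to neutral gray by the gray-world rule.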
- the image processor further includes: a fusion component.
- the above step 2704 may also include the following steps:
- The fusion component performs fusion processing on the first image and the second image generated by the processing component to generate a fused image; the detection component then performs face detection processing on the fused image generated by the fusion component to obtain the face image.
- the first image is a grayscale image
- the second image is a color image
- the above step 2704 may further include the following steps:
- The fusion component extracts the brightness information of the color image to obtain a brightness image, extracts the color information of the color image to obtain a color image, and performs fusion processing on the brightness image, the color image, and the grayscale image to obtain the face image.
- the fusion processing includes at least one of the following operations: pixel-to-point fusion and pyramid multi-scale fusion.
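The pixel point-to-point branch of the fusion described above can be sketched as follows: take the luminance of the color image, blend it per pixel with the near-infrared grayscale image, and reapply the original color ratios. The BT.601 luma weights and the 0.5 blend weight are assumptions for the sketch, not parameters from the application.

```python
# Sketch of pixel point-to-point fusion: blend color-image luminance with
# the NIR grayscale image, preserving color ratios. Luma weights (BT.601)
# and the blend weight w are illustrative assumptions.
def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def fuse(color_pixels, gray_pixels, w=0.5):
    fused = []
    for rgb, nir in zip(color_pixels, gray_pixels):
        y = luminance(rgb)
        y_fused = (1 - w) * y + w * nir        # pixel point-to-point blend
        scale = y_fused / y if y > 0 else 0.0  # keep the color ratios
        fused.append(tuple(c * scale for c in rgb))
    return fused
```

A dark gray pixel (0.2, 0.2, 0.2) fused with a brighter NIR value 0.6 is lifted to about (0.4, 0.4, 0.4) while staying neutral; the pyramid multi-scale variant would apply the same idea per frequency band.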
- the image processor further includes: a fusion component.
- the above step 2704 may also include the following steps:
- the first face image and the second face image obtained by the detection component are fused by a fusion component to obtain the face image.
- the foregoing step 2704 may specifically include the following steps:
- the detection component is used to calibrate the position and size of the face area according to the facial features detected in the target image, and output the target face image.
- the target image is any one of the following images: the first image, the second image, the fused image, the combination of the first image and the second image.
- the target image is any one of the following images: the first image, the second image, and the fused image; then the above step 2704 may specifically include the following steps:
- The detection component extracts multiple facial feature points from the target image, determines, based on preset facial feature information, multiple positioning feature points from among the multiple facial feature points, determines the face position coordinates based on the multiple positioning feature points, and thereby determines the target face image in the target image.
- step 2704 may specifically further include the following steps:
- the detection component detects whether the target face image is obtained by shooting a real face based on the principle of living body detection, and outputs the target face image when it is determined that the target face image is obtained by shooting a real face.
- the target image is a combination of the first image and the second image
- The detection component extracts multiple facial feature points from the first image, determines, based on preset facial feature information, multiple positioning feature points from among the multiple facial feature points, determines first face position coordinates based on the multiple positioning feature points, performs face extraction according to the first face position coordinates and the first image to obtain a first face image, and at the same time performs face extraction according to the first face position coordinates and the second image to obtain a second face image.
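Because the two images come from the same sensor and share a viewpoint, the face position found in the first image can be reused unchanged to crop the second image, as described above. In this sketch an image is a row-major list of rows, and the `(top, left, height, width)` box format is an assumption for illustration.

```python
# Crop the SAME coordinates out of both registered images, exploiting the
# shared viewpoint of the first and second image signals. The box format
# (top, left, height, width) is an illustrative assumption.
def crop(image, box):
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

def extract_face_pair(first_image, second_image, face_box):
    """Return (first_face_image, second_face_image) cropped with one box."""
    return crop(first_image, face_box), crop(second_image, face_box)
```

No registration step is needed between the two crops, which is exactly the advantage of acquiring both signals with one image sensor.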
- If the first image is a grayscale image and the second image is a color image, the first face image is a grayscale face image and the second face image is a color face image.
- If the first image is a color image and the second image is a grayscale image, the first face image is a color face image and the second face image is a grayscale face image. In either case, the above step 2704 may specifically include the following steps:
- The detection component detects, based on the principle of living body detection, whether the grayscale face image is obtained by shooting a real face, and outputs the extracted color face image when it is determined that the grayscale face image is obtained by shooting a real face.
- the image processor further includes: a cache component; the face image acquisition method may further include the following steps:
- Temporary content is cached by the cache component, and the temporary content includes any of the following: the first image signal and/or the second image signal output by the image sensor, and the first image and/or the second image obtained by the image processor during processing.
- the image processor further includes: an image enhancement component; the face image acquisition method may further include the following steps:
- The target image is enhanced by the image enhancement component to obtain an enhanced target image, where the enhancement processing includes at least one of contrast enhancement and super-resolution reconstruction, and the target image is any one of the following images: the first image, the second image, and the face image.
- The intensity of the near-infrared light passing through the first filter when the first light supplement device performs near-infrared supplement light is higher than the intensity of the near-infrared light passing through the first filter when the first light supplement device does not perform near-infrared supplement light.
- The center wavelength of the near-infrared supplement light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, and the center wavelength and/or the waveband width of the near-infrared light passing through the first filter reach a constraint condition.
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the range of 750±10 nanometers;
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the range of 780±10 nanometers;
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the range of 810±10 nanometers; or
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the range of 940±10 nanometers.
- The constraint conditions include any one of the following:
- the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared supplement light performed by the first light supplement device lies within a wavelength fluctuation range of 0 to 20 nanometers;
- the half bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers
- the first waveband width is smaller than the second waveband width, where the first waveband width refers to the waveband width of the near-infrared light passing through the first filter and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter; or
- the third waveband width is smaller than the reference waveband width, where the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio, the reference waveband width is any waveband width within the range of 50 nm to 150 nm, and the set ratio is any ratio within the range of 30% to 50%.
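The first two alternative constraints listed above reduce to simple numeric comparisons. This sketch checks them for given wavelengths in nanometers; the parameter names are illustrative, and the remaining waveband-width alternatives would be added as further comparisons in the same style.

```python
def meets_constraint(filter_center_nm, supplement_center_nm,
                     half_bandwidth_nm):
    """Check the first two alternative constraint conditions above:
    (1) the difference between the center wavelength of the NIR light
        passing the first filter and the center wavelength of the
        supplement light lies within the 0-20 nm fluctuation range; or
    (2) the half bandwidth of the passed NIR light is <= 50 nm."""
    within_fluctuation = abs(filter_center_nm - supplement_center_nm) <= 20
    narrow_enough = half_bandwidth_nm <= 50
    return within_fluctuation or narrow_enough
```

For example, a 945 nm filter center against a 940 nm supplement light satisfies condition (1) regardless of bandwidth, while a 900 nm center with an 80 nm half bandwidth satisfies neither.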
- At least one exposure parameter of the first preset exposure and the second preset exposure is different, and the at least one exposure parameter is one of exposure time, exposure gain, and aperture size.
- the exposure gain includes analog gain, and/or digital gain.
- At least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
- the image sensor includes a plurality of light-sensing channels, and each light-sensing channel is used to sense at least one light in the visible light waveband and to sense light in the near-infrared waveband.
- the image sensor adopts a global exposure mode to perform multiple exposures.
- The time period of the near-infrared supplement light does not intersect the exposure time period of the nearest second preset exposure; and the time period of the near-infrared supplement light is a subset of the exposure time period of the first preset exposure, or intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplement light.
- the image sensor adopts a rolling shutter exposure method for multiple exposures.
- The time period of the near-infrared supplement light does not intersect the exposure time period of the nearest second preset exposure;
- the start time of the near-infrared supplement light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared supplement light is no later than the exposure end time of the first row of the effective image in the first preset exposure;
- or the start time of the near-infrared supplement light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplement light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure;
- or the start time of the near-infrared supplement light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplement light is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- the light supplement further includes a second light supplement device, and the second light supplement device is used to perform visible light supplement.
- the filter assembly further includes a second filter and a switching component, and both the first filter and the second filter are connected to the switching component;
- The switching component is configured to switch the second filter to the light incident side of the image sensor. After the second filter is switched to the light incident side of the image sensor, the second filter passes visible light and blocks near-infrared light, and the image sensor generates and outputs a third image signal through exposure.
- In the technical solution of the present application, the image sensor generates and outputs a first image signal and a second image signal through multiple exposures, where the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures. The light supplement includes a first light supplement device that performs near-infrared supplement light, where near-infrared supplement light is performed at least during the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure. Image processing and face detection are then performed on the first image signal and the second image signal to obtain a face image.
- In this way, only one image sensor is needed to obtain visible light images and infrared light images, which reduces cost and avoids the poor face image quality caused when images obtained by two image sensors are out of registration or out of synchronization due to their process structures.
- At least one refers to one or more, and “multiple” refers to two or more.
- "And/or" describes the association relationship of the associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
- the character “/” generally indicates that the associated objects before and after are in an “or” relationship; in the formula, the character “/” indicates that the associated objects before and after are in a “division” relationship.
- "At least one of the following items" or similar expressions refers to any combination of these items, including any combination of a single item or plural items.
- For example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c can each be single or multiple.
Claims (23)
- A face image acquisition device, characterized by comprising: an image sensor, a light supplement, a filter component, and an image processor; the image sensor is configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the light supplement comprises a first light supplement device configured to perform near-infrared supplementary lighting, wherein the near-infrared supplementary lighting is performed at least during the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; the filter component comprises a first filter configured to pass visible light and part of the near-infrared light; the image processor is configured to perform image processing and face detection on the first image signal and the second image signal to obtain a face image.
- The device according to claim 1, wherein the image processor comprises: a processing component and a detection component; the processing component is configured to perform first preprocessing on the first image signal to generate a first image, and perform second preprocessing on the second image signal to generate a second image; the detection component is configured to perform face detection processing on the first image and the second image generated by the processing component to obtain the face image.
- The device according to claim 2, wherein the first image is a grayscale image and the first preprocessing comprises any one or a combination of the following operations: image interpolation, gamma mapping, color conversion, and image noise reduction; the second image is a color image and the second preprocessing comprises any one or a combination of the following: white balance, image interpolation, gamma mapping, and image noise reduction.
- The device according to claim 2, wherein the image processor further comprises: a fusion component; the fusion component is configured to perform fusion processing on the first image and the second image generated by the processing component to generate a fused image; the detection component is specifically configured to perform face detection processing on the fused image generated by the fusion component to obtain the face image.
- The device according to claim 4, wherein the first image is a grayscale image and the second image is a color image; the fusion component is specifically configured to extract luminance information of the color image to obtain a luminance image, extract color information of the color image to obtain a chrominance image, and perform fusion processing on the luminance image, the chrominance image, and the grayscale image to obtain the face image, the fusion processing comprising at least one of the following operations: pixel point-to-point fusion, pyramid multi-scale fusion.
- The device according to claim 2, wherein the image processor further comprises: a fusion component; the detection component is specifically configured to perform face detection processing on the first image and the second image generated by the processing component to obtain a first face image and a second face image, respectively; the fusion component is specifically configured to perform fusion processing on the first face image and the second face image obtained by the detection component to obtain the face image.
- The device according to claim 2, wherein the detection component is specifically configured to calibrate the position and size of the face region according to facial features detected in a target image and output a target face image, the target image being any one of the following images: the first image, the second image, a fused image of the first image and the second image, or a combination of the first image and the second image.
- The device according to claim 7, wherein the target image is any one of the following images: the first image, the second image, or the fused image; the detection component is specifically configured to extract a plurality of facial feature points from the target image, determine, from the plurality of facial feature points and based on preset facial feature information, a plurality of positioning feature points that satisfy facial rules, determine face position coordinates based on the plurality of positioning feature points, and determine the target face image in the target image.
- The device according to claim 8, wherein the detection component is further configured to detect, based on a liveness detection principle, whether the target face image was obtained by photographing a real face, and output the target face image when it is determined that the target face image was obtained by photographing a real face.
- The device according to claim 7, wherein the target image is a combination of the first image and the second image; the detection component is specifically configured to extract a plurality of facial feature points from the first image, determine, from the plurality of facial feature points and based on preset facial feature information, a plurality of positioning feature points that satisfy facial rules, determine first face position coordinates based on the plurality of positioning feature points, perform face extraction according to the first face position coordinates and the first image to obtain a first face image, and simultaneously perform face extraction according to the first face position coordinates and the second image to obtain a second face image.
- The device according to claim 10, wherein when the first image is a grayscale image and the second image is a color image, the first face image is a grayscale face image and the second face image is a color face image; the detection component is further configured to detect, based on a liveness detection principle, whether the grayscale face image was obtained by photographing a real face, output the grayscale face image when it is determined that the grayscale face image was obtained by photographing a real face, and output the color face image based on the extracted second face image.
- The device according to claim 10, wherein when the first image is a color image and the second image is a grayscale image, the first face image is a color face image and the second face image is a grayscale face image; the detection component is further configured to detect, based on a liveness detection principle, whether the grayscale face image was obtained by photographing a real face, and when it is determined that the grayscale face image was obtained by photographing a real face, output the color face image based on the extracted first face image.
- The device according to claim 2, wherein the image processor further comprises: a cache component; the cache component is configured to cache temporary content, the temporary content comprising: the first image signal and/or the second image signal output by the image sensor; or the temporary content comprising: the first image and/or the second image obtained by the image processor during processing.
- The device according to claim 2, wherein the image processor further comprises: an image enhancement component; the image enhancement component is configured to perform enhancement processing on a target image to obtain an enhanced target image, the enhancement processing comprising at least one of the following: contrast enhancement, super-resolution reconstruction, the target image being any one of the following images: the first image, the second image, a fused image of the first image and the second image, or the face image.
- The device according to any one of claims 1-14, wherein when the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter satisfies a constraint condition.
- The device according to any one of claims 1-14, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, the exposure gain comprising analog gain and/or digital gain.
- The device according to any one of claims 1-14, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter comprising one or more of exposure time, exposure gain, and aperture size, the exposure gain comprising analog gain and/or digital gain.
- The device according to any one of claims 1-14, wherein the image sensor comprises a plurality of photosensitive channels, each photosensitive channel being configured to sense light in at least one visible light band and to sense light in the near-infrared band.
- The device according to any one of claims 1-14, wherein the image sensor performs the multiple exposures in a global shutter mode, and for any near-infrared supplementary lighting, the time period of the near-infrared supplementary lighting does not intersect the exposure time period of the nearest second preset exposure; the time period of the near-infrared supplementary lighting is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary lighting intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the near-infrared supplementary lighting.
- The device according to any one of claims 1-14, wherein the image sensor performs the multiple exposures in a rolling shutter mode, and for any near-infrared supplementary lighting, the time period of the near-infrared supplementary lighting does not intersect the exposure time period of the nearest second preset exposure; the start time of the near-infrared supplementary lighting is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no later than the exposure end time of the first row of the effective image in the first preset exposure; or, the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or, the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
- The device according to any one of claims 1-14, wherein the light supplement further comprises a second light supplement device configured to perform visible light supplementary lighting.
- The device according to any one of claims 1-14, wherein the filter component further comprises a second filter and a switching part, the first filter and the second filter each being connected to the switching part; the switching part is configured to switch the second filter to the light incident side of the image sensor; after the second filter is switched to the light incident side of the image sensor, the second filter passes visible light and blocks near-infrared light, and the image sensor is configured to generate and output a third image signal through exposure.
- A face image acquisition method, applied to a face image acquisition device, the face image acquisition device comprising an image sensor, a light supplement, a filter component, and an image processor, the light supplement comprising a first light supplement device, the filter component comprising a first filter, and the image sensor being located on the light exit side of the filter component, characterized in that the method comprises: performing near-infrared supplementary lighting through the first light supplement device, wherein the near-infrared supplementary lighting is performed at least during part of the exposure time period of a first preset exposure and is not performed during the exposure time period of a second preset exposure, the first preset exposure and the second preset exposure being two of the multiple exposures of the image sensor; passing visible light and part of the near-infrared light through the first filter; performing multiple exposures through the image sensor to generate and output a first image signal and a second image signal, the first image signal being an image signal generated according to the first preset exposure and the second image signal being an image signal generated according to the second preset exposure; and performing image processing and face detection on the first image signal and the second image signal through the image processor to obtain a face image.
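The fusion of claim 5 can be sketched with numpy, under stated assumptions: the color image is already in a YCbCr-like layout so its first channel is luminance, and "pixel point-to-point fusion" is approximated here by a fixed-weight average (the claim also allows pyramid multi-scale fusion); the function name and weight are illustrative, not from the patent.

```python
import numpy as np

def fuse(gray, color_ycbcr, w=0.5):
    """Fuse a NIR grayscale image (H x W) with a color image (H x W x 3, YCbCr-like)."""
    luma = color_ycbcr[..., 0]                 # luminance information -> luminance image
    chroma = color_ycbcr[..., 1:]              # color information -> chrominance image
    fused_luma = w * gray + (1.0 - w) * luma   # pixel point-to-point fusion of luminance
    # Recombine fused luminance with the original chrominance channels.
    return np.concatenate([fused_luma[..., None], chroma], axis=-1)

# Toy example: a bright NIR frame lifts the luminance of a dim visible frame.
gray = np.full((2, 2), 200.0)
color = np.zeros((2, 2, 3))
color[..., 0] = 100.0
fused = fuse(gray, color)
print(fused[..., 0])  # fused luminance: 0.5 * 200 + 0.5 * 100 = 150 everywhere
```

This captures the intent of the claim: the NIR frame contributes detail and brightness through the luminance channel while the visible frame's chrominance is preserved, which is why the device can produce a usable color face image in low light.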
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472685.2A CN110490041B (zh) | 2019-05-31 | 2019-05-31 | Face image acquisition device and method |
CN201910472685.2 | 2019-05-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020238903A1 (zh) | 2020-12-03 |
Family
ID=68546284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/092357 WO2020238903A1 (zh) | 2019-05-31 | 2020-05-26 | Face image acquisition device and method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110490041B (zh) |
WO (1) | WO2020238903A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669438A (zh) * | 2020-12-31 | 2021-04-16 | 杭州海康机器人技术有限公司 | Image reconstruction method, apparatus, and device |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490041B (zh) * | 2019-05-31 | 2022-03-15 | 杭州海康威视数字技术股份有限公司 | Face image acquisition device and method |
CN110493492B (zh) | 2019-05-31 | 2021-02-26 | 杭州海康威视数字技术股份有限公司 | Image acquisition device and image acquisition method |
CN110493491B (zh) * | 2019-05-31 | 2021-02-26 | 杭州海康威视数字技术股份有限公司 | Image acquisition device and imaging method |
CN110490042B (zh) * | 2019-05-31 | 2022-02-11 | 杭州海康威视数字技术股份有限公司 | Face recognition device and access control equipment |
CN110493494B (zh) * | 2019-05-31 | 2021-02-26 | 杭州海康威视数字技术股份有限公司 | Image fusion device and image fusion method |
CN113259546B (zh) * | 2020-02-11 | 2023-05-12 | 华为技术有限公司 | Image acquisition device and image acquisition method |
CN111462125B (zh) * | 2020-04-03 | 2021-08-20 | 杭州恒生数字设备科技有限公司 | Enhanced liveness detection image processing system |
CN111524088A (zh) * | 2020-05-06 | 2020-08-11 | 北京未动科技有限公司 | Method, apparatus, device, and computer-readable storage medium for image acquisition |
CN111597938B (zh) * | 2020-05-07 | 2022-02-22 | 马上消费金融股份有限公司 | Liveness detection and model training method and apparatus |
CN113538926B (zh) * | 2021-05-31 | 2023-01-17 | 浙江大华技术股份有限公司 | Face capture method, face capture system, and computer-readable storage medium |
CN113452903B (zh) * | 2021-06-17 | 2023-07-11 | 浙江大华技术股份有限公司 | Capture device, capture method, and main control chip |
CN115995103A (zh) * | 2021-10-15 | 2023-04-21 | 北京眼神科技有限公司 | Face liveness detection method, apparatus, computer-readable storage medium, and device |
CN114640795A (zh) * | 2022-03-22 | 2022-06-17 | 深圳市商汤科技有限公司 | Image processing method and apparatus, device, and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080002060A1 (en) * | 1997-10-09 | 2008-01-03 | Fotonation Vision Limited | Optimized Performance and Performance for Red-Eye Filter Method and Apparatus |
CN107809601A (zh) * | 2017-11-24 | 2018-03-16 | 深圳先牛信息技术有限公司 | Image sensor |
CN109194873A (zh) * | 2018-10-29 | 2019-01-11 | 浙江大华技术股份有限公司 | Image processing method and device |
CN109429001A (zh) * | 2017-08-25 | 2019-03-05 | 杭州海康威视数字技术股份有限公司 | Image acquisition method and device, electronic equipment, and computer-readable storage medium |
CN110490041A (zh) * | 2019-05-31 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | Face image acquisition device and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10462390B2 (en) * | 2014-12-10 | 2019-10-29 | Sony Corporation | Image pickup apparatus, image pickup method, program, and image processing apparatus |
CN106488201B (zh) * | 2015-08-28 | 2020-05-01 | 杭州海康威视数字技术股份有限公司 | Image signal processing method and system |
CN111988587B (zh) * | 2017-02-10 | 2023-02-07 | 杭州海康威视数字技术股份有限公司 | Image fusion device and image fusion method |
CN107566747B (zh) * | 2017-09-22 | 2020-02-14 | 浙江大华技术股份有限公司 | Image brightness enhancement method and device |
CN208691387U (zh) * | 2018-08-28 | 2019-04-02 | 杭州萤石软件有限公司 | Full-color network camera |
CN208819221U (zh) * | 2018-09-10 | 2019-05-03 | 杭州海康威视数字技术股份有限公司 | Face liveness detection device |
CN109635760A (zh) * | 2018-12-18 | 2019-04-16 | 深圳市捷顺科技实业股份有限公司 | Face recognition method and related equipment |
- 2019
- 2019-05-31 CN CN201910472685.2A patent/CN110490041B/zh active Active
- 2020
- 2020-05-26 WO PCT/CN2020/092357 patent/WO2020238903A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110490041A (zh) | 2019-11-22 |
CN110490041B (zh) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020238903A1 (zh) | Face image acquisition device and method | |
WO2020238806A1 (zh) | Image acquisition device and imaging method | |
WO2020238807A1 (zh) | Image fusion device and image fusion method | |
WO2020238905A1 (zh) | Image fusion device and method | |
CN110490042B (zh) | Face recognition device and access control equipment | |
CN110490044B (zh) | Face modeling device and face modeling method | |
WO2020238970A1 (zh) | Image noise reduction device and image noise reduction method | |
CN110490187B (zh) | License plate recognition device and method | |
CN110706178A (zh) | Image fusion apparatus, method, device, and storage medium | |
CN110493536B (zh) | Image acquisition device and image acquisition method | |
CN108712608A (zh) | Terminal device photographing method and apparatus | |
CN110493535B (zh) | Image acquisition device and image acquisition method | |
CN206370880U (zh) | Dual-camera imaging system and mobile terminal | |
CN111131798B (zh) | Image processing method, image processing device, and imaging device | |
CN110493496B (zh) | Image acquisition device and method | |
US11455710B2 (en) | Device and method of object detection | |
CN110493495B (zh) | Image acquisition device and image acquisition method | |
CN110493537B (зh) | Image acquisition device and image acquisition method | |
WO2020238804A1 (зh) | Image acquisition device and image acquisition method | |
CN110493493B (зh) | Panoramic detail camera and method for acquiring image signals | |
CN110493533B (зh) | Image acquisition device and image acquisition method | |
CN105554485B (зh) | Imaging method, imaging device, and electronic device | |
CN107016343A (зh) | Fast traffic light recognition method based on Bayer-format images | |
CN110505376A (зh) | Image acquisition device and method | |
CN109361906A (зh) | Thermal imaging ultra-low-illumination face recognition dome camera
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20815461; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20815461; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.09.2022) |