WO2020238805A1 - Face recognition device and access control equipment - Google Patents
Face recognition device and access control equipment
- Publication number
- WO2020238805A1 (PCT/CN2020/091910)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- exposure
- information
- light
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00563—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voicepatterns
Definitions
- This application relates to the field of computer vision, in particular to a face recognition device and access control equipment.
- the related art provides a photographing device including a multispectral filter array sensor. Part of the pixels in the multispectral filter array sensor in the photographing device are only used for sensing near-infrared light, and the remaining pixels are used for sensing near-infrared light and visible light at the same time.
- the photographing device can collect an original image signal containing both visible light information and near-infrared light information, and separate from the collected original image signal an RGB image containing both visible light information and near-infrared light information and a near-infrared image containing near-infrared light information. Afterwards, the near-infrared light information contained in each pixel of the RGB image is removed to obtain a visible light image containing only visible light information.
- the above-mentioned photographing equipment including the multispectral filter array sensor needs to separate the near-infrared light information and the visible light information in the collected original image signal at a later stage; the process is complicated, and the quality of the near-infrared image and the visible light image obtained in this way is relatively low. As a result, when face recognition is performed based on the images obtained by the photographing device, the accuracy of the face recognition is low.
- This application provides a face recognition device and access control equipment, which can solve the problem of low face recognition accuracy in related technologies.
- the technical solution is as follows:
- a face recognition device includes: an image acquisition unit, an image processor, and a face analysis unit;
- the image acquisition unit includes a filter assembly, the filter assembly includes a first filter, and the first filter passes visible light and part of near-infrared light;
- the image acquisition unit is configured to acquire a first image signal and a second image signal, the first image signal is an image signal generated according to a first preset exposure, and the second image signal is an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and near-infrared supplementary light is not performed during the exposure time period of the second preset exposure;
- the image processor is configured to process at least one of the first image signal and the second image signal to obtain first image information
- the face analysis unit is configured to perform face analysis on the first image information to obtain a face analysis result.
- the image acquisition unit includes: an image sensor and a light supplement, and the image sensor is located on the light exit side of the light filter assembly;
- the image sensor is configured to generate and output the first image signal and the second image signal through multiple exposures, and the first preset exposure and the second preset exposure are those of the multiple exposures Two exposures;
- the light supplement includes a first light supplement device, and the first light supplement device is used for near-infrared supplementary light.
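As a minimal sketch of the capture scheme described above (the `Frame` and `capture_pair` names are hypothetical, since this application does not specify a software interface), the image sensor alternates between a first preset exposure performed with near-infrared fill light and a second preset exposure performed without it:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    nir_fill_on: bool  # True for the first preset exposure (NIR fill light on)

def capture_pair(start_index):
    """Model one pair of the multiple exposures: a first preset exposure
    performed with near-infrared supplementary light, followed by a second
    preset exposure performed without it."""
    first = Frame(index=start_index, nir_fill_on=True)        # first image signal
    second = Frame(index=start_index + 1, nir_fill_on=False)  # second image signal
    return [first, second]

frames = capture_pair(0)
```

The two frames of each pair correspond to the first image signal (near-infrared light information) and the second image signal (visible light information).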
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, and the center wavelength and/or the waveband width of the near-infrared light passing through the first filter meets a constraint condition.
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 750 ⁇ 10 nanometers;
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 780 ⁇ 10 nanometers; or
- the center wavelength of the near-infrared supplement light performed by the first light supplement device is any wavelength within the wavelength range of 940 ⁇ 10 nanometers.
- the constraint conditions include:
- the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared light supplemented by the first light-filling device lies within the wavelength fluctuation range, and the wavelength fluctuation range is 0 to 20 nanometers; or
- the half bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers; or
- the first waveband width is smaller than the second waveband width; wherein, the first waveband width refers to the waveband width of the near-infrared light passing through the first filter, and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter.
- the third waveband width is smaller than the reference waveband width, wherein the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio, and the reference waveband width is any waveband width within the range of 50 nm to 150 nm.
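The constraint conditions above can be expressed as a simple check. The function below is an illustrative sketch, treating the wavelength-fluctuation condition and the half-bandwidth condition as alternatives, as the text lists them with "or":

```python
def meets_constraints(filter_center_nm, fill_center_nm, half_bandwidth_nm):
    """Return True if either constraint condition holds: the difference
    between the filter's and the fill light's center wavelengths lies
    within the 0-20 nm wavelength fluctuation range, or the half bandwidth
    of the near-infrared light passing the filter is at most 50 nm."""
    within_fluctuation = abs(filter_center_nm - fill_center_nm) <= 20
    narrow_half_band = half_bandwidth_nm <= 50
    return within_fluctuation or narrow_half_band
```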
- the image sensor includes a plurality of photosensitive channels, and each photosensitive channel is used to sense at least one kind of light in the visible light band and to sense light in the near-infrared band.
- the image sensor adopts a global exposure mode for multiple exposures.
- the time period of near-infrared supplementary light does not overlap with the exposure time period of the nearest second preset exposure.
- the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light and the exposure time period of the first preset exposure overlap, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
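The global-exposure timing relationship above amounts to two interval conditions. The sketch below models each time period as a `(start, end)` tuple, which is an assumption for illustration:

```python
def valid_global_fill_timing(fill, first_exposure, second_exposures):
    """Check the global exposure timing rule: the near-infrared fill-light
    period must not overlap any neighbouring second preset exposure, and
    must intersect the first preset exposure (subset, partial overlap, and
    superset all satisfy the text's alternatives)."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    no_second_overlap = all(not overlaps(fill, s) for s in second_exposures)
    return no_second_overlap and overlaps(fill, first_exposure)
```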
- the image sensor adopts a rolling shutter exposure method to perform multiple exposures.
- the time period of the near-infrared supplement light does not overlap with the exposure time period of the nearest second preset exposure;
- the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure; or
- the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image of the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image of the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure; or
- the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image of the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
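For the rolling shutter case, the first listed alternative, where the fill light is confined to the window in which every effective row is exposing simultaneously, can be checked as follows; times are in arbitrary units and the parameter names are illustrative:

```python
def fill_within_common_window(fill_start, fill_end,
                              last_row_start, first_row_end):
    """Rolling shutter: the first row starts (and ends) exposing earliest
    and the last row latest, so all rows expose together only between the
    last row's exposure start and the first row's exposure end. The fill
    light must start no earlier than the former and end no later than the
    latter."""
    return fill_start >= last_row_start and fill_end <= first_row_end
```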
- At least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter is one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
- At least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
- the image processor is configured to process at least one of the first image signal and the second image signal by using a first processing parameter to obtain the first image information
- the image processor is further configured to use a second processing parameter to process at least one of the first image signal and the second image signal to obtain second image information;
- the image processor is further configured to transmit the second image information to a display device, and the display device displays the second image information.
- the first processing parameter and the second processing parameter are different.
- the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
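Two of the listed processing steps (black level correction and digital gain) can be sketched as below. The parameter values are arbitrary examples, and the remaining steps (interpolation, white balance, noise reduction, enhancement, fusion) are omitted:

```python
import numpy as np

def process_signal(raw, black_level=16, digital_gain=1.5):
    """Apply black level correction followed by digital gain to a raw
    image signal, clipping back to the 8-bit range."""
    img = raw.astype(np.float32)
    img = np.clip(img - black_level, 0, None)  # black level correction
    img = img * digital_gain                   # digital gain
    return np.clip(img, 0, 255).astype(np.uint8)
```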
- the image processor includes a buffer
- the buffer is used to store at least one of the first image signal and the second image signal, or to store at least one of the first image information and the second image information.
- the image processor is further configured to adjust the image acquisition in the process of processing at least one of the first image signal and the second image signal The exposure parameters of the unit.
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is configured to perform face detection on the first image information, output the detected face image, and perform liveness detection on the face image;
- the face recognition subunit is used for, when the face image passes the liveness detection, extracting the face information of the face image and comparing the face information of the face image with at least one reference face information stored in the face database to obtain the face analysis result.
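The detect → liveness → compare flow of the face analysis unit can be sketched as follows. The callable parameters and the cosine-style similarity metric are assumptions for illustration, since the text does not fix a comparison metric:

```python
def similarity(a, b):
    """Toy cosine similarity between equal-length feature vectors; a
    stand-in for whatever comparison metric the device actually uses."""
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def analyse_face(image, detect, is_live, extract, database, threshold=0.8):
    """Sketch of the subunit flow: detect a face, run liveness detection,
    extract face information, and compare it against the reference face
    information stored in the face database."""
    face = detect(image)
    if face is None or not is_live(face):
        return None  # no face found, or liveness detection failed
    feature = extract(face)
    best = max(database, key=lambda ref: similarity(feature, ref["feature"]))
    if similarity(feature, best["feature"]) >= threshold:
        return best["id"]  # face analysis result: matched identity
    return None
```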
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is configured to perform face detection on the first image information, output the detected first face image, and perform liveness detection on the first face image, and to perform face detection on the second image information, output the detected second face image, and perform liveness detection on the second face image;
- the face recognition subunit is configured to, when both the first face image and the second face image pass the liveness detection, extract the face information of the first face image and compare it with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is color image information obtained by processing the second image signal.
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is used to perform face detection on the color image information, output the detected color face image, perform liveness detection on the color face image, and, when the color face image passes the liveness detection, perform face detection on the grayscale image information and output the detected grayscale face image;
- the face recognition subunit is configured to extract the face information of the grayscale face image and compare the face information of the grayscale face image with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database
- At least one reference face information is stored in the face database
- the face detection subunit is configured to perform face detection on the fused image information, output the detected fused face image, perform liveness detection on the fused face image, and, when the fused face image passes the liveness detection, perform face detection on the grayscale image information and output the detected grayscale face image;
- the face recognition subunit is configured to extract the face information of the grayscale face image and compare the face information of the grayscale face image with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is grayscale image information obtained by processing the first image signal
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is used to perform face detection on the grayscale image information, output the detected grayscale face image, perform liveness detection on the grayscale face image, and, when the grayscale face image passes the liveness detection, perform face detection on the fused image information and output the detected fused face image;
- the face recognition subunit is used to extract the face information of the fused face image and compare the face information of the fused face image with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is color image information obtained by processing the second image signal
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is used to perform face detection on the color image information, output the detected color face image, perform liveness detection on the color face image, and, when the color face image passes the liveness detection, perform face detection on the fused image information and output the detected fused face image;
- the face recognition subunit is used to extract the face information of the fused face image and compare the face information of the fused face image with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is first fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information It is the second fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- At least one reference face information is stored in the face database
- the face detection subunit is used to perform face detection on the second fused image information, output the detected second fused face image, perform liveness detection on the second fused face image, and, when the second fused face image passes the liveness detection, perform face detection on the first fused image information and output the detected first fused face image;
- the face recognition subunit is configured to extract the face information of the first fused face image and compare the face information of the first fused face image with at least one reference face information stored in the face database to obtain the face analysis result.
- the face analysis unit is further configured to transmit the face analysis result to a display device, and the display device displays the face analysis result.
- an access control device includes an access control controller and the aforementioned face recognition device;
- the face recognition device is used to transmit the face analysis result to the access controller
- the access controller is configured to output a control signal for opening the door when the face analysis result indicates that face recognition is successful.
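The access controller's behaviour reduces to a single decision. In the sketch below, the `"OPEN_DOOR"` and `"KEEP_CLOSED"` signal names are hypothetical, as the text only says a control signal for opening the door is output on success:

```python
def access_control_signal(face_analysis_result):
    """Emit a door-open control signal only when the face recognition
    device reports a successful match (modelled here as a non-None
    matched identity)."""
    if face_analysis_result is not None:
        return "OPEN_DOOR"    # hypothetical door-open control signal
    return "KEEP_CLOSED"      # hypothetical no-op signal
```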
- a face recognition method which is applied to a face recognition device, the face recognition device includes: an image acquisition unit, an image processor, and a face analysis unit, and the image acquisition unit includes a filter component, The filter assembly includes a first filter, and the method includes:
- a first image signal and a second image signal are collected by the image acquisition unit, the first image signal is an image signal generated according to a first preset exposure, and the second image signal is an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and near-infrared supplementary light is not performed during the exposure time period of the second preset exposure;
- the face recognition device includes an image acquisition unit, an image processor, and a face analysis unit.
- the image acquisition unit includes a filter assembly, and the filter assembly includes a first filter, and the first filter passes visible light and part of the near-infrared light.
- the image acquisition unit can simultaneously acquire a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information through the first preset exposure and the second preset exposure.
- the image acquisition unit in this application can directly collect the first image signal and the second image signal, and the acquisition process is simple and effective.
- the image processor processes at least one of the first image signal and the second image signal to obtain first image information of higher quality, and the face analysis unit then performs face analysis on the first image information to obtain a more accurate face analysis result, which can effectively improve the accuracy of face recognition.
- Fig. 1 is a schematic structural diagram of a first face recognition device provided by an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of a first image acquisition unit provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a principle of generating a first image signal by an image acquisition unit according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of the principle of generating a second image signal by an image acquisition unit provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplement light performed by a first light supplement device according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of the relationship between the wavelength of the light passing through the first filter and the pass rate according to an embodiment of the present application.
- Fig. 7 is a schematic structural diagram of a second image acquisition unit provided by an embodiment of the present application.
- Fig. 8 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
- Fig. 9 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
- Fig. 10 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
- Fig. 11 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
- FIG. 12 is a schematic diagram of a sensing curve of an image sensor provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of a rolling shutter exposure method provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the global exposure mode provided by an embodiment of the present application.
- FIG. 15 is a schematic diagram of the timing relationship between the second near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
- FIG. 16 is a schematic diagram of the timing relationship between the third near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
- FIG. 17 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the first near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
- FIG. 18 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the second near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
- FIG. 19 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure in the third near-infrared fill light and the rolling shutter exposure mode provided by an embodiment of the present application.
- FIG. 20 is a schematic structural diagram of a third image acquisition unit provided by an embodiment of the present application.
- FIG. 21 is a schematic structural diagram of a second face recognition device provided by an embodiment of the present application.
- Fig. 22 is a schematic structural diagram of a third face recognition apparatus shown in an embodiment of the present application.
- FIG. 23 is a schematic structural diagram of a fourth face recognition device shown in an embodiment of the present application.
- FIG. 24 is a schematic structural diagram of an access control device shown in an embodiment of the present application.
- Fig. 25 is a flowchart of a face recognition method shown in an embodiment of the present application.
- 1: Image acquisition unit, 2: Image processor, 3: Face analysis unit, 01: Image sensor, 02: Light supplement device, 03: Filter assembly, 04: Lens, 021: First light supplement device, 022: Second light supplement device, 031: First filter, 032: Second filter, 033: Switching component, 311: Face detection subunit, 312: Face recognition subunit, 313: Face database, 001: Access controller, 002: Face recognition device.
- Fig. 1 is a schematic structural diagram of a face recognition device provided by an embodiment of the present application.
- the face recognition device includes: an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
- the image acquisition unit 1 is used to acquire a first image signal and a second image signal.
- the image processor 2 is configured to process at least one of the first image signal and the second image signal to obtain first image information.
- the face analysis unit 3 is used to perform face analysis on the first image information to obtain a face analysis result.
- the first image signal is an image signal generated according to a first preset exposure
- the second image signal is an image signal generated according to a second preset exposure.
- the near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and the near-infrared supplementary light is not performed during the exposure time period of the second preset exposure.
- the image acquisition unit 1 can simultaneously acquire, through the first preset exposure and the second preset exposure, the first image signal containing near-infrared light information (such as near-infrared light brightness information) and the second image signal containing visible light information.
- compared with an image processing method that needs to separate the near-infrared light information and the visible light information in the collected original image signal at a later stage, the image acquisition unit 1 in this application can directly collect the first image signal and the second image signal, and the acquisition process is simple and effective.
- the image processor 2 processes at least one of the first image signal and the second image signal to obtain first image information of higher quality.
- the face analysis unit 3 then performs face analysis on the first image information to obtain a more accurate face analysis result, which can effectively improve the accuracy of face recognition.
- the image acquisition unit 1, the image processor 2, and the face analysis unit 3 included in the face recognition device will be separately described below.
- Image acquisition unit 1
- the image acquisition unit includes an image sensor 01, a light supplement 02 and a filter assembly 03, and the image sensor 01 is located on the light exit side of the filter assembly 03.
- the image sensor 01 is used to generate and output a first image signal and a second image signal through multiple exposures.
- the first preset exposure and the second preset exposure are two of the multiple exposures.
- the light supplement 02 includes a first light supplement device 021, and the first light supplement device 021 is used for near-infrared light supplement.
- the filter assembly 03 includes a first filter 031, and the first filter 031 passes visible light and part of the near-infrared light.
- the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation is higher than the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 does not perform near-infrared light supplementation.
- the image acquisition unit 1 may further include a lens 04.
- the filter assembly 03 may be located between the lens 04 and the image sensor 01, and the image sensor 01 is located on the light exit side of the filter assembly 03.
- the lens 04 is located between the filter assembly 03 and the image sensor 01, and the image sensor 01 is located on the light exit side of the lens 04.
- the first filter 031 can be a filter film.
- the first filter 031 can be attached to the light exit side of the lens 04.
- the light supplement 02 may be located in the image acquisition unit 1 or outside the image acquisition unit 1.
- the light supplement 02 can be a part of the image acquisition unit 1 or a device independent of the image acquisition unit 1.
- the light supplement 02 can be connected to the image acquisition unit 1, so as to ensure that the exposure timing of the image sensor 01 in the image acquisition unit 1 and the near-infrared supplementary light timing of the first light supplement device 021 in the light supplement 02 have a certain relationship.
- the near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and is not performed during the exposure time period of the second preset exposure.
- the first light supplement device 021 is a device that can emit near-infrared light, such as a near-infrared fill light. The first light supplement device 021 can perform near-infrared supplementary light in a stroboscopic manner or in another similar manner, which is not limited in the embodiments of the present application.
- when the first light supplement device 021 performs near-infrared supplementary light in a stroboscopic manner, the first light supplement device 021 can be manually controlled to do so, or it can be controlled by a software program or a specific device, which is not limited in the embodiments of the present application.
- the time period during which the first light supplement device 021 performs near-infrared light supplementation may coincide with the exposure time period of the first preset exposure, or may be greater than the exposure time period of the first preset exposure or less than the exposure time period of the first preset exposure.
- the near-infrared supplementary light is performed during the entire exposure period or part of the exposure period of the first preset exposure, and the near-infrared supplementary light is not performed during the exposure time period of the second preset exposure.
- for the global exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of the effective image of the second image signal and the end exposure time of the last row of the effective image, but it is not limited to this.
- the exposure time period of the second preset exposure may also be the exposure time period corresponding to the target image in the second image signal, and the target image is a number of rows of effective images corresponding to the target object or target area in the second image signal.
- the time period between the start exposure time and the end exposure time of several rows of effective images can be regarded as the exposure time period of the second preset exposure.
- the near-infrared light incident on the surface of the object may be reflected by the object and enter the first filter 031.
- the ambient light may include visible light and near-infrared light, and near-infrared light in the ambient light is also reflected by the object when it is incident on the surface of the object, thereby entering the first filter 031.
- the near-infrared light that passes through the first filter 031 when performing near-infrared light supplementation may include the near-infrared light that is reflected by the object and enters the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation.
- the near-infrared light passing through the first filter 031 when the near-infrared light supplement is not performed may include the near-infrared light reflected by the object into the first filter 031 when the first light supplement device 021 is not performing the near-infrared light supplement.
- the near-infrared light that passes through the first filter 031 when performing near-infrared supplementary light includes both the near-infrared light emitted by the first light supplement device 021 and reflected by the object, and the near-infrared light in the ambient light reflected by the object.
- the near-infrared light passing through the first filter 031 when the near-infrared supplementary light is not performed includes near-infrared light reflected by an object in the ambient light.
- the process by which the image acquisition unit 1 acquires the first image signal and the second image signal is as follows: referring to FIG. 3, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplementary light; the near-infrared light from the ambient light in the shooting scene and from the first light supplement device, reflected by objects in the scene, passes through the lens 04 and the first filter 031, and the image sensor 01 generates the first image signal through the first preset exposure. When the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not perform near-infrared supplementary light, and the image sensor 01 generates the second image signal through the second preset exposure.
- there may be M first preset exposures and N second preset exposures in one frame period of image acquisition; the values of M and N and the magnitude relationship between M and N can be set according to actual requirements. For example, the values of M and N may be equal or different.
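The exposure arrangement described above can be sketched in code. The following Python sketch is illustrative only: the `Exposure` type, the alternation strategy, and all names are assumptions rather than part of the embodiment; the only property taken from the text is that near-infrared supplementary light accompanies first preset exposures and never second preset exposures.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    kind: str        # "first" or "second" preset exposure
    nir_fill: bool   # whether the first light supplement device 021 is on

def frame_period_schedule(m: int, n: int) -> list:
    """Interleave M first and N second preset exposures in one frame period.

    Near-infrared supplementary light (nir_fill) accompanies every first
    preset exposure and never a second preset exposure.
    """
    schedule = []
    firsts, seconds = m, n
    while firsts or seconds:
        # Alternate where possible, starting with a first preset exposure.
        if firsts and (not seconds or len(schedule) % 2 == 0):
            schedule.append(Exposure("first", nir_fill=True))
            firsts -= 1
        else:
            schedule.append(Exposure("second", nir_fill=False))
            seconds -= 1
    return schedule
```

With `m == n` the schedule simply alternates; unequal values (also allowed by the text) produce a run of the more numerous kind at the end.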
- the first filter 031 can pass part of the near-infrared light band.
- the near-infrared light band passing through the first filter 031 may be part of the near-infrared light band or the entire near-infrared light band, which is not limited in the embodiment of the present application.
- since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared supplementary light is higher than the intensity of the near-infrared light passing through the first filter 031 when it does not.
- the wavelength range of the near-infrared supplementary light performed by the first light supplement device 021 may be the second reference wavelength range, and the second reference wavelength range may be 700 nanometers to 800 nanometers, or 900 nanometers to 1000 nanometers, which can reduce the interference caused by the common near-infrared light at 850 nanometers.
- the wavelength range of the near-infrared light incident on the first filter 031 may be the first reference wavelength range, and the first reference wavelength range is 650 nanometers to 1100 nanometers.
- the near-infrared light passing through the first filter 031 during near-infrared supplementary light may include both the near-infrared light emitted by the first light supplement device 021 and reflected by the object into the first filter 031, and the near-infrared light in the ambient light reflected by the object, so the intensity of the near-infrared light entering the filter assembly 03 is relatively strong at this time. When near-infrared supplementary light is not performed, the near-infrared light passing through the first filter 031 includes only the near-infrared light in the ambient light reflected by the object into the filter assembly 03, and its intensity is weak. Therefore, the intensity of the near-infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near-infrared light included in the second image signal generated and output according to the second preset exposure.
- there are multiple choices for the center wavelength and/or wavelength range of the near-infrared supplementary light performed by the first light supplement device 021.
- the center wavelength of the near-infrared supplementary light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 can be selected, so that the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meet the constraint conditions.
- this constraint condition is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible and that the band width of the near-infrared light passing through the first filter 031 is as narrow as possible, so as to avoid wavelength interference introduced by too wide a near-infrared light band width.
- the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 may be the average value in the wavelength range of the highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021, or may be understood as the wavelength at the middle position of the wavelength range in which the energy in that spectrum exceeds a certain threshold. The set characteristic wavelength or the set characteristic wavelength range may be preset.
- for example, the center wavelength of the near-infrared supplementary light of the first light supplement device 021 may be any wavelength within the range of 750±10 nanometers, or any wavelength within the range of 780±10 nanometers, or any wavelength within the range of 940±10 nanometers. That is, the set characteristic wavelength range may be the range of 750±10 nanometers, the range of 780±10 nanometers, or the range of 940±10 nanometers.
- for example, when the center wavelength of the near-infrared supplementary light performed by the first light supplement device 021 is 940 nanometers, the relationship between the wavelength and the relative intensity of the near-infrared supplementary light is shown in FIG. 5. It can be seen from FIG. 5 that the wavelength range of the near-infrared supplementary light is 900 nanometers to 1000 nanometers, and the relative intensity of the near-infrared light is highest at 940 nanometers.
- the above constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplementary light of the first light supplement device 021 lies within a wavelength fluctuation range; as an example, the wavelength fluctuation range may be 0 to 20 nanometers.
- the center wavelength of the near-infrared light passing through the first filter 031 may be the wavelength at the peak position in the near-infrared band of the near-infrared light pass rate curve of the first filter 031, or may be understood as the wavelength at the middle position of the near-infrared waveband in which the pass rate of that curve exceeds a certain threshold.
- the above constraint conditions may include: the first band width may be smaller than the second band width.
- the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 031
- the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 031.
- the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
- for example, if the wavelength band of the near-infrared light passing through the first filter 031 is 700 nanometers to 800 nanometers, the first wavelength band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
- the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
- FIG. 6 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter 031 and the pass rate.
- the wavelength band of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers.
- the first filter 031 can pass visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength between 650 nanometers and 900 nanometers and between 1000 nanometers and 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, that is, 100 nanometers; the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, that is, 350 nanometers. Since 100 nanometers is smaller than 350 nanometers, the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
- the above relationship curve is only an example; for different filters, the wavelength range of the near-infrared light that can pass may differ, and the wavelength range of the near-infrared light that is blocked may also differ.
- the above constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers.
- the half bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
- the above constraint condition may include: the third band width may be smaller than the reference band width.
- the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
- the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
- the set ratio can be any ratio from 30% to 50%.
- the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the present application.
- the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
- the wavelength band of the near-infrared light incident on the first filter 031 is 650 nm to 1100 nm, the setting ratio is 30%, and the reference wavelength band width is 100 nm. It can be seen from FIG. 6 that in the wavelength band of near-infrared light from 650 nanometers to 1100 nanometers, the band width of near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
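The constraint conditions above amount to a few interval checks. The sketch below is a simplified illustration under assumed representations (the filter is modeled as a single ideal pass band, and all function and parameter names are invented); the 20-nanometer fluctuation range, the 50-nanometer half-bandwidth limit, and the FIG. 6 numbers in the usage note come from the text.

```python
def check_filter_constraints(
    fill_center_nm: float,       # center wavelength of the NIR supplementary light
    pass_band_nm: tuple,         # (low, high) NIR band passed by the first filter
    incident_band_nm: tuple,     # (low, high) NIR band incident on the filter
    half_band_width_nm: float,   # width of the band with pass rate > 50%
    wavelength_fluctuation_nm: float = 20.0,
) -> bool:
    """Check the three constraint variants described for the first filter."""
    lo, hi = pass_band_nm
    in_lo, in_hi = incident_band_nm
    center = (lo + hi) / 2                                  # middle of the pass band
    first_band_width = hi - lo                              # passed NIR width
    second_band_width = (in_hi - in_lo) - first_band_width  # blocked NIR width
    return (
        abs(center - fill_center_nm) <= wavelength_fluctuation_nm
        and first_band_width < second_band_width
        and half_band_width_nm <= 50.0
    )
```

With the FIG. 6 example (pass band 900 to 1000 nanometers, incident band 650 to 1100 nanometers, fill center 940 nanometers, and an assumed half bandwidth of 40 nanometers) all three checks pass: the center difference is 10 nanometers, 100 nanometers is less than 350 nanometers, and 40 nanometers is under 50 nanometers.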
- since the first light supplement device 021 provides near-infrared supplementary light at least during a partial exposure time period of the first preset exposure and does not provide it during the entire exposure time period of the second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplementary light during the exposure time period of part of the exposures of the image sensor 01 and does not provide it during the exposure time period of another part of the exposures.
- the number of supplementary light operations per unit time of the first light supplement device 021 may therefore be lower than the number of exposures of the image sensor 01 per unit time, with one or more exposures occurring in the interval between two adjacent supplementary light operations.
- the light supplement 02 may also include a second light supplement device 022, and the second light supplement device 022 is used for visible light supplementary light.
- the second light supplement device 022 provides visible light supplement light at least during a part of the exposure time of the first preset exposure, that is, it performs near-infrared supplement light and visible light supplement light at least during the partial exposure time period of the first preset exposure.
- the mixed color of the two lights can be distinguished from the color of the red light in a traffic light, thereby preventing the human eye from confusing the color of the near-infrared supplementary light of the light supplement 02 with the color of the red light in a traffic light.
- in addition, if the second light supplement device 022 provides visible light supplementary light during the exposure time period of the second preset exposure, then when the intensity of visible light in the environment is not particularly high, the brightness of visible light in the second image signal can still be increased, thereby ensuring the quality of image collection.
- the second light supplement device 022 may perform visible light supplementary light in a constant light mode; or, the second light supplement device 022 may perform visible light supplementary light in a stroboscopic manner, where visible light supplementary light exists at least during part of the exposure time period of the first preset exposure and does not exist during the entire exposure time period of the second preset exposure; or, the second light supplement device 022 may perform visible light supplementary light in a stroboscopic manner, where visible light supplementary light does not exist during the entire exposure time period of the first preset exposure and exists during part of the exposure time period of the second preset exposure.
- when the second light supplement device 022 performs visible light supplementary light in a constant light mode, it can not only prevent human eyes from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, but also improve the brightness of visible light in the second image signal, thereby ensuring the quality of image collection.
- when the second light supplement device 022 performs visible light supplementary light in a stroboscopic manner, it can prevent human eyes from confusing the color of the near-infrared supplementary light of the first light supplement device 021 with the color of the red light in a traffic light, or improve the brightness of visible light in the second image signal to ensure the quality of image collection, and it can also reduce the number of supplementary light operations of the second light supplement device 022, thereby prolonging its service life.
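The three visible-light fill options above reduce to a small decision rule. The sketch below is illustrative only (the mode names are invented); it merely encodes when the second light supplement device 022 is on in each mode described by the text.

```python
def visible_fill_on(mode: str, exposure_kind: str) -> bool:
    """Decide whether the second light supplement device 022 is on.

    mode: "constant", "strobe_first", or "strobe_second" (assumed names)
    exposure_kind: "first" or "second" preset exposure
    """
    if mode == "constant":
        return True                       # always on: masks the NIR fill color
    if mode == "strobe_first":
        return exposure_kind == "first"   # on only during first preset exposures
    if mode == "strobe_second":
        return exposure_kind == "second"  # on only during second preset exposures
    raise ValueError(f"unknown mode: {mode}")
```

The constant mode gains both benefits (color masking and second-image brightness) at the cost of more fill-light on-time; the stroboscopic modes trade one benefit for longer device life.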
- the aforementioned multiple exposure refers to multiple exposures within one frame period, that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
- 1 second includes 25 frame periods, and the image sensor 01 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal, and the The first image signal and the second image signal are called a group of image signals, so that 25 groups of image signals are generated within 25 frame periods.
- the first preset exposure and the second preset exposure can be two adjacent exposures among the multiple exposures in one frame period, or two non-adjacent exposures among the multiple exposures in one frame period, which is not limited in the embodiment of the present application.
- the first image signal is generated and output by the first preset exposure
- the second image signal is generated and output by the second preset exposure.
- the first image signal and the second image signal can then be processed.
- the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
- the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
- when near-infrared supplementary light is performed, the intensity of the near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light included in the first image signal generated and output accordingly will also be higher.
- near-infrared light with higher brightness is not conducive to the acquisition of external scene information.
- the greater the exposure gain, the higher the brightness of the image signal output by the image sensor 01; the smaller the exposure gain, the lower the brightness of the image signal output by the image sensor 01.
- therefore, the exposure gain of the first preset exposure may be less than the exposure gain of the second preset exposure. In this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light included in the first image signal generated and output by the image sensor 01 will not be too high due to the near-infrared supplementary light.
- the longer the exposure time, the higher the brightness included in the image signal obtained by the image sensor 01 and the longer the motion trail of moving objects in the external scene in the image signal; the shorter the exposure time, the lower the brightness included in the image signal obtained by the image sensor 01 and the shorter the motion trail. Therefore, the brightness of the near-infrared light included in the first image signal should be within an appropriate range, and moving objects in the external scene should leave only a short motion trail in the first image signal.
- the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
- in this way, when the first light supplement device 021 performs near-infrared supplementary light, the brightness of the near-infrared light included in the first image signal generated and output by the image sensor 01 will not be too high due to the near-infrared supplementary light.
- the shorter exposure time makes the motion trailing of the moving object in the external scene appear shorter in the first image signal, thereby facilitating the recognition of the moving object.
- the exposure time of the first preset exposure is 40 milliseconds
- the exposure time of the second preset exposure is 60 milliseconds, and so on.
- in other words, the exposure time of the first preset exposure may be less than or equal to the exposure time of the second preset exposure.
- similarly, the exposure gain of the first preset exposure may be less than or equal to the exposure gain of the second preset exposure.
- the purpose of the first image signal and the second image signal may be the same.
- the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure; if the two exposure times differ, motion smearing will appear in the image signal with the longer exposure time, resulting in different definitions of the two image signals.
- the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
- that is, the exposure gain of the first preset exposure may be less than or equal to the exposure gain of the second preset exposure.
- similarly, the exposure time of the first preset exposure may be less than or equal to the exposure time of the second preset exposure.
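The parameter relationships above can be summarized in a small sketch. This is illustrative only, not the embodiment's controller: the 40-millisecond and 60-millisecond times echo the earlier example, while the gain values and all names are arbitrary placeholders.

```python
def pick_exposure_params(same_purpose: bool) -> dict:
    """Relate the exposure parameters of the two preset exposures.

    same_purpose=True: the two image signals serve the same purpose, so equal
    exposure times avoid motion smearing appearing in only one of them.
    same_purpose=False: a shorter time and smaller gain for the first preset
    exposure keep the NIR-lit first image signal from being too bright and
    shorten motion trails.
    """
    if same_purpose:
        return {"first": {"time_ms": 40, "gain": 2},
                "second": {"time_ms": 40, "gain": 2}}
    return {"first": {"time_ms": 40, "gain": 1},
            "second": {"time_ms": 60, "gain": 2}}
```

As the text notes, the "less than" relations may relax to equality; the sketch just shows one consistent choice for each case.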
- the image sensor 01 may include multiple photosensitive channels, and each photosensitive channel may be used to sense light in at least one visible light band and to sense light in the near-infrared band. That is, each photosensitive channel can sense both at least one kind of light in the visible light band and light in the near-infrared band, which ensures that the first image signal and the second image signal have complete resolution without missing pixel values.
- the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
- the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
- the R photosensitive channel is used to sense the light in the red and near-infrared bands
- the G photosensitive channel is used to sense the light in the green and near-infrared bands
- the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
- the Y photosensitive channel is used to sense light in the yellow band and the near-infrared band.
- W and C can both be used to represent the photosensitive channel that senses light of the full waveband; when the multiple photosensitive channels include a photosensitive channel for sensing light of the full waveband, this photosensitive channel may be a W photosensitive channel or a C photosensitive channel. That is, in practical applications, the photosensitive channel used for sensing light of the full waveband can be selected according to usage requirements.
- the image sensor 01 may be an RGB sensor, RGBW sensor, or RCCB sensor, or RYYB sensor.
- the distribution of the R photosensitive channel, the G photosensitive channel and the B photosensitive channel in the RGB sensor can be seen in Figure 8.
- the distribution of the R photosensitive channel, G photosensitive channel, B photosensitive channel and W photosensitive channel in the RGBW sensor can be seen in Figure 9.
- the distribution of the R photosensitive channel, the C photosensitive channel and the B photosensitive channel in the RCCB sensor can be seen in Figure 10
- the distribution of the R photosensitive channel, the Y photosensitive channel and the B photosensitive channel in the RYYB sensor can be seen in Figure 11.
- some photosensitive channels may only sense light in the near-infrared waveband, but not light in the visible light waveband. In this way, it can be ensured that the first image signal has a complete resolution without missing pixel values.
- the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels. Among them, the R photosensitive channel is used to sense red light and near-infrared light, the G photosensitive channel is used to sense green light and near-infrared light, and the B photosensitive channel is used to sense blue light and near-infrared light. IR The photosensitive channel is used to sense light in the near-infrared band.
- the image sensor 01 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
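The channel sets named above can be represented as repeating units. The 2x2 units below are assumed illustrative layouts only; the embodiment's actual distributions are the ones shown in Figures 8 to 11, and real sensors may arrange the same channel sets differently.

```python
# Each entry is one assumed 2x2 repeating unit for the named sensor type.
CFA_PATTERNS = {
    "RGB":   [["R", "G"], ["G", "B"]],    # classic Bayer RGGB unit
    "RGBW":  [["R", "G"], ["B", "W"]],
    "RCCB":  [["R", "C"], ["C", "B"]],
    "RYYB":  [["R", "Y"], ["Y", "B"]],
    "RGBIR": [["R", "G"], ["IR", "B"]],
}

def channels(sensor: str) -> set:
    """Set of photosensitive channels present in a sensor's repeating unit."""
    return {ch for row in CFA_PATTERNS[sensor] for ch in row}
```

Note how only the RGBIR unit contains a channel (IR) that senses near-infrared light exclusively, which matches the text's observation that some of its positions cannot collect visible light.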
- for example, compared with other image sensors such as RGBIR sensors, when the image sensor 01 is an RGB sensor, the RGB information collected by the RGB sensor is more complete; some of the photosensitive channels of the RGBIR sensor cannot collect visible light, so the color details of the image collected by the RGB sensor are more accurate.
- the multiple photosensitive channels included in the image sensor 01 may correspond to multiple sensing curves.
- the R curve in FIG. 12 represents the sensing curve of the image sensor 01 to light in the red light band
- the G curve represents the sensing curve of the image sensor 01 to light in the green light band
- the B curve represents the sensing curve of the image sensor 01 to light in the blue light band, the W (or C) curve represents the sensing curve of the image sensor 01 to light in the full waveband, and the NIR (near-infrared) curve represents the sensing curve of the image sensor 01 to light in the near-infrared band.
- the image sensor 01 may adopt a global exposure method or a rolling shutter exposure method.
- the global exposure mode means that the exposure start time of each row of effective images is the same, and the exposure end time of each row of effective images is the same.
- the global exposure mode is an exposure mode in which all rows of effective images are exposed at the same time and the exposure ends at the same time.
- rolling shutter exposure means that the exposure time periods of different rows of the effective image do not completely overlap, that is, the exposure start time of one row of the effective image is later than the exposure start time of the previous row, and the exposure end time of one row of the effective image is later than the exposure end time of the previous row.
- in the rolling shutter exposure mode, data can be output after each row of the effective image finishes exposure; therefore, the time from when the first row of the effective image starts to output data to when the last row of the effective image finishes outputting data can be expressed as the readout time.
- FIG. 13 is a schematic diagram of a rolling shutter exposure method. It can be seen from FIG. 13 that the effective image of row 1 starts exposure at time T1 and ends exposure at time T3; the effective image of row 2 starts exposure at time T2 and ends at time T4, where T2 is a period of time later than T1 and T4 is a period of time later than T3. In addition, the effective image of row 1 ends exposure at time T3 and begins to output data, with the output ending at time T5; the effective image of row n ends exposure at time T6 and begins to output data, with the output ending at time T7. The time between T3 and T7 is then the readout time.
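The T1 to T7 timing just described can be reproduced with a simple model. The sketch below assumes a constant per-row start delay and a constant per-row output duration; both are simplifying assumptions, since the text only fixes the ordering of the times.

```python
def rolling_shutter_times(rows: int, exposure: float, row_delay: float,
                          output: float, t_start: float = 0.0):
    """Per-row (start, end) exposure times under rolling shutter.

    Row 1 exposes over [t_start, t_start + exposure]; each later row is
    shifted by row_delay. A row begins to output data when its exposure ends.
    Returns the list of (start, end) pairs and the readout time, i.e. the
    span from the first row's output start (T3) to the last row's output
    end (T7).
    """
    times = [(t_start + i * row_delay, t_start + i * row_delay + exposure)
             for i in range(rows)]
    readout = (times[-1][1] + output) - times[0][1]
    return times, readout
```

Under this model the readout time is `(rows - 1) * row_delay + output`, independent of the exposure time itself.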
- for the global exposure mode, the time period of the near-infrared supplementary light has no intersection with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared supplementary light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary light has an intersection with the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary light.
- in this way, the near-infrared supplementary light is performed at least during a part of the exposure time period of the first preset exposure and is not performed during the entire exposure time period of the second preset exposure, so the second preset exposure is not affected.
- the time period of the near-infrared supplementary light has no intersection with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared supplementary light is a subset of the exposure time period of the first preset exposure.
- the time period of the near-infrared supplementary light has no intersection with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared supplementary light has an intersection with the exposure time period of the first preset exposure.
- the time period of the near-infrared supplementary light has no intersection with the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary light. FIG. 14 to FIG. 16 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
- for the rolling shutter exposure mode, the time period of the near-infrared supplementary light has no intersection with the exposure time period of the nearest second preset exposure.
- further, the start time of the near-infrared supplementary light is not earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary light is not later than the exposure end time of the first row of the effective image in the first preset exposure.
- the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
- the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
- the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
- the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
- the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
- in FIGS. 17 to 19, the slanted dotted line indicates the exposure start time, the slanted solid line indicates the exposure end time, and the vertical dotted line indicates the time period of the near-infrared fill light corresponding to the first preset exposure. FIGS. 17 to 19 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
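The timing rules above reduce to simple interval checks. The sketch below is an illustration only: the function name, the open-interval overlap rule, and the millisecond timeline are our own assumptions, not values from this application.

```python
def fill_light_ok(fill, first_exposure, prev_second, next_second):
    """Check a near-infrared fill-light interval (start, end) against the
    timing rules sketched above: it must not overlap either neighbouring
    second preset exposure, and must intersect the first preset exposure."""
    def overlaps(a, b):
        # Open-interval overlap test for two (start, end) tuples.
        return a[0] < b[1] and b[0] < a[1]
    if overlaps(fill, prev_second) or overlaps(fill, next_second):
        return False
    return overlaps(fill, first_exposure)

# Timeline in arbitrary ms: second exposure (0-30), first exposure (35-60),
# second exposure (65-95).
print(fill_light_ok((40, 55), (35, 60), (0, 30), (65, 95)))  # True: fill light sits inside the first exposure
print(fill_light_ok((25, 55), (35, 60), (0, 30), (65, 95)))  # False: spills into the previous second exposure
```

A production implementation would work on per-line exposure timestamps rather than whole-frame intervals, as the rolling-shutter wording above implies.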
- the multiple exposures may include odd-numbered exposures and even-numbered exposures.
- the first preset exposure and the second preset exposure may include but are not limited to the following methods:
- the first preset exposure is one exposure in an odd number of exposures
- the second preset exposure is one exposure in an even number of exposures.
- the multiple exposures may include first preset exposures and second preset exposures arranged in parity order. For example, the odd-numbered exposures such as the first, third, and fifth exposures in the multiple exposures are all first preset exposures, and the even-numbered exposures such as the second, fourth, and sixth exposures are all second preset exposures.
- the first preset exposure is one exposure in an even number of exposures
- the second preset exposure is one exposure in an odd number of exposures.
- the multiple exposures may include first preset exposures and second preset exposures arranged in parity order. For example, the odd-numbered exposures such as the first, third, and fifth exposures in the multiple exposures are all second preset exposures, and the even-numbered exposures such as the second, fourth, and sixth exposures are all first preset exposures.
- the first preset exposure is one of specified odd-numbered exposures, and the second preset exposure is one of the remaining exposures other than the specified odd-numbered exposures; that is, the second preset exposure may be an odd-numbered exposure or an even-numbered exposure among the multiple exposures.
- the first preset exposure is one of specified even-numbered exposures, and the second preset exposure is one of the remaining exposures other than the specified even-numbered exposures; that is, the second preset exposure may be an odd-numbered exposure or an even-numbered exposure among the multiple exposures.
- the first preset exposure is one exposure in the first exposure sequence
- the second preset exposure is one exposure in the second exposure sequence.
- the first preset exposure is one exposure in the second exposure sequence
- the second preset exposure is one exposure in the first exposure sequence
- the aforementioned multiple exposure includes multiple exposure sequences
- the first exposure sequence and the second exposure sequence are the same exposure sequence or two different exposure sequences in the multiple exposure sequences
- each exposure sequence includes N exposures
- the N exposures include 1 first preset exposure and N-1 second preset exposures, or the N exposures include 1 second preset exposure and N-1 first preset exposures, where N is a positive integer greater than 2.
- each exposure sequence includes 3 exposures, and these 3 exposures may include 1 first preset exposure and 2 second preset exposures.
- the first exposure of each exposure sequence may be the first preset exposure, and the second and third exposures are second preset exposures. That is, each exposure sequence can be expressed as: first preset exposure, second preset exposure, second preset exposure.
- alternatively, these 3 exposures may include 1 second preset exposure and 2 first preset exposures, so that the first exposure of each exposure sequence may be the second preset exposure, and the second and third exposures are first preset exposures. That is, each exposure sequence can be expressed as: second preset exposure, first preset exposure, first preset exposure.
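The parity and sequence arrangements above can be sketched as a small schedule generator. This is a hypothetical illustration: the labels 'first'/'second' and the function names are our own, not part of this application.

```python
def parity_schedule(n, first_on_odd=True):
    """Label n exposures 'first' (with NIR fill light) or 'second'
    (no fill light), alternating by parity; numbering starts at 1."""
    labels = []
    for i in range(1, n + 1):
        is_odd = i % 2 == 1
        labels.append('first' if is_odd == first_on_odd else 'second')
    return labels

def sequence_schedule(n_sequences, pattern=('first', 'second', 'second')):
    """Repeat an N-exposure sequence pattern, e.g. one first preset
    exposure followed by N-1 second preset exposures."""
    return list(pattern) * n_sequences

print(parity_schedule(6))    # ['first', 'second', 'first', 'second', 'first', 'second']
print(sequence_schedule(2))  # ['first', 'second', 'second', 'first', 'second', 'second']
```

The device-side controller would consume such a schedule to decide, per exposure, whether to strobe the first light supplement device.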
- the filter assembly 03 further includes a second filter 032 and a switching component 033, and both the first filter 031 and the second filter 032 are connected to the switching component 033.
- the switching component 033 is used to switch the second filter 032 to the light incident side of the image sensor 01.
- after the second filter 032 is switched to the light incident side of the image sensor 01, the second filter 032 allows light in the visible waveband to pass and blocks light in the near-infrared waveband, and the image sensor 01 is used to generate and output a third image signal through exposure.
- switching the second filter 032 to the light incident side of the image sensor 01 by the switching component 033 can also be understood as the second filter 032 replacing the first filter 031 at its position on the light incident side of the image sensor 01.
- the first light supplement device 021 may be in the off state or in the on state.
- the first light supplement device 021 can be used to perform stroboscopic supplementary light, so that the image sensor 01 generates and outputs a first image signal containing near-infrared brightness information and a second image signal containing visible light brightness information. Because the first image signal and the second image signal are both acquired by the same image sensor 01, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so complete external scene information can be acquired through the first image signal and the second image signal.
- when the intensity of visible light is strong, for example during the daytime, the near-infrared component of daylight is also relatively strong, and the color reproduction of the collected image is not good.
- the image sensor 01 can generate and output a third image signal containing visible light brightness information, so that images with good color reproduction can be collected even during the day. The real color information of the external scene can thus be obtained efficiently and simply regardless of the intensity of visible light, or whether it is day or night, which improves the flexibility of the image acquisition unit 1 and makes it easily compatible with other image acquisition units.
- the image processor 2 may process the third image signal to output third image information, and the face analysis unit 3 may perform face analysis on the third image information to obtain a face analysis result.
- This application uses the exposure timing of the image sensor 01 to control the near-infrared supplementary light timing of the light supplement device, so that near-infrared supplementary light is performed during the first preset exposure to generate the first image signal, and near-infrared supplementary light is not performed during the second preset exposure to generate the second image signal.
- This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost.
- One image sensor 01 can acquire two different image signals, which makes the image acquisition unit 1 easier and more efficient to acquire the first image signal and the second image signal.
- the first image signal and the second image signal are both generated and output by the same image sensor 01, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal, and the information of the external scene can be jointly obtained through the first image signal and the second image signal. Because there is no difference between the two viewpoints, the image generated from the first image signal is not misaligned with the image generated from the second image signal.
- the image processor 2 may be a logic platform containing signal processing algorithms or programs.
- the image processor 2 may be a computer based on the X86 or ARM architecture, or may be an FPGA (Field-Programmable Gate Array) logic circuit.
- the image processor 2 is configured to process at least one of the first image signal and the second image signal by using the first processing parameter to obtain first image information.
- the image processor 2 is also used to process at least one of the first image signal and the second image signal by using the second processing parameter to obtain second image information, and then transmit the second image information to the display device, which displays the second image information.
- the first image signal and the second image signal can be flexibly combined according to the two different application requirements of face analysis and display, so that both application requirements can be well satisfied.
- the processing performed by the image processor 2 on at least one of the first image signal and the second image signal may include at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
- the first processing parameter and the second processing parameter may be the same or different.
- the first processing parameter and the second processing parameter may be different.
- the first processing parameter can be set in advance according to the face analysis requirement, and the second processing parameter can be set in advance according to the display requirement.
- the first processing parameter and the second processing parameter are the parameters required when performing black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, image fusion, and the like on at least one of the first image signal and the second image signal.
- the image processor 2 can flexibly select a more appropriate combination of first processing parameters and image signals to obtain the first image information, so as to achieve an image effect more favorable for face analysis and improve the accuracy of face recognition.
- the image processor 2 can flexibly select a more appropriate combination of second processing parameters and image signals to obtain the second image information, so as to achieve a better quality image display effect.
- the image processor 2 may use the first processing parameter to process the first image signal containing the near-infrared light information, and output the gray image information as the first image information.
- the image quality of the grayscale image information obtained by processing the first image signal is better, which is more suitable for face analysis and can improve the face recognition accuracy rate.
- the image processor 2 may use the second processing parameter to process the second image signal containing the visible light information, and output the color image information as the second image information.
- the second image signal contains visible light information
- the color reproduction of the color image information obtained by processing the second image signal is more accurate, which is more suitable for display and can improve the image display effect.
- the image processor 2 may use the first processing parameter to process the first image signal and the second image signal, and output the first image information. In this case, the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal.
- the image processor 2 may use the second processing parameter to process the first image signal and the second image signal, and output second image information. In this case, the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal.
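As a toy illustration of how a parameter set might be paired with a signal combination per application, the sketch below uses entirely hypothetical names and parameter values; the real processing pipeline covers black level correction, interpolation, gain, white balance, noise reduction, enhancement, and fusion.

```python
# Hypothetical parameter sets: the first tuned for face analysis,
# the second tuned for display. Values are invented for illustration.
FIRST_PARAMS = {'noise_reduction': 'strong', 'enhancement': 'edge'}
SECOND_PARAMS = {'white_balance': 'auto', 'enhancement': 'color'}

def process(signals, params):
    """Stand-in for the ISP pipeline: record which signals were
    processed and which parameter set was applied to them."""
    return {'signals': sorted(signals), 'params': params}

first_image_info = process({'first'}, FIRST_PARAMS)        # grayscale path for analysis
second_image_info = process({'second'}, SECOND_PARAMS)     # color path for display
fused_info = process({'first', 'second'}, FIRST_PARAMS)    # fusion path for analysis
print(fused_info['signals'])  # ['first', 'second']
```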
- the first image signal and the second image signal do not enter the image processor 2 at the same time. If the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal, the first image signal and the second image signal need to be synchronized first.
- the image processor 2 may include a buffer for storing at least one of the first image signal and the second image signal, so as to achieve synchronization of the first image signal and the second image signal.
- the image processor 2 may perform image fusion processing on the synchronized first image signal and the second image signal to obtain the first image information.
- the cache can also be used to store other information, for example, it can be used to store at least one of the first image information and the second image information.
- the image processor 2 may first store the first image signal in the buffer, and after the second image signal also enters the image processor 2, perform image fusion processing on the first image signal and the second image signal.
- the image processor 2 may first store the second image signal in the buffer, and after the first image signal also enters the image processor 2, perform image fusion processing on the first image signal and the second image signal.
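The buffering described above can be sketched as a minimal synchronizer that holds whichever frame arrives first and releases the pair once both frames of the same exposure cycle are present. Class and method names are our own illustration, not part of this application.

```python
class FrameSynchronizer:
    """Buffer one of the two image signals until its counterpart from
    the same exposure cycle arrives, then emit the pair for fusion."""
    def __init__(self):
        self._buffer = {}  # cycle id -> {'first': frame, 'second': frame}

    def push(self, cycle, kind, frame):
        slot = self._buffer.setdefault(cycle, {})
        slot[kind] = frame
        if 'first' in slot and 'second' in slot:
            del self._buffer[cycle]
            return slot['first'], slot['second']  # ready for fusion
        return None  # still waiting for the paired frame

sync = FrameSynchronizer()
print(sync.push(0, 'first', 'NIR-frame-0'))   # None: second signal not yet arrived
print(sync.push(0, 'second', 'VIS-frame-0'))  # ('NIR-frame-0', 'VIS-frame-0')
```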
- the image processor 2 is also used to adjust the exposure parameters of the image acquisition unit 1 in the process of processing at least one of the first image signal and the second image signal. Specifically, during this processing the image processor 2 may determine an exposure parameter adjustment value according to the attribute parameters generated in the processing process, and then send a control signal carrying the exposure parameter adjustment value to the image acquisition unit 1, and the image acquisition unit 1 adjusts its own exposure parameters according to the exposure parameter adjustment value.
- the attribute parameters generated in the process of processing at least one of the first image signal and the second image signal may include image resolution, image brightness, image contrast, and the like.
- the image processor 2 adjusts the exposure parameters of the image acquisition unit 1, that is, adjusts the exposure parameters of the image sensor 01 in the image acquisition unit 1.
- the image processor 2 can also, while adjusting the exposure parameters of the image sensor 01, control the working state of the light supplement 02 and the working state of the filter assembly 03.
- the image processor 2 can control the on-off state of the first light supplement device 021 in the light supplement 02, can control the on-off state of the second light supplement device 022 in the light supplement 02, and can control switching between the first filter 031 and the second filter 032 in the filter assembly 03.
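The exposure-parameter feedback loop above can be sketched as a simple proportional controller on one image attribute. Everything here is an assumption for illustration: the target brightness, the gain, the clamp, and the choice of mean brightness as the driving attribute.

```python
def exposure_adjustment(mean_brightness, target=118, gain=0.5, max_step=16):
    """Compute an exposure parameter adjustment value from measured mean
    image brightness: proportional step toward the target, clamped so a
    single frame cannot swing the exposure too far."""
    error = target - mean_brightness
    step = int(gain * error)
    return max(-max_step, min(max_step, step))

print(exposure_adjustment(80))   # dark frame -> increase exposure (clamped to +16)
print(exposure_adjustment(130))  # bright frame -> decrease exposure (-6)
```

In the device this value would travel in the control signal to the image acquisition unit 1, which applies it to the image sensor 01.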
- the face analysis unit 3 is a logic platform containing a face analysis algorithm or program.
- the face analysis unit 3 may be a computer based on X86 or ARM architecture, or an FPGA logic circuit.
- the face analysis unit 3 can share hardware with the image processor 2.
- the face analysis unit 3 and the image processor 2 can run on the same FPGA logic circuit.
- the face analysis unit 3 and the image processor 2 may not share hardware, which is not limited in the embodiment of the present application.
- the face analysis unit 3 may include: a face detection subunit 311, a face recognition subunit 312, and a face database 313.
- At least one reference face information is stored in the face database 313.
- the face detection subunit 311 is configured to perform face detection on the first image information, output the detected face image, and perform living body identification on the face image.
- the face recognition sub-unit 312 is used to extract the face information of the face image when the face image passes living body identification, and compare the face information of the face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- At least one reference face information stored in the face database 313 may be set in advance.
- the reference face information may be preset face information of face images of users who have a certain authority (such as the authority to open a door).
- the face detection sub-unit 311 can perform face detection on the first image information, and perform living body identification on the detected face image to prevent camouflage attacks such as photos, videos, and masks.
- if the face image fails living body identification, the operation can be directly ended, and it is determined that the face analysis result is a face recognition failure.
- the face recognition sub-unit 312 may compare the face information of the face image with at least one reference face information stored in the face database 313 when the face image passes living body identification. If the face information of the face image is successfully compared with any reference face information, it can be determined that the face analysis result is a successful recognition; if the comparison of the face information of the face image with the at least one reference face information fails, it can be determined that the face analysis result is a recognition failure.
- the face information may be face feature data, etc.
- the face feature data may include the curvature of the face and the attributes (such as size, position, distance, etc.) of facial contour points (such as iris, nose, and corner of the mouth).
- when the face recognition sub-unit 312 compares the face information of the face image with at least one reference face information stored in the face database 313, for any reference face information the face recognition sub-unit 312 can calculate the matching degree between that reference face information and the face information of the face image, determine that the comparison is successful when the matching degree is greater than or equal to the matching degree threshold, and determine that the comparison fails when the matching degree is less than the matching degree threshold.
- the matching degree threshold can be set in advance.
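The matching-degree comparison above can be sketched as follows. The use of cosine similarity as the matching degree, the 0.9 threshold, and the toy feature vectors are all assumptions for illustration; the application itself does not specify the metric.

```python
import math

def matching_degree(a, b):
    """Cosine similarity between two face feature vectors, used here
    as the matching degree (higher means more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(face_feature, database, threshold=0.9):
    """Compare against every stored reference face information; succeed
    on the first match whose matching degree reaches the threshold."""
    for name, ref in database.items():
        if matching_degree(face_feature, ref) >= threshold:
            return ('success', name)
    return ('failure', None)

db = {'alice': [0.9, 0.1, 0.4], 'bob': [0.1, 0.8, 0.6]}
print(recognize([0.88, 0.12, 0.41], db))  # ('success', 'alice')
print(recognize([0.0, 0.0, 1.0], db))     # ('failure', None)
```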
- At least one reference face information is stored in the face database 313.
- the face detection subunit 311 is used to perform face detection on the first image information, output the detected first face image, perform living body identification on the first face image, and perform face detection on the second image information, and output The second face image is detected, and the second face image is subjected to living body identification.
- the face recognition subunit 312 is used to extract the face information of the first face image when the first face image and the second face image both pass living body identification, and compare the face information of the first face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- At least one reference face information stored in the face database 313 may be set in advance.
- the at least one reference face information may be preset face information of a face image of a user with a certain authority.
- the face detection sub-unit 311 can perform face detection on both the first image information and the second image information, and perform living body identification on both the detected first face image and the second face image.
- if either the first face image or the second face image fails living body identification, the operation can be directly ended, and it is determined that the face analysis result is a face recognition failure.
- the face detection sub-unit 311 realizes multi-spectral living body identification through the first image information and the second image information, thereby effectively improving the accuracy of living body identification.
- the face recognition subunit 312 may compare the face information of the first face image with at least one reference face information stored in the face database 313 when both the first face image and the second face image pass living body identification. If the face information of the first face image is successfully compared with any reference face information, it can be determined that the face analysis result is a successful recognition; if the comparison of the face information of the first face image with the at least one reference face information fails, it can be determined that the face analysis result is a recognition failure.
- the facial information may be facial feature data, etc.
- the facial feature data may include facial curvature, attributes of facial contour points, and the like.
- when the face recognition sub-unit 312 compares the face information of the first face image with the at least one reference face information stored in the face database 313, for any reference face information the face recognition subunit 312 can calculate the matching degree between that reference face information and the face information of the first face image, determine that the comparison is successful when the matching degree is greater than or equal to the matching degree threshold, and determine that the comparison fails when the matching degree is less than the matching degree threshold.
- the matching degree threshold can be set in advance.
- At least one reference face information is stored in the face database 313.
- the face detection subunit 311 is used to perform face detection on the second image information, output the detected second face image, perform living body identification on the second face image, and, when the second face image passes living body identification, perform face detection on the first image information and output the detected first face image.
- the face recognition subunit 312 is used to extract the face information of the first face image, and compare the face information of the first face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is color image information obtained by processing the second image signal
- at least one reference face information is stored in the face database 313; the face detection sub-unit 311 is used to perform face detection on the color image information, output the detected color face image, perform living body identification on the color face image, and, when the color face image passes living body identification, perform face detection on the grayscale image information and output the detected grayscale face image; the face recognition sub-unit 312 is used to extract the face information of the grayscale face image, and compare the face information of the grayscale face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
- at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the fused image information, output the detected fused face image, perform living body identification on the fused face image, and, when the fused face image passes living body identification, perform face detection on the grayscale image information and output the detected grayscale face image; the face recognition subunit 312 is used to extract the face information of the grayscale face image, and compare the face information of the grayscale face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is grayscale image information obtained by processing the first image signal.
- at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the grayscale image information, output the detected grayscale face image, perform living body identification on the grayscale face image, and, when the grayscale face image passes living body identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit 312 is used to extract the face information of the fused face image, and compare the face information of the fused face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is color image information obtained by processing the second image signal.
- at least one reference face information is stored in the face database 313; the face detection subunit 311 is used to perform face detection on the color image information, output the detected color face image, perform living body identification on the color face image, and, when the color face image passes living body identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit 312 is used to extract the face information of the fused face image, and compare the face information of the fused face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
- the first image information is the first fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is second fused image information obtained by performing image fusion processing on the first image signal and the second image signal.
- At least one reference face information is stored in the face database 313;
- the face detection subunit 311 is used to perform face detection on the second fused image information, output the detected second fused face image, perform living body identification on the second fused face image, and, when the second fused face image passes living body identification, perform face detection on the first fused image information and output the detected first fused face image;
- the face recognition sub-unit 312 is used to extract the face information of the first fused face image, and compare the face information of the first fused face image with at least one reference face information stored in the face database 313 to obtain a face analysis result.
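The variants above all follow one cascaded flow: detect and liveness-check a face in one image information, and only on success detect and recognize the face in the other. The sketch below shows that control flow with hypothetical plug-in components; every callable and string here is our own stand-in, not part of this application.

```python
def face_analysis(first_info, second_info, detect, is_live, extract, recognize):
    """Cascade: liveness on the second image information, then
    recognition on the first image information."""
    face2 = detect(second_info)
    if face2 is None or not is_live(face2):
        return 'recognition failure'   # camouflage attack or no face
    face1 = detect(first_info)
    if face1 is None:
        return 'recognition failure'
    return recognize(extract(face1))

# Toy components for illustration only.
result = face_analysis(
    first_info='gray', second_info='color',
    detect=lambda info: info + '-face',
    is_live=lambda face: face.startswith('color'),
    extract=lambda face: face.upper(),
    recognize=lambda feat: 'recognition success' if feat == 'GRAY-FACE'
                           else 'recognition failure',
)
print(result)  # recognition success
```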
- after obtaining the face analysis result, the face analysis unit 3 can also transmit the face analysis result to the display device, and the display device displays the face analysis result. In this way, the user can learn the face analysis result in time.
- the face recognition device includes an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
- the image acquisition unit 1 includes a filter assembly 03, which includes a first filter 031, and the first filter 031 allows visible light and part of the near-infrared light to pass.
- the image acquisition unit 1 can simultaneously acquire a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information through the first preset exposure and the second preset exposure.
- the image acquisition unit 1 in this application can directly collect the first image signal and the second image signal, and the acquisition process is simple and effective.
- the image processor 2 processes at least one of the first image signal and the second image signal to obtain first image information of higher quality.
- the face analysis unit 3 performs face analysis on the first image information, so a more accurate face analysis result can be obtained, which can effectively improve the accuracy of face recognition.
- FIG. 24 is a schematic structural diagram of an access control device provided by an embodiment of the present application.
- the access control device includes an access control controller 001 and the face recognition device 002 shown in any one of FIGS. 1 to 23.
- the face recognition device 002 is used to transmit the face analysis result to the access controller 001.
- the access controller 001 is used to output a control signal for opening the door when the face analysis result indicates successful recognition.
- the access controller 001 performs no operation when the face analysis result indicates that recognition failed.
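The controller's decision logic described above can be sketched in a few lines. The labels `"success"` and `"OPEN_DOOR"` are illustrative stand-ins; the patent only distinguishes recognition success (output an open signal) from recognition failure (no operation):

```python
def access_control_signal(face_analysis_result: str):
    """Return a door-open control signal, or None for no operation.

    The string labels used here are hypothetical; the access controller 001
    simply acts on success and does nothing on failure.
    """
    if face_analysis_result == "success":
        return "OPEN_DOOR"  # controller outputs a control signal to open the door
    return None             # recognition failed: controller performs no operation
```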
- the access control equipment includes an access control controller 001 and a face recognition device 002.
- the face recognition device 002 has a relatively high face recognition accuracy rate, so the control accuracy of the access controller 001 can be ensured and the security of the access control can be improved.
- the face recognition device provided in the embodiment of the present application can be applied not only to access control equipment, but also to other equipment with facial recognition requirements, such as payment equipment, which is not limited in the embodiment of the present application.
- the face recognition method includes:
- Step 251: Pass visible light and part of the near-infrared light through the first filter.
- Step 252: Collect a first image signal and a second image signal by the image acquisition unit; the first image signal is an image signal generated according to a first preset exposure, and the second image signal is an image signal generated according to a second preset exposure.
- the near-infrared supplementary light is performed at least during a partial exposure time period of the first preset exposure, and is not performed during the exposure time period of the second preset exposure.
- Step 253: Process at least one of the first image signal and the second image signal by the image processor to obtain first image information.
- Step 254: Perform face analysis on the first image information by the face analysis unit to obtain a face analysis result.
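Steps 251 to 254 form a three-stage pipeline: acquire the two image signals, process them into first image information, and analyze that information. A minimal sketch with toy stand-ins for the hardware units (none of the function names or string values come from the patent):

```python
def face_recognition_method(acquire, process, analyze):
    """Sketch of the method flow; acquire/process/analyze model the
    image acquisition unit, image processor, and face analysis unit."""
    # Step 252: collect the NIR-exposed first signal and visible second signal
    first_signal, second_signal = acquire()
    # Step 253: the image processor derives the first image information
    first_image_info = process(first_signal, second_signal)
    # Step 254: the face analysis unit produces the face analysis result
    return analyze(first_image_info)

# purely illustrative stand-ins for the three units
result = face_recognition_method(
    acquire=lambda: ("nir_signal", "vis_signal"),
    process=lambda a, b: f"processed({a},{b})",
    analyze=lambda info: {"input": info, "result": "success"},
)
```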
- the image acquisition unit includes an image sensor and a light supplementer; the image sensor is located on the light exit side of the filter assembly, and the light supplementer includes a first light supplement device;
- multiple exposures are performed by the image sensor to generate and output the first image signal and the second image signal, the first preset exposure and the second preset exposure being two of the multiple exposures; near-infrared supplementary light is performed by the first light supplement device.
- when the center wavelength of the near-infrared supplementary light performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the waveband width of the near-infrared light passing through the first filter meets the constraints.
- the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 750±10 nanometers; or
- the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 780±10 nanometers; or
- the center wavelength of the near-infrared supplementary light performed by the first light supplement device is any wavelength within the range of 940±10 nanometers.
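The three allowed center-wavelength ranges above can be checked with a small helper. The ±10 nm bounds come directly from the text; the function itself is only an illustration:

```python
def center_wavelength_allowed(wavelength_nm: float) -> bool:
    """True if a fill-light center wavelength falls in one of the
    three ranges named in the text: 750±10, 780±10, or 940±10 nm."""
    bands = [(750 - 10, 750 + 10), (780 - 10, 780 + 10), (940 - 10, 940 + 10)]
    return any(lo <= wavelength_nm <= hi for lo, hi in bands)
```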
- the constraints include:
- the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared supplementary light performed by the first light supplement device is within the wavelength fluctuation range, which is 0-20 nanometers; or
- the half-bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers; or
- the first waveband width is smaller than the second waveband width, where the first waveband width refers to the waveband width of the near-infrared light passing through the first filter and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter; or
- the third waveband width is smaller than the reference waveband width, where the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio, and the reference waveband width is any waveband width in the range of 50 to 150 nanometers.
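Because the constraint alternatives are joined by "or", satisfying any one of them is sufficient. A sketch of that check, with all quantities in nanometers (the parameter names are invented for illustration):

```python
def meets_constraint(filter_center_nm, fill_center_nm, half_bandwidth_nm,
                     passed_width_nm, blocked_width_nm,
                     third_width_nm, reference_width_nm):
    """True if any one of the four constraint alternatives holds."""
    return (
        abs(filter_center_nm - fill_center_nm) <= 20   # wavelength fluctuation 0-20 nm
        or half_bandwidth_nm <= 50                     # half-bandwidth <= 50 nm
        or passed_width_nm < blocked_width_nm          # first < second waveband width
        or third_width_nm < reference_width_nm         # third < reference (50-150 nm) width
    )
```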
- the image sensor includes a plurality of light-sensing channels, and each light-sensing channel is used to sense at least one kind of light in the visible light waveband and to sense light in the near-infrared waveband.
- the image sensor uses a global exposure method for multiple exposures. For any near-infrared supplementary light, its time period does not intersect with the exposure time period of the nearest second preset exposure; the time period of the near-infrared supplementary light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary light and the exposure time period of the first preset exposure intersect, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary light.
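Treating the fill-light period and the two exposure periods as (start, end) intervals, the global-exposure timing rule reads: no intersection with the nearest second preset exposure, and any of subset / overlap / superset with respect to the first preset exposure. A hedged sketch (time units are arbitrary):

```python
def fill_light_timing_valid(fill, first_exposure, second_exposure):
    """Intervals are (start, end) tuples. Returns True if the fill-light
    period satisfies the global-exposure timing rule described above."""
    def intersects(a, b):
        return a[0] < b[1] and b[0] < a[1]
    def is_subset(a, b):
        return b[0] <= a[0] and a[1] <= b[1]
    if intersects(fill, second_exposure):       # must not touch the second exposure
        return False
    return (is_subset(fill, first_exposure)     # fill inside the first exposure
            or intersects(fill, first_exposure) # partial overlap with it
            or is_subset(first_exposure, fill)) # first exposure inside the fill period
```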
- the image sensor adopts rolling shutter exposure for multiple exposures. For any near-infrared supplementary light, its time period does not intersect with the exposure time period of the nearest second preset exposure;
- the start time of the near-infrared supplementary light is no earlier than the exposure start time of the last effective image line in the first preset exposure, and its end time is no later than the exposure end time of the first effective image line in the first preset exposure; or
- the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first effective image line in the first preset exposure, and its end time is no earlier than the exposure start time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure; or
- the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first effective image line in the first preset exposure, and its end time is no earlier than the exposure end time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure.
- at least one exposure parameter of the first preset exposure and the second preset exposure is different; the at least one exposure parameter is one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
- at least one exposure parameter of the first preset exposure and the second preset exposure is the same; the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
- the image processor uses a first processing parameter to process at least one of the first image signal and the second image signal to obtain the first image information, and uses a second processing parameter to process at least one of the two signals to obtain second image information; the image processor transmits the second image information to a display device, and the display device displays it.
- the first processing parameter and the second processing parameter are different.
- the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
- the image processor includes a buffer, which stores at least one of the first image signal and the second image signal, or at least one of the first image information and the second image information.
- the image processor is used to adjust the exposure parameter of the image acquisition unit in the process of processing at least one of the first image signal and the second image signal.
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- the face recognition subunit extracts the face information of the face image when the face image passes liveness identification, and compares it with at least one piece of reference face information stored in the face database to obtain the face analysis result.
- the face analysis unit includes: a face detection subunit, a face recognition subunit, and a face database;
- the face recognition subunit extracts the face information of the first face image when both the first face image and the second face image pass liveness identification, and compares the face information of the first face image with at least one piece of reference face information stored in the face database to obtain the face analysis result.
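The patent does not specify how the extracted face information is compared against the stored references. A common approach is to match feature vectors by similarity against a threshold; the sketch below uses cosine similarity, and the vectors, metric, and threshold are all illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_face(face_info, face_database, threshold=0.9):
    """Compare face information against reference entries in the database.

    face_database maps identity -> reference feature vector. Returns the
    best-matching identity, or None when no reference clears the threshold.
    """
    best_id, best_score = None, -1.0
    for identity, reference in face_database.items():
        score = cosine_similarity(face_info, reference)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```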
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is color image information obtained by processing the second image signal
- the face analysis unit includes: face detection subunit, face recognition subunit and face database;
- the face information of the grayscale face image is extracted through the face recognition subunit, and the face information of the grayscale face image is compared with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is grayscale image information obtained by processing the first image signal
- the second image information is a fused image obtained by performing image fusion processing on the first image signal and the second image signal.
- the face analysis unit includes: face detection sub-unit, face recognition sub-unit and face database;
- the face information of the grayscale face image is extracted through the face recognition subunit, and the face information of the grayscale face image is compared with at least one reference face information stored in the face database to obtain the face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is a grayscale image obtained by processing the first image signal.
- the face analysis unit includes: face detection sub-unit, face recognition sub-unit and face database;
- the face information of the fused face image is extracted through the face recognition subunit, and the face information of the fused face image is compared with at least one reference face information stored in the face database to obtain a face analysis result.
- the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is color image information obtained by processing the second image signal.
- the face analysis unit includes: face detection subunit, face recognition subunit and face database;
- the face information of the fused face image is extracted through the face recognition subunit, and the face information of the fused face image is compared with at least one reference face information stored in the face database to obtain a face analysis result.
- the first image information is the first fused image information obtained by performing image fusion processing on the first image signal and the second image signal
- the second image information is the second fused image information obtained by performing image fusion processing on the first image signal and the second image signal; the face analysis unit includes: a face detection subunit, a face recognition subunit and a face database;
- the face detection subunit performs face detection on the second fused image information, outputs the detected second fused face image, and performs liveness identification on it; when it passes, face detection is performed on the first fused image information and the detected first fused face image is output;
- the face information of the first fusion face image is extracted through the face recognition subunit, and the face information of the first fusion face image is compared with at least one reference face information stored in the face database to obtain a face analysis result.
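The patent leaves the fusion algorithm itself unspecified. As one illustrative possibility, a fused image can be formed by pixel-wise weighted averaging of the near-infrared (grayscale) image and the visible image's luminance; the flat-list representation and the fixed weight below are assumptions:

```python
def fuse_images(nir_pixels, vis_pixels, weight=0.5):
    """Pixel-wise weighted fusion of a NIR grayscale image and a visible
    image's luminance, both given as flat lists of 0-255 values.
    This simple averaging scheme is a stand-in, not the patented method."""
    return [round(weight * n + (1 - weight) * v)
            for n, v in zip(nir_pixels, vis_pixels)]
```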
- the face analysis result is transmitted to the display device through the face analysis unit, and the display device displays the face analysis result.
- the face recognition device includes an image acquisition unit, an image processor, and a face analysis unit.
- the image acquisition unit includes a filter assembly, and the filter assembly includes a first filter that passes visible light and part of the near-infrared light.
- the image acquisition unit can simultaneously acquire a first image signal containing near-infrared light information (such as near-infrared light brightness information) and a second image signal containing visible light information through the first preset exposure and the second preset exposure.
- the image acquisition unit in this application can directly collect the first image signal and the second image signal, and the acquisition process is simple and effective.
- the image processor processes at least one of the first image signal and the second image signal to obtain first image information of higher quality; the face analysis unit then performs face analysis on the first image information to obtain a more accurate face analysis result, which effectively improves the accuracy of face recognition.
Claims (25)
- A face recognition device, characterized in that the face recognition device comprises: an image acquisition unit (1), an image processor (2), and a face analysis unit (3); the image acquisition unit (1) comprises a filter assembly (03), the filter assembly (03) comprises a first filter (031), and the first filter (031) passes visible light and part of the near-infrared light; the image acquisition unit (1) is configured to acquire a first image signal and a second image signal, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; the image processor (2) is configured to process at least one of the first image signal and the second image signal to obtain first image information; the face analysis unit (3) is configured to perform face analysis on the first image information to obtain a face analysis result.
- The face recognition device according to claim 1, characterized in that the image acquisition unit (1) comprises: an image sensor (01) and a light supplementer (02), the image sensor (01) being located on the light exit side of the filter assembly (03); the image sensor (01) is configured to generate and output the first image signal and the second image signal through multiple exposures, the first preset exposure and the second preset exposure being two of the multiple exposures; the light supplementer (02) comprises a first light supplement device (021), and the first light supplement device (021) is configured to perform near-infrared supplementary light.
- The face recognition device according to claim 2, characterized in that when the center wavelength of the near-infrared supplementary light performed by the first light supplement device (021) is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the waveband width of the near-infrared light passing through the first filter (031) meets a constraint condition.
- The face recognition device according to claim 3, characterized in that the center wavelength of the near-infrared supplementary light performed by the first light supplement device (021) is any wavelength within the range of 750±10 nanometers; or the center wavelength is any wavelength within the range of 780±10 nanometers; or the center wavelength is any wavelength within the range of 940±10 nanometers.
- The face recognition device according to claim 3, characterized in that the constraint condition comprises: the difference between the center wavelength of the near-infrared light passing through the first filter (031) and the center wavelength of the near-infrared supplementary light performed by the first light supplement device (021) lies within a wavelength fluctuation range of 0 to 20 nanometers; or the half-bandwidth of the near-infrared light passing through the first filter (031) is less than or equal to 50 nanometers; or a first waveband width is smaller than a second waveband width, where the first waveband width refers to the waveband width of the near-infrared light passing through the first filter (031) and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter (031); or a third waveband width is smaller than a reference waveband width, where the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio, and the reference waveband width is any waveband width within the range of 50 to 150 nanometers.
- The face recognition device according to claim 2, characterized in that the image sensor (01) comprises a plurality of light-sensing channels, each light-sensing channel being configured to sense light in at least one visible light waveband and to sense light in the near-infrared waveband.
- The face recognition device according to claim 2, characterized in that the image sensor (01) performs the multiple exposures in a global exposure manner; for any near-infrared supplementary light, the time period of the near-infrared supplementary light does not intersect with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared supplementary light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary light and the exposure time period of the first preset exposure intersect, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary light.
- The face recognition device according to claim 2, characterized in that the image sensor (01) performs the multiple exposures in a rolling shutter exposure manner; for any near-infrared supplementary light, the time period of the near-infrared supplementary light does not intersect with the exposure time period of the nearest second preset exposure; the start time of the near-infrared supplementary light is no earlier than the exposure start time of the last effective image line in the first preset exposure, and its end time is no later than the exposure end time of the first effective image line in the first preset exposure; or the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first effective image line in the first preset exposure, and its end time is no earlier than the exposure start time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure; or the start time of the near-infrared supplementary light is no earlier than the exposure end time of the last effective image line of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first effective image line in the first preset exposure, and its end time is no earlier than the exposure end time of the last effective image line in the first preset exposure and no later than the exposure start time of the first effective image line of the nearest second preset exposure after the first preset exposure.
- The face recognition device according to claim 1, characterized in that at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, and the exposure gain comprising analog gain and/or digital gain.
- The face recognition device according to claim 1, characterized in that at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter comprising one or more of exposure time, exposure gain, and aperture size, and the exposure gain comprising analog gain and/or digital gain.
- The face recognition device according to any one of claims 1 to 10, characterized in that the image processor (2) is configured to process at least one of the first image signal and the second image signal using a first processing parameter to obtain the first image information; the image processor (2) is further configured to process at least one of the first image signal and the second image signal using a second processing parameter to obtain second image information; the image processor (2) is further configured to transmit the second image information to a display device, and the display device displays the second image information.
- The face recognition device according to claim 11, characterized in that the first processing parameter and the second processing parameter are different when the first image information and the second image information are both obtained by processing the first image signal, or are both obtained by processing the second image signal, or are both obtained by processing the first image signal and the second image signal.
- The face recognition device according to claim 11, characterized in that the processing performed by the image processor (2) on at least one of the first image signal and the second image signal comprises at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
- The face recognition device according to claim 11, characterized in that the image processor (2) comprises a buffer; the buffer is configured to store at least one of the first image signal and the second image signal, or to store at least one of the first image information and the second image information.
- The face recognition device according to any one of claims 1 to 10, characterized in that the image processor (2) is further configured to adjust an exposure parameter of the image acquisition unit (1) while processing at least one of the first image signal and the second image signal.
- The face recognition device according to any one of claims 1 to 10, characterized in that the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the first image information, output the detected face image, and perform liveness identification on the face image; the face recognition subunit (312) is configured to, when the face image passes liveness identification, extract the face information of the face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the first image information, output the detected first face image, and perform liveness identification on the first face image, and to perform face detection on the second image information, output the detected second face image, and perform liveness identification on the second face image; the face recognition subunit (312) is configured to, when both the first face image and the second face image pass liveness identification, extract the face information of the first face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the first image information is grayscale image information obtained by processing the first image signal and the second image information is color image information obtained by processing the second image signal; the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image; the face recognition subunit (312) is configured to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the first image information is grayscale image information obtained by processing the first image signal and the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal; the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the fused image information, output the detected fused face image, perform liveness identification on the fused face image, and, when the fused face image passes liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image; the face recognition subunit (312) is configured to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal and the second image information is grayscale image information obtained by processing the first image signal; the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the grayscale image information, output the detected grayscale face image, perform liveness identification on the grayscale face image, and, when the grayscale face image passes liveness identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit (312) is configured to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal and the second image information is color image information obtained by processing the second image signal; the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes liveness identification, perform face detection on the fused image information and output the detected fused face image; the face recognition subunit (312) is configured to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to claim 11, characterized in that the first image information is first fused image information obtained by performing image fusion processing on the first image signal and the second image signal and the second image information is second fused image information obtained by performing image fusion processing on the first image signal and the second image signal; the face analysis unit (3) comprises: a face detection subunit (311), a face recognition subunit (312), and a face database (313); the face database (313) stores at least one piece of reference face information; the face detection subunit (311) is configured to perform face detection on the second fused image information, output the detected second fused face image, perform liveness identification on the second fused face image, and, when the second fused face image passes liveness identification, perform face detection on the first fused image information and output the detected first fused face image; the face recognition subunit (312) is configured to extract the face information of the first fused face image and compare it with the at least one piece of reference face information stored in the face database (313) to obtain the face analysis result.
- The face recognition device according to any one of claims 1 to 10, characterized in that the face analysis unit (3) is further configured to transmit the face analysis result to a display device, and the display device displays the face analysis result.
- An access control device, characterized in that the access control device comprises an access controller and the face recognition device according to any one of claims 1 to 23; the face recognition device is configured to transmit the face analysis result to the access controller; the access controller is configured to output a control signal for opening the door when the face analysis result indicates successful recognition.
- A face recognition method applied to a face recognition device, the face recognition device comprising: an image acquisition unit, an image processor, and a face analysis unit, the image acquisition unit comprising a filter assembly and the filter assembly comprising a first filter, characterized in that the method comprises: passing visible light and part of the near-infrared light through the first filter; acquiring a first image signal and a second image signal by the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, wherein near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; processing at least one of the first image signal and the second image signal by the image processor to obtain first image information; and performing face analysis on the first image information by the face analysis unit to obtain a face analysis result.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472703.7 | 2019-05-31 | ||
CN201910472703.7A CN110490042B (zh) | 2019-05-31 | 2019-05-31 | Face recognition device and access control device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020238805A1 true WO2020238805A1 (zh) | 2020-12-03 |
Family
ID=68546292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/091910 WO2020238805A1 (zh) | 2019-05-31 | 2020-05-22 | Face recognition device and access control device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110490042B (zh) |
WO (1) | WO2020238805A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490042B (zh) * | 2019-05-31 | 2022-02-11 | 杭州海康威视数字技术股份有限公司 | Face recognition device and access control device |
CN110493492B (zh) * | 2019-05-31 | 2021-02-26 | 杭州海康威视数字技术股份有限公司 | Image acquisition device and image acquisition method |
CN110493491B (zh) * | 2019-05-31 | 2021-02-26 | 杭州海康威视数字技术股份有限公司 | Image acquisition device and imaging method |
CN112989866B (zh) * | 2019-12-02 | 2024-04-09 | 浙江宇视科技有限公司 | Object recognition method and device, electronic device, and readable storage medium |
CN113128259B (zh) * | 2019-12-30 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Face recognition device and face recognition method |
CN116978104A (zh) * | 2023-08-11 | 2023-10-31 | 泰智达(北京)网络科技有限公司 | A face recognition system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101226587A (zh) * | 2007-01-15 | 2008-07-23 | 中国科学院自动化研究所 | Image acquisition device and face recognition system and method using the same |
CN101931755A (zh) * | 2010-07-06 | 2010-12-29 | 上海洪剑智能科技有限公司 | Modulated-light filtering device and filtering method for face recognition |
KR20110128574A (ko) * | 2010-05-24 | 2011-11-30 | 주식회사 다음커뮤니케이션 | Method and device for recognizing a live face in an image |
CN107220621A (zh) * | 2017-05-27 | 2017-09-29 | 北京小米移动软件有限公司 | Method and device for face recognition by a terminal |
CN108289179A (zh) * | 2018-02-08 | 2018-07-17 | 深圳泰华安全技术工程有限公司 | Method for improving the anti-interference capability of video signal acquisition |
CN110312079A (zh) * | 2018-03-20 | 2019-10-08 | 北京中科奥森科技有限公司 | Image acquisition device and application system thereof |
CN110490187A (zh) * | 2019-05-31 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | License plate recognition device and method |
CN110490041A (zh) * | 2019-05-31 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | Face image acquisition device and method |
CN110490042A (zh) * | 2019-05-31 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | Face recognition device and access control device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203193649U (zh) * | 2013-04-16 | 2013-09-11 | 北京天诚盛业科技有限公司 | Electronic signature device |
JP2016096430A (ja) * | 2014-11-13 | 2016-05-26 | パナソニックIpマネジメント株式会社 | Imaging device and imaging method |
JP6597636B2 (ja) * | 2014-12-10 | 2019-10-30 | ソニー株式会社 | Imaging device, imaging method, program, and image processing device |
CN105187727A (zh) * | 2015-06-17 | 2015-12-23 | 广州市巽腾信息科技有限公司 | Image information acquisition device, image acquisition method, and use thereof |
CN106449617B (zh) * | 2015-08-05 | 2019-04-12 | 杭州海康威视数字技术股份有限公司 | Light source device for generating light, and supplementary lighting method and device therefor |
CN105868753B (zh) * | 2016-04-05 | 2019-10-18 | 浙江宇视科技有限公司 | Method and device for recognizing the color of blue license plates |
CN108234898A (zh) * | 2018-02-07 | 2018-06-29 | 信利光电股份有限公司 | Synchronous shooting method for multiple cameras, shooting device, mobile terminal, and readable storage medium |
CN109635760A (zh) * | 2018-12-18 | 2019-04-16 | 深圳市捷顺科技实业股份有限公司 | Face recognition method and related device |
- 2019-05-31: CN application CN201910472703.7A filed; granted as CN110490042B (Active)
- 2020-05-22: PCT application PCT/CN2020/091910 filed (WO2020238805A1, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN110490042B (zh) | 2022-02-11 |
CN110490042A (zh) | 2019-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020238805A1 (zh) | Face recognition device and access control device | |
WO2020238903A1 (zh) | Face image acquisition device and method | |
WO2020238806A1 (zh) | Image acquisition device and imaging method | |
WO2020238807A1 (zh) | Image fusion device and image fusion method | |
WO2020238905A1 (zh) | Image fusion device and method | |
US8416302B2 | Low-light imaging augmented with non-intrusive lighting | |
US11657606B2 | Dynamic image capture and processing | |
CN110490044B (zh) | Face modeling device and face modeling method | |
CN107730445A (zh) | Image processing method and device, storage medium, and electronic device | |
US7084907B2 | Image-capturing device | |
CN110490187B (zh) | License plate recognition device and method | |
CN107800965B (zh) | Image processing method and device, computer-readable storage medium, and computer device | |
WO2020238970A1 (zh) | Image noise reduction device and image noise reduction method | |
CN109191403A (zh) | Image processing method and device, electronic device, and computer-readable storage medium | |
WO2021073140A1 (zh) | Monocular camera, image processing system, and image processing method | |
CN110493536B (zh) | Image acquisition device and image acquisition method | |
CN102316247A (zh) | Image processing device | |
CN110493535B (zh) | Image acquisition device and image acquisition method | |
CN110493495B (zh) | Image acquisition device and image acquisition method | |
CN107396079B (zh) | White balance adjustment method and device | |
WO2020238804A1 (zh) | Image acquisition device and image acquisition method | |
CN110493493B (zh) | Panoramic detail camera and method for acquiring image signals | |
WO2020027210A1 (ja) | Image processing device, image processing method, and image processing program | |
US11153546B2 | Low-light imaging system | |
US20120314044A1 | Imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20815121 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20815121 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.09.2022) |
|