WO2020238807A1 - Image fusion device and image fusion method - Google Patents

Image fusion device and image fusion method

Info

Publication number
WO2020238807A1
WO2020238807A1 (PCT/CN2020/091917)
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
exposure
light
pixel
Application number
PCT/CN2020/091917
Other languages
English (en)
French (fr)
Inventor
范蒙 (Fan Meng)
Original Assignee
Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd.
Priority to US 17/615,343 (published as US20220222795A1)
Priority to EP 20814163.0 (published as EP3979622A4)
Publication of WO2020238807A1

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/13: Edge detection
    • G06T 7/20: Analysis of motion
    • G06T 2207/10024: Color image
    • G06T 2207/10048: Infrared image
    • G06T 2207/10144: Varying exposure
    • G06T 2207/20192: Edge enhancement; Edge preservation
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/20224: Image subtraction
    • H04N 5/265: Mixing
    • H04N 23/54: Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N 23/56: Cameras or camera modules provided with illuminating means
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/6811: Motion detection based on the image signal
    • H04N 23/72: Combination of two or more compensation controls
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/743: Bracketing, i.e. taking a series of images with varying exposure conditions
    • H04N 23/84: Camera processing pipelines for processing colour signals
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 25/131: Arrangement of colour filter arrays including elements passing infrared wavelengths
    • H04N 25/133: Arrangement of colour filter arrays including elements passing panchromatic light, e.g. filters passing white light
    • H04N 25/134: Arrangement of colour filter arrays based on three different wavelength filter elements
    • H04N 25/531: Control of the integration time by controlling rolling shutters in CMOS SSIS
    • H04N 25/587: Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
    • H04N 25/589: Control of the dynamic range involving two or more exposures acquired sequentially with different integration times, e.g. short and long exposures
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • This application relates to the field of computer vision technology, and in particular to an image fusion device and an image fusion method.
  • A photographing device can acquire different images according to the visible light intensity in the external scene, so that the acquisition of external scene information is not restricted by the visible light intensity, and the different acquired images are then merged.
  • For example, the photographing device can acquire a visible light image when the visible light intensity in the external scene is strong, and acquire a near-infrared light image when the visible light intensity in the external scene is weak, so as to obtain information about the external scene under different visible light intensities.
  • The related art provides a method for obtaining visible light images and near-infrared light images based on a binocular camera.
  • The binocular camera includes a visible light camera and a near-infrared light camera.
  • The binocular camera obtains visible light images through the visible light camera and obtains near-infrared light images through the near-infrared light camera.
  • However, the shooting range of the visible light camera and the shooting range of the near-infrared light camera only partially overlap, and the binocular camera suffers from a complicated image-acquisition structure and a higher cost.
  • The embodiments of the present application provide an image fusion device and an image fusion method, which are used to collect two different image signals with a simple structure at reduced cost, and then to fuse the two image signals to obtain a fused image.
  • the technical solution is as follows:
  • In one aspect, an image fusion device includes an image sensor, a light supplement, a filter assembly, and an image processing unit, and the image sensor is located on the light exit side of the filter assembly;
  • the image sensor is configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image generated according to a first preset exposure, the second image signal is an image generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
  • the light supplement includes a first light supplement device, and the first light supplement device is used to perform near-infrared supplement light, wherein the near-infrared supplement light is performed at least during a partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
  • the filter assembly includes a first filter, and the first filter passes visible light and part of near-infrared light;
  • the image processing unit is configured to process the first image signal and the second image signal to obtain a fused image.
  • In another aspect, an image fusion method is applied to an image fusion device, where the image fusion device includes an image sensor, a light supplement, a filter assembly, and an image processing unit, the light supplement includes a first light supplement device, the filter assembly includes a first filter, and the image sensor is located on the light exit side of the filter assembly. The method includes:
  • Near-infrared supplement light is performed by the first light supplement device, wherein the near-infrared supplement light is performed at least during a partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure; the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor;
  • Multiple exposures are performed by the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image generated according to the first preset exposure and the second image signal is an image generated according to the second preset exposure;
  • the image processing unit processes the first image signal and the second image signal to obtain a fused image.
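  • As an illustration only, the following minimal Python sketch mirrors the method steps above; the sensor, fill-light, and fusion interfaces (sensor, nir_light, fuse) are hypothetical placeholders, not interfaces defined by this application.

```python
def capture_and_fuse(sensor, nir_light, fuse):
    """One pass of the described method: two exposures of the same image
    sensor, one with near-infrared fill light and one without, fused into
    a single image. All interfaces are illustrative placeholders."""
    # First preset exposure: NIR fill light on during (at least part of)
    # the exposure window -> first image signal (NIR-rich).
    nir_light.on()
    first_image_signal = sensor.expose()
    nir_light.off()

    # Second preset exposure: no NIR fill light -> second image signal
    # (visible-light dominated), same sensor, hence same viewpoint.
    second_image_signal = sensor.expose()

    # Image processing unit: fuse the two signals into the fused image.
    return fuse(first_image_signal, second_image_signal)
```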
  • This application uses the exposure timing of the image sensor to control the near-infrared supplement light timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplement light is performed, and the second image signal is generated through the second preset exposure when near-infrared supplement light is not performed.
  • This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost: the two different image signals are acquired through one image sensor, which makes the image fusion device more convenient and the fusion of the first image signal and the second image signal more efficient.
  • The first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, there is no misalignment between the image generated from the first image signal and the image generated from the second image signal, and the quality of the fused image is higher.
  • Fig. 1 is a schematic structural diagram of a first image fusion device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the principle of generating a first image signal by an image fusion device according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the principle of generating a second image signal by an image fusion device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the relationship between the wavelength and the relative intensity of the near-infrared supplement light performed by a first light supplement device according to an embodiment of the application.
  • FIG. 5 is a schematic diagram of the relationship between the wavelength of light passed through the first filter and the pass rate provided by an embodiment of the present application.
  • Fig. 6 is a schematic structural diagram of a second image fusion device provided by an embodiment of the present application.
  • Fig. 7 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a sensing curve of an image sensor provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a rolling shutter exposure method provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the global exposure mode provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the timing relationship between the second near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
  • FIG. 15 is a schematic diagram of the timing relationship between the third near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
  • FIG. 16 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of the timing relationship between the second near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the timing relationship between the third near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
  • Fig. 19 is a schematic diagram of an image processing unit provided by an embodiment of the present application.
  • Fig. 20 is a schematic diagram of another image processing unit provided by an embodiment of the present application.
  • FIG. 21 is a flowchart of an image fusion method provided by an embodiment of the present application.
  • 01 Image sensor
  • 02 Light supplement
  • 03 Filter component
  • 04 Image processing unit
  • 05 Lens
  • 06 Encoding compression unit
  • 07 Intelligent analysis unit
  • 021 the first light supplement device
  • 022 the second light supplement device
  • 031 the first filter
  • Fig. 1 is a schematic structural diagram of an image fusion device provided by an embodiment of the present application.
  • Referring to Fig. 1, the image fusion device includes an image sensor 01, a light supplement 02, a filter assembly 03, and an image processing unit 04, and the image sensor 01 is located on the light exit side of the filter assembly 03.
  • the image sensor 01 is used to generate and output a first image signal and a second image signal through multiple exposures.
  • the first image signal is an image generated according to the first preset exposure
  • the second image signal is an image generated according to the second preset exposure
  • the first preset exposure and the second preset exposure are among the multiple exposures. Two exposures.
  • The light supplement 02 includes a first light supplement device 021, and the first light supplement device 021 is used to perform near-infrared supplement light, wherein the near-infrared supplement light is performed at least during a partial exposure time period of the first preset exposure, and no near-infrared supplement light is performed during the exposure time period of the second preset exposure.
  • The filter assembly 03 includes a first filter 031, and the first filter 031 allows visible light and part of the near-infrared light to pass. The intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 performs near-infrared supplement light is higher than the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 is not performing near-infrared supplement light.
  • The image processing unit 04 is used to process the first image signal and the second image signal to obtain a fused image.
  • This application uses the exposure timing of the image sensor to control the near-infrared supplement light timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplement light is performed, and the second image signal is generated through the second preset exposure when near-infrared supplement light is not performed.
  • This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost: the two different image signals are acquired through one image sensor, which makes the image fusion device more convenient and the fusion of the first image signal and the second image signal more efficient.
  • The first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, there is no misalignment between the image generated from the first image signal and the image generated from the second image signal, and the quality of the fused image is higher.
  • Since the image sensor 01, the light supplement 02, and the filter assembly 03 are all used for image acquisition, they can be collectively referred to as the image acquisition unit of the image fusion device.
  • the image fusion device may further include a coding compression unit 06 and an intelligent analysis unit 07.
  • the encoding compression unit 06 is configured to perform encoding compression processing on the fused image output by the image processing unit, and output the encoded and compressed image.
  • the intelligent analysis unit 07 is used to analyze and process the fused image output by the image processing unit and output the analysis result.
  • the image acquisition unit, image processing unit, coding compression unit, and intelligent analysis unit included in the image fusion device will be described separately below.
  • the image acquisition unit includes an image sensor 01, a light supplement 02 and a filter component 03, and the image sensor 01 is located on the light output side of the filter component 03.
  • the image acquisition unit may further include a lens 05.
  • In some embodiments, the filter assembly 03 may be located between the lens 05 and the image sensor 01, and the image sensor 01 is located on the light exit side of the filter assembly 03.
  • Alternatively, the lens 05 is located between the filter assembly 03 and the image sensor 01, and the image sensor 01 is located on the light exit side of the lens 05.
  • The first filter 031 can be a filter film. When the filter assembly 03 is located between the lens 05 and the image sensor 01, the first filter 031 can be attached to the surface of the light exit side of the lens 05; or, when the lens 05 is located between the filter assembly 03 and the image sensor 01, the first filter 031 can be attached to the surface of the light incident side of the lens 05.
  • the light supplement 02 can be located in the image acquisition unit or outside the image acquisition unit.
  • the light supplement 02 can be a part of the image acquisition unit or a device independent of the image acquisition unit.
  • The light supplement 02 can be communicatively connected to the image acquisition unit, so as to ensure that the exposure timing of the image sensor 01 in the image acquisition unit and the near-infrared supplement light timing of the first light supplement device 021 included in the light supplement 02 have a certain relationship; for example, near-infrared supplement light is performed at least during a partial exposure time period of the first preset exposure, and no near-infrared supplement light is performed during the exposure time period of the second preset exposure.
  • The first light supplement device 021 is a device that can emit near-infrared light, such as a near-infrared fill light lamp. The first light supplement device 021 can perform near-infrared supplement light in a stroboscopic manner or in another similar intermittent manner, which is not limited in the embodiments of the present application.
  • In some examples, when the first light supplement device 021 performs near-infrared supplement light in a stroboscopic manner, it may be controlled manually, or controlled by a software program or a specific device, which is not limited in the embodiments of the present application.
  • The time period during which the first light supplement device 021 performs near-infrared supplement light may coincide with the exposure time period of the first preset exposure, or may be greater or less than the exposure time period of the first preset exposure, as long as near-infrared supplement light is performed during the entire or partial exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure.
  • The near-infrared supplement light is not performed during the entire exposure time period of the second preset exposure. For the global exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of the effective image of the second image signal and the end exposure time of the last row of the effective image, but it is not limited to this.
  • For example, the exposure time period of the second preset exposure may also be the exposure time period corresponding to a target image in the second image signal, where the target image is the several rows of the effective image corresponding to a target object or target area in the second image signal; the time period between the start exposure time and the end exposure time of these several rows of the effective image can be regarded as the exposure time period of the second preset exposure.
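  • As a sketch of how such a window can be delimited (illustrative helper and parameter names, not from the application): under rolling shutter exposure, the window runs from the start of exposure of the first row of interest to the end of exposure of the last.

```python
def exposure_window(row_start_ms, exposure_ms, first_row, last_row):
    """Exposure time period spanned by rows first_row..last_row under a
    rolling shutter: from the start of the first row's exposure to the end
    of the last row's exposure. For a target object or area, pass the rows
    that cover it; for the whole image, pass the first and last effective
    rows. row_start_ms[i] is the (illustrative) start time of row i in ms."""
    return row_start_ms[first_row], row_start_ms[last_row] + exposure_ms
```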
  • When the first light supplement device 021 performs near-infrared supplement light on the external scene, the near-infrared light incident on the surface of an object may be reflected by the object and thus enter the first filter 031.
  • the ambient light may include visible light and near-infrared light, and the near-infrared light in the ambient light is also reflected by the object when it is incident on the surface of the object, and thus enters the first filter 031.
  • the near-infrared light that passes through the first filter 031 when performing near-infrared light supplementation may include the near-infrared light that is reflected by the object and enters the first filter 031 when the first light supplement device 021 performs near-infrared light supplementation.
  • the near-infrared light that passes through the first filter 031 when the near-infrared light is not performed may include the near-infrared light reflected by the object and enters the first filter 031 when the first light-filling device 021 is not performing the near-infrared light.
  • Therefore, the near-infrared light passing through the first filter 031 when near-infrared supplement light is performed includes both the near-infrared light emitted by the first light supplement device 021 and reflected by the object and the near-infrared light in the ambient light reflected by the object, while the near-infrared light passing through the first filter 031 when near-infrared supplement light is not performed includes only the near-infrared light in the ambient light reflected by the object.
  • Taking as an example the structure in which the filter assembly 03 is located between the lens 05 and the image sensor 01, with the image sensor 01 on the light exit side of the filter assembly 03, the process by which the image acquisition unit collects the first image signal and the second image signal is as follows. Referring to Fig. 2, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplement light; after the ambient light in the shooting scene and the near-infrared light reflected by objects in the scene during the near-infrared supplement light pass through the lens 05 and the first filter 031, the image sensor 01 generates the first image signal through the first preset exposure. Referring to Fig. 3, when the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not perform near-infrared supplement light; after the ambient light in the shooting scene passes through the lens 05 and the first filter 031, the image sensor 01 generates the second image signal through the second preset exposure.
  • The multiple exposures may include M first preset exposures and N second preset exposures; the values of M and N and the magnitude relationship between M and N can be set according to actual requirements, for example, the values of M and N may be equal or unequal.
  • The first filter 031 can pass part of the near-infrared waveband. The near-infrared waveband passing through the first filter 031 can be part of the near-infrared waveband or all of the near-infrared waveband, which is not limited in the embodiments of the present application.
  • Since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 performs near-infrared supplement light is higher than the intensity of the near-infrared light that passes through the first filter 031 when the first light supplement device 021 is not performing near-infrared supplement light.
  • The wavelength range of the near-infrared supplement light performed by the first light supplement device 021 may be a second reference wavelength range; the second reference wavelength range may be 700 nanometers to 800 nanometers or 900 nanometers to 1000 nanometers, which can reduce interference caused by the common 850 nm near-infrared light.
  • the wavelength range of the near-infrared light incident on the first filter 031 may be a first reference wavelength range, and the first reference wavelength range is 650 nanometers to 1100 nanometers.
  • When near-infrared supplement light is performed, the near-infrared light passing through the first filter 031 may include the near-infrared light reflected by the object when the first light supplement device 021 performs near-infrared supplement light and the near-infrared light in the ambient light reflected by the object, so the intensity of the near-infrared light entering the filter assembly 03 is relatively strong at this time. However, when near-infrared supplement light is not performed, the near-infrared light passing through the first filter 031 includes only the near-infrared light in the ambient light reflected by the object into the filter assembly 03, so the intensity of the near-infrared light passing through the first filter 031 is weak at this time. Therefore, the intensity of the near-infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near-infrared light included in the second image signal generated and output according to the second preset exposure.
  • There are multiple choices for the center wavelength and/or wavelength range of the near-infrared supplement light performed by the first light supplement device 021.
  • In the embodiments of the present application, the center wavelength of the near-infrared supplement light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 can be selected, so that when the center wavelength of the near-infrared supplement light of the first light supplement device 021 is the set characteristic wavelength or falls within the set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meet the constraint conditions. This constraint is mainly used to make the center wavelength of the near-infrared light passing through the first filter 031 as accurate as possible and the band width of the near-infrared light passing through the first filter 031 as narrow as possible, so as to avoid wavelength interference caused by a too-wide near-infrared band width.
  • The center wavelength of the near-infrared supplement light performed by the first light supplement device 021 may be the average value within the wavelength range of the highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 021, or it may be understood as the wavelength at the middle position of the wavelength range in which the energy exceeds a certain threshold in that spectrum. The set characteristic wavelength or the set characteristic wavelength range can be preset.
  • For example, the center wavelength of the near-infrared supplement light of the first light supplement device 021 may be any wavelength within the wavelength range of 750 ± 10 nanometers, 780 ± 10 nanometers, or 940 ± 10 nanometers; that is, the set characteristic wavelength range may be the wavelength range of 750 ± 10 nanometers, 780 ± 10 nanometers, or 940 ± 10 nanometers.
  • Taking the case where the center wavelength of the near-infrared supplement light performed by the first light supplement device 021 is 940 nanometers as an example, the relationship between the wavelength and the relative intensity of the near-infrared supplement light is shown in Fig. 4. It can be seen from Fig. 4 that the wavelength range of the near-infrared supplement light is 900 nanometers to 1000 nanometers, and the relative intensity of the near-infrared light is highest at 940 nanometers.
  • In some embodiments, the above constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplement light of the first light supplement device 021 lies within a wavelength fluctuation range; as an example, the wavelength fluctuation range may be 0-20 nanometers.
  • The center wavelength of the near-infrared light passing through the first filter 031 can be the wavelength at the peak position in the near-infrared band of the near-infrared light pass rate curve of the first filter 031, or it can be understood as the wavelength at the middle position of the near-infrared waveband whose pass rate exceeds a certain threshold in that curve.
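  • The following illustrative Python helpers (assumed names and a sampled pass-rate curve, not defined by the application) express the threshold-midpoint reading of the filter's center wavelength and the 0-20 nm constraint check:

```python
import numpy as np

def filter_center_wavelength(wavelengths_nm, pass_rate, threshold=0.5):
    """Center wavelength of the NIR light passed by the filter, read as the
    midpoint of the waveband whose pass rate exceeds `threshold` (the peak
    position of the pass-rate curve is the alternative reading)."""
    band = wavelengths_nm[pass_rate > threshold]
    return float(band.min() + band.max()) / 2.0

def meets_center_constraint(filter_center_nm, fill_center_nm, fluctuation_nm=20.0):
    """Constraint: the two center wavelengths differ by no more than the
    wavelength fluctuation range (0-20 nm in the example above)."""
    return abs(filter_center_nm - fill_center_nm) <= fluctuation_nm
```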
  • the above constraint conditions may include: the first band width may be smaller than the second band width.
  • the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 031
  • the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 031.
  • the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
  • For example, if the waveband of the near-infrared light passing through the first filter 031 is 700 nanometers to 800 nanometers, the first band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
  • the wavelength band width of the near-infrared light passing through the first filter 031 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 031.
  • Fig. 5 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter 031 and the pass rate. The waveband of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers. The first filter 031 passes visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength between 650 nanometers and 900 nanometers and between 1000 nanometers and 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, that is, 100 nanometers; the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, that is, 350 nanometers. Since 100 nanometers is smaller than 350 nanometers, the band width of the near-infrared light passing through the first filter 031 is smaller than the band width of the near-infrared light blocked by the first filter 031.
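  • The band-width arithmetic of this example can be checked directly (values taken from the Fig. 5 description above):

```python
# Fig. 5 example: incident NIR band 650-1100 nm, filter passes 900-1000 nm.
passed_lo, passed_hi = 900, 1000
incident_lo, incident_hi = 650, 1100

first_band_width = passed_hi - passed_lo  # 1000 - 900 = 100 nm
second_band_width = (passed_lo - incident_lo) + (incident_hi - passed_hi)  # 250 + 100 = 350 nm

assert first_band_width == 100 and second_band_width == 350
assert first_band_width < second_band_width  # the stated constraint holds
```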
  • The above relationship curve is just an example. For different filters, the waveband of the near-infrared light that can pass through the filter can be different, and the waveband of the near-infrared light blocked by the filter can also be different.
  • In some embodiments, the above constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 031 is less than or equal to 50 nanometers, where the half bandwidth refers to the band width of the near-infrared light with a pass rate greater than 50%.
  • the above constraint condition may include: the third band width may be smaller than the reference band width.
  • the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
  • the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
  • the set ratio can be any ratio from 30% to 50%.
  • the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the present application.
  • the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
  • the wavelength band of the near-infrared light incident on the first filter 031 is 650 nanometers to 1100 nanometers, the setting ratio is 30%, and the reference wavelength band width is 100 nanometers. It can be seen from FIG. 5 that in the wavelength band of near-infrared light from 650 nanometers to 1100 nanometers, the band width of near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
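  • A sketch of how the half bandwidth and the third band width can be measured from a sampled pass-rate curve (illustrative function, assuming a dense, sorted wavelength grid):

```python
import numpy as np

def band_width_above(wavelengths_nm, pass_rate, ratio):
    """Width of the waveband whose pass rate exceeds `ratio`: this is the
    half bandwidth for ratio = 0.5 and the 'third band width' for a set
    ratio in the 30%-50% range."""
    band = wavelengths_nm[pass_rate > ratio]
    return 0.0 if band.size == 0 else float(band.max() - band.min())

# Constraint sketches from the text:
#   band_width_above(wl, pr, 0.50) <= 50.0   (half bandwidth <= 50 nm)
#   band_width_above(wl, pr, 0.30) < 100.0   (third band width < reference)
```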
  • Since the first light supplement device 021 provides near-infrared supplement light at least during a partial exposure time period of the first preset exposure and does not provide near-infrared supplement light during the entire exposure time period of the second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplement light during the exposure time periods of part of the exposures of the image sensor 01 and does not provide it during the exposure time periods of the other exposures. Therefore, the number of supplement light operations of the first light supplement device 021 per unit time can be lower than the number of exposures of the image sensor 01 per unit time, wherein one or more exposures are performed within the interval between two adjacent supplement light operations.
  • In some embodiments, the light supplement 02 may also include a second light supplement device 022, and the second light supplement device 022 is used for visible light supplement light.
  • If the second light supplement device 022 provides visible light supplement light at least during a partial exposure time period of the first preset exposure, that is, near-infrared supplement light and visible light supplement light are both performed at least during that partial exposure time period, the mixed color of the two lights can be distinguished from the color of the red light in a traffic light, thereby preventing the human eye from confusing the color of the near-infrared supplement light of the light supplement 02 with the color of the red light in a traffic light.
  • If the second light supplement device 022 provides visible light supplement light during the exposure time period of the second preset exposure, since the intensity of visible light during that exposure time period is not particularly high, performing visible light supplement light during that period can also increase the brightness of the visible light in the second image signal, thereby ensuring the quality of image collection.
  • In some embodiments, the second light supplement device 022 may be used to perform visible light supplement light in a constant light mode; or, the second light supplement device 022 may be used to perform visible light supplement light in a stroboscopic manner, wherein visible light supplement light exists at least during a partial exposure time period of the first preset exposure and does not exist during the entire exposure time period of the second preset exposure; or, the second light supplement device 022 may be used to perform visible light supplement light in a stroboscopic manner, wherein visible light supplement light does not exist at least during the entire exposure time period of the first preset exposure and exists during a partial exposure time period of the second preset exposure.
  • When the second light supplement device 022 performs visible light supplement light in a constant light mode, it can not only prevent human eyes from confusing the color of the near-infrared supplement light of the first light supplement device 021 with the color of the red light in a traffic light, but can also improve the brightness of the visible light in the second image signal, thereby ensuring the quality of image collection.
  • When the second light supplement device 022 performs visible light supplement light in a stroboscopic manner, it can likewise prevent human eyes from confusing the color of the near-infrared supplement light of the first light supplement device 021 with the color of the red light in a traffic light, or improve the brightness of the visible light in the second image signal to ensure the quality of image collection; it can also reduce the number of supplement light operations of the second light supplement device 022, thereby prolonging the service life of the second light supplement device 022.
  • the aforementioned multiple exposures refer to multiple exposures within one frame period, that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
  • For example, 1 second includes 25 frame periods, and the image sensor 01 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal; the first image signal and the second image signal generated within one frame period are called a group of images, so that 25 groups of images are generated within 25 frame periods.
  • the first preset exposure and the second preset exposure may be two adjacent exposures in multiple exposures in one frame period, or two non-adjacent exposures in multiple exposures in one frame period. The application embodiment does not limit this.
  • Since the first image signal is generated and output by the first preset exposure and the second image signal is generated and output by the second preset exposure, after the first image signal and the second image signal are generated, both image signals can be processed.
  • the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
  • the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
  • In some embodiments, when near-infrared supplement light is performed, the intensity of the near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light included in the first image signal generated and output accordingly will also be higher. Near-infrared light with higher brightness is not conducive to the acquisition of external scene information.
  • Moreover, the greater the exposure gain, the higher the brightness of the image output by the image sensor 01, and the smaller the exposure gain, the lower the brightness of the image output by the image sensor 01. Therefore, in order to ensure that the brightness of the near-infrared light contained in the first image signal is within an appropriate range, the exposure gain of the first preset exposure may be smaller than the exposure gain of the second preset exposure.
  • In some embodiments, the longer the exposure time, the higher the brightness included in the image obtained by the image sensor 01 and the longer the motion trail of moving objects in the external scene in the image; the shorter the exposure time, the lower the brightness included in the image obtained by the image sensor 01 and the shorter the motion trail of moving objects in the external scene in the image. Therefore, in order to ensure that the brightness of the near-infrared light contained in the first image signal is within an appropriate range and that moving objects in the external scene leave short motion trails in the first image signal, the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
  • In this way, when the first light supplement device 021 performs near-infrared supplement light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not be too high as a result of the near-infrared supplement light. In addition, the shorter exposure time makes the motion trail of moving objects in the external scene appear shorter in the first image signal, thereby facilitating the recognition of moving objects.
  • For example, the exposure time of the first preset exposure is 40 milliseconds and the exposure time of the second preset exposure is 60 milliseconds, and so on.
  • It should be noted that, in some embodiments, when the exposure gain of the first preset exposure is smaller than the exposure gain of the second preset exposure, the exposure time of the first preset exposure may be less than or equal to the exposure time of the second preset exposure; similarly, when the exposure time of the first preset exposure is less than the exposure time of the second preset exposure, the exposure gain of the first preset exposure may be less than or equal to the exposure gain of the second preset exposure.
  • the purpose of the first image signal and the second image signal may be the same.
  • In this case, the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure; if the two exposure times differ, the image with the longer exposure time will exhibit motion trailing, resulting in different definitions of the two images.
  • the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
  • Likewise, when the exposure time of the first preset exposure is equal to the exposure time of the second preset exposure, the exposure gain of the first preset exposure may be less than or equal to the exposure gain of the second preset exposure; when the exposure gain of the first preset exposure is equal to the exposure gain of the second preset exposure, the exposure time of the first preset exposure may be less than or equal to the exposure time of the second preset exposure.
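  • The exposure-parameter choices discussed above can be summarized as two parameter sets, sketched below with illustrative values only (the 40 ms / 60 ms pair is the example given in the text; the gains and aperture are assumptions):

```python
from dataclasses import dataclass

@dataclass
class ExposureParams:
    exposure_time_ms: float
    analog_gain: float   # exposure gain = analog and/or digital gain
    digital_gain: float
    aperture_f_number: float

# First preset exposure (NIR fill light on): shorter time / lower gain keeps
# the NIR brightness in range and shortens motion trails.
first_preset = ExposureParams(40.0, analog_gain=1.0, digital_gain=1.0,
                              aperture_f_number=2.0)
# Second preset exposure (no NIR fill light): longer time / higher gain.
second_preset = ExposureParams(60.0, analog_gain=2.0, digital_gain=1.0,
                               aperture_f_number=2.0)
```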
  • In some embodiments, the image sensor 01 may include multiple photosensitive channels, and each photosensitive channel may be used to sense light in at least one visible light waveband and to sense light in the near-infrared waveband. That is, each photosensitive channel can sense both light in at least one visible light waveband and light in the near-infrared waveband, which ensures that the first image signal and the second image signal have complete resolution without missing pixel values.
  • the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
  • the R photosensitive channel is used to sense the light in the red and near-infrared bands
  • the G photosensitive channel is used to sense the light in the green and near-infrared bands
  • the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
  • The Y photosensitive channel is used to sense light in the yellow and near-infrared bands.
  • Either W or C can be used to denote the photosensitive channel that senses full-waveband light; when the multiple photosensitive channels include a photosensitive channel for sensing full-waveband light, this photosensitive channel may be a W photosensitive channel or a C photosensitive channel. That is, in practical applications, the photosensitive channel used for sensing full-waveband light can be selected according to usage requirements.
  • the image sensor 01 may be an RGB sensor, RGBW sensor, or RCCB sensor, or RYYB sensor.
  • The distribution of the R, G, and B photosensitive channels in the RGB sensor can be seen in Fig. 7, the distribution of the R, G, B, and W photosensitive channels in the RGBW sensor can be seen in Fig. 8, the distribution of the R, C, and B photosensitive channels in the RCCB sensor can be seen in Fig. 9, and the distribution of the R, Y, and B photosensitive channels in the RYYB sensor can be seen in Fig. 10.
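  • For illustration, the repeating 2x2 filter-array cells of these sensor types can be written out as below. These are common layouts for each sensor type; the exact arrangements shown in Figs. 7-10 may differ.

```python
# 2x2 repeating colour-filter-array cells (one common layout per sensor type).
RGGB = [["R", "G"],
        ["G", "B"]]   # RGB (Bayer) sensor, cf. Fig. 7
RGBW = [["R", "G"],
        ["W", "B"]]   # RGBW sensor, cf. Fig. 8
RCCB = [["R", "C"],
        ["C", "B"]]   # RCCB sensor, cf. Fig. 9
RYYB = [["R", "Y"],
        ["Y", "B"]]   # RYYB sensor, cf. Fig. 10
```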
  • In other embodiments, some photosensitive channels may sense only light in the near-infrared waveband and not light in the visible light waveband, which ensures that the first image signal has complete resolution without missing pixel values.
  • In this case, the multiple photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels, where the R photosensitive channel is used to sense red light and near-infrared light, the G photosensitive channel is used to sense green light and near-infrared light, the B photosensitive channel is used to sense blue light and near-infrared light, and the IR photosensitive channel is used to sense light in the near-infrared waveband.
  • the image sensor 01 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
  • When the image sensor 01 is an RGB sensor, compared with other image sensors such as an RGBIR sensor, the RGB information collected by the RGB sensor is more complete; because some of the photosensitive channels of the RGBIR sensor cannot collect visible light, the color details of the image collected by the RGB sensor are more accurate.
  • the multiple photosensitive channels included in the image sensor 01 may correspond to multiple sensing curves.
  • For example, the R curve in Fig. 11 represents the sensing curve of the image sensor 01 for light in the red band, the G curve represents its sensing curve for light in the green band, the B curve represents its sensing curve for light in the blue band, the W (or C) curve represents its sensing curve for full-waveband light, and the NIR (near-infrared) curve represents its sensing curve for light in the near-infrared band.
  • the image sensor 01 may adopt a global exposure method or a rolling shutter exposure method.
  • the global exposure mode means that the exposure start time of each row of effective images is the same, and the exposure end time of each row of effective images is the same.
  • the global exposure mode is an exposure mode in which all rows of effective images are exposed at the same time and the exposure ends at the same time.
  • Rolling shutter exposure mode means that the exposure time periods of different rows of effective images do not completely coincide; that is, the exposure start time of a row of effective images is later than the exposure start time of the previous row of effective images, and the exposure end time of a row of effective images is later than the exposure end time of the previous row of effective images.
  • In addition, data can be output after each line of effective image is exposed; therefore, the time from the start of output of the first line of the effective image to the end of output of the last line of the effective image can be expressed as the readout time.
  • FIG. 12 is a schematic diagram of a rolling shutter exposure method. It can be seen from FIG. 12 that the effective image of the first line begins to be exposed at time T1 and ends exposure at time T3, and the effective image of the second line begins to be exposed at time T2 and ends exposure at time T4, where T2 is delayed by a period of time relative to T1 and T4 is delayed by a period of time relative to T3. In addition, the effective image of the first line ends exposure at time T3 and begins to output data, with the output ending at time T5; the effective image of line n ends exposure at time T6 and begins to output data, with the output ending at time T7. The time between T3 and T7 is then the readout time. A hedged timing sketch follows.
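As an illustration of the rolling shutter timing just described, the following minimal sketch (in Python; the uniform row-to-row start delay and all names and values are assumptions for illustration, not taken from this disclosure) computes per-row exposure windows and an approximate readout interval:

```python
# Minimal sketch of rolling shutter timing, assuming a uniform
# row-to-row start delay; all names and values are illustrative.

def rolling_shutter_windows(num_rows, exposure_time, row_delay):
    """Return (start, end) exposure times for each row of the effective image."""
    return [(r * row_delay, r * row_delay + exposure_time)
            for r in range(num_rows)]

windows = rolling_shutter_windows(num_rows=1080,
                                  exposure_time=10.0,  # ms, per-row exposure
                                  row_delay=0.02)      # ms between row starts

first_row_end = windows[0][1]    # T3: first row ends exposure, output begins
last_row_end = windows[-1][1]    # T6: last row ends exposure, output begins
# Approximates the T3..T7 readout span, ignoring the per-row output duration.
readout_time = last_row_end - first_row_end
print(f"readout interval: {readout_time:.2f} ms")
```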
  • In a possible implementation, the time period of the near-infrared fill light does not intersect the exposure time period of the nearest second preset exposure.
  • Further, the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
  • In this way, the near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the entire exposure time period of the second preset exposure, so the second preset exposure is not affected.
  • For example, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure.
  • Alternatively, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light intersects the exposure time period of the first preset exposure.
  • Alternatively, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light. FIGS. 13 to 15 are only examples; the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
  • When the image sensor 01 performs multiple exposures in the rolling shutter exposure mode, the time period of the near-infrared fill light likewise does not intersect the exposure time period of the nearest second preset exposure.
  • Further, the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • For example, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • Alternatively, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • Alternatively, the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • FIGS. 16 to 18 are only examples; the ordering of the first preset exposure and the second preset exposure is not limited to these examples. A hedged sketch of checking such fill light timing constraints follows.
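To make the interval relationships above concrete, the following minimal sketch (illustrative names only; the interval representation is an assumption) checks that a fill light time period avoids the nearest second preset exposure while overlapping the first preset exposure:

```python
# Hedged sketch: intervals are (start, end) tuples in any common time unit.

def intervals_overlap(a, b):
    """True if intervals a and b intersect."""
    return a[0] < b[1] and b[0] < a[1]

def fill_light_timing_ok(fill, first_preset, second_preset):
    """Fill light must not intersect the nearest second preset exposure and
    must overlap at least part of the first preset exposure."""
    return (not intervals_overlap(fill, second_preset)
            and intervals_overlap(fill, first_preset))

# Example: fill light strictly inside the first preset exposure window.
print(fill_light_timing_ok(fill=(2.0, 8.0),
                           first_preset=(0.0, 10.0),
                           second_preset=(12.0, 22.0)))  # True
```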
  • the multiple exposures may include odd-numbered exposures and even-numbered exposures.
  • the first preset exposure and the second preset exposure may include but are not limited to the following methods:
  • the first preset exposure is one exposure in an odd number of exposures
  • the second preset exposure is one exposure in an even number of exposures.
  • In other words, the multiple exposures may include the first preset exposure and the second preset exposure arranged in a parity order.
  • For example, odd-numbered exposures such as the first exposure, third exposure, and fifth exposure in the multiple exposures are all the first preset exposure, and even-numbered exposures such as the second exposure, fourth exposure, and sixth exposure are all the second preset exposure.
  • the first preset exposure is one exposure in an even number of exposures
  • the second preset exposure is one exposure in an odd number of exposures.
  • In other words, the multiple exposures may include the first preset exposure and the second preset exposure arranged in a parity order.
  • For example, odd-numbered exposures such as the first exposure, third exposure, and fifth exposure in the multiple exposures are all the second preset exposure, and even-numbered exposures such as the second exposure, fourth exposure, and sixth exposure are all the first preset exposure.
  • the first preset exposure is one exposure in the specified odd number of exposures
  • the second preset exposure is one exposure among the other exposures except the specified odd-numbered exposures; that is, the second preset exposure may be an odd-numbered exposure or an even-numbered exposure in the multiple exposures.
  • the first preset exposure is one exposure in the specified even number of exposures
  • the second preset exposure is one exposure among the other exposures except the specified even-numbered exposures; that is, the second preset exposure may be an odd-numbered exposure or an even-numbered exposure in the multiple exposures.
  • the first preset exposure is one exposure in the first exposure sequence
  • the second preset exposure is one exposure in the second exposure sequence.
  • the first preset exposure is one exposure in the second exposure sequence
  • the second preset exposure is one exposure in the first exposure sequence
  • the aforementioned multiple exposure includes multiple exposure sequences
  • the first exposure sequence and the second exposure sequence are the same exposure sequence or two different exposure sequences in the multiple exposure sequences
  • each exposure sequence includes N exposures
  • the N exposures include 1 first preset exposure and N-1 second preset exposures, or the N exposures include 1 second preset exposure and N-1 first preset exposures, where N is a positive integer greater than 2.
  • each exposure sequence includes 3 exposures, and these 3 exposures may include 1 first preset exposure and 2 second preset exposures.
  • The first exposure of each exposure sequence may be the first preset exposure, and the second and third exposures are the second preset exposure. That is, each exposure sequence can be expressed as: the first preset exposure, the second preset exposure, the second preset exposure.
  • Alternatively, these 3 exposures may include 1 second preset exposure and 2 first preset exposures, so that the first exposure of each exposure sequence may be the second preset exposure and the second and third exposures are the first preset exposure. That is, each exposure sequence can be expressed as: the second preset exposure, the first preset exposure, the first preset exposure. A labelling sketch follows.
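The following minimal sketch (illustrative names; the cyclic-pattern representation is an assumption) labels a stream of exposures according to such a per-sequence pattern:

```python
# Hedged sketch: label each exposure by cycling a per-sequence pattern.

def label_exposures(num_exposures, pattern=("first", "second", "second")):
    """Cycle a length-N per-sequence pattern over the whole exposure stream."""
    return [pattern[i % len(pattern)] for i in range(num_exposures)]

print(label_exposures(6))
# ['first', 'second', 'second', 'first', 'second', 'second']
```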
  • the filter assembly 03 further includes a second filter and a switching component, and both the first filter 031 and the second filter are connected to the switching component.
  • the switching component is used to switch the second filter to the light entrance side of the image sensor 01. After the second filter is switched to the light entrance side of the image sensor 01, the second filter allows light in the visible light band to pass through, Blocking light in the near-infrared light band, the image sensor 01 is used to generate and output a third image signal through exposure.
  • The switching component is used to switch the second filter to the light incident side of the image sensor 01; this can also be understood as the second filter replacing the position of the first filter 031 on the light incident side of the image sensor 01.
  • the first light supplement device 021 may be in the off state or in the on state.
  • The first light supplement device 021 can be used for stroboscopic supplementary light, so that the image sensor 01 generates and outputs a first image signal containing near-infrared brightness information and a second image signal containing visible light brightness information.
  • Since the first image signal and the second image signal are both acquired by the same image sensor 01, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so that complete information of the external scene can be obtained through the first image signal and the second image signal.
  • When the intensity of visible light is strong, for example during the daytime, the proportion of near-infrared light is relatively strong, and the color reproduction of the collected image is not good. In this case, the third image signal containing visible light brightness information can be generated and output by the image sensor 01, so images with good color reproduction can be collected even during the day. Thus, regardless of the intensity of visible light, and regardless of whether it is day or night, the true color information of the external scene can be obtained efficiently and simply. This improves the flexibility of the image acquisition unit in use and allows it to be easily compatible with other image acquisition devices.
  • This application uses the exposure timing of the image sensor to control the near-infrared supplementary light timing of the light supplement device, so that near-infrared supplementary light is performed during the first preset exposure to generate the first image signal, and no near-infrared supplementary light is performed during the second preset exposure to generate the second image signal.
  • This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost; that is, two different images can be acquired through one image sensor, which makes acquiring the first image signal and the second image signal easier and more efficient.
  • Moreover, the first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, and there is no misalignment between the image generated by the first image signal and the image generated by the second image signal caused by a difference in viewpoints.
  • the image processing unit 04 may process the first image signal and the second image signal to obtain a fused image.
  • each pixel of the sensor can only capture one color value, and the other two color values are missing.
  • the image sensor 01 generates and outputs a mosaic image.
  • Since the images output by the image sensor 01 usually contain noise signals, when the image processing unit 04 receives the first image signal and the second image signal output by the image sensor 01, it can first preprocess the first image signal and the second image signal, and then fuse the preprocessed images.
  • the image processing unit 04 may include a preprocessing unit 041 and an image fusion unit 042.
  • the preprocessing unit 041 is used to preprocess the first image signal and the second image signal, and output the first preprocessed image and the second preprocessed image.
  • the image fusion unit 042 is used to fuse the first preprocessed image and the second preprocessed image to obtain a fused image.
  • the preprocessing unit 041 may include an image repair unit 0411 and an image noise reduction unit 0412.
  • the image restoration unit 0411 is configured to perform restoration processing on the first image signal to obtain a first restored image, and perform restoration processing on the second image signal to obtain a second restored image.
  • the first repaired image is a grayscale image
  • the second repaired image is a color image.
  • the image noise reduction unit 0412 is configured to perform noise reduction processing on the first repaired image and the second repaired image to obtain the first preprocessed image and the second preprocessed image.
  • the image restoration unit 0411 may perform demosaicing processing on the first image signal and the second image signal, respectively, to obtain the first repaired image and the second repaired image.
  • When the image restoration unit 0411 performs demosaicing processing on the first image signal and the second image signal, it can use methods such as bilinear interpolation or adaptive interpolation, which are not detailed here; a hedged bilinear sketch follows below.
  • In addition to demosaicing processing, the image repair unit 0411 can also perform repair processing such as black level correction, dead pixel correction, and gamma correction on the first image signal and the second image signal.
  • The specific repair processing is not limited in this embodiment of the application.
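As a concrete illustration of the bilinear interpolation mentioned above, the following sketch demosaics an RGGB Bayer mosaic (the RGGB layout, function names, and kernels are assumptions for illustration; the disclosure does not fix a particular demosaicing method):

```python
import numpy as np
from scipy.signal import convolve2d

# Hedged sketch of bilinear demosaicing for an RGGB Bayer mosaic.

def demosaic_bilinear(raw):
    """raw: HxW mosaic (RGGB). Returns an HxWx3 RGB estimate."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.50, 1.0, 0.50],
                     [0.25, 0.5, 0.25]])  # fills R/B (1-in-4 samples)
    k_g = np.array([[0.00, 0.25, 0.00],
                    [0.25, 1.00, 0.25],
                    [0.00, 0.25, 0.00]])  # fills G (1-in-2 samples)

    def interp(mask, kernel):
        return convolve2d(raw * mask, kernel, mode="same", boundary="symm")

    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])
```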
  • In a possible implementation, the image noise reduction unit 0412 may perform joint noise reduction processing on the first repaired image and the second repaired image to obtain the first preprocessed image and the second preprocessed image.
  • the image noise reduction unit 0412 may include a temporal noise reduction unit.
  • the temporal noise reduction unit is used to perform motion estimation based on the first restored image and the second restored image to obtain a motion estimation result, and perform temporal filtering processing on the first restored image according to the motion estimation result to obtain the first preprocessed image, Perform temporal filtering processing on the second restored image according to the motion estimation result to obtain the second preprocessed image.
  • the temporal noise reduction unit may include a motion estimation unit and a temporal filtering unit.
  • The motion estimation unit may be used to generate a first frame difference image according to the first repaired image and the first historical noise reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, where the first historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of images preceding the first repaired image.
  • The temporal filtering unit is used to perform temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and to perform temporal filtering processing on the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image.
  • In a possible implementation, the motion estimation unit may perform difference processing on the pixel value of each pixel in the first repaired image and the pixel value of the corresponding pixel in the first historical noise reduction image to obtain an original frame difference image, and use the original frame difference image as the first frame difference image.
  • In another possible implementation, the motion estimation unit may perform difference processing on the pixel value of each pixel in the first repaired image and the pixel value of the corresponding pixel in the first historical noise reduction image to obtain the original frame difference image, and then process the original frame difference image to obtain the first frame difference image.
  • processing the original frame difference image may refer to performing spatial smoothing processing or block quantization processing on the original frame difference image.
  • the motion estimation unit may determine the first temporal filtering intensity of each pixel according to each pixel in the first frame difference image and multiple first set frame difference thresholds.
  • each pixel in the first frame difference image corresponds to a first set frame difference threshold
  • the first set frame difference threshold corresponding to each pixel point may be the same or different.
  • the first set frame difference threshold corresponding to each pixel can be set by an external user.
  • In another possible implementation, the motion estimation unit may perform difference processing between the previous frame image of the first repaired image and the first historical noise reduction image to obtain a first noise intensity image, and determine, according to the noise intensity of each pixel in the first noise intensity image, the first set frame difference threshold of the pixel at the corresponding position in the first frame difference image.
  • the first set frame difference threshold corresponding to each pixel point may also be determined in other ways, which is not limited in the embodiment of the present application.
  • For any pixel in the first frame difference image, the motion estimation unit can determine the first temporal filtering strength of the corresponding pixel according to the frame difference of the pixel and the first set frame difference threshold corresponding to the pixel by the following formula (1), where the frame difference of each pixel in the first frame difference image is the pixel value of that pixel.
  • where (x, y) is the position of the pixel in the image; α_nir(x, y) refers to the first temporal filtering strength of the pixel with coordinates (x, y); dif_nir(x, y) refers to the frame difference of the pixel; and dif_thr_nir(x, y) refers to the first set frame difference threshold corresponding to the pixel.
  • For any pixel, the frame difference of the pixel being smaller than the first set frame difference threshold means that the pixel tends to be stationary, that is, the motion level corresponding to the pixel is smaller. From the above formula (1), it can be seen that, for any pixel, the smaller the frame difference of the pixel relative to the first set frame difference threshold, the greater the first temporal filtering strength of the pixel at the corresponding position.
  • The motion level is used to indicate the intensity of motion; the higher the motion level, the more intense the motion.
  • The value of the first temporal filtering strength can be between 0 and 1; a hedged sketch follows.
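Formula (1) is not reproduced in this text, so the following sketch uses one plausible monotone mapping consistent with the stated behavior (strength in [0, 1], larger when the frame difference is small relative to its per-pixel threshold); the mapping itself is an assumption, not the disclosure's exact formula:

```python
import numpy as np

# Hedged sketch of a formula (1)-style first temporal filtering strength.

def temporal_filter_strength(repaired, historical, dif_thr):
    """alpha ~ 1 for static pixels, ~ 0 for strongly moving pixels."""
    dif = np.abs(repaired.astype(np.float32) - historical.astype(np.float32))
    # Assumed mapping: linear falloff of strength with frame difference.
    return np.clip(1.0 - dif / np.maximum(dif_thr, 1e-6), 0.0, 1.0)
```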
  • In one possible implementation, the temporal filtering unit may perform temporal filtering processing on the first repaired image and the second repaired image directly according to the first temporal filtering strength, thereby obtaining the first preprocessed image and the second preprocessed image.
  • Since the first repaired image is a near-infrared light image with a high signal-to-noise ratio, the signal-to-noise ratio of the first repaired image is significantly better than that of the second repaired image.
  • Therefore, performing temporal filtering processing on the second repaired image with the first temporal filtering strength of each pixel in the first repaired image can more accurately distinguish noise from effective information in the image, thereby avoiding the problems of losing image detail information and image smearing in the denoised image.
  • In another possible implementation, the motion estimation unit may generate at least one first frame difference image according to the first repaired image and at least one first historical noise reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the at least one first frame difference image and the multiple first set frame difference thresholds corresponding to each frame difference image.
  • The at least one first historical noise reduction image refers to images obtained by performing noise reduction on the first N frames of images preceding the first repaired image.
  • Specifically, the motion estimation unit may determine the corresponding first frame difference image according to each first historical noise reduction image and the first repaired image with reference to the aforementioned related implementations.
  • After that, the motion estimation unit may determine, with reference to the aforementioned related implementations, the temporal filtering strength of each pixel corresponding to each first frame difference image according to each first frame difference image and the multiple first set frame difference thresholds corresponding to that frame difference image.
  • the motion estimation unit may fuse the temporal filtering strengths of the corresponding pixels in the first frame difference images to obtain the first temporal filtering strengths corresponding to each pixel.
  • the corresponding pixels in each first frame difference image refer to pixels located at the same position.
  • Alternatively, for any pixel, the motion estimation unit may select, from the at least one temporal filtering strength corresponding to the pixel, the temporal filtering strength representing the highest motion level, and then use the selected temporal filtering strength as the first temporal filtering strength of the pixel at the corresponding position in the first repaired image.
  • In another possible implementation, the motion estimation unit may generate the first frame difference image according to the first repaired image and the first historical noise reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, where the first historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of images preceding the first repaired image.
  • The motion estimation unit is also used to generate a second frame difference image according to the second repaired image and the second historical noise reduction image, and determine the second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, where the second historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of images preceding the second repaired image.
  • The motion estimation unit is also used to determine the joint temporal filtering strength of each pixel according to the first temporal filtering strength of each pixel in the first repaired image and the second temporal filtering strength of each pixel in the second repaired image.
  • The temporal filtering unit is used to perform temporal filtering processing on the first repaired image according to the first temporal filtering strength or the joint temporal filtering strength of each pixel to obtain the first preprocessed image, and to perform temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image.
  • That is, the motion estimation unit can not only determine the first temporal filtering strength of each pixel in the first repaired image through the implementation described above, but can also determine the second temporal filtering strength of each pixel in the second repaired image.
  • For the second temporal filtering strength, the motion estimation unit may first perform difference processing on the pixel value of each pixel in the second repaired image and the pixel value of the corresponding pixel in the second historical noise reduction image to obtain the second frame difference image.
  • the first repaired image and the second repaired image are aligned, and the second repaired image and the second historical noise reduction image are also aligned.
  • the motion estimation unit may determine the second temporal filtering strength of each pixel according to each pixel in the second frame difference image and multiple second set frame difference thresholds.
  • Each pixel in the second frame difference image corresponds to a second set frame difference threshold; that is, the multiple second set frame difference thresholds correspond one-to-one to the pixels in the second frame difference image.
  • the second set frame difference threshold corresponding to each pixel may be the same or different. In a possible implementation manner, the second set frame difference threshold corresponding to each pixel point can be set by an external user.
  • In another possible implementation, the motion estimation unit may perform difference processing between the previous frame image of the second repaired image and the second historical noise reduction image to obtain a second noise intensity image, and determine, according to the noise intensity of each pixel in the second noise intensity image, the second set frame difference threshold of the pixel at the corresponding position in the second frame difference image.
  • the second set frame difference threshold corresponding to each pixel point can also be determined in other ways, which is not limited in the embodiment of the present application.
  • For any pixel in the second frame difference image, the motion estimation unit can determine the second temporal filtering strength of the corresponding pixel according to the frame difference of the pixel and the second set frame difference threshold corresponding to the pixel by the following formula (2), where the frame difference of each pixel in the second frame difference image is the pixel value of that pixel.
  • where α_vis(x, y) refers to the second temporal filtering strength of the pixel with coordinates (x, y); dif_vis(x, y) represents the frame difference of the pixel; and dif_thr_vis(x, y) represents the second set frame difference threshold corresponding to the pixel.
  • The smaller the motion level of the pixel, the larger the value of the corresponding second temporal filtering strength.
  • the value of the second time domain filtering strength can be between 0 and 1.
  • After determining the first temporal filtering strength and the second temporal filtering strength of each pixel, the motion estimation unit may weight the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain the joint temporal filtering strength of each pixel.
  • The determined joint temporal filtering strength of each pixel is the motion estimation result for the first repaired image and the second repaired image.
  • It should be noted that the first temporal filtering strength and the second temporal filtering strength of each pixel actually refer to the filtering strengths of the pixels at corresponding positions in the first repaired image and the second repaired image, respectively.
  • the motion estimation unit may weight the first time domain filtering strength and the second time domain filtering strength of each pixel by the following formula (3), thereby obtaining the joint time domain filtering strength of each pixel .
  • where Ω refers to the neighborhood range centered on the pixel with coordinates (x, y), that is, the local image area centered on the pixel with coordinates (x, y); (x+i, y+j) refers to the pixel coordinates in the local image area; α_nir(x+i, y+j) refers to the first temporal filtering strength in the local image area centered on the pixel with coordinates (x, y); α_vis(x+i, y+j) refers to the second temporal filtering strength in the same local image area; and α_fus(x, y) refers to the joint temporal filtering strength of the pixel with coordinates (x, y).
  • Since the first temporal filtering strength can be used to indicate the motion level of a pixel in the first repaired image, and the second temporal filtering strength can be used to indicate the motion level of the pixel in the second repaired image, the joint temporal filtering strength determined in the above manner fuses the first temporal filtering strength and the second temporal filtering strength; that is, the joint temporal filtering strength takes into account both the motion trend the pixel exhibits in the first repaired image and the motion trend it exhibits in the second repaired image.
  • In this way, the joint temporal filtering strength can more accurately characterize the motion trend of the pixel. When subsequent temporal filtering is performed with the joint temporal filtering strength, image noise can be removed more effectively, and problems such as image smearing caused by misjudging the motion level of pixels can be alleviated. A hedged sketch of one such combination follows.
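Formula (3) is likewise not reproduced in this text; the sketch below combines the two strengths over a local neighborhood with a weighted average, which is only one plausible form of the weighting described above (the window size, the weight w, and the function names are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hedged sketch of a formula (3)-style joint temporal filtering strength.

def joint_temporal_strength(alpha_nir, alpha_vis, window=5, w=0.5):
    """Weighted combination of the two strengths over a local window."""
    local_nir = uniform_filter(alpha_nir, size=window)  # neighborhood mean
    local_vis = uniform_filter(alpha_vis, size=window)
    return w * local_nir + (1.0 - w) * local_vis
```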
  • the time-domain filtering unit may perform time-domain filtering processing on the first repaired image and the second repaired image respectively according to the joint time-domain filtering strength, thereby obtaining the first preprocessed image And the second preprocessed image.
  • For example, the temporal filtering unit may perform temporal weighting processing on each pixel in the first repaired image and the first historical noise reduction image according to the joint temporal filtering strength of each pixel by the following formula (4) to obtain the first preprocessed image, and perform temporal weighting processing on each pixel in the second repaired image and the second historical noise reduction image by the following formula (5) to obtain the second preprocessed image.
  • where α_fus(x, y) refers to the joint temporal filtering strength of the pixel with coordinates (x, y); I_nir(x, y, t) refers to the pixel with coordinates (x, y) in the first repaired image; and I_vis(x, y, t) refers to the pixel with coordinates (x, y) in the second repaired image.
  • In another example, the temporal filtering unit may also perform temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image (a near-infrared light image), and perform temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image (a visible light image); that is, each image is filtered with its corresponding temporal filtering strength. A hedged blending sketch follows.
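The temporal weighting of formulas (4) and (5) is not reproduced here; the following sketch uses one plausible blend between the current repaired image and its historical noise-reduced image (the blend direction, with larger strength meaning more weight on history, is an assumption):

```python
import numpy as np

# Hedged sketch of formula (4)/(5)-style temporal weighting.

def temporal_filter(current, history, alpha):
    """Blend current frame with its historical noise-reduced frame.
    alpha ~ 1 (static pixel): lean on history to suppress noise;
    alpha ~ 0 (moving pixel): follow the current frame to avoid smearing."""
    return (alpha * history.astype(np.float32)
            + (1.0 - alpha) * current.astype(np.float32))
```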
  • the image noise reduction unit 0412 may include a spatial noise reduction unit.
  • the spatial noise reduction unit is used to perform edge estimation based on the first restored image and the second restored image to obtain an edge estimation result, and perform spatial filtering processing on the first restored image according to the edge estimation result to obtain a first preprocessed image, according to The edge estimation result performs spatial filtering processing on the second repaired image to obtain the second preprocessed image.
  • the spatial noise reduction unit may include an edge estimation unit and a spatial filtering unit.
  • In one possible implementation, the edge estimation unit is used to determine the first spatial filtering strength of each pixel in the first repaired image; the spatial filtering unit is used to perform spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and to perform spatial filtering processing on the second repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
  • the edge estimation unit may determine the first spatial filtering intensity of the corresponding pixel according to the difference between each pixel of the first repaired image and other pixels in its neighborhood.
  • the edge estimation unit can generate the first spatial filtering intensity of each pixel through the following formula (6).
  • where Ω refers to a neighborhood range centered on the pixel with coordinates (x, y), that is, a local image area centered on the pixel with coordinates (x, y); (x+i, y+j) refers to the pixel coordinates in the local image area; img_nir(x, y) refers to the pixel value of the pixel with coordinates (x, y) in the first repaired image; σ1 and σ2 refer to the standard deviations of the Gaussian distributions; and β_nir(x+i, y+j) refers to the first spatial filtering strength determined according to the difference between the pixel with coordinates (x, y) and the pixel (x+i, y+j) in the local image area.
  • Generally, the neighborhood of each pixel includes multiple pixels. In this way, for any pixel, the difference between each pixel in the local image area of that pixel and the pixel itself can be determined, thereby obtaining multiple first spatial filtering strengths corresponding to the pixel. After determining the multiple first spatial filtering strengths of each pixel, the spatial filtering unit may perform spatial filtering processing on the first repaired image and the second repaired image respectively according to the multiple first spatial filtering strengths of each pixel, thereby obtaining the first preprocessed image and the second preprocessed image. A hedged sketch of such strengths follows.
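The exact form of formula (6) is not reproduced in this text; the sketch below computes bilateral-style neighbor weights that combine spatial distance (standard deviation σ1) and pixel-value difference (standard deviation σ2), which matches the symbols described above but whose precise form is an assumption:

```python
import numpy as np

# Hedged sketch of formula (6)-style spatial filtering strengths.

def spatial_strengths(img, x, y, radius=2, sigma1=2.0, sigma2=10.0):
    """Weights of each neighbor (x+i, y+j) around the pixel (x, y)."""
    h, w = img.shape
    weights = {}
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            xi, yj = x + i, y + j
            if 0 <= xi < h and 0 <= yj < w:
                d2 = (i * i + j * j) / (2.0 * sigma1 ** 2)          # distance
                v2 = ((float(img[x, y]) - float(img[xi, yj])) ** 2
                      / (2.0 * sigma2 ** 2))                        # value gap
                weights[(xi, yj)] = np.exp(-(d2 + v2))
    return weights
```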
  • In another possible implementation, the edge estimation unit is used to determine the first spatial filtering strength of each pixel in the first repaired image and the second spatial filtering strength of each pixel in the second repaired image; to convolve the first repaired image to obtain a first texture image and convolve the second repaired image to obtain a second texture image; and to determine the joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first texture image, and the second texture image.
  • The spatial filtering unit is used to perform spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and to perform spatial filtering processing on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
  • That is, the edge estimation unit can not only determine the first spatial filtering strength of each pixel in the first repaired image through the implementation described above, but can also determine the second spatial filtering strength of each pixel in the second repaired image.
  • For any pixel, the edge estimation unit may determine the second spatial filtering strength of the corresponding pixel according to the difference between each pixel of the second repaired image and the other pixels in its neighborhood. The edge estimation unit can generate the second spatial filtering strength of each pixel through the following formula (7).
  • where Ω refers to a neighborhood range centered on the pixel with coordinates (x, y), that is, a local image area centered on the pixel with coordinates (x, y); (x+i, y+j) refers to the pixel coordinates in the local image area; img_vis(x, y) refers to the pixel value of the pixel with coordinates (x, y) in the second repaired image; σ1 and σ2 refer to the standard deviations of the Gaussian distributions; and β_vis(x+i, y+j) refers to the second spatial filtering strength determined according to the difference between the pixel with coordinates (x, y) and the pixel (x+i, y+j) in the local image area.
  • Since the neighborhood of each pixel includes multiple pixels, multiple second spatial filtering strengths of each pixel can be obtained through the method described above.
  • After determining the multiple first spatial filtering strengths and the multiple second spatial filtering strengths of each pixel, the edge estimation unit can use the Sobel edge detection operator to perform convolution processing on the first repaired image and the second repaired image to obtain the first texture image and the second texture image, and use them as weights to weight the multiple first spatial filtering strengths and the multiple second spatial filtering strengths of each pixel, thereby generating the multiple joint spatial filtering strengths of each pixel.
  • The Sobel edge detection operator is shown in the following equation (8); it consists of the standard 3×3 horizontal and vertical Sobel kernels, sobel_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and sobel_V = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]].
  • the edge estimation unit can generate the joint spatial filtering strength through the following formula (9).
  • where sobel_H refers to the Sobel edge detection operator in the horizontal direction; sobel_V refers to the Sobel edge detection operator in the vertical direction; β_fus(x+i, y+j) refers to the joint spatial filtering strength of any pixel in the neighborhood Ω of the pixel with coordinates (x, y); and the remaining two terms in formula (9) refer to the texture information of the pixel with coordinates (x, y) in the first texture image and in the second texture image, respectively.
  • At edges, where texture information is strong, the joint spatial filtering strength is relatively smaller, while away from edges it is relatively larger. That is, in the embodiment of the present application, when performing spatial filtering, a weaker spatial filtering strength is used at edges and a stronger spatial filtering strength is used at non-edges, thereby improving the noise reduction effect. A hedged texture-extraction sketch follows.
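The following sketch extracts texture images with the Sobel operator named above, assuming the standard 3×3 kernels from equation (8) and gradient magnitude as the texture measure (the magnitude choice and names are assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

# Hedged sketch of texture images for formula (9) via the Sobel operator.

SOBEL_H = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_V = SOBEL_H.T  # [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def texture_image(img):
    """Gradient magnitude as a simple per-pixel texture measure."""
    gx = convolve2d(img, SOBEL_H, mode="same", boundary="symm")
    gy = convolve2d(img, SOBEL_V, mode="same", boundary="symm")
    return np.abs(gx) + np.abs(gy)
```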
  • the spatial filtering unit may perform spatial weighting processing on the first repaired image and the second repaired image respectively according to the joint spatial filtering strength, so as to obtain the first preprocessed image and the second preprocessed image.
  • the spatial filtering unit may perform spatial filtering processing on the first repaired image according to the first spatial filtering strength of each pixel.
  • For example, the spatial filtering unit may perform spatial filtering processing on each pixel in the first repaired image according to the first spatial filtering strength of each pixel through the following formula (10), thereby obtaining the first preprocessed image, and perform weighting processing on each pixel in the second repaired image through the following formula (11), thereby obtaining the second preprocessed image.
  • where I_nir(x+i, y+j) refers to the pixels within the neighborhood range of the pixel with coordinates (x, y) in the first repaired image; β_nir(x+i, y+j) is the first spatial filtering strength of the pixels within that neighborhood range; Ω refers to the neighborhood range centered on the pixel with coordinates (x, y); I_vis(x+i, y+j) refers to the pixels within the neighborhood range of the pixel with coordinates (x, y) in the second repaired image; β_fus(x+i, y+j) is the joint spatial filtering strength of the pixels within that neighborhood range; and the left-hand sides of formulas (10) and (11) are the pixels with coordinates (x, y) in the first preprocessed image and the second preprocessed image, respectively.
  • Further, the image noise reduction unit 0412 may also include both the above-mentioned temporal noise reduction unit and spatial noise reduction unit.
  • In that case, the temporal noise reduction unit may first perform temporal filtering on the first repaired image and the second repaired image to obtain a first temporal noise reduction image and a second temporal noise reduction image, after which the spatial noise reduction unit performs spatial filtering on the obtained first temporal noise reduction image and second temporal noise reduction image, thereby obtaining the first preprocessed image and the second preprocessed image.
  • Alternatively, the spatial noise reduction unit may first perform spatial filtering on the first repaired image and the second repaired image to obtain a first spatial noise reduction image and a second spatial noise reduction image, after which the temporal noise reduction unit performs temporal filtering on the obtained first spatial noise reduction image and second spatial noise reduction image, thereby obtaining the first preprocessed image and the second preprocessed image.
  • the image fusion unit 042 may fuse the first preprocessed image and the second preprocessed image to obtain a fused image.
  • the image fusion unit 042 may include a first fusion unit.
  • the first fusion unit is used to merge the first pre-processed image and the second pre-processed image through the first fusion process to obtain a fused image.
  • the possible implementation of the first fusion processing may include the following:
  • In a first possible implementation, the first fusion unit can traverse each pixel position and fuse the RGB color vectors of the first preprocessed image and the second preprocessed image at the same pixel position according to the preset fusion weight of each pixel position.
  • For example, the first fusion unit may generate the fused image through the following model (12), where img refers to the fused image, img1 refers to the first preprocessed image, img2 refers to the second preprocessed image, and w refers to the fusion weight. It should be noted that the value range of the fusion weight w is (0, 1); for example, the fusion weight can be 0.5. A hedged sketch follows.
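Model (12) is not reproduced in this text; a per-pixel weighted average of the two preprocessed images is the natural reading of the definitions above (which image w multiplies is an assumption; with w = 0.5 the two readings coincide):

```python
import numpy as np

# Hedged sketch of a model (12)-style weighted fusion.

def fuse_weighted(img1, img2, w=0.5):
    """Per-pixel weighted average of the two preprocessed images."""
    return (w * img1.astype(np.float32)
            + (1.0 - w) * img2.astype(np.float32))
```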
  • the fusion weight may also be obtained by processing the first preprocessed image and the second preprocessed image.
  • the first fusion unit may perform edge extraction on the first preprocessed image to obtain the first edge image.
  • Edge extraction is performed on the second preprocessed image to obtain a second edge image.
  • the fusion weight of each pixel position is determined according to the first edge image and the second edge image.
  • the first fusion unit may process the brightness signal in the second pre-processed image through a low-pass filter to obtain a low-frequency signal.
  • the first preprocessed image is processed by a high-pass filter to obtain a high-frequency signal.
  • the low-frequency signal and the high-frequency signal are added to obtain a fused brightness signal.
  • The fused luminance signal and the chrominance signal in the second preprocessed image are synthesized to obtain the fused image; a hedged sketch of this frequency-split fusion follows.
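The following sketch illustrates the low-pass/high-pass fusion just described: low-frequency luminance from the second preprocessed image plus high-frequency detail from the first (the Gaussian filter choice and sigma are assumptions; the disclosure does not fix the filters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch of the low-pass / high-pass luminance fusion.

def fuse_frequency_split(luma_nir, luma_vis, sigma=3.0):
    nir = luma_nir.astype(np.float32)
    vis = luma_vis.astype(np.float32)
    low = gaussian_filter(vis, sigma)          # low-frequency signal
    high = nir - gaussian_filter(nir, sigma)   # high-frequency signal
    # Fused luminance; recombine with the chrominance of the second image.
    return low + high
```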
  • the first fusion unit may perform color space conversion on the first preprocessed image, thereby separating the first brightness image and the first color image.
  • the color space conversion is performed on the second preprocessed image, thereby separating the second brightness image and the second color image.
  • Pyramid decomposition is performed on the first brightness image and the second brightness image to obtain multiple basic images and detail images with different scale information, and the multiple basic images and detail images are weighted and reconstructed according to the relative magnitude of the information entropy or gradient between the first brightness image and the second brightness image to obtain a fused brightness image.
  • The first fusion unit may then select the color image with higher color accuracy from the first color image and the second color image as the color component of the fused image, and adjust the color component according to the fused brightness image and the brightness image corresponding to the selected color image, so as to improve the color accuracy. A hedged single-level sketch follows.
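As a simplified stand-in for the pyramid fusion above, the following sketch performs a single-level base/detail decomposition and weights the detail layers by local gradient magnitude (a real implementation would use a multi-scale pyramid; the single level, the gradient weighting, and all parameters are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged single-level sketch of gradient-weighted base/detail fusion.

def fuse_base_detail(luma1, luma2, sigma=2.0):
    l1 = luma1.astype(np.float32)
    l2 = luma2.astype(np.float32)
    base1, base2 = gaussian_filter(l1, sigma), gaussian_filter(l2, sigma)
    det1, det2 = l1 - base1, l2 - base2                  # detail layers
    g1 = np.abs(np.gradient(l1)[0]) + np.abs(np.gradient(l1)[1])
    g2 = np.abs(np.gradient(l2)[0]) + np.abs(np.gradient(l2)[1])
    w1 = g1 / np.maximum(g1 + g2, 1e-6)                  # gradient weight
    return 0.5 * (base1 + base2) + w1 * det1 + (1.0 - w1) * det2
```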
  • the image processing unit 04 can directly output the fused image.
  • the image fusion unit 042 may include a second fusion unit and a third fusion unit.
  • the second fusion unit is used for fusing the first pre-processed image and the second pre-processed image through the second fusion process to obtain the first target image.
  • the third fusion unit is used for fusing the first preprocessed image and the second preprocessed image through the third fusion process to obtain the second target image.
  • the fusion image includes a first target image and a second target image.
  • the second fusion process and the third fusion process may be different.
  • the second fusion process and the third fusion process may be any two of the three possible implementations of the first fusion process described above.
  • the second fusion process is any one of the three possible implementation manners of the first fusion process described above, and the third fusion process is a processing manner other than the foregoing three possible implementation manners.
  • the third fusion process is any one of the three possible implementation manners of the first fusion process described above, and the second fusion process is a processing manner other than the foregoing three possible implementation manners.
  • the second fusion process and the third fusion process may also be the same.
  • the fusion parameter of the second fusion process is the first fusion parameter
  • the fusion parameter of the third fusion process is the second fusion parameter
  • the first fusion parameter and the second fusion parameter are different.
  • the second fusion processing and the third fusion processing may both be the first possible implementation manners in the first fusion processing described above.
  • the fusion weights in the second fusion process and the third fusion process may be different.
  • By having different fusion units use different fusion processing to fuse the two preprocessed images, or use the same fusion processing with different fusion parameters, two fused images with different image styles can be obtained.
  • the image processing unit can respectively output different fused images to different subsequent units, so as to meet the requirements of different subsequent operations on the fused images.
  • the encoding compression unit 06 may compress and output the fused image for subsequent display and storage.
  • the encoding compression unit 06 may use the H.264 standard to compress the fused image.
  • the encoding compression unit 06 may compress and output the first target image for subsequent display and storage.
  • the intelligent analysis unit 07 can analyze and process the fused image, and output the analysis result for subsequent use.
  • the intelligent analysis unit 07 may be a neural network calculation unit in a SoC (System on Chip) chip, which uses a deep learning network to analyze and process the first fusion image.
  • For example, Fast R-CNN can be used to analyze and process the first fusion image.
  • The intelligent analysis unit 07 may analyze and process the second target image and output the analysis result for subsequent use.
  • This application uses the exposure timing of the image sensor to control the near-infrared supplementary light timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary light is performed, and the second image signal is generated through the second preset exposure when near-infrared supplementary light is not performed.
  • This data collection method can directly collect the first image signal and the second image signal with different brightness information while keeping the structure simple and reducing cost. That is, two different image signals can be acquired through one image sensor, and the two image signals are then fused, which makes the image fusion device more convenient and makes the fusion of the first image signal and the second image signal efficient.
  • Moreover, the first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be jointly obtained through the first image signal and the second image signal, and there is no misalignment between the image generated by the first image signal and the image generated by the second image signal caused by a difference in viewpoints. In this way, the quality of the fused image obtained by subsequent processing based on the first image signal and the second image signal is higher.
  • the image fusion device can generate and output the first image signal and the second image signal through multiple exposures, and perform fusion processing on the first image signal and the second image signal to obtain a fused image.
  • Next, the image acquisition method will be described based on the image acquisition device provided in the embodiments shown in FIGS. 1-20. Referring to FIG. 21, the method includes:
  • Step 2101: Perform near-infrared supplementary light through the first light supplement device included in the light supplement device, where the near-infrared supplementary light is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure, the first preset exposure and the second preset exposure being two of the multiple exposures of the image sensor.
  • Step 2102 Pass visible light and part of near-infrared light through the first filter included in the filter assembly.
  • Step 2103: Perform multiple exposures through the image sensor to generate and output a first image signal and a second image signal, where the first image signal is an image generated according to the first preset exposure, and the second image signal is an image generated according to the second preset exposure.
  • Step 2104 Process the first image signal and the second image signal by the image processing unit to obtain a fused image.
  • the image processing unit includes a preprocessing unit and an image fusion unit;
  • Processing the first image signal and the second image signal by the image processing unit to obtain a fused image includes:
  • preprocessing the first image signal and the second image signal by the preprocessing unit, and outputting a first preprocessed image and a second preprocessed image;
  • fusing the first preprocessed image and the second preprocessed image by the image fusion unit to obtain the fused image.
  • the preprocessing unit includes an image restoration unit and an image noise reduction unit;
  • Preprocessing the first image signal and the second image signal by the preprocessing unit, and outputting the first preprocessed image and the second preprocessed image includes:
  • the first image signal is repaired by the image repair unit to obtain a first repaired image
  • the second image signal is repaired to obtain a second repaired image
  • the first repaired image is a grayscale image
  • the second repaired image is a color image
  • the image noise reduction unit performs noise reduction processing on the first repaired image to obtain a first preprocessed image, and performs noise reduction processing on the second repaired image to obtain a second preprocessed image.
  • the image noise reduction unit includes a temporal noise reduction unit or a spatial noise reduction unit;
  • performing, by the temporal noise reduction unit, motion estimation according to the first repaired image and the second repaired image to obtain a motion estimation result, performing temporal filtering processing on the first repaired image according to the motion estimation result to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the motion estimation result to obtain the second preprocessed image; or
  • performing, by the spatial noise reduction unit, edge estimation according to the first repaired image and the second repaired image to obtain an edge estimation result, performing spatial filtering processing on the first repaired image according to the edge estimation result to obtain the first preprocessed image, and performing spatial filtering processing on the second repaired image according to the edge estimation result to obtain the second preprocessed image.
  • the temporal noise reduction unit includes a motion estimation unit
  • performing motion estimation according to the first repaired image and the second repaired image to obtain the motion estimation result includes:
  • generating a first frame difference image according to the first repaired image and the first historical noise reduction image, and determining the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, where the first historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of images preceding the first repaired image, N is greater than or equal to 1, and the multiple first set frame difference thresholds correspond one-to-one to the multiple pixels in the first frame difference image;
  • generating a second frame difference image according to the second repaired image and the second historical noise reduction image, and determining the second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, where the second historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of images preceding the second repaired image, and the multiple second set frame difference thresholds correspond one-to-one to the multiple pixels in the second frame difference image;
  • fusing, by the motion estimation unit, the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain the joint temporal filtering strength of each pixel; or selecting, by the motion estimation unit, one of the first temporal filtering strength and the second temporal filtering strength of each pixel as the joint temporal filtering strength of the corresponding pixel;
  • the motion estimation result includes the first time domain filtering strength of each pixel and/or the joint time domain filtering strength of each pixel.
  • the temporal noise reduction unit further includes a temporal filtering unit
  • performing temporal filtering processing on the first repaired image according to the motion estimation result to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the motion estimation result to obtain the second preprocessed image, includes:
  • performing, by the temporal filtering unit, temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image; or
  • performing, by the temporal filtering unit, temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image; or
  • the time domain filtering unit is used to perform time domain filtering processing on the first repaired image according to the joint time domain filtering strength of each pixel to obtain the first preprocessed image, and perform the second restoration according to the joint time domain filtering strength of each pixel
  • the image is subjected to temporal filtering processing to obtain a second preprocessed image.
  • the first frame difference image refers to an original frame difference image obtained by taking the difference between the first repaired image and the first historical noise-reduction image; or, the first frame difference image refers to a frame difference image obtained by processing the original frame difference image;
  • the second frame difference image refers to an original frame difference image obtained by taking the difference between the second repaired image and the second historical noise-reduction image; or, the second frame difference image refers to a frame difference image obtained by processing the original frame difference image.
  • the first set frame difference thresholds corresponding to different pixels are different, or the first set frame difference thresholds corresponding to all pixels are the same;
  • the second set frame difference thresholds corresponding to different pixels are different, or the second set frame difference thresholds corresponding to all pixels are the same.
  • the multiple first set frame difference thresholds are determined according to the noise intensities of multiple pixels in a first noise intensity image, and the first noise intensity image is determined from the pre-denoising image corresponding to the first historical noise-reduction image and the first historical noise-reduction image;
  • the multiple second set frame difference thresholds are determined according to the noise intensities of multiple pixels in a second noise intensity image, and the second noise intensity image is determined from the pre-denoising image corresponding to the second historical noise-reduction image and the second historical noise-reduction image.
  • the spatial noise reduction unit includes an edge estimation unit;
  • performing edge estimation according to the first repaired image and the second repaired image to obtain the edge estimation result includes: determining the first spatial filtering strength of each pixel in the first repaired image; determining the second spatial filtering strength of each pixel in the second repaired image; extracting first local information from the first repaired image and second local information from the second repaired image; and determining the joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first local information, and the second local information;
  • the edge estimation result includes the first spatial filtering strength and/or the joint spatial filtering strength of each pixel.
  • the spatial noise reduction unit further includes a spatial filtering unit;
  • the first repaired image is spatially filtered according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image is spatially filtered according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
  • the first repaired image is spatially filtered according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image is spatially filtered according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
  • the first repaired image is spatially filtered according to the joint spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image is spatially filtered according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
  • the first local information and the second local information include at least one of local gradient information, local brightness information, and local information entropy.
  • the image noise reduction unit includes a temporal noise reduction unit and a spatial noise reduction unit;
  • the temporal noise reduction unit performs motion estimation according to the first repaired image and the second repaired image to obtain the motion estimation result, performs temporal filtering on the first repaired image according to the motion estimation result to obtain a first temporal noise-reduction image, and performs temporal filtering on the second repaired image according to the motion estimation result to obtain a second temporal noise-reduction image;
  • the spatial noise reduction unit performs edge estimation based on the first temporal noise-reduction image and the second temporal noise-reduction image to obtain the edge estimation result, performs spatial filtering on the first temporal noise-reduction image according to the edge estimation result to obtain the first preprocessed image, and performs spatial filtering on the second temporal noise-reduction image according to the edge estimation result to obtain the second preprocessed image; or
  • the spatial noise reduction unit performs edge estimation according to the first repaired image and the second repaired image to obtain the edge estimation result, performs spatial filtering on the first repaired image according to the edge estimation result to obtain a first spatial noise-reduction image, and performs spatial filtering on the second repaired image according to the edge estimation result to obtain a second spatial noise-reduction image;
  • the temporal noise reduction unit performs motion estimation based on the first spatial noise-reduction image and the second spatial noise-reduction image to obtain the motion estimation result, performs temporal filtering on the first spatial noise-reduction image according to the motion estimation result to obtain the first preprocessed image, and performs temporal filtering on the second spatial noise-reduction image according to the motion estimation result to obtain the second preprocessed image.
  • the image fusion unit includes a first fusion unit, and the image fusion device further includes an encoding compression unit and an intelligent analysis unit;
  • fusing the first preprocessed image and the second preprocessed image through the image fusion unit to obtain the fused image includes:
  • fusing the first preprocessed image and the second preprocessed image through the first fusion unit using the first fusion process to obtain the fused image;
  • the method also includes:
  • encoding and compressing the fused image through the encoding compression unit and outputting the encoded and compressed image, which is used for display or storage;
  • analyzing the fused image through the intelligent analysis unit to obtain the analysis result and outputting the analysis result.
  • the image fusion unit includes a second fusion unit and a third fusion unit, and the image fusion device further includes an encoding compression unit and an intelligent analysis unit;
  • fusing the first preprocessed image and the second preprocessed image through the image fusion unit to obtain the fused image includes:
  • fusing the first preprocessed image and the second preprocessed image through the second fusion unit using the second fusion process to obtain a first target image, and fusing the first preprocessed image and the second preprocessed image through the third fusion unit using the third fusion process to obtain a second target image;
  • the method also includes:
  • encoding and compressing the first target image through the encoding compression unit and outputting the encoded and compressed image, which is used for display or storage;
  • analyzing the second target image through the intelligent analysis unit to obtain the analysis result and outputting the analysis result.
  • the second fusion process and the third fusion process are different;
  • or the second fusion process is the same as the third fusion process, but the fusion parameter of the second fusion process is a first fusion parameter, the fusion parameter of the third fusion process is a second fusion parameter, and the first fusion parameter and the second fusion parameter are different.
  • the light supplementer may further include a second light supplement device.
  • visible-light supplementary lighting is also performed through the second light supplement device.
  • the intensity of the near-infrared light passing through the first filter when the first light supplement device performs near-infrared supplementary lighting is higher than the intensity of the near-infrared light passing through the first filter when the first light supplement device does not perform near-infrared supplementary lighting.
  • the wavelength range of the near-infrared light incident on the first filter is a first reference wavelength range;
  • the first reference wavelength range is 650 nanometers to 1100 nanometers.
  • when the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the band width of the near-infrared light passing through the first filter meets the constraints.
  • the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the wavelength range of 750±10 nanometers; or
  • the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the wavelength range of 780±10 nanometers; or
  • the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the wavelength range of 940±10 nanometers.
  • the constraints include:
  • the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared light supplemented by the first light supplement device is within the wavelength fluctuation range, and the wavelength fluctuation range is 0-20 nanometers.
  • the constraints include:
  • the half bandwidth of the near-infrared light passing through the first filter is less than or equal to 50 nanometers.
  • the constraints include:
  • the first waveband width is smaller than the second waveband width; where the first waveband width refers to the waveband width of the near-infrared light passing through the first filter, and the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter.
  • constraints are:
  • the third waveband width is smaller than the reference waveband width.
  • the third waveband width refers to the waveband width of near-infrared light whose pass rate is greater than a set ratio.
  • the reference waveband width is any waveband width in the range of 50 nm to 150 nm.
  • the set ratio is any ratio within a ratio range of 30% to 50%.
  • At least one exposure parameter of the first preset exposure and the second preset exposure is different; the at least one exposure parameter is one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
  • the exposure gain of the first preset exposure is smaller than the exposure gain of the second preset exposure.
  • At least one exposure parameter of the first preset exposure and the second preset exposure is the same; the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
  • the exposure time of the first preset exposure is equal to the exposure time of the second preset exposure.
  • the image sensor includes a plurality of light-sensing channels, and each light-sensing channel is used to sense at least one type of light in the visible light waveband and light in the near-infrared waveband.
  • multiple photosensitive channels are used to sense at least two different visible light wavelength bands.
  • the multiple photosensitive channels include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels;
  • the R photosensitive channel is used to sense the light in the red and near-infrared bands
  • the G photosensitive channel is used to sense the light in the green and near-infrared bands
  • the B photosensitive channel is used to sense the light in the blue and near-infrared bands.
  • the Y photosensitive channel is used to sense the light in the yellow band and the near-infrared band
  • the W photosensitive channel is used to sense the full band of light
  • the C photosensitive channel is used to sense the full band of light.
  • the image sensor is an RGB sensor, an RGBW sensor, or an RCCB sensor, or an RYYB sensor.
  • the second light supplement device is used to perform visible-light supplementary lighting in a constant-on mode; or
  • the second light supplement device is used to perform visible-light supplementary lighting in a stroboscopic manner, wherein visible-light supplementary lighting exists at least during part of the exposure time period of the first preset exposure and does not exist during the entire exposure time period of the second preset exposure; or
  • the second light supplement device is used to perform visible-light supplementary lighting in a stroboscopic manner, wherein visible-light supplementary lighting does not exist at least during the entire exposure time period of the first preset exposure and exists during part of the exposure time period of the second preset exposure.
  • the number of supplementary lighting operations of the first light supplement device per unit time is lower than the number of exposures of the image sensor per unit time, wherein one or more exposures occur within the interval between every two adjacent supplementary lighting operations.
  • the image sensor uses a global exposure method for multiple exposures.
  • for any near-infrared supplementary lighting operation, the time period of the near-infrared supplementary lighting has no intersection with the exposure time period of the nearest second preset exposure; the time period of the near-infrared supplementary lighting is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared supplementary lighting and the exposure time period of the first preset exposure overlap, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared supplementary lighting.
  • the image sensor adopts rolling shutter exposure for multiple exposures.
  • for any near-infrared supplementary lighting operation, the time period of the near-infrared supplementary lighting has no intersection with the exposure time period of the nearest second preset exposure;
  • the start time of the near-infrared supplementary lighting is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no later than the exposure end time of the first row of the effective image in the first preset exposure; or
  • the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or
  • the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
  • multiple exposures include odd exposures and even exposures
  • the first preset exposure is one exposure in odd-numbered exposures
  • the second preset exposure is one exposure in even-numbered exposures
  • the first preset exposure is one exposure in an even number of exposures
  • the second preset exposure is one exposure in an odd number of exposures
  • the first preset exposure is one of the specified odd-numbered exposures
  • the second preset exposure is one of the other exposures except the specified odd-numbered exposures
  • the first preset exposure is one of the specified even-numbered exposures
  • the second preset exposure is one of the other exposures except the specified even-numbered exposures
  • the first preset exposure is one exposure in the first exposure sequence
  • the second preset exposure is one exposure in the second exposure sequence
  • the first preset exposure is one exposure in the second exposure sequence
  • the second preset exposure is one exposure in the first exposure sequence
  • multiple exposures include multiple exposure sequences
  • the first exposure sequence and the second exposure sequence are one exposure sequence or two exposure sequences among the multiple exposure sequences
  • each exposure sequence includes N exposures
  • the N exposures include one first preset exposure and N-1 second preset exposures, or the N exposures include one second preset exposure and N-1 first preset exposures, and N is a positive integer greater than 2.
  • This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary lighting is performed, and the second image signal is generated through the second preset exposure when it is not.
  • This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost. That is, two different image signals can be acquired through a single image sensor and then fused, which makes the image fusion device simpler and the fusion of the first image signal and the second image signal more efficient.
  • Moreover, the first image signal and the second image signal are both generated and output by the same image sensor, so the viewpoint corresponding to the first image signal is the same as the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the first image signal and the second image signal, and there is no misalignment between the images generated from the two signals caused by differing viewpoints. In this way, the quality of the fused image obtained by subsequent processing based on the first image signal and the second image signal is higher.


Abstract

This application discloses an image fusion device, which belongs to the field of computer vision. In this application, a first light supplement device performs near-infrared supplementary lighting at least during part of the exposure period of a first preset exposure and does not perform near-infrared supplementary lighting during the exposure period of a second preset exposure. A first filter passes visible light and part of the near-infrared light. An image sensor generates and outputs a first image signal during the exposure period of the first preset exposure, and generates and outputs a second image signal during the exposure period of the second preset exposure. Since the first image signal and the second image signal are both generated and output by the same image sensor, the viewpoints corresponding to the two signals are the same, which can improve the quality of the subsequently fused image, simplify the structure of the image fusion device, and thereby reduce cost.

Description

Image fusion device and image fusion method
This application claims priority to Chinese Patent Application No. 201910472710.7, entitled "Image fusion device and image fusion method" and filed on May 31, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision technology, and in particular to an image fusion device and an image fusion method.
Background
At present, various capture devices are widely used in fields such as intelligent transportation and security. A commonly used capture device can acquire different images according to the visible-light intensity in the external scene, so that the acquisition of scene information is not limited by the visible-light intensity, and the different acquired images are then fused. For example, the capture device may acquire a visible-light image when the visible-light intensity in the scene is strong, and acquire a near-infrared image when the visible-light intensity is weak, so that scene information can be acquired under different visible-light intensities.
The related art provides a method for acquiring a visible-light image and a near-infrared image based on a binocular camera. The binocular camera includes a visible-light camera and a near-infrared camera; it acquires the visible-light image through the visible-light camera and the near-infrared image through the near-infrared camera. However, since the viewpoints of the two cameras differ, their shooting ranges only partially overlap, and image acquisition with a binocular camera suffers from a complex structure and high cost.
Summary
Embodiments of this application provide an image fusion device and an image fusion method, which are used to capture two different image signals with a simple structure and reduced cost and then fuse the two images to obtain a fused image signal. The technical solution is as follows:
In one aspect, an image fusion device is provided. The image fusion device includes an image sensor, a light supplementer, a filter assembly, and an image processing unit, the image sensor being located on the light-exit side of the filter assembly;
the image sensor is configured to generate and output a first image signal and a second image signal through multiple exposures, where the first image signal is an image generated according to a first preset exposure, the second image signal is an image generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
the light supplementer includes a first light supplement device configured to perform near-infrared supplementary lighting, where near-infrared supplementary lighting is performed at least during part of the exposure period of the first preset exposure and is not performed during the exposure period of the second preset exposure;
the filter assembly includes a first filter that passes visible light and part of the near-infrared light;
the image processing unit is configured to process the first image signal and the second image signal to obtain a fused image.
In another aspect, an image fusion method is provided, applied to an image fusion device. The image fusion device includes an image sensor, a light supplementer, a filter assembly, and an image processing unit; the light supplementer includes a first light supplement device; the filter assembly includes a first filter; and the image sensor is located on the light-exit side of the filter assembly. The method includes:
performing near-infrared supplementary lighting through the first light supplement device, where near-infrared supplementary lighting is performed at least during part of the exposure period of a first preset exposure and is not performed during the exposure period of a second preset exposure, the first preset exposure and the second preset exposure being two of the multiple exposures of the image sensor;
passing light in the visible waveband and light in the near-infrared waveband through the first filter;
performing multiple exposures through the image sensor to generate and output a first image signal and a second image signal, the first image signal being an image generated according to the first preset exposure and the second image signal being an image generated according to the second preset exposure;
processing the first image signal and the second image signal through the image processing unit to obtain a fused image.
The beneficial effects of the technical solutions provided by the embodiments of this application are as follows:
This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary lighting is performed and the second image signal is generated through the second preset exposure when it is not. This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost; that is, two different image signals can be acquired through a single image sensor and then fused, which makes the image fusion device simpler and the fusion of the two signals more efficient. Moreover, since the first image signal and the second image signal are both generated and output by the same image sensor, the viewpoint corresponding to the first image signal is the same as that corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the two signals, and there is no misalignment between the images generated from them caused by differing viewpoints. As a result, the fused image subsequently obtained by processing the first image signal and the second image signal is of higher quality.
附图说明
In order to describe the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic structural diagram of a first image fusion device provided by an embodiment of this application.
FIG. 2 is a schematic diagram of the principle by which an image fusion device provided by an embodiment of this application generates a first image signal.
FIG. 3 is a schematic diagram of the principle by which an image fusion device provided by an embodiment of this application generates a second image signal.
FIG. 4 is a schematic diagram of the relationship between the wavelength and the relative intensity of the near-infrared supplementary lighting performed by a first light supplement device provided by an embodiment of this application.
FIG. 5 is a schematic diagram of the relationship between the wavelength of light passed by a first filter and the pass rate, provided by an embodiment of this application.
FIG. 6 is a schematic structural diagram of a second image fusion device provided by an embodiment of this application.
FIG. 7 is a schematic diagram of an RGB sensor provided by an embodiment of this application.
FIG. 8 is a schematic diagram of an RGBW sensor provided by an embodiment of this application.
FIG. 9 is a schematic diagram of an RCCB sensor provided by an embodiment of this application.
FIG. 10 is a schematic diagram of an RYYB sensor provided by an embodiment of this application.
FIG. 11 is a schematic diagram of the sensing curves of an image sensor provided by an embodiment of this application.
FIG. 12 is a schematic diagram of a rolling shutter exposure mode provided by an embodiment of this application.
FIG. 13 is a schematic diagram of a first timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode, provided by an embodiment of this application.
FIG. 14 is a schematic diagram of a second timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode, provided by an embodiment of this application.
FIG. 15 is a schematic diagram of a third timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode, provided by an embodiment of this application.
FIG. 16 is a schematic diagram of a first timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode, provided by an embodiment of this application.
FIG. 17 is a schematic diagram of a second timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode, provided by an embodiment of this application.
FIG. 18 is a schematic diagram of a third timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode, provided by an embodiment of this application.
FIG. 19 is a schematic diagram of an image processing unit provided by an embodiment of this application.
FIG. 20 is a schematic diagram of another image processing unit provided by an embodiment of this application.
FIG. 21 is a flowchart of an image fusion method provided by an embodiment of this application.
Reference numerals:
01: image sensor; 02: light supplementer; 03: filter assembly; 04: image processing unit; 05: lens; 06: encoding compression unit; 07: intelligent analysis unit;
021: first light supplement device; 022: second light supplement device; 031: first filter.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the drawings.
FIG. 1 is a schematic structural diagram of an image fusion device provided by an embodiment of this application. The device includes an image sensor 01, a light supplementer 02, and a filter assembly 03, the image sensor 01 being located on the light-exit side of the filter assembly 03. The image sensor 01 is configured to generate and output a first image signal and a second image signal through multiple exposures, where the first image signal is an image generated according to a first preset exposure, the second image signal is an image generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures. The light supplementer 02 includes a first light supplement device 021 configured to perform near-infrared supplementary lighting, at least during part of the exposure period of the first preset exposure and not during the exposure period of the second preset exposure. The filter assembly 03 includes a first filter 031 that passes visible light and part of the near-infrared light, where the intensity of near-infrared light passing through the first filter 031 when the first light supplement device 021 performs near-infrared supplementary lighting is higher than the intensity passing through it when the device does not. The image processing unit 04 is configured to process the first image signal and the second image signal to obtain a fused image.
This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary lighting is performed and the second image signal is generated through the second preset exposure when it is not. This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost; that is, two different image signals can be acquired through a single image sensor and then fused, which makes the image fusion device simpler and the fusion of the two signals more efficient. Moreover, since the two signals are generated and output by the same image sensor, the viewpoint corresponding to the first image signal is the same as that corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the two signals, with no misalignment between the images generated from them caused by differing viewpoints. As a result, the fused image subsequently obtained by processing the two image signals is of higher quality.
It should be noted that, since the image sensor 01, the light supplementer 02, and the filter assembly 03 are all used for image acquisition, they may be collectively referred to as the image acquisition unit of the image fusion device.
In addition, in some possible implementations, as shown in FIG. 1, the image fusion device may further include an encoding compression unit 06 and an intelligent analysis unit 07.
The encoding compression unit 06 is configured to encode and compress the fused image output by the image processing unit and output the encoded and compressed image. The intelligent analysis unit 07 is configured to analyze the fused image output by the image processing unit and output the analysis result.
The image acquisition unit, image processing unit, encoding compression unit, and intelligent analysis unit included in the image fusion device are described separately below.
1. Image acquisition unit
As shown in FIG. 1, the image acquisition unit includes the image sensor 01, the light supplementer 02, and the filter assembly 03, the image sensor 01 being located on the light-exit side of the filter assembly 03.
In this embodiment of the application, referring to FIG. 1, the image acquisition unit may further include a lens 05. In this case, the filter assembly 03 may be located between the lens 05 and the image sensor 01, with the image sensor 01 on the light-exit side of the filter assembly 03; alternatively, the lens 05 is located between the filter assembly 03 and the image sensor 01, with the image sensor 01 on the light-exit side of the lens 05. As an example, the first filter 031 may be a filter film; thus, when the filter assembly 03 is located between the lens 05 and the image sensor 01, the first filter 031 may be attached to the surface of the light-exit side of the lens 05, or, when the lens 05 is located between the filter assembly 03 and the image sensor 01, the first filter 031 may be attached to the surface of the light-entry side of the lens 05.
It should be noted that the light supplementer 02 may be located inside or outside the image acquisition unit; it may be part of the unit or a device independent of it. When located outside, the light supplementer 02 may be communicatively connected with the image acquisition unit, thereby ensuring a certain relationship between the exposure timing of the image sensor 01 and the near-infrared supplementary lighting timing of the first light supplement device 021, e.g., near-infrared supplementary lighting is performed at least during part of the exposure period of the first preset exposure and not during the exposure period of the second preset exposure.
In addition, the first light supplement device 021 is a device capable of emitting near-infrared light, such as a near-infrared fill lamp. It may perform near-infrared supplementary lighting in a stroboscopic manner or in other similar manners, which is not limited in this embodiment. In some examples, when operating stroboscopically, it may be controlled manually, or by a software program or a specific device, which is likewise not limited. The period of near-infrared supplementary lighting may coincide with the exposure period of the first preset exposure, or be longer or shorter than it, as long as supplementary lighting is performed during the whole or part of the exposure period of the first preset exposure and not during the exposure period of the second preset exposure.
It should be noted that the absence of near-infrared supplementary lighting during the exposure period of the second preset exposure means, for global exposure, that the exposure period of the second preset exposure may be the period between the exposure start time and the exposure end time, and, for rolling shutter exposure, that it may be the period between the exposure start time of the first row of the effective image of the second image signal and the exposure end time of its last row, but is not limited thereto. For example, the exposure period of the second preset exposure may also be the exposure period corresponding to a target image in the second image signal, the target image being the several rows of the effective image corresponding to a target object or target region; the period between the exposure start time and the exposure end time of these rows may be regarded as the exposure period of the second preset exposure.
Another point to note is that, when the first light supplement device 021 performs near-infrared supplementary lighting on the external scene, near-infrared light incident on object surfaces may be reflected by the objects and thus enter the first filter 031. Since ambient light normally includes visible light and near-infrared light, and the near-infrared component of ambient light is also reflected by objects into the first filter 031, the near-infrared light passing through the first filter 031 during near-infrared supplementary lighting may include the near-infrared light from the first light supplement device 021 reflected by objects into the filter, and the near-infrared light passing through the filter when no supplementary lighting is performed may include the near-infrared component of ambient light reflected by objects. That is, during near-infrared supplementary lighting the near-infrared light passing through the first filter 031 includes both the light emitted by the first light supplement device 021 and reflected by objects and the reflected near-infrared component of ambient light, while without supplementary lighting it includes only the reflected near-infrared component of ambient light.
Taking the structure in which the filter assembly 03 is located between the lens 05 and the image sensor 01 and the image sensor 01 is on the light-exit side of the filter assembly 03 as an example, the image acquisition unit captures the first and second image signals as follows. Referring to FIG. 2, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 performs near-infrared supplementary lighting, and the ambient light of the scene together with the near-infrared light reflected by objects during supplementary lighting passes through the lens 05 and the first filter 031, after which the image sensor 01 generates the first image signal through the first preset exposure. Referring to FIG. 3, when the image sensor 01 performs the second preset exposure, the first light supplement device 021 does not perform near-infrared supplementary lighting, and the ambient light of the scene passes through the lens 05 and the first filter 031, after which the image sensor 01 generates the second image signal through the second preset exposure. There may be M first preset exposures and N second preset exposures within one frame period of image acquisition, ordered in many possible combinations; the values of M and N and their relative magnitudes may be set according to actual requirements and may be equal or unequal.
It should be noted that the first filter 031 may pass light in part of the near-infrared waveband; in other words, the near-infrared waveband passing through the first filter 031 may be part of the near-infrared waveband or the whole of it, which is not limited in this embodiment.
In addition, since the intensity of the near-infrared component of ambient light is lower than that of the near-infrared light emitted by the first light supplement device 021, the intensity of near-infrared light passing through the first filter 031 during near-infrared supplementary lighting by the first light supplement device 021 is higher than the intensity passing through it when no supplementary lighting is performed.
The waveband range of the near-infrared supplementary lighting performed by the first light supplement device 021 may be a second reference waveband range, which may be 700 nm to 800 nm or 900 nm to 1000 nm, so that interference caused by common 850 nm near-infrared lamps can be reduced. In addition, the waveband range of the near-infrared light incident on the first filter 031 may be a first reference waveband range, which is 650 nm to 1100 nm.
Since, during near-infrared supplementary lighting, the near-infrared light passing through the first filter 031 may include both the near-infrared light from the first light supplement device 021 reflected by objects into the filter and the reflected near-infrared component of ambient light, the intensity of near-infrared light entering the filter assembly 03 is relatively strong at this time. However, when no near-infrared supplementary lighting is performed, the near-infrared light passing through the first filter 031 includes only the near-infrared component of ambient light reflected by objects into the filter assembly 03; since there is no near-infrared light supplemented by the first light supplement device 021, its intensity is relatively weak. Therefore, the intensity of near-infrared light included in the first image signal generated and output according to the first preset exposure is higher than that included in the second image signal generated and output according to the second preset exposure.
The center wavelength and/or the waveband range of the near-infrared supplementary lighting performed by the first light supplement device 021 may be chosen in various ways. In this embodiment of the application, to achieve better cooperation between the first light supplement device 021 and the first filter 031, the center wavelength of the near-infrared supplementary lighting may be designed and the characteristics of the first filter 031 selected, so that when the center wavelength of the near-infrared supplementary lighting is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 031 meets a constraint. The constraint is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible and that its band width is as narrow as possible, thereby avoiding wavelength interference caused by an overly wide near-infrared band.
The center wavelength of the near-infrared supplementary lighting of the first light supplement device 021 may be the mean of the wavelength range with maximum energy in the spectrum of the near-infrared light emitted by the device, or may be understood as the wavelength at the middle position of the wavelength range whose energy exceeds a certain threshold in that spectrum.
The set characteristic wavelength or set characteristic wavelength range may be preset. As an example, the center wavelength of the near-infrared supplementary lighting of the first light supplement device 021 may be any wavelength within 750±10 nm, within 780±10 nm, or within 940±10 nm; that is, the set characteristic wavelength range may be 750±10 nm, 780±10 nm, or 940±10 nm. Illustratively, the center wavelength of the near-infrared supplementary lighting is 940 nm, and the relationship between its wavelength and relative intensity is as shown in FIG. 4: the waveband range is 900 nm to 1000 nm, with the highest relative intensity at 940 nm.
Since, during near-infrared supplementary lighting, most of the near-infrared light passing through the first filter 031 is the light from the first light supplement device 021 reflected by objects into the filter, in some embodiments the constraint may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared supplementary lighting of the first light supplement device 021 lies within a wavelength fluctuation range, which, as an example, may be 0 to 20 nm.
The center wavelength of the near-infrared light passing through the first filter 031 may be the wavelength at the peak position in the near-infrared band of the near-infrared pass-rate curve of the first filter 031, or the wavelength at the middle position of the near-infrared band of the curve whose pass rate exceeds a certain threshold.
To avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031, in some embodiments the constraint may include: the first band width is smaller than the second band width, where the first band width is the band width of the near-infrared light passing through the first filter 031 and the second band width is the band width of the near-infrared light blocked by it. It should be understood that the band width refers to the width of the wavelength range in which the light lies. For example, if the near-infrared light passing through the first filter 031 lies in the range 700 nm to 800 nm, the first band width is 800 nm minus 700 nm, i.e., 100 nm. In other words, the band width of the near-infrared light passing through the first filter 031 is smaller than that of the near-infrared light it blocks.
For example, referring to FIG. 5, a schematic diagram of the relationship between the wavelength of light that the first filter 031 can pass and the pass rate: the near-infrared light incident on the first filter 031 lies in the band 650 nm to 1100 nm; the first filter 031 passes visible light of 380 nm to 650 nm and near-infrared light of 900 nm to 1000 nm, and blocks near-infrared light of 650 nm to 900 nm and of 1000 nm to 1100 nm. That is, the first band width is 1000 nm minus 900 nm, i.e., 100 nm; the second band width is 900 nm minus 650 nm plus 1100 nm minus 1000 nm, i.e., 350 nm. 100 nm is smaller than 350 nm, i.e., the band width of the near-infrared light passing through the first filter 031 is smaller than that of the near-infrared light it blocks. The above curve is only an example; for different filters, the band range of near-infrared light that can pass and the band range blocked may differ.
To avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031 during periods without near-infrared supplementary lighting, in some embodiments the constraint may include: the half band width of the near-infrared light passing through the first filter 031 is less than or equal to 50 nm, where the half band width refers to the band width of near-infrared light whose pass rate is greater than 50%.
To avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031, in some embodiments the constraint may include: the third band width is smaller than a reference band width, where the third band width refers to the band width of near-infrared light whose pass rate is greater than a set ratio; as an example, the reference band width may be any band width within the range 50 nm to 150 nm. The set ratio may be any ratio from 30% to 50%; of course, it may also be set to other ratios according to requirements, which is not limited in this embodiment. In other words, the band width of near-infrared light whose pass rate exceeds the set ratio may be smaller than the reference band width.
For example, referring to FIG. 5, the near-infrared light incident on the first filter 031 lies in the band 650 nm to 1100 nm, the set ratio is 30%, and the reference band width is 100 nm. As can be seen from FIG. 5, in the near-infrared band 650 nm to 1100 nm, the band width of near-infrared light whose pass rate is greater than 30% is clearly smaller than 100 nm.
Since the first light supplement device 021 provides near-infrared supplementary lighting at least during part of the exposure period of the first preset exposure and not during the entire exposure period of the second preset exposure, and since the two preset exposures are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared supplementary lighting during the exposure periods of some exposures of the image sensor 01 and not during others. Therefore, the number of supplementary lighting operations of the first light supplement device 021 per unit time may be lower than the number of exposures of the image sensor 01 per unit time, with one or more exposures occurring in the interval between every two adjacent supplementary lighting operations.
In one possible implementation, since the human eye easily confuses the color of the near-infrared supplementary lighting of the first light supplement device 021 with the color of a red traffic light, referring to FIG. 6, the light supplementer 02 may further include a second light supplement device 022 configured to perform visible-light supplementary lighting. In this way, if the second light supplement device 022 provides visible-light supplementary lighting at least during part of the exposure period of the first preset exposure, i.e., near-infrared and visible supplementary lighting are both performed at least during part of that period, the mixed color of the two kinds of light can be distinguished from the color of a red traffic light, avoiding confusion by the human eye. Moreover, if the second light supplement device 022 provides visible-light supplementary lighting during the exposure period of the second preset exposure, since the intensity of visible light in that period is not particularly high, doing so can also increase the brightness of visible light in the second image signal, thereby ensuring image acquisition quality.
In some embodiments, the second light supplement device 022 may perform visible-light supplementary lighting in a constant-on manner; or stroboscopically, with visible supplementary light present at least during part of the exposure period of the first preset exposure and absent during the entire exposure period of the second preset exposure; or stroboscopically, with visible supplementary light absent at least during the entire exposure period of the first preset exposure and present during part of the exposure period of the second preset exposure. When performed in a constant-on manner, this not only avoids confusion of the near-infrared supplementary lighting color with a red traffic light but also increases the brightness of visible light in the second image signal, ensuring acquisition quality. When performed stroboscopically, it can avoid the color confusion, or increase the visible-light brightness of the second image signal and thereby ensure acquisition quality, and can also reduce the number of supplementary lighting operations of the second light supplement device 022, prolonging its service life.
In some embodiments, the multiple exposures refer to multiple exposures within one frame period; that is, the image sensor 01 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and at least one frame of the second image signal. For example, one second includes 25 frame periods; the image sensor 01 performs multiple exposures in each frame period, generating at least one frame of each signal, and the first and second image signals produced within one frame period are called a group of images, so 25 groups of images are produced within the 25 frame periods. The first and second preset exposures may be adjacent or non-adjacent among the multiple exposures within one frame period, which is not limited in this embodiment.
The first image signal is generated and output by the first preset exposure and the second by the second preset exposure; after generation, both may be processed. In some cases the two may serve different purposes, so in some embodiments at least one exposure parameter of the two preset exposures may differ. As an example, the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In some embodiments, it can be understood that, compared with the second preset exposure, when near-infrared supplementary lighting is performed, the intensity of near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light in the correspondingly generated first image signal is accordingly higher; but near-infrared light of excessive brightness is unfavorable for acquiring external scene information. Moreover, in some embodiments, the larger the exposure gain, the higher the brightness of the image output by the image sensor 01, and the smaller the gain, the lower the brightness. Therefore, to ensure that the brightness of the near-infrared light in the first image signal lies within a suitable range, when at least one exposure parameter of the two preset exposures differs, as an example the exposure gain of the first preset exposure may be smaller than that of the second. In this way, when the first light supplement device 021 performs near-infrared supplementary lighting, the brightness of near-infrared light in the first image signal will not become excessive because of the supplementary lighting.
In other embodiments, the longer the exposure time, the higher the brightness of the image obtained by the image sensor 01 and the longer the motion smear of moving objects in the image; the shorter the exposure time, the lower the brightness and the shorter the smear. Therefore, to ensure that the brightness of the near-infrared light in the first image signal lies within a suitable range and that the motion smear of moving objects in the first image signal is short, when at least one exposure parameter differs, as an example the exposure time of the first preset exposure may be shorter than that of the second. In this way, the brightness of the near-infrared light in the first image signal will not become excessive because of the supplementary lighting, and the shorter exposure time makes the motion smear of moving objects in the first image signal shorter, which facilitates recognition of moving objects. Illustratively, the exposure time of the first preset exposure is 40 ms and that of the second is 60 ms, etc.
It is worth noting that, in some embodiments, when the exposure gain of the first preset exposure is smaller than that of the second, the exposure time of the first may be smaller than or equal to that of the second; similarly, when the exposure time of the first is smaller than that of the second, the exposure gain of the first may be smaller than or equal to that of the second.
In other embodiments, the two image signals may serve the same purpose; for example, when both are used for intelligent analysis, to ensure that a face or target being analyzed has the same definition while moving, at least one exposure parameter of the two preset exposures may be the same. As an example, the exposure time of the first preset exposure may equal that of the second; if the two exposure times differ, the image with the longer exposure time will exhibit motion smear, making the definitions of the two images differ. Similarly, as another example, the exposure gain of the first may equal that of the second.
It is worth noting that, in some embodiments, when the exposure times are equal, the exposure gain of the first may be smaller than or equal to that of the second; similarly, when the exposure gains are equal, the exposure time of the first may be smaller than or equal to that of the second.
The image sensor 01 may include multiple photosensitive channels, each of which may sense light in at least one visible waveband and light in the near-infrared waveband. That is, each channel can sense both at least one visible waveband and the near-infrared waveband, which ensures that the first and second image signals have complete resolution without missing pixel values. In one possible implementation, the multiple photosensitive channels may sense light in at least two different visible wavebands.
In some embodiments, the multiple photosensitive channels may include at least two of an R channel, a G channel, a B channel, a Y channel, a W channel, and a C channel, where the R channel senses red and near-infrared light, the G channel senses green and near-infrared light, the B channel senses blue and near-infrared light, and the Y channel senses yellow and near-infrared light. Since in some embodiments W denotes the channel sensing full-band light and in others C does, when the multiple channels include a full-band channel it may be a W channel or a C channel; in practice the full-band channel may be chosen according to requirements. Illustratively, the image sensor 01 may be an RGB sensor, an RGBW sensor, an RCCB sensor, or an RYYB sensor; the channel layouts of the RGB, RGBW, RCCB, and RYYB sensors are shown in FIGS. 7, 8, 9, and 10, respectively.
In other embodiments, some photosensitive channels may sense only near-infrared light and not visible light, which ensures that the first image signal has complete resolution without missing pixel values. As an example, the multiple channels may include at least two of an R channel, a G channel, a B channel, and an IR channel, where the R, G, and B channels sense red, green, and blue light respectively together with near-infrared light, and the IR channel senses near-infrared light.
Illustratively, the image sensor 01 may be an RGBIR sensor, in which each IR channel senses near-infrared light but not visible light.
When the image sensor 01 is an RGB sensor, compared with other image sensors such as RGBIR sensors, the RGB information it captures is more complete: part of the photosensitive channels of an RGBIR sensor cannot capture visible light, so the color details of images captured by an RGB sensor are more accurate.
It is worth noting that the multiple photosensitive channels of the image sensor 01 may correspond to multiple sensing curves. Illustratively, referring to FIG. 11, the R curve represents the sensing curve of the image sensor 01 for red light, the G curve for green light, the B curve for blue light, the W (or C) curve for full-band light, and the NIR (near-infrared) curve for near-infrared light.
As an example, the image sensor 01 may adopt global exposure or rolling shutter exposure. Global exposure means that the exposure start times of all rows of the effective image are the same and the exposure end times of all rows are the same; in other words, it is an exposure mode in which all rows of the effective image start exposure simultaneously and end simultaneously. Rolling shutter exposure means that the exposure periods of different rows of the effective image do not completely coincide; that is, the exposure start time of a row is later than that of the previous row, and its exposure end time is later than that of the previous row. Moreover, in rolling shutter exposure, data can be output after each row finishes exposure, so the time from the start of data output of the first row of the effective image to the end of data output of the last row may be expressed as the readout time.
Illustratively, referring to FIG. 12, a schematic diagram of rolling shutter exposure: row 1 of the effective image starts exposure at time T1 and ends at T3; row 2 starts at T2 and ends at T4, with T2 shifted back by a period relative to T1 and T4 shifted back by a period relative to T3. In addition, row 1 ends exposure at T3 and begins outputting data, finishing at T5; row n ends exposure at T6 and begins outputting data, finishing at T7; the time between T3 and T7 is the readout time.
In some embodiments, when the image sensor 01 performs multiple exposures in global exposure mode, for any near-infrared supplementary lighting operation, the period of the near-infrared supplementary lighting has no intersection with the exposure period of the nearest second preset exposure, and the period of the near-infrared supplementary lighting is a subset of the exposure period of the first preset exposure, or intersects with it, or the exposure period of the first preset exposure is a subset of the near-infrared supplementary lighting period. In this way, near-infrared supplementary lighting can be performed at least during part of the exposure period of the first preset exposure and not during the entire exposure period of the second preset exposure, so the second preset exposure is not affected.
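Illustratively, the global-exposure timing rule above can be expressed as a small check. The sketch below is not part of the original disclosure: intervals are assumed to be (start, end) pairs on a common time base, and all names are illustrative.

    def intersects(a, b):
        # Half-open intervals (start, end): true if they share any time.
        return a[0] < b[1] and b[0] < a[1]

    def nir_fill_timing_ok(fill, first_exposure, nearest_second_exposures):
        # The near-infrared fill-light period must not intersect the exposure
        # period of any nearest second preset exposure...
        if any(intersects(fill, s) for s in nearest_second_exposures):
            return False
        # ...while it must cover at least part of the first preset exposure
        # (subset, partial overlap, and superset all imply a non-empty intersection).
        return intersects(fill, first_exposure)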
For example, referring to FIG. 13, for any near-infrared supplementary lighting operation, its period has no intersection with the exposure period of the nearest second preset exposure and is a subset of the exposure period of the first preset exposure. Referring to FIG. 14, its period has no intersection with the exposure period of the nearest second preset exposure and intersects the exposure period of the first preset exposure. Referring to FIG. 15, its period has no intersection with the exposure period of the nearest second preset exposure, and the exposure period of the first preset exposure is a subset of the supplementary lighting period. FIGS. 13 to 15 are only examples; the ordering of the first and second preset exposures is not limited to these examples.
In other embodiments, when the image sensor 01 performs multiple exposures in rolling shutter mode, for any near-infrared supplementary lighting operation, the period of the supplementary lighting has no intersection with the exposure period of the nearest second preset exposure. Furthermore, the start time of the supplementary lighting is not earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time is not later than the exposure end time of the first row of the effective image in the first preset exposure. Alternatively, the start time of the supplementary lighting is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time is not earlier than the exposure start time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure. Alternatively, the start time of the supplementary lighting is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time is not earlier than the exposure end time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
For example, FIG. 16 shows the first of the above alternatives, FIG. 17 the second, and FIG. 18 the third; in each case the period of the near-infrared supplementary lighting has no intersection with the exposure period of the nearest second preset exposure. FIGS. 16 to 18 are only examples; the ordering of the first and second preset exposures is not limited to these examples.
The multiple exposures may include odd-numbered and even-numbered exposures; thus the first and second preset exposures may include, but are not limited to, the following manners:
In a first possible implementation, the first preset exposure is one of the odd-numbered exposures and the second preset exposure is one of the even-numbered exposures, so the multiple exposures may include first and second preset exposures arranged in odd-even order. For example, the 1st, 3rd, 5th, and other odd-numbered exposures are all first preset exposures, and the 2nd, 4th, 6th, and other even-numbered exposures are all second preset exposures.
In a second possible implementation, the first preset exposure is one of the even-numbered exposures and the second preset exposure is one of the odd-numbered exposures, so the multiple exposures may include first and second preset exposures arranged in odd-even order. For example, the 1st, 3rd, 5th, and other odd-numbered exposures are all second preset exposures, and the 2nd, 4th, 6th, and other even-numbered exposures are all first preset exposures.
In a third possible implementation, the first preset exposure is one of specified odd-numbered exposures and the second preset exposure is one of the exposures other than the specified odd-numbered exposures; that is, the second preset exposure may be an odd-numbered or an even-numbered exposure among the multiple exposures.
In a fourth possible implementation, the first preset exposure is one of specified even-numbered exposures and the second preset exposure is one of the exposures other than the specified even-numbered exposures; that is, the second preset exposure may be an odd-numbered or an even-numbered exposure among the multiple exposures.
In a fifth possible implementation, the first preset exposure is one exposure in a first exposure sequence and the second preset exposure is one exposure in a second exposure sequence.
In a sixth possible implementation, the first preset exposure is one exposure in the second exposure sequence and the second preset exposure is one exposure in the first exposure sequence.
The multiple exposures include multiple exposure sequences; the first and second exposure sequences are the same exposure sequence or two different exposure sequences among them; each exposure sequence includes N exposures, which include one first preset exposure and N-1 second preset exposures, or one second preset exposure and N-1 first preset exposures, where N is a positive integer greater than 2.
For example, each exposure sequence includes 3 exposures, which may include one first preset exposure and two second preset exposures; thus the 1st exposure of each sequence may be the first preset exposure and the 2nd and 3rd the second, i.e., each sequence may be expressed as: first preset exposure, second preset exposure, second preset exposure. Alternatively, the 3 exposures may include one second preset exposure and two first preset exposures; thus the 1st exposure may be the second preset exposure and the 2nd and 3rd the first, i.e., each sequence may be expressed as: second preset exposure, first preset exposure, first preset exposure.
Only six possible implementations of the first and second preset exposures are provided above; practical applications are not limited to them, and this embodiment does not limit this.
In some embodiments, the filter assembly 03 further includes a second filter and a switching component, and both the first filter 031 and the second filter are connected to the switching component. The switching component is configured to switch the second filter to the light-entry side of the image sensor 01; after the switch, the second filter passes light in the visible waveband and blocks light in the near-infrared waveband, and the image sensor 01 is configured to generate and output a third image signal through exposure.
It should be noted that switching the second filter to the light-entry side of the image sensor 01 can also be understood as the second filter replacing the first filter 031 at that position. After the switch, the first light supplement device 021 may be in the off state or the on state.
In summary, when the visible-light intensity of ambient light is weak, e.g., at night, stroboscopic supplementary lighting by the first light supplement device 021 enables the image sensor 01 to generate and output a first image signal containing near-infrared brightness information and a second image signal containing visible-light brightness information; since both are acquired by the same image sensor 01, their viewpoints are the same, and complete external scene information can be acquired through them. When the visible-light intensity is strong, e.g., in daytime, when the proportion of near-infrared light is high and the color reproduction of captured images is poor, a third image signal containing visible-light brightness information can be generated and output by the image sensor 01, so images with good color reproduction can be captured even in daytime. Thus, regardless of visible-light intensity, i.e., day or night, the true color information of the external scene can be acquired efficiently and simply, improving the flexibility of the image acquisition unit and enabling convenient compatibility with other image acquisition devices.
This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that near-infrared supplementary lighting is performed during the first preset exposure to generate the first image signal and is not performed during the second preset exposure, generating the second image signal. This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost; that is, two different images can be acquired through a single image sensor, which makes the image acquisition device simpler and the acquisition of the two signals more efficient. Moreover, since the two signals are generated and output by the same image sensor, the viewpoint corresponding to the first image signal is the same as that corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the two signals, with no misalignment between the images generated from them caused by differing viewpoints.
2. Image processing unit
In this embodiment of the application, after the image acquisition unit outputs the first and second image signals, the image processing unit 04 may process them to obtain the fused image.
It should be noted that, when the image sensor 01 in the image acquisition unit is a sensor arranged in a Bayer pattern, each pixel of the sensor can capture only one color value and lacks the other two; in this case, the image sensor 01 will generate and output a mosaic image. Moreover, since images output by the image sensor 01 usually contain noise, when the image processing unit 04 receives the first and second image signals it may first preprocess them and then fuse the preprocessed images.
In some embodiments, as shown in FIG. 19, the image processing unit 04 may include a preprocessing unit 041 and an image fusion unit 042. The preprocessing unit 041 is configured to preprocess the first and second image signals and output a first preprocessed image and a second preprocessed image. The image fusion unit 042 is configured to fuse the two preprocessed images to obtain the fused image.
In one possible implementation, as shown in FIG. 20, the preprocessing unit 041 may include an image repair unit 0411 and an image noise reduction unit 0412. The image repair unit 0411 is configured to repair the first image signal to obtain a first repaired image and repair the second image signal to obtain a second repaired image, where the first repaired image is a grayscale image and the second is a color image. The image noise reduction unit 0412 is configured to denoise the two repaired images to obtain the first and second preprocessed images.
Based on the foregoing, when the image sensor 01 is a Bayer-pattern sensor, the output image is a mosaic image. Accordingly, in this embodiment, the image repair unit 0411 may demosaic the first and second image signals respectively to obtain the first and second repaired images.
It should be noted that, when demosaicing the first and second image signals, the image repair unit 0411 may use methods such as bilinear interpolation or adaptive interpolation, which are not detailed in this embodiment.
In addition, besides demosaicing, the image repair unit 0411 may also perform repair processing such as black level correction, dead pixel correction, and gamma correction on the two image signals, which is not limited in this embodiment.
After the image repair unit 0411 obtains the two repaired images, the image noise reduction unit 0412 may perform joint noise reduction on them to obtain the first and second preprocessed images.
In some possible implementations, the image noise reduction unit 0412 may include a temporal noise reduction unit, configured to perform motion estimation according to the first and second repaired images to obtain a motion estimation result, perform temporal filtering on the first repaired image according to the result to obtain the first preprocessed image, and perform temporal filtering on the second repaired image according to the result to obtain the second preprocessed image.
It should be noted that the temporal noise reduction unit may include a motion estimation unit and a temporal filtering unit.
In some examples, the motion estimation unit may be configured to generate a first frame difference image according to the first repaired image and a first historical noise-reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, where the first historical noise-reduction image refers to an image obtained by denoising any one of the N frames preceding the first repaired image; the temporal filtering unit is configured to perform temporal filtering on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and on the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image.
Illustratively, the motion estimation unit may take the difference between the pixel value of each pixel in the first repaired image and the pixel value of the corresponding pixel in the first historical noise-reduction image to obtain an original frame difference image, and use it as the first frame difference image.
Alternatively, the motion estimation unit may take the difference between the pixel values of corresponding pixels of the first repaired image and the first historical noise-reduction image to obtain the original frame difference image, and then process it to obtain the first frame difference image, where the processing may refer to spatial smoothing or block quantization of the original frame difference image.
After obtaining the first frame difference image, the motion estimation unit may determine the first temporal filtering strength of each pixel according to each pixel of the first frame difference image and the multiple first set frame difference thresholds, where each pixel of the first frame difference image corresponds to one first set frame difference threshold, and the thresholds of different pixels may be the same or different. In one possible implementation, the threshold of each pixel may be set by an external user. In another possible implementation, the motion estimation unit may take the difference between the previous frame of the first repaired image and the first historical noise-reduction image to obtain a first noise intensity image, and determine the first set frame difference threshold of the pixel at the corresponding position in the first frame difference image according to the noise intensity of each pixel in the first noise intensity image. Of course, the threshold of each pixel may also be determined in other ways, which is not limited in this embodiment.
For each pixel in the first frame difference image, the motion estimation unit may determine the first temporal filtering strength of the corresponding pixel through formula (1) below according to the frame difference of the pixel and the first set frame difference threshold corresponding to the pixel, where the frame difference of each pixel in the first frame difference image is the pixel value of that pixel.
[Formula (1) is published as an image in the original document (PCTCN2020091917-appb-000001) and is not reproduced here.]
where (x, y) is the position of the pixel in the image; α_nir(x, y) denotes the first temporal filtering strength of the pixel with coordinates (x, y); dif_nir(x, y) denotes the frame difference of the pixel; and dif_thr_nir(x, y) denotes the first set frame difference threshold corresponding to the pixel.
It should be noted that, for each pixel in the first frame difference image, the smaller the frame difference of the pixel relative to the first set frame difference threshold, the more the pixel tends to be stationary, i.e., the smaller its motion level. From formula (1) it follows that, for any pixel, the smaller its frame difference relative to the first set frame difference threshold, the larger the first temporal filtering strength of the pixel at the same position. The motion level indicates the intensity of motion: the larger the level, the more intense the motion. The first temporal filtering strength may take a value between 0 and 1.
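Illustratively, since formula (1) itself is published only as an image, the following sketch assumes one mapping with exactly the properties just described (a per-pixel strength in [0, 1] that falls as the frame difference grows relative to its threshold); the function names are illustrative and the code is not part of the original disclosure.

    import numpy as np

    def first_temporal_strength(repaired_nir, history_nir, dif_thr_nir):
        # Original frame difference image: per-pixel absolute difference between
        # the first repaired image and the first historical noise-reduction image.
        dif_nir = np.abs(repaired_nir - history_nir)
        # Assumed form of formula (1): strength decreases linearly with the ratio
        # of the frame difference to its per-pixel threshold, clipped to [0, 1].
        alpha_nir = 1.0 - dif_nir / np.maximum(dif_thr_nir, 1e-6)
        return np.clip(alpha_nir, 0.0, 1.0)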
After determining the first temporal filtering strength of each pixel in the first repaired image, the temporal filtering unit may directly perform temporal filtering on the first and second repaired images according to the first temporal filtering strength, thereby obtaining the first and second preprocessed images.
It should be noted that, when the image quality of the first repaired image is clearly better than that of the second, since the first repaired image is a near-infrared image with a high signal-to-noise ratio, using the first temporal filtering strength of each pixel of the first repaired image to temporally filter the second repaired image can distinguish noise from valid information in the image more accurately, thereby avoiding loss of image detail and image smearing in the denoised image.
It should be noted that, in some possible cases, the motion estimation unit may generate at least one first frame difference image according to the first repaired image and at least one first historical noise-reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the at least one frame difference image and the multiple first set frame difference thresholds corresponding to each frame difference image.
The at least one historical noise-reduction image refers to images obtained by denoising the N frames preceding the first repaired image. For each first historical noise-reduction image, the motion estimation unit may determine the corresponding first frame difference image according to that image and the first repaired image with reference to the foregoing implementation. Then, according to each first frame difference image and the multiple first set frame difference thresholds corresponding to it, the unit may determine the temporal filtering strength of each pixel in each first frame difference image with reference to the foregoing implementation. The unit may then fuse the temporal filtering strengths of corresponding pixels in the first frame difference images to obtain the first temporal filtering strength of each pixel, corresponding pixels meaning pixels at the same position. Alternatively, for the pixels at the same position in the first frame difference images, the motion estimation unit may select, from the temporal filtering strengths of the at least one pixel, the strength representing the highest motion level, and use the selected strength as the first temporal filtering strength of the pixel at the corresponding position in the first repaired image.
In other examples, the motion estimation unit may generate a first frame difference image according to the first repaired image and the first historical noise-reduction image, and determine the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and the multiple first set frame difference thresholds, the first historical noise-reduction image referring to an image obtained by denoising any one of the N frames preceding the first repaired image; the motion estimation unit is further configured to generate a second frame difference image according to the second repaired image and a second historical noise-reduction image, and determine the second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, the second historical noise-reduction image referring to an image obtained by denoising any one of the N frames preceding the second repaired image; the motion estimation unit is further configured to determine the joint temporal filtering strength of each pixel according to the first temporal filtering strength of each pixel in the first repaired image and the second temporal filtering strength of each pixel in the second repaired image; the temporal filtering unit is configured to perform temporal filtering on the first repaired image according to the first temporal filtering strength or the joint temporal filtering strength of each pixel to obtain the first preprocessed image, and on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image.
That is, the motion estimation unit can determine not only the first temporal filtering strength of each pixel in the first repaired image through the foregoing implementation but also the second temporal filtering strength of each pixel in the second repaired image.
When determining the second temporal filtering strength of each pixel, the motion estimation unit may first take the difference between the pixel value of each pixel in the second repaired image and that of the corresponding pixel in the second historical noise-reduction image to obtain the second frame difference image, where the first and second repaired images are aligned, and the second repaired image and the second historical noise-reduction image are also aligned.
After obtaining the second frame difference image, the motion estimation unit may determine the second temporal filtering strength of each pixel according to each pixel of the second frame difference image and the multiple second set frame difference thresholds, where each pixel of the second frame difference image corresponds to one second set frame difference threshold, i.e., the multiple second set frame difference thresholds correspond one-to-one to the pixels of the second frame difference image, and the thresholds of different pixels may be the same or different. In one possible implementation, the threshold of each pixel may be set by an external user. In another possible implementation, the motion estimation unit may take the difference between the previous frame of the second repaired image and the second historical noise-reduction image to obtain a second noise intensity image, and determine the second set frame difference threshold of the pixel at the corresponding position in the second frame difference image according to the noise intensity of each pixel in the second noise intensity image. Of course, the thresholds may also be determined in other ways, which is not limited in this embodiment.
For each pixel in the second frame difference image, the motion estimation unit may determine the second temporal filtering strength of the corresponding pixel through formula (2) below according to the frame difference of the pixel and the second set frame difference threshold corresponding to the pixel, where the frame difference of each pixel in the second frame difference image is the pixel value of that pixel.
[Formula (2) is published as an image in the original document (PCTCN2020091917-appb-000002) and is not reproduced here.]
where α_vis(x, y) denotes the second temporal filtering strength of the pixel with coordinates (x, y), dif_vis(x, y) denotes the frame difference of the pixel, and dif_thr_vis(x, y) denotes the second set frame difference threshold corresponding to the pixel.
It should be noted that, for each pixel in the second frame difference image, the smaller the frame difference of the pixel relative to the second set frame difference threshold, the more the pixel tends to be stationary, i.e., the smaller its motion level. From formula (2) it follows that, for any pixel, the smaller its frame difference relative to the second set frame difference threshold, the larger the second temporal filtering strength of the pixel at the same position. In summary, in this embodiment, the smaller the motion level of a pixel, the larger the value of the corresponding second temporal filtering strength, which may take a value between 0 and 1.
After determining the first and second temporal filtering strengths of each pixel, the motion estimation unit may weight them to obtain the joint temporal weight of each pixel. The joint temporal weight determined for each pixel is then the motion estimation result of the first and second repaired images. Since the first and second repaired images are aligned, the first and second temporal filtering strengths of each pixel actually refer to the temporal filtering strengths for the same pixel position in the two images.
Illustratively, the motion estimation unit may weight the first and second temporal filtering strengths of each pixel through formula (3) below to obtain the joint temporal filtering strength of each pixel.
[Formula (3) is published as an image in the original document (PCTCN2020091917-appb-000003 to -000005) and is not reproduced here.]
where Ω denotes the neighborhood centered on the pixel with coordinates (x, y), i.e., the local image region centered on that pixel; (x+i, y+j) denotes the pixel coordinates within the local image region; the first strength term denotes the first temporal filtering strengths within the local image region centered on (x, y); the second strength term denotes the local second temporal filtering strengths; and α_fus(x, y) denotes the joint temporal filtering strength of the pixel with coordinates (x, y).
It should be noted that the first temporal filtering strength may represent the motion level of a pixel in the first repaired image, and the second in the second repaired image, while the joint temporal filtering strength determined as above fuses the two; that is, it considers both the motion tendency the pixel exhibits in the first repaired image and the motion tendency it exhibits in the second. Thus, compared with either strength alone, the joint temporal filtering strength characterizes the motion tendency of a pixel more accurately, so that subsequent temporal filtering with the joint strength can remove image noise more effectively and alleviate problems such as image smearing caused by misjudging the motion level of a pixel.
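Illustratively, formula (3) is likewise published only as an image; the sketch below assumes one plausible neighborhood fusion consistent with the text (the two strengths are pooled over the local region Ω, and the smaller, i.e., more "moving", result is kept so that motion seen in either image is not underestimated). It is not part of the original disclosure.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def joint_temporal_strength(alpha_nir, alpha_vis, radius=2):
        size = 2 * radius + 1                             # neighborhood Omega is (2r+1)^2
        local_nir = uniform_filter(alpha_nir, size=size)  # mean strength over Omega
        local_vis = uniform_filter(alpha_vis, size=size)
        # Assumption: keep the stricter (smaller) strength so that motion seen in
        # either repaired image weakens the temporal filtering at that pixel.
        return np.minimum(local_nir, local_vis)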
After determining the joint temporal filtering strength of each pixel, the temporal filtering unit may perform temporal filtering on the first and second repaired images respectively according to the joint strength, thereby obtaining the first and second preprocessed images.
Illustratively, the temporal filtering unit may, according to the joint temporal filtering strength of each pixel, perform temporal weighting on each pixel of the first repaired image and the first historical noise-reduction image through formula (4) below to obtain the first preprocessed image, and perform temporal weighting on each pixel of the second repaired image and the second historical noise-reduction image through formula (5) below to obtain the second preprocessed image.
[Formulas (4) and (5) are published as images in the original document (PCTCN2020091917-appb-000006 to -000011) and are not reproduced here.]
where, in formula (4), the output symbol denotes the pixel with coordinates (x, y) in the first preprocessed image, the history symbol denotes the pixel with coordinates (x, y) in the first historical noise-reduction image, α_fus(x, y) denotes the joint temporal filtering strength of the pixel, and I_nir(x, y, t) denotes the pixel with coordinates (x, y) in the first repaired image; in formula (5), the output symbol denotes the pixel with coordinates (x, y) in the second preprocessed image, the history symbol denotes the pixel with coordinates (x, y) in the second historical noise-reduction image, and I_vis(x, y, t) denotes the pixel with coordinates (x, y) in the second repaired image.
Alternatively, considering that the first repaired image is a near-infrared signal with a high signal-to-noise ratio, the temporal filtering unit may also temporally filter the first repaired image according to the first temporal filtering strength of each pixel to obtain the near-infrared image, and temporally filter the second repaired image according to the joint temporal filtering strength of each pixel to obtain the visible-light image.
It should be noted that, from the foregoing relationship between temporal filtering strength and motion level, in this embodiment, regions of the first and second repaired images with more intense motion may be filtered with smaller temporal filtering strengths.
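Illustratively, formulas (4) and (5) are published as images; a common recursive blend with the behavior described above (static pixels lean on the history frame, moving pixels keep the current frame) is assumed in this sketch, which is not part of the original disclosure.

    def temporal_filter(current, history, alpha):
        # alpha near 1 (static): output dominated by the historical noise-reduction
        # image; alpha near 0 (moving): output dominated by the current frame.
        return alpha * history + (1.0 - alpha) * current

    # Usage sketch, per the alternative described above:
    # pre_nir = temporal_filter(I_nir, I_nir_history, alpha_nir)   # formula (4)
    # pre_vis = temporal_filter(I_vis, I_vis_history, alpha_fus)   # formula (5)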
In other possible implementations, the image noise reduction unit 0412 may include a spatial noise reduction unit, configured to perform edge estimation according to the first and second repaired images to obtain an edge estimation result, perform spatial filtering on the first repaired image according to the result to obtain the first preprocessed image, and perform spatial filtering on the second repaired image according to the result to obtain the second preprocessed image.
It should be noted that the spatial noise reduction unit may include an edge estimation unit and a spatial filtering unit.
In some examples, the edge estimation unit is configured to determine the first spatial filtering strength of each pixel in the first repaired image; the spatial filtering unit is configured to perform spatial filtering on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and on the second repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
Illustratively, the edge estimation unit may determine the first spatial filtering strength of each pixel according to the differences between the pixel and the other pixels within its neighborhood, and may generate the first spatial filtering strength of each pixel through formula (6) below.
[Formula (6) is published as an image in the original document (PCTCN2020091917-appb-000012) and is not reproduced here.]
where Ω denotes the neighborhood centered on the pixel with coordinates (x, y), i.e., the local image region centered on that pixel; (x+i, y+j) denotes the pixel coordinates within the local image region; img_nir(x, y) denotes the pixel value of the pixel with coordinates (x, y) in the first repaired image; δ1 and δ2 denote Gaussian standard deviations; and the remaining symbol (PCTCN2020091917-appb-000013) denotes the first spatial filtering strength determined, within the local image region, from the difference between pixel (x, y) and pixel (x+i, y+j).
It should be noted that each pixel's neighborhood contains multiple pixels, so, for any pixel, multiple first spatial filtering strengths can be determined according to the differences between the pixel and the pixels in its local image region. After determining the multiple first spatial filtering strengths of each pixel, the spatial filtering unit may perform spatial filtering on the first and second repaired images according to them, thereby obtaining the first and second preprocessed images.
In other examples, the edge estimation unit is configured to determine the first spatial filtering strength of each pixel in the first repaired image and the second spatial filtering strength of each pixel in the second repaired image; to convolve the first repaired image to obtain a first texture image and convolve the second repaired image to obtain a second texture image; and to determine the joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first texture image, and the second texture image. The spatial filtering unit is configured to perform spatial filtering on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
That is, the edge estimation unit can determine not only the first spatial filtering strength of each pixel in the first repaired image through the foregoing implementation but also the second spatial filtering strength of each pixel in the second repaired image.
When determining the second spatial filtering strength of each pixel, the edge estimation unit may determine it according to the differences between each pixel of the second repaired image and the other pixels in its neighborhood, and may generate the second spatial filtering strength of each pixel through formula (7) below.
[Formula (7) is published as an image in the original document (PCTCN2020091917-appb-000014) and is not reproduced here.]
where Ω denotes the neighborhood centered on the pixel with coordinates (x, y), i.e., the local image region centered on that pixel; (x+i, y+j) denotes the pixel coordinates within the local image region; img_vis(x, y) denotes the pixel value of the pixel with coordinates (x, y) in the second repaired image; δ1 and δ2 denote Gaussian standard deviations; and the remaining symbol (PCTCN2020091917-appb-000015) denotes the second spatial filtering strength of the pixel (x, y) determined, within the local image region, from its difference from pixel (x+i, y+j). Similarly, since each pixel's neighborhood contains multiple pixels, multiple second spatial filtering strengths can be obtained for each pixel through the foregoing method.
From formulas (6) and (7), for the local image region centered on the pixel with coordinates (x, y), the smaller the differences between the pixel and the pixels in that region, the larger the multiple spatial filtering strengths corresponding to the pixel; that is, the magnitude of a pixel's spatial filtering strength is negatively correlated with the magnitude of the differences between the pixel and the pixels in its local image region.
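Illustratively, given the ingredients named for formulas (6) and (7) (Gaussian standard deviations δ1 and δ2, the spatial offset, and the intensity difference between a pixel and its neighbor), a bilateral-style weight is assumed in the sketch below; the exact published form is an image, so this is not part of the original disclosure.

    import numpy as np

    def spatial_strength(img, i, j, delta1, delta2):
        # Neighbor image shifted by the offset (i, j) inside Omega.
        shifted = np.roll(np.roll(img, i, axis=0), j, axis=1)
        w_space = np.exp(-(i * i + j * j) / (2.0 * delta1 ** 2))         # offset term
        w_range = np.exp(-((img - shifted) ** 2) / (2.0 * delta2 ** 2))  # difference term
        return w_space * w_range  # larger when the pixel and its neighbor are similar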
After determining the first and second spatial filtering strengths of each pixel, the edge estimation unit may use the Sobel edge detection operator to convolve the first and second repaired images respectively to obtain the first and second texture images, and, using these as weights, weight the multiple first and second spatial filtering strengths of each pixel to generate the multiple joint spatial filtering strengths of each pixel.
Illustratively, the Sobel edge detection operator is as shown in formula (8) below, and the edge estimation unit may generate the joint spatial filtering strength through formula (9) below.
sobel_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], sobel_V = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]      (8)
[Formula (9) is published as an image in the original document (PCTCN2020091917-appb-000017) and is not reproduced here.]
where sobel_H denotes the Sobel edge detection operator in the horizontal direction and sobel_V the operator in the vertical direction; β_fus(x+i, y+j) denotes any joint spatial filtering strength of the pixel with coordinates (x, y) within its neighborhood Ω; the symbol PCTCN2020091917-appb-000018 denotes the texture information of the pixel with coordinates (x, y) in the first texture image; and the symbol PCTCN2020091917-appb-000019 denotes the texture information of the pixel with coordinates (x, y) in the second texture image.
It should be noted that, when determining the joint spatial filtering strength, corresponding processing is performed through the edge detection operator; therefore, the smaller the multiple joint spatial filtering strengths finally obtained for a pixel, the larger the differences between the pixel and the other pixels in its local image region. It can thus be seen that, in this embodiment, in regions of the image where the brightness differences between adjacent pixels are larger the joint spatial filtering strength is smaller, while in regions where the differences are smaller it is relatively larger. That is, when performing spatial filtering in this embodiment, a weaker spatial filtering strength is used at edges and a stronger one at non-edges, thereby improving the noise reduction effect.
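Illustratively, formula (8) is the standard pair of Sobel kernels, while formula (9) is published only as an image; the sketch below assumes a texture-weighted combination of the two per-offset strengths, which matches the described behavior (weaker filtering at edges). It is not part of the original disclosure.

    import numpy as np
    from scipy.ndimage import convolve

    SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_V = SOBEL_H.T

    def texture(img):
        # Texture image: Sobel gradient magnitude (one common reading of (8)).
        return np.abs(convolve(img, SOBEL_H)) + np.abs(convolve(img, SOBEL_V))

    def joint_spatial_strength(beta_nir, beta_vis, tex_nir, tex_vis):
        # Assumed form of (9): trust the per-offset strengths from the image
        # whose local texture response is stronger.
        w = tex_nir / np.maximum(tex_nir + tex_vis, 1e-6)
        return w * beta_nir + (1.0 - w) * beta_vis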
After obtaining the joint spatial filtering strength, the spatial filtering unit may perform spatial weighting on the first and second repaired images respectively according to it, thereby obtaining the first and second preprocessed images.
Alternatively, considering that the first repaired image is a near-infrared image with a high signal-to-noise ratio, when its quality is clearly better than that of the second repaired image there is no need to use the edge information of the second repaired image to assist the spatial filtering of the first. In this case, the spatial filtering unit may perform spatial filtering on the first repaired image according to the first spatial filtering strength of each pixel, and on the second repaired image according to the joint spatial filtering strength of each pixel.
Illustratively, the spatial filtering unit may perform spatial filtering on each pixel of the first repaired image through formula (10) below according to the first spatial filtering strength of each pixel to obtain the first preprocessed image, and weight each pixel of the second repaired image through formula (11) below according to the joint spatial filtering strength of each pixel to obtain the second preprocessed image.
[Formulas (10) and (11) are published as images in the original document (PCTCN2020091917-appb-000020 to -000023) and are not reproduced here.]
where the output symbol of formula (10) denotes the pixel with coordinates (x, y) in the first preprocessed image; I_nir(x+i, y+j) denotes the pixels within the neighborhood of the pixel (x, y) in the first repaired image; β_nir(x+i, y+j) is the first spatial filtering strength of the pixel (x, y) within that neighborhood; Ω denotes the neighborhood centered on the pixel (x, y); the output symbol of formula (11) denotes the pixel with coordinates (x, y) in the second preprocessed image; I_vis(x+i, y+j) denotes the pixels within the neighborhood of the pixel (x, y) in the second repaired image; and β_fus(x+i, y+j) is the joint spatial filtering strength of the pixel (x, y) within that neighborhood.
It is worth noting that, in this embodiment, the image noise reduction unit 0412 may also include both the temporal and spatial noise reduction units described above. In this case, with reference to the foregoing implementations, the temporal noise reduction unit may first temporally filter the first and second repaired images to obtain the first and second temporal noise-reduction images, after which the spatial noise reduction unit spatially filters them to obtain the first and second preprocessed images. Alternatively, the spatial noise reduction unit may first spatially filter the first and second repaired images to obtain the first and second spatial noise-reduction images, after which the temporal noise reduction unit temporally filters them to obtain the first and second preprocessed images.
After the preprocessing unit 041 obtains the first and second preprocessed images, the image fusion unit 042 may fuse them to obtain the fused image.
In one possible implementation, the image fusion unit 042 may include a first fusion unit configured to fuse the first and second preprocessed images through a first fusion process to obtain the fused image.
It should be noted that possible implementations of the first fusion process may include the following:
In a first possible implementation, the first fusion unit may traverse each pixel position and fuse the RGB color vectors of the first and second preprocessed images at the same pixel position according to a preset fusion weight for each pixel position.
Illustratively, the first fusion unit may generate the fused image through model (12) below.
img = img1 * (1 - w) + img2 * w      (12)
where img denotes the fused image, img1 the first preprocessed image, img2 the second preprocessed image, and w the fusion weight. It should be noted that the value range of the fusion weight is (0, 1); for example, it may be 0.5.
It should be noted that, in model (12), the fusion weight may also be obtained by processing the first and second preprocessed images.
Illustratively, the first fusion unit may perform edge extraction on the first preprocessed image to obtain a first edge image, perform edge extraction on the second preprocessed image to obtain a second edge image, and determine the fusion weight of each pixel position according to the first and second edge images.
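Illustratively, model (12) maps directly to a per-pixel blend. In the sketch below the edge-based weight map is an assumption, since the text does not specify how the two edge images determine w; the code is not part of the original disclosure.

    import numpy as np

    def fuse_weighted(img1, img2, w):
        # Model (12): img = img1*(1-w) + img2*w; w may be a scalar (e.g., 0.5)
        # or a per-pixel map in (0, 1).
        return img1 * (1.0 - w) + img2 * w

    # Assumed edge-based weight map: lean toward the image with the stronger
    # local edge response (edge images e1, e2 from any edge extractor).
    # w_map = e2 / np.maximum(e1 + e2, 1e-6)
    # fused = fuse_weighted(img1, img2, w_map[..., None])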
In a second possible implementation, the first fusion unit may process the luminance signal of the second preprocessed image through a low-pass filter to obtain a low-frequency signal, process the first preprocessed image through a high-pass filter to obtain a high-frequency signal, add the low-frequency and high-frequency signals to obtain a fused luminance signal, and finally synthesize the fused luminance signal with the chrominance signal of the second preprocessed image to obtain the fused image, as sketched below.
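Illustratively, the second fusion manner can be sketched as follows; Gaussian filtering and the sigma value are assumptions, since the text only names low-pass and high-pass filters, and the code is not part of the original disclosure.

    from scipy.ndimage import gaussian_filter

    def fuse_frequency(nir_luma, vis_luma, sigma=3.0):
        low = gaussian_filter(vis_luma, sigma)              # low-pass of the visible luminance
        high = nir_luma - gaussian_filter(nir_luma, sigma)  # high-pass of the NIR image
        return low + high                                   # fused luminance signal

    # The fused luminance is then synthesized with the chrominance of the second
    # preprocessed image (e.g., in a YUV-like color space) to form the fused image.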
In a third possible implementation, the first fusion unit may perform color space conversion on the first preprocessed image to separate a first luminance image and a first color image, and on the second preprocessed image to separate a second luminance image and a second color image; perform pyramid decomposition on the two luminance images to obtain multiple base and detail images with different scale information; weight the multiple base and detail images according to the relative magnitudes of the information entropy or gradients of the two luminance images; and reconstruct a fused luminance image. The first fusion unit may then select, from the first and second color images, the color image with higher color accuracy as the color component of the fused image, and adjust the color component of the fused image according to the difference between the fused luminance image and the luminance image corresponding to the selected color image, improving color accuracy.
After obtaining the fused image, the image processing unit 04 may output it directly.
In other possible implementations, the image fusion unit 042 may include a second fusion unit and a third fusion unit, where the second fusion unit is configured to fuse the first and second preprocessed images through a second fusion process to obtain a first target image and the third fusion unit through a third fusion process to obtain a second target image; the fused image includes the first target image and the second target image.
It should be noted that the second and third fusion processes may differ. For example, they may be any two of the three possible implementations of the first fusion process described above; or the second fusion process is any one of the three and the third is a processing manner other than the three; or the third is any one of the three and the second is a manner other than the three.
The second and third fusion processes may also be the same. In this case, the fusion parameter of the second fusion process is a first fusion parameter and that of the third fusion process is a second fusion parameter, the two parameters being different. For example, both processes may be the first possible implementation of the first fusion process described above, but with different fusion weights.
By fusing the two preprocessed images with different fusion processes in different fusion units, or with the same fusion process but different fusion parameters, fused images of two different styles can be obtained. The image processing unit can output different fused images to different subsequent units, thereby meeting the requirements of different subsequent operations on the fused image.
3. Encoding compression unit 06
When the image processing unit 04 outputs one fused image, the encoding compression unit 06 may compress it and output it for subsequent display and storage. Illustratively, the unit may compress the fused image using the H.264 standard.
When the image processing unit 04 outputs two fused images, i.e., the first and second target images, the encoding compression unit 06 may compress the first target image and output it for subsequent display and storage.
4. Intelligent analysis unit 07
When the image processing unit 04 outputs one fused image, the intelligent analysis unit 07 may analyze it and output the analysis result for subsequent use.
It should be noted that the intelligent analysis unit 07 may be a neural network computing unit in an SoC (System on Chip), which uses a deep learning network to analyze the first fused image; for example, Fast R-CNN may be used.
When the image processing unit 04 outputs two fused images, i.e., the first and second target images, the intelligent analysis unit 07 may analyze the second target image and output the analysis result for subsequent use.
This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary lighting is performed and the second image signal is generated through the second preset exposure when it is not. This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost; that is, two different image signals can be acquired through a single image sensor and then fused, which makes the image fusion device simpler and the fusion of the two signals more efficient. Moreover, since the two signals are generated and output by the same image sensor, the viewpoint corresponding to the first image signal is the same as that corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the two signals, with no misalignment between the images generated from them caused by differing viewpoints. As a result, the fused image subsequently obtained by processing the two image signals is of higher quality.
Based on the above description of the image fusion device, the device can generate and output the first and second image signals through multiple exposures and fuse them to obtain the fused image. Next, the image acquisition method is described based on the image acquisition device provided by the embodiments shown in FIGS. 1-20 above. Referring to FIG. 21, the method includes:
Step 2101: Perform near-infrared supplementary lighting through the first light supplement device included in the light supplementer, where near-infrared supplementary lighting is performed at least during part of the exposure period of the first preset exposure and not during the exposure period of the second preset exposure, the first and second preset exposures being two of the multiple exposures of the image sensor.
Step 2102: Pass visible light and part of the near-infrared light through the first filter included in the filter assembly.
Step 2103: Perform multiple exposures through the image sensor to generate and output the first and second image signals, the first image signal being an image generated according to the first preset exposure and the second an image generated according to the second preset exposure.
Step 2104: Process the first and second image signals through the image processing unit to obtain the fused image.
The image processing unit includes a preprocessing unit and an image fusion unit;
processing the first and second image signals through the image processing unit to obtain the fused image includes:
preprocessing the first and second image signals through the preprocessing unit and outputting the first and second preprocessed images;
fusing the first and second preprocessed images through the image fusion unit to obtain the fused image.
In one possible implementation, the preprocessing unit includes an image repair unit and an image noise reduction unit;
preprocessing the first and second image signals through the preprocessing unit and outputting the first and second preprocessed images includes:
repairing the first image signal through the image repair unit to obtain the first repaired image and repairing the second image signal to obtain the second repaired image, the first repaired image being a grayscale image and the second a color image;
denoising the first repaired image through the image noise reduction unit to obtain the first preprocessed image and denoising the second repaired image to obtain the second preprocessed image.
In one possible implementation, the image noise reduction unit includes a temporal noise reduction unit or a spatial noise reduction unit;
denoising the first repaired image to obtain the first preprocessed image and the second repaired image to obtain the second preprocessed image includes:
performing, through the temporal noise reduction unit, motion estimation according to the first and second repaired images to obtain the motion estimation result, performing temporal filtering on the first repaired image according to the result to obtain the first preprocessed image, and performing temporal filtering on the second repaired image according to the result to obtain the second preprocessed image; or
performing, through the spatial noise reduction unit, edge estimation according to the first and second repaired images to obtain the edge estimation result, performing spatial filtering on the first repaired image according to the result to obtain the first preprocessed image, and performing spatial filtering on the second repaired image according to the result to obtain the second preprocessed image.
In one possible implementation, the temporal noise reduction unit includes a motion estimation unit;
performing, through the temporal noise reduction unit, motion estimation according to the first and second repaired images to obtain the motion estimation result includes:
generating, through the motion estimation unit, a first frame difference image according to the first repaired image and a first historical noise-reduction image, and determining the first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, the first historical noise-reduction image referring to an image obtained by denoising any one of the N frames preceding the first repaired image, N being greater than or equal to 1, and the multiple first set frame difference thresholds corresponding one-to-one to the multiple pixels in the first frame difference image;
generating, through the motion estimation unit, a second frame difference image according to the second repaired image and a second historical noise-reduction image, and determining the second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, the second historical noise-reduction image referring to an image obtained by denoising any one of the N frames preceding the second repaired image, and the multiple second set frame difference thresholds corresponding one-to-one to the multiple pixels in the second frame difference image;
fusing, through the motion estimation unit, the first and second temporal filtering strengths of each pixel to obtain the joint temporal filtering strength of each pixel; or selecting, through the motion estimation unit, one of the first and second temporal filtering strengths of each pixel as the joint temporal filtering strength of the corresponding pixel;
where the motion estimation result includes the first temporal filtering strength of each pixel and/or the joint temporal filtering strength of each pixel.
In one possible implementation, the temporal noise reduction unit further includes a temporal filtering unit;
performing, through the temporal noise reduction unit, temporal filtering on the first repaired image according to the motion estimation result to obtain the first preprocessed image includes:
the temporal filtering unit temporally filters the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and temporally filters the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image; or
the temporal filtering unit temporally filters the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and temporally filters the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image; or
the temporal filtering unit temporally filters the first repaired image according to the joint temporal filtering strength of each pixel to obtain the first preprocessed image, and temporally filters the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image.
In one possible implementation, the first frame difference image refers to the original frame difference image obtained by taking the difference between the first repaired image and the first historical noise-reduction image; or the first frame difference image refers to a frame difference image obtained by processing the original frame difference image;
the second frame difference image refers to the original frame difference image obtained by taking the difference between the second repaired image and the second historical noise-reduction image; or the second frame difference image refers to a frame difference image obtained by processing the original frame difference image.
In one possible implementation, the first set frame difference thresholds corresponding to different pixels are different, or the first set frame difference thresholds corresponding to all pixels are the same;
the second set frame difference thresholds corresponding to different pixels are different, or the second set frame difference thresholds corresponding to all pixels are the same.
In one possible implementation, the multiple first set frame difference thresholds are determined according to the noise intensities of multiple pixels in a first noise intensity image, the first noise intensity image being determined from the pre-denoising image corresponding to the first historical noise-reduction image and the first historical noise-reduction image;
the multiple second set frame difference thresholds are determined according to the noise intensities of multiple pixels in a second noise intensity image, the second noise intensity image being determined from the pre-denoising image corresponding to the second historical noise-reduction image and the second historical noise-reduction image.
In one possible implementation, the spatial noise reduction unit includes an edge estimation unit;
performing, through the spatial noise reduction unit, edge estimation according to the first and second repaired images to obtain the edge estimation result includes:
determining, through the edge estimation unit, the first spatial filtering strength of each pixel in the first repaired image;
determining, through the edge estimation unit, the second spatial filtering strength of each pixel in the second repaired image;
performing, through the edge estimation unit, local information extraction on the first repaired image to obtain first local information and on the second repaired image to obtain second local information; and determining the joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first local information, and the second local information;
where the edge estimation result includes the first spatial filtering strength and/or the joint spatial filtering strength of each pixel.
In one possible implementation, the spatial noise reduction unit further includes a spatial filtering unit;
performing spatial filtering on the first repaired image according to the edge estimation result to obtain the first preprocessed image and on the second repaired image to obtain the second preprocessed image includes:
spatially filtering, through the spatial filtering unit, the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
spatially filtering, through the spatial filtering unit, the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
spatially filtering, through the spatial filtering unit, the first repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
In one possible implementation, the first local information and the second local information include at least one of local gradient information, local brightness information, and local information entropy.
In one possible implementation, the image noise reduction unit includes a temporal noise reduction unit and a spatial noise reduction unit;
denoising the first repaired image through the noise reduction unit to obtain the first preprocessed image and the second repaired image to obtain the second preprocessed image includes:
performing, through the temporal noise reduction unit, motion estimation according to the first and second repaired images to obtain the motion estimation result, performing temporal filtering on the first repaired image according to the result to obtain a first temporal noise-reduction image, and performing temporal filtering on the second repaired image according to the result to obtain a second temporal noise-reduction image;
performing, through the spatial noise reduction unit, edge estimation according to the first and second temporal noise-reduction images to obtain the edge estimation result, performing spatial filtering on the first temporal noise-reduction image according to the result to obtain the first preprocessed image, and performing spatial filtering on the second temporal noise-reduction image according to the result to obtain the second preprocessed image;
or,
performing, through the spatial noise reduction unit, edge estimation according to the first and second repaired images to obtain the edge estimation result, performing spatial filtering on the first repaired image according to the result to obtain a first spatial noise-reduction image, and performing spatial filtering on the second repaired image according to the result to obtain a second spatial noise-reduction image;
performing, through the temporal noise reduction unit, motion estimation according to the first and second spatial noise-reduction images to obtain the motion estimation result, performing temporal filtering on the first spatial noise-reduction image according to the result to obtain the first preprocessed image, and performing temporal filtering on the second spatial noise-reduction image according to the result to obtain the second preprocessed image.
In one possible implementation, the image fusion unit includes a first fusion unit, and the image fusion device further includes an encoding compression unit and an intelligent analysis unit;
fusing the first and second preprocessed images through the image fusion unit to obtain the fused image includes:
fusing, through the first fusion unit, the first and second preprocessed images using the first fusion process to obtain the fused image;
the method further includes:
encoding and compressing the fused image through the encoding compression unit and outputting the encoded and compressed image, which is used for display or storage;
analyzing the fused image through the intelligent analysis unit to obtain the analysis result and outputting the analysis result.
In one possible implementation, the image fusion unit includes a second fusion unit and a third fusion unit, and the image fusion device further includes an encoding compression unit and an intelligent analysis unit;
fusing the first and second preprocessed images through the image fusion unit to obtain the fused image includes:
fusing, through the second fusion unit, the first and second preprocessed images using the second fusion process to obtain the first target image;
fusing, through the third fusion unit, the first and second preprocessed images using the third fusion process to obtain the second target image;
the method further includes:
encoding and compressing the first target image through the encoding compression unit and outputting the encoded and compressed image, which is used for display or storage;
analyzing the second target image through the intelligent analysis unit to obtain the analysis result and outputting the analysis result.
In one possible implementation, the second fusion process and the third fusion process are different;
or, the second and third fusion processes are the same, but the fusion parameter of the second fusion process is a first fusion parameter and the fusion parameter of the third fusion process is a second fusion parameter, the first and second fusion parameters being different.
In one possible implementation, the light supplementer may further include a second light supplement device; in this case, before light in the visible waveband and light in the near-infrared waveband are passed through the first filter included in the filter assembly, visible-light supplementary lighting is also performed through the second light supplement device.
In one possible implementation, the intensity of the near-infrared light passing through the first filter when the first light supplement device performs near-infrared supplementary lighting is higher than the intensity passing through the first filter when it does not.
In one possible implementation, the waveband range of the near-infrared light incident on the first filter is a first reference waveband range, which is 650 nm to 1100 nm.
In one possible implementation, when the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter meets a constraint.
In one possible implementation, the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the range 750±10 nm; or
the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the range 780±10 nm; or
the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device is any wavelength within the range 940±10 nm.
In one possible implementation, the constraint includes:
the difference between the center wavelength of the near-infrared light passing through the first filter and the center wavelength of the near-infrared supplementary lighting of the first light supplement device lies within a wavelength fluctuation range of 0 to 20 nm.
In one possible implementation, the constraint includes:
the half band width of the near-infrared light passing through the first filter is less than or equal to 50 nm.
In one possible implementation, the constraint includes:
the first band width is smaller than the second band width, where the first band width refers to the band width of the near-infrared light passing through the first filter and the second band width refers to the band width of the near-infrared light blocked by the first filter.
In one possible implementation, the constraint is:
the third band width is smaller than a reference band width, where the third band width refers to the band width of near-infrared light whose pass rate is greater than a set ratio, and the reference band width is any band width within the range 50 nm to 150 nm.
In one possible implementation, the set ratio is any ratio within the range 30% to 50%.
In one possible implementation, at least one exposure parameter of the first and second preset exposures is different; the at least one exposure parameter is one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
In one possible implementation, the exposure gain of the first preset exposure is smaller than that of the second preset exposure.
In one possible implementation, at least one exposure parameter of the first and second preset exposures is the same; the at least one exposure parameter includes one or more of exposure time, exposure gain, and aperture size, and the exposure gain includes analog gain and/or digital gain.
In one possible implementation, the exposure time of the first preset exposure is equal to that of the second preset exposure.
In one possible implementation, the image sensor includes multiple photosensitive channels, each of which senses light in at least one visible waveband and light in the near-infrared waveband.
In one possible implementation, the multiple photosensitive channels sense light in at least two different visible wavebands.
In one possible implementation, the multiple photosensitive channels include at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, a Y photosensitive channel, a W photosensitive channel, and a C photosensitive channel;
where the R channel senses red and near-infrared light, the G channel senses green and near-infrared light, the B channel senses blue and near-infrared light, the Y channel senses yellow and near-infrared light, the W channel senses full-band light, and the C channel senses full-band light.
In one possible implementation, the image sensor is an RGB sensor, an RGBW sensor, an RCCB sensor, or an RYYB sensor.
In one possible implementation, the second light supplement device is used to perform visible-light supplementary lighting in a constant-on manner; or
the second light supplement device is used to perform visible-light supplementary lighting in a stroboscopic manner, where visible supplementary light exists at least during part of the exposure period of the first preset exposure and does not exist during the entire exposure period of the second preset exposure; or
the second light supplement device is used to perform visible-light supplementary lighting in a stroboscopic manner, where visible supplementary light does not exist at least during the entire exposure period of the first preset exposure and exists during part of the exposure period of the second preset exposure.
In one possible implementation, the number of supplementary lighting operations of the first light supplement device per unit time is lower than the number of exposures of the image sensor per unit time, with one or more exposures occurring within the interval between every two adjacent supplementary lighting operations.
In one possible implementation, the image sensor performs multiple exposures in global exposure mode; for any near-infrared supplementary lighting operation, the period of the near-infrared supplementary lighting has no intersection with the exposure period of the nearest second preset exposure, and the period of the near-infrared supplementary lighting is a subset of the exposure period of the first preset exposure, or intersects with it, or the exposure period of the first preset exposure is a subset of the near-infrared supplementary lighting period.
In one possible implementation, the image sensor performs multiple exposures in rolling shutter mode; for any near-infrared supplementary lighting operation, the period of the near-infrared supplementary lighting has no intersection with the exposure period of the nearest second preset exposure;
the start time of the near-infrared supplementary lighting is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time is no later than the exposure end time of the first row of the effective image in the first preset exposure;
or,
the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or
the start time of the near-infrared supplementary lighting is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
In one possible implementation, the multiple exposures include odd-numbered exposures and even-numbered exposures;
the first preset exposure is one of the odd-numbered exposures and the second preset exposure is one of the even-numbered exposures; or
the first preset exposure is one of the even-numbered exposures and the second preset exposure is one of the odd-numbered exposures; or
the first preset exposure is one of specified odd-numbered exposures and the second preset exposure is one of the exposures other than the specified odd-numbered exposures; or
the first preset exposure is one of specified even-numbered exposures and the second preset exposure is one of the exposures other than the specified even-numbered exposures; or,
the first preset exposure is one exposure in a first exposure sequence and the second preset exposure is one exposure in a second exposure sequence; or
the first preset exposure is one exposure in the second exposure sequence and the second preset exposure is one exposure in the first exposure sequence;
where the multiple exposures include multiple exposure sequences, the first and second exposure sequences being one exposure sequence or two exposure sequences among them; each exposure sequence includes N exposures, which include one first preset exposure and N-1 second preset exposures, or one second preset exposure and N-1 first preset exposures, N being a positive integer greater than 2.
It should be noted that, since this embodiment and the embodiments shown in FIGS. 1-19 above may adopt the same inventive concept, for explanations of the content of this embodiment reference may be made to the explanations of the relevant content in the embodiments shown in FIGS. 1-19, which are not repeated here.
This application uses the exposure timing of the image sensor to control the near-infrared supplementary lighting timing of the light supplement device, so that the first image signal is generated through the first preset exposure when near-infrared supplementary lighting is performed and the second image signal is generated through the second preset exposure when it is not. This data acquisition manner can directly capture first and second image signals with different brightness information while keeping the structure simple and reducing cost; that is, two different image signals can be acquired through a single image sensor and then fused, which makes the image fusion device simpler and the fusion of the two signals more efficient. Moreover, since the two signals are generated and output by the same image sensor, the viewpoint corresponding to the first image signal is the same as that corresponding to the second image signal. Therefore, the information of the external scene can be acquired jointly through the two signals, with no misalignment between the images generated from them caused by differing viewpoints. As a result, the fused image subsequently obtained by processing the two image signals is of higher quality.
The above are only optional embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, etc., made within the spirit and principles of this application shall be included within its protection scope.

Claims (33)

  1. An image fusion device, comprising an image sensor (01), a light supplementer (02), a filter assembly (03), and an image processing unit (04), the image sensor (01) being located on the light-exit side of the filter assembly (03);
    the image sensor (01) is configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image generated according to a first preset exposure, the second image signal is an image generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
    the light supplementer (02) comprises a first light supplement device (021) configured to perform near-infrared supplementary lighting, wherein near-infrared supplementary lighting is performed at least during part of the exposure time period of the first preset exposure and is not performed during the exposure time period of the second preset exposure;
    the filter assembly (03) comprises a first filter (031) that passes visible light and part of the near-infrared light;
    the image processing unit (04) is configured to process the first image signal and the second image signal to obtain a fused image.
  2. The image fusion apparatus according to claim 1, wherein the image processing unit (04) comprises a preprocessing unit (041) and an image fusion unit (042);
    the preprocessing unit (041) is configured to preprocess the first image signal and the second image signal and output a first preprocessed image and a second preprocessed image; and
    the image fusion unit (042) is configured to fuse the first preprocessed image and the second preprocessed image to obtain the fused image.
  3. The image fusion apparatus according to claim 2, wherein the preprocessing unit (041) comprises an image repair unit (0411) and an image noise reduction unit (0412);
    the image repair unit (0411) is configured to perform repair processing on the first image signal to obtain a first repaired image and perform repair processing on the second image signal to obtain a second repaired image, the first repaired image being a grayscale image and the second repaired image being a color image; and
    the image noise reduction unit (0412) is configured to perform noise reduction processing on the first repaired image to obtain the first preprocessed image and perform noise reduction processing on the second repaired image to obtain the second preprocessed image.
  4. The image fusion apparatus according to claim 3, wherein the image noise reduction unit (0412) comprises a temporal noise reduction unit or a spatial noise reduction unit;
    the temporal noise reduction unit is configured to perform motion estimation according to the first repaired image and the second repaired image to obtain a motion estimation result, perform temporal filtering processing on the first repaired image according to the motion estimation result to obtain the first preprocessed image, and perform temporal filtering processing on the second repaired image according to the motion estimation result to obtain the second preprocessed image; or
    the spatial noise reduction unit is configured to perform edge estimation according to the first repaired image and the second repaired image to obtain an edge estimation result, perform spatial filtering processing on the first repaired image according to the edge estimation result to obtain the first preprocessed image, and perform spatial filtering processing on the second repaired image according to the edge estimation result to obtain the second preprocessed image.
  5. The image fusion apparatus according to claim 4, wherein the temporal noise reduction unit comprises a motion estimation unit;
    the motion estimation unit is configured to generate a first frame difference image according to the first repaired image and a first history noise-reduced image, and determine a first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, wherein the first history noise-reduced image refers to an image obtained by performing noise reduction on any one of the first N frames of images preceding the first repaired image, N is greater than or equal to 1, and the multiple first set frame difference thresholds are in one-to-one correspondence with multiple pixels in the first frame difference image;
    the motion estimation unit is further configured to generate a second frame difference image according to the second repaired image and a second history noise-reduced image, and determine a second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, wherein the second history noise-reduced image refers to an image obtained by performing noise reduction on any one of the first N frames of images preceding the second repaired image, and the multiple second set frame difference thresholds are in one-to-one correspondence with multiple pixels in the second frame difference image;
    the motion estimation unit is further configured to fuse the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain a joint temporal filtering strength of the corresponding pixel; or, the motion estimation unit is further configured to select one temporal filtering strength from the first temporal filtering strength and the second temporal filtering strength of each pixel as the joint temporal filtering strength of the corresponding pixel;
    wherein the motion estimation result comprises the first temporal filtering strength of each pixel and/or the joint temporal filtering strength of each pixel.
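As an editorial aid (not part of the claims), the motion-estimation step of claim 5 can be sketched as follows. The linear mapping from frame difference to filtering strength is an assumption; the claim only requires that the strength be determined from the frame difference image and per-pixel thresholds.

```python
# Illustrative only: per-pixel temporal filtering strength from a frame
# difference against a history noise-reduced frame.
import numpy as np

def temporal_strength(repaired, history_denoised, thresholds):
    diff = np.abs(repaired.astype(np.float32) - history_denoised.astype(np.float32))
    # Small difference -> noise -> strength near 1 (strong temporal smoothing);
    # large difference -> motion -> strength near 0 (weak smoothing).
    return np.clip(1.0 - diff / np.maximum(thresholds, 1e-6), 0.0, 1.0)

rng = np.random.default_rng(0)
repaired_1 = rng.integers(0, 256, (4, 4)).astype(np.float32)  # first repaired image
history_1 = rng.integers(0, 256, (4, 4)).astype(np.float32)
repaired_2 = rng.integers(0, 256, (4, 4)).astype(np.float32)  # second repaired image
history_2 = rng.integers(0, 256, (4, 4)).astype(np.float32)
thresholds = np.full((4, 4), 20.0)  # one threshold per pixel (cf. claim 8)

s_first = temporal_strength(repaired_1, history_1, thresholds)
s_second = temporal_strength(repaired_2, history_2, thresholds)
joint = np.maximum(s_first, s_second)  # one way to fuse the two strengths
```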
  6. The image fusion apparatus according to claim 5, wherein the temporal noise reduction unit further comprises a temporal filtering unit;
    the temporal filtering unit is configured to perform temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and perform temporal filtering processing on the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image; or
    the temporal filtering unit is configured to perform temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and perform temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image; or
    the temporal filtering unit is configured to perform temporal filtering processing on the first repaired image according to the joint temporal filtering strength of each pixel to obtain the first preprocessed image, and perform temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image.
  7. The image fusion apparatus according to claim 5 or 6, wherein the first frame difference image refers to an original frame difference image obtained by performing difference processing on the first repaired image and the first history noise-reduced image, or the first frame difference image refers to a frame difference image obtained by processing the original frame difference image; and
    the second frame difference image refers to an original frame difference image obtained by performing difference processing on the second repaired image and the second history noise-reduced image, or the second frame difference image refers to a frame difference image obtained by processing the original frame difference image.
  8. The image fusion apparatus according to claim 5 or 6, wherein the multiple first set frame difference thresholds are determined according to the noise intensities of multiple pixels in a first noise intensity image, the first noise intensity image being determined according to the image before noise reduction corresponding to the first history noise-reduced image and the first history noise-reduced image; and
    the multiple second set frame difference thresholds are determined according to the noise intensities of multiple pixels in a second noise intensity image, the second noise intensity image being determined according to the image before noise reduction corresponding to the second history noise-reduced image and the second history noise-reduced image.
  9. The image fusion apparatus according to claim 4, wherein the spatial noise reduction unit comprises an edge estimation unit;
    the edge estimation unit is configured to determine a first spatial filtering strength of each pixel in the first repaired image;
    the edge estimation unit is further configured to determine a second spatial filtering strength of each pixel in the second repaired image;
    the edge estimation unit is further configured to perform local information extraction on the first repaired image to obtain first local information and perform local information extraction on the second repaired image to obtain second local information, and determine a joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first local information, and the second local information;
    wherein the edge estimation result comprises the first spatial filtering strength and/or the joint spatial filtering strength of each pixel.
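Again as an editorial aid rather than claim text, a hedged sketch of claim 9's edge estimation: the gradient-based strengths and the particular combination rule are assumptions, since the claim leaves the exact formulas open.

```python
# Illustrative only: per-pixel spatial filtering strengths and a joint strength.
import numpy as np

def spatial_strength(img):
    g = np.zeros(img.shape, dtype=np.float32)
    g[:, 1:] = np.abs(np.diff(img.astype(np.float32), axis=1))  # horizontal gradient
    return 1.0 / (1.0 + g)  # flat areas -> strong smoothing; edges -> weak

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, (4, 4))   # first repaired image (grayscale)
luma = rng.integers(0, 256, (4, 4))   # luminance of second repaired image

s1 = spatial_strength(gray)                 # first spatial filtering strength
s2 = spatial_strength(luma)                 # second spatial filtering strength
local_info = np.abs(gray - luma) / 255.0    # crude per-pixel local information
joint = np.minimum(s1, s2) * (1.0 - 0.5 * local_info)  # joint spatial strength
```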
  10. The image fusion apparatus according to claim 9, wherein the spatial noise reduction unit further comprises a spatial filtering unit;
    the spatial filtering unit is configured to perform spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and perform spatial filtering processing on the second repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
    the spatial filtering unit is configured to perform spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and perform spatial filtering processing on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
    the spatial filtering unit is configured to perform spatial filtering processing on the first repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and perform spatial filtering processing on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
  11. The image fusion apparatus according to claim 3, wherein the image noise reduction unit comprises a temporal noise reduction unit and a spatial noise reduction unit;
    the temporal noise reduction unit is configured to perform motion estimation according to the first repaired image and the second repaired image to obtain a motion estimation result, perform temporal filtering on the first repaired image according to the motion estimation result to obtain a first temporally noise-reduced image, and perform temporal filtering on the second repaired image according to the motion estimation result to obtain a second temporally noise-reduced image; and
    the spatial noise reduction unit is configured to perform edge estimation according to the first temporally noise-reduced image and the second temporally noise-reduced image to obtain an edge estimation result, perform spatial filtering on the first temporally noise-reduced image according to the edge estimation result to obtain the first preprocessed image, and perform spatial filtering on the second temporally noise-reduced image according to the edge estimation result to obtain the second preprocessed image;
    or,
    the spatial noise reduction unit is configured to perform edge estimation according to the first repaired image and the second repaired image to obtain an edge estimation result, perform spatial filtering on the first repaired image according to the edge estimation result to obtain a first spatially noise-reduced image, and perform spatial filtering on the second repaired image according to the edge estimation result to obtain a second spatially noise-reduced image; and
    the temporal noise reduction unit is configured to perform motion estimation according to the first spatially noise-reduced image and the second spatially noise-reduced image to obtain a motion estimation result, perform temporal filtering on the first spatially noise-reduced image according to the motion estimation result to obtain the first preprocessed image, and perform temporal filtering on the second spatially noise-reduced image according to the motion estimation result to obtain the second preprocessed image.
  12. The image fusion apparatus according to any one of claims 2-11, wherein the image fusion unit (042) comprises a first fusion unit, and the image fusion apparatus further comprises an encoding and compression unit and an intelligent analysis unit;
    the first fusion unit is configured to fuse the first preprocessed image and the second preprocessed image through first fusion processing to obtain the fused image;
    the encoding and compression unit is configured to perform encoding and compression processing on the fused image and output the encoded and compressed image, the encoded and compressed image being used for display or storage; and
    the intelligent analysis unit is configured to perform analysis processing on the fused image to obtain an analysis result and output the analysis result.
  13. The image fusion apparatus according to any one of claims 2-11, wherein the image fusion unit (042) comprises a second fusion unit and a third fusion unit, and the image fusion apparatus further comprises an encoding and compression unit and an intelligent analysis unit;
    the second fusion unit is configured to fuse the first preprocessed image and the second preprocessed image through second fusion processing to obtain a first target image;
    the third fusion unit is configured to fuse the first preprocessed image and the second preprocessed image through third fusion processing to obtain a second target image;
    the encoding and compression unit is configured to perform encoding and compression processing on the first target image and output the encoded and compressed image, the encoded and compressed image being used for display or storage; and
    the intelligent analysis unit is configured to perform analysis processing on the second target image to obtain an analysis result and output the analysis result.
  14. The image fusion apparatus according to claim 13, wherein the second fusion processing and the third fusion processing are different;
    or, the second fusion processing and the third fusion processing are the same, but the fusion parameter of the second fusion processing is a first fusion parameter, the fusion parameter of the third fusion processing is a second fusion parameter, and the first fusion parameter is different from the second fusion parameter.
  15. The image fusion apparatus according to claim 1, wherein when the center wavelength at which the first light supplement apparatus (021) performs near-infrared light supplement is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or the band width of the near-infrared light passing through the first filter (031) reaches a constraint condition.
  16. The image fusion apparatus according to claim 15, wherein the center wavelength at which the first light supplement apparatus (021) performs near-infrared light supplement is any wavelength within the wavelength range of 750±10 nm; or
    the center wavelength at which the first light supplement apparatus (021) performs near-infrared light supplement is any wavelength within the wavelength range of 780±10 nm; or
    the center wavelength at which the first light supplement apparatus (021) performs near-infrared light supplement is any wavelength within the wavelength range of 940±10 nm.
  17. The image fusion apparatus according to claim 15, wherein the constraint condition includes:
    the difference between the center wavelength of the near-infrared light passing through the first filter (031) and the center wavelength at which the first light supplement apparatus (021) performs near-infrared light supplement lies within a wavelength fluctuation range, the wavelength fluctuation range being 0 to 20 nm; or
    the half bandwidth of the near-infrared light passing through the first filter (031) is less than or equal to 50 nm; or
    a first band width is smaller than a second band width, wherein the first band width refers to the band width of the near-infrared light passing through the first filter (031), and the second band width refers to the band width of the near-infrared light blocked by the first filter (031); or
    a third band width is smaller than a reference band width, wherein the third band width refers to the band width of the near-infrared light whose pass rate is greater than a set proportion, and the reference band width is any band width within the band range of 50 nm to 150 nm.
  18. The image fusion apparatus according to claim 1, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is different, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, and the exposure gain including analog gain and/or digital gain.
  19. The image fusion apparatus according to claim 1, wherein at least one exposure parameter of the first preset exposure and the second preset exposure is the same, the at least one exposure parameter including one or more of exposure time, exposure gain, and aperture size, and the exposure gain including analog gain and/or digital gain.
  20. The image fusion apparatus according to claim 1, wherein the image sensor (01) includes multiple sensing channels, each sensing channel being configured to sense light in at least one visible band and to sense light in the near-infrared band.
  21. The image fusion apparatus according to claim 20, wherein the multiple sensing channels are configured to sense light in at least two different visible bands.
  22. The image fusion apparatus according to claim 1, wherein the image sensor (01) performs multiple exposures in a global exposure manner, and for any near-infrared light supplement, the time period of the near-infrared light supplement does not intersect the exposure time period of the nearest second preset exposure, and the time period of the near-infrared light supplement is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared light supplement intersects the exposure time period of the first preset exposure, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared light supplement.
  23. The image fusion apparatus according to claim 1, wherein the image sensor (01) performs multiple exposures in a rolling shutter exposure manner, and for any near-infrared light supplement, the time period of the near-infrared light supplement does not intersect the exposure time period of the nearest second preset exposure;
    the start time of the near-infrared light supplement is not earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared light supplement is not later than the exposure end time of the first row of the effective image in the first preset exposure;
    or,
    the start time of the near-infrared light supplement is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared light supplement is not earlier than the exposure start time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or
    the start time of the near-infrared light supplement is not earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and not later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared light supplement is not earlier than the exposure end time of the last row of the effective image in the first preset exposure and not later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
  24. An image fusion method applied to an image fusion apparatus, the image fusion apparatus comprising an image sensor, a light supplementer, a filter assembly, and an image processing unit, the light supplementer comprising a first light supplement apparatus, the filter assembly comprising a first filter, and the image sensor being located on the light exit side of the filter assembly, the method comprising:
    performing near-infrared light supplement by the first light supplement apparatus, wherein near-infrared light supplement is performed at least during part of the exposure time period of a first preset exposure and is not performed during the exposure time period of a second preset exposure, the first preset exposure and the second preset exposure being two of multiple exposures of the image sensor;
    passing light in the visible band and part of the light in the near-infrared band through the first filter;
    performing multiple exposures by the image sensor to generate and output a first image signal and a second image signal, the first image signal being an image generated according to the first preset exposure and the second image signal being an image generated according to the second preset exposure; and
    processing the first image signal and the second image signal by the image processing unit to obtain a fused image.
  25. The method according to claim 24, wherein the image processing unit comprises a preprocessing unit and an image fusion unit;
    the processing the first image signal and the second image signal by the image processing unit to obtain a fused image comprises:
    preprocessing the first image signal and the second image signal by the preprocessing unit and outputting a first preprocessed image and a second preprocessed image; and
    fusing the first preprocessed image and the second preprocessed image by the image fusion unit to obtain the fused image.
  26. The method according to claim 25, wherein the preprocessing unit comprises an image repair unit and an image noise reduction unit;
    the preprocessing the first image signal and the second image signal by the preprocessing unit and outputting a first preprocessed image and a second preprocessed image comprises:
    performing, by the image repair unit, repair processing on the first image signal to obtain a first repaired image and repair processing on the second image signal to obtain a second repaired image, the first repaired image being a grayscale image and the second repaired image being a color image; and
    performing, by the image noise reduction unit, noise reduction processing on the first repaired image to obtain the first preprocessed image and noise reduction processing on the second repaired image to obtain the second preprocessed image.
  27. The method according to claim 26, wherein the image noise reduction unit comprises a temporal noise reduction unit or a spatial noise reduction unit;
    the performing, by the image noise reduction unit, noise reduction processing on the first repaired image to obtain the first preprocessed image and noise reduction processing on the second repaired image to obtain the second preprocessed image comprises:
    performing, by the temporal noise reduction unit, motion estimation according to the first repaired image and the second repaired image to obtain a motion estimation result, performing temporal filtering processing on the first repaired image according to the motion estimation result to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the motion estimation result to obtain the second preprocessed image; or
    performing, by the spatial noise reduction unit, edge estimation according to the first repaired image and the second repaired image to obtain an edge estimation result, performing spatial filtering processing on the first repaired image according to the edge estimation result to obtain the first preprocessed image, and performing spatial filtering processing on the second repaired image according to the edge estimation result to obtain the second preprocessed image.
  28. The method according to claim 27, wherein the temporal noise reduction unit comprises a motion estimation unit;
    the performing, by the temporal noise reduction unit, motion estimation according to the first repaired image and the second repaired image to obtain a motion estimation result comprises:
    generating, by the motion estimation unit, a first frame difference image according to the first repaired image and a first history noise-reduced image, and determining a first temporal filtering strength of each pixel in the first repaired image according to the first frame difference image and multiple first set frame difference thresholds, wherein the first history noise-reduced image refers to an image obtained by performing noise reduction on any one of the first N frames of images preceding the first repaired image, N is greater than or equal to 1, and the multiple first set frame difference thresholds are in one-to-one correspondence with multiple pixels in the first frame difference image;
    generating, by the motion estimation unit, a second frame difference image according to the second repaired image and a second history noise-reduced image, and determining a second temporal filtering strength of each pixel in the second repaired image according to the second frame difference image and multiple second set frame difference thresholds, wherein the second history noise-reduced image refers to an image obtained by performing noise reduction on any one of the first N frames of images preceding the second repaired image, and the multiple second set frame difference thresholds are in one-to-one correspondence with multiple pixels in the second frame difference image; and
    fusing, by the motion estimation unit, the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain a joint temporal filtering strength of the corresponding pixel; or selecting, by the motion estimation unit, one temporal filtering strength from the first temporal filtering strength and the second temporal filtering strength of each pixel as the joint temporal filtering strength of the corresponding pixel;
    wherein the motion estimation result comprises the first temporal filtering strength of each pixel and/or the joint temporal filtering strength of each pixel.
  29. The method according to claim 28, wherein the temporal noise reduction unit further comprises a temporal filtering unit;
    the performing, by the temporal noise reduction unit, temporal filtering processing on the first repaired image according to the motion estimation result to obtain the first preprocessed image comprises:
    performing, by the temporal filtering unit, temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the first temporal filtering strength of each pixel to obtain the second preprocessed image; or
    performing, by the temporal filtering unit, temporal filtering processing on the first repaired image according to the first temporal filtering strength of each pixel to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image; or
    performing, by the temporal filtering unit, temporal filtering processing on the first repaired image according to the joint temporal filtering strength of each pixel to obtain the first preprocessed image, and performing temporal filtering processing on the second repaired image according to the joint temporal filtering strength of each pixel to obtain the second preprocessed image.
  30. The method according to claim 27, wherein the spatial noise reduction unit comprises an edge estimation unit;
    the performing, by the spatial noise reduction unit, edge estimation according to the first repaired image and the second repaired image to obtain an edge estimation result comprises:
    determining, by the edge estimation unit, a first spatial filtering strength of each pixel in the first repaired image;
    determining, by the edge estimation unit, a second spatial filtering strength of each pixel in the second repaired image; and
    performing, by the edge estimation unit, local information extraction on the first repaired image to obtain first local information and local information extraction on the second repaired image to obtain second local information, and determining a joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first local information, and the second local information;
    wherein the edge estimation result comprises the first spatial filtering strength and/or the joint spatial filtering strength of each pixel.
  31. The method according to claim 30, wherein the spatial noise reduction unit further comprises a spatial filtering unit;
    the performing spatial filtering processing on the first repaired image according to the edge estimation result to obtain the first preprocessed image and performing spatial filtering processing on the second repaired image according to the edge estimation result to obtain the second preprocessed image comprises:
    performing, by the spatial filtering unit, spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and performing spatial filtering processing on the second repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
    performing, by the spatial filtering unit, spatial filtering processing on the first repaired image according to the first spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and performing spatial filtering processing on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image; or
    performing, by the spatial filtering unit, spatial filtering processing on the first repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the first preprocessed image, and performing spatial filtering processing on the second repaired image according to the joint spatial filtering strength corresponding to each pixel to obtain the second preprocessed image.
  32. The method according to any one of claims 24-31, wherein the image fusion unit comprises a first fusion unit, and the image fusion apparatus further comprises an encoding and compression unit and an intelligent analysis unit;
    the fusing the first preprocessed image and the second preprocessed image by the image fusion unit to obtain the fused image comprises:
    fusing, by the first fusion unit, the first preprocessed image and the second preprocessed image through first fusion processing to obtain the fused image;
    the method further comprising:
    performing, by the encoding and compression unit, encoding and compression processing on the fused image and outputting the encoded and compressed image, the encoded and compressed image being used for display or storage; and
    performing, by the intelligent analysis unit, analysis processing on the fused image to obtain an analysis result and outputting the analysis result.
  33. The method according to any one of claims 24-31, wherein the image fusion unit comprises a second fusion unit and a third fusion unit, and the image fusion apparatus further comprises an encoding and compression unit and an intelligent analysis unit;
    the fusing the first preprocessed image and the second preprocessed image by the image fusion unit to obtain the fused image comprises:
    fusing, by the second fusion unit, the first preprocessed image and the second preprocessed image through second fusion processing to obtain a first target image;
    fusing, by the third fusion unit, the first preprocessed image and the second preprocessed image through third fusion processing to obtain a second target image;
    the method further comprising:
    performing, by the encoding and compression unit, encoding and compression processing on the first target image and outputting the encoded and compressed image, the encoded and compressed image being used for display or storage; and
    performing, by the intelligent analysis unit, analysis processing on the second target image to obtain an analysis result and outputting the analysis result.
PCT/CN2020/091917 2019-05-31 2020-05-22 Image fusion apparatus and image fusion method WO2020238807A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/615,343 US20220222795A1 (en) 2019-05-31 2020-05-22 Apparatus for image fusion and method for image fusion
EP20814163.0A EP3979622A4 (en) 2019-05-31 2020-05-22 IMAGE MERGER DEVICE AND IMAGE MERGER METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910472710.7 2019-05-31
CN201910472710.7A CN110493494B (zh) 2019-05-31 Image fusion apparatus and image fusion method

Publications (1)

Publication Number Publication Date
WO2020238807A1 true WO2020238807A1 (zh) 2020-12-03

Family

ID=68545895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091917 WO2020238807A1 (zh) Image fusion apparatus and image fusion method

Country Status (4)

Country Link
US (1) US20220222795A1 (zh)
EP (1) EP3979622A4 (zh)
CN (1) CN110493494B (zh)
WO (1) WO2020238807A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117939307A (zh) * 2024-03-19 2024-04-26 Sichuan Chenyu Microvision Technology Co., Ltd. Adaptive brightness adjustment method suitable for fusion cameras
CN117939307B (zh) * 2024-03-19 2024-06-04 Sichuan Chenyu Microvision Technology Co., Ltd. Adaptive brightness adjustment method suitable for fusion cameras

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493492B (zh) 2019-05-31 2021-02-26 Hangzhou Hikvision Digital Technology Co., Ltd. Image acquisition apparatus and image acquisition method
CN110490811B (zh) * 2019-05-31 2022-09-09 Hangzhou Hikvision Digital Technology Co., Ltd. Image noise reduction apparatus and image noise reduction method
CN110493494B (zh) * 2019-05-31 2021-02-26 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion apparatus and image fusion method
CN110493491B (zh) * 2019-05-31 2021-02-26 Hangzhou Hikvision Digital Technology Co., Ltd. Image acquisition apparatus and image capturing method
CN112995581A (zh) * 2019-12-12 2021-06-18 Beijing Yingtaizhi Technology Co., Ltd. Video surveillance method and system
EP4109894A4 (en) * 2020-03-03 2023-03-22 Huawei Technologies Co., Ltd. Image sensor and image sensitization method
CN113542613B (zh) * 2020-04-14 2023-05-12 Huawei Technologies Co., Ltd. Apparatus and method for taking photographs
CN113572968B (zh) * 2020-04-24 2023-07-18 Hangzhou Ezviz Software Co., Ltd. Image fusion method and apparatus, camera device, and storage medium
CN114697558B (zh) * 2020-12-28 2023-10-31 Hefei Ingenic Technology Co., Ltd. Method for suppressing flicker in wide dynamic range images
CN115314628B (zh) * 2021-05-08 2024-03-01 Hangzhou Hikvision Digital Technology Co., Ltd. Imaging method, system, and camera
CN115314629B (zh) * 2021-05-08 2024-03-01 Hangzhou Hikvision Digital Technology Co., Ltd. Imaging method, system, and camera
CN116630762B (zh) * 2023-06-25 2023-12-22 Shandong Zhuoye Medical Technology Co., Ltd. Deep learning-based multimodal medical image fusion method
CN117692652B (zh) * 2024-02-04 2024-04-26 China University of Mining and Technology Deep learning-based visible and infrared video fusion coding method

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609291B2 (en) * 2005-12-07 2009-10-27 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Device and method for producing an enhanced color image using a flash of infrared light
KR101574730B1 (ko) * 2008-11-17 2015-12-07 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN102687502B (zh) * 2009-08-25 2015-07-08 Dual Aperture International Co., Ltd. Reducing noise in a color image
US8488055B2 (en) * 2010-09-30 2013-07-16 Apple Inc. Flash synchronization using image sensor interface timing signal
CN102682589B (zh) * 2012-01-09 2015-03-25 Xi'an Zhiyineng Electronic Technology Co., Ltd. System for remotely controlling a controlled device
US20150002734A1 (en) * 2013-07-01 2015-01-01 Motorola Mobility Llc Electronic Device with Modulated Light Flash Operation for Rolling Shutter Image Sensor
CN104134352B (zh) * 2014-08-15 2018-01-19 Qingdao Bit Information Technology Co., Ltd. Video vehicle feature detection system based on combined long and short exposures and detection method thereof
JP2016076807A (ja) * 2014-10-06 2016-05-12 Sony Corporation Image processing apparatus, imaging apparatus, and imaging method
US10462390B2 (en) * 2014-12-10 2019-10-29 Sony Corporation Image pickup apparatus, image pickup method, program, and image processing apparatus
US9726791B2 (en) * 2015-04-14 2017-08-08 Face International Corporation Systems and methods for producing objects incorporating selectably active electromagnetic energy filtering layers and coatings
JP2016213628A (ja) * 2015-05-07 2016-12-15 Sony Corporation Imaging apparatus, imaging method, program, and image processing apparatus
JP2017005401A (ja) * 2015-06-08 2017-01-05 Sony Corporation Image processing apparatus, image processing method, program, and imaging apparatus
CN105338262B (zh) * 2015-10-09 2018-09-21 Zhejiang Dahua Technology Co., Ltd. Thermal imaging image processing method and apparatus
JP2017112401A (ja) * 2015-12-14 2017-06-22 Sony Corporation Imaging element, image processing apparatus and method, and program
US10638054B2 (en) * 2017-01-25 2020-04-28 Cista System Corp. System and method for visible and infrared high dynamic range sensing
CN108419062B (zh) * 2017-02-10 2020-10-02 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion device and image fusion method
US20180295336A1 (en) * 2017-04-11 2018-10-11 Himax Imaging Limited IMAGING SYSTEM FOR SENSING 3D image
US10401872B2 (en) * 2017-05-23 2019-09-03 Gopro, Inc. Method and system for collision avoidance
CN108289179A (zh) * 2018-02-08 2018-07-17 Shenzhen Taihua Security Technology Engineering Co., Ltd. Method for improving the anti-interference capability of video signal acquisition
CN111684352B (zh) * 2018-02-09 2022-08-26 Sony Corporation Filter unit, filter selection method, and imaging apparatus
CN108965654B (zh) * 2018-02-11 2020-12-25 Zhejiang Uniview Technologies Co., Ltd. Single-sensor-based dual-spectrum camera system and image processing method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014122714A1 (ja) * 2013-02-07 2014-08-14 Panasonic Corporation Imaging device and driving method thereof
CN104661008A (zh) * 2013-11-18 2015-05-27 Shenzhen ZNV Technology Co., Ltd. Processing method and apparatus for improving color image quality under low illumination conditions
CN108886593A (zh) * 2016-02-29 2018-11-23 Panasonic Intellectual Property Management Co., Ltd. Imaging apparatus and solid-state imaging element used therein
CN108419061A (zh) * 2017-02-10 2018-08-17 Hangzhou Hikvision Digital Technology Co., Ltd. Multispectral-based image fusion device, method, and image sensor
CN107566747A (zh) * 2017-09-22 2018-01-09 Zhejiang Dahua Technology Co., Ltd. Image brightness enhancement method and apparatus
CN110493532A (zh) * 2018-12-12 2019-11-22 Hangzhou Hikvision Digital Technology Co., Ltd. Image processing method and system
CN110493494A (zh) * 2019-05-31 2019-11-22 Hangzhou Hikvision Digital Technology Co., Ltd. Image fusion apparatus and image fusion method
CN110493493A (zh) * 2019-05-31 2019-11-22 Hangzhou Hikvision Digital Technology Co., Ltd. Panoramic-detail camera and method for acquiring image signals
CN110490041A (zh) * 2019-05-31 2019-11-22 Hangzhou Hikvision Digital Technology Co., Ltd. Face image acquisition apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3979622A4

Also Published As

Publication number Publication date
EP3979622A4 (en) 2022-08-03
CN110493494B (zh) 2021-02-26
US20220222795A1 (en) 2022-07-14
CN110493494A (zh) 2019-11-22
EP3979622A1 (en) 2022-04-06

Similar Documents

Publication Publication Date Title
WO2020238807A1 (zh) Image fusion apparatus and image fusion method
WO2020238970A1 (zh) Image noise reduction apparatus and image noise reduction method
CN110493491B (zh) Image acquisition apparatus and image capturing method
CN110505377B (zh) Image fusion device and method
CN110519489B (zh) Image acquisition method and apparatus
CN102892008B (zh) Dual image capture processing
CN110706178B (zh) Image fusion apparatus, method, device, and storage medium
CN110490187B (zh) License plate recognition device and method
CN110490041B (zh) Face image acquisition apparatus and method
CN108712608B (zh) Terminal device photographing method and apparatus
CN108111749B (zh) Image processing method and apparatus
CN110490042B (zh) Face recognition apparatus and access control device
CN110490044B (zh) Face modeling device and face modeling method
CN110493536B (zh) Image acquisition apparatus and image acquisition method
CN110493535B (zh) Image acquisition apparatus and image acquisition method
CN110493496B (zh) Image acquisition apparatus and method
CN110493495B (zh) Image acquisition apparatus and image acquisition method
CN110493493B (zh) Panoramic-detail camera and method for acquiring image signals
CN107820066A (zh) Low-illumination color camera
CN110493537B (zh) Image acquisition apparatus and image acquisition method
WO2020238804A1 (zh) Image acquisition apparatus and image acquisition method
CN103053163A (zh) Image generation apparatus, image generation system, method, and program
Lv et al. An integrated enhancement solution for 24-hour colorful imaging
CN110493533B (zh) Image acquisition apparatus and image acquisition method
CN107454318A (zh) Image processing method and apparatus, mobile terminal, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20814163

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020814163

Country of ref document: EP

Effective date: 20220103