WO2020238970A1 - Image noise reduction device and image noise reduction method - Google Patents

Image noise reduction device and image noise reduction method

Info

Publication number
WO2020238970A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image signal
noise reduction
pixel
filtering
Prior art date
Application number
PCT/CN2020/092656
Other languages
English (en)
French (fr)
Inventor
罗丽红
聂鑫鑫
於敏杰
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2020238970A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image

Definitions

  • This application relates to the field of computer vision technology, and in particular to an image noise reduction device and image noise reduction method.
  • noise reduction processing may be performed on the captured image.
  • the related art provides a method for time-domain noise reduction of visible light images. In this method, each pixel in the visible light image is classified as dynamic or static, so that different noise reduction intensities are applied to dynamic and static image areas.
  • the embodiments of the present application provide an image noise reduction device and an image noise reduction method, which can improve the image quality of captured images.
  • the technical solution is as follows:
  • an image noise reduction device which includes an image acquisition unit and an image noise reduction unit;
  • the image acquisition unit is configured to acquire a first image signal and a second image signal, wherein the first image signal is a near-infrared light image signal, and the second image signal is a visible light image signal;
  • the image noise reduction unit is used to perform joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
  • in another aspect, an image noise reduction method is provided, which includes:
  • acquiring a first image signal and a second image signal, where the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal;
  • performing joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
  • in another aspect, an electronic device is provided, which includes a processor, a communication interface, a memory, and a communication bus.
  • the processor, the communication interface, and the memory communicate with each other through the communication bus,
  • the memory is used to store a computer program, and the processor is used to execute the computer program stored in the memory to implement the aforementioned image noise reduction method.
  • a computer-readable storage medium is provided, and a computer program is stored in the storage medium, and the computer program implements the aforementioned image noise reduction method when executed by a processor.
  • a computer program product is provided, which includes instructions that, when run on a computer, cause the computer to execute the aforementioned image noise reduction method.
  • the first image signal is a near-infrared light image signal,
  • and the second image signal is a visible light image signal. Since the near-infrared light image signal has a high signal-to-noise ratio, introducing the first image signal into the joint noise reduction of the first image signal and the second image signal allows the noise and effective information in the image to be distinguished more accurately, thereby effectively reducing image smearing and image detail loss, and improving image quality.
  • FIG. 1 is a schematic structural diagram of an image noise reduction device provided by an embodiment of the present application.
  • Fig. 2 is a schematic structural diagram of a first image noise reduction unit provided by an embodiment of the present application.
  • Fig. 3 is a schematic structural diagram of a second image noise reduction unit provided by an embodiment of the present application.
  • Fig. 4 is a schematic structural diagram of a third image noise reduction unit provided by an embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of an image acquisition unit provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the relationship between the wavelength and relative intensity of the near-infrared supplement light performed by a first light supplement device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the relationship between the wavelength of the light passing through the first filter and the pass rate according to an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of a second image acquisition device provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
  • Fig. 10 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
  • Fig. 12 is a schematic diagram of a RYYB sensor provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a sensing curve of an image sensor provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a rolling shutter exposure method provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the global exposure mode provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the timing relationship between the second near-infrared fill light provided by the embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
  • FIG. 17 is a schematic diagram of the timing relationship between the third near-infrared fill light provided by an embodiment of the present application and the first preset exposure and the second preset exposure in the global exposure mode.
FIG. 18 is a schematic diagram of the timing relationship between the first near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
FIG. 19 is a schematic diagram of the timing relationship between the second near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
FIG. 20 is a schematic diagram of the timing relationship between the third near-infrared fill light and the first preset exposure and the second preset exposure in the rolling shutter exposure mode provided by an embodiment of the present application.
  • FIG. 21 is a flowchart of an image noise reduction method provided by an embodiment of the present application.
  • 011 Image sensor
  • 012 Light supplement device
  • 013 Filter component
  • 014 Lens
  • 0121 the first light supplement device
  • 0122 the second light supplement device
  • 0131 the first filter
  • FIG. 1 is a schematic structural diagram of an image noise reduction device provided by an embodiment of the present application. As shown in Fig. 1, the image noise reduction device includes an image acquisition unit 01 and an image noise reduction unit 02;
  • the image acquisition unit 01 is used to acquire a first image signal and a second image signal, the first image signal is a near-infrared light image signal, and the second image signal is a visible light image signal;
  • the image noise reduction unit 02 is used to perform joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
  • the first image signal is a near-infrared light image signal,
  • and the second image signal is a visible light image signal. Since the near-infrared light image signal has a high signal-to-noise ratio, introducing the first image signal into the joint noise reduction of the first image signal and the second image signal allows the noise and effective information in the image to be distinguished more accurately, thereby effectively reducing image smearing and image detail loss.
  • the image noise reduction device may further include an image fusion unit 03 and an image preprocessing unit 04.
  • the image acquisition unit 01, the image noise reduction unit 02, the image fusion unit 03, and the image preprocessing unit 04 included in the image noise reduction device will be separately described below.
  • the image noise reduction unit 02 performs joint noise reduction processing on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image. Alternatively, when the image preprocessing unit outputs a first image and a second image, the image noise reduction unit 02 processes the first image and the second image. In the embodiment of the present application, the case in which the image noise reduction unit 02 performs joint noise reduction on the first image signal and the second image signal is taken as an example for explanation. For the case of the first image and the second image, reference may be made to the processing manner of the first image signal and the second image signal in the embodiments of the present application.
  • the image noise reduction unit 02 may include a temporal noise reduction unit 021.
  • the time domain noise reduction unit 021 is used to perform motion estimation according to the first image signal and the second image signal to obtain a motion estimation result, perform time domain filtering on the first image signal according to the motion estimation result to obtain a near-infrared light noise reduction image,
  • and perform time domain filtering on the second image signal according to the motion estimation result to obtain a visible light noise reduction image.
  • the motion estimation result may include one time domain filtering weight, or it may include multiple different time domain filtering weights.
  • when the motion estimation result includes multiple different time domain filtering weights, the time domain filtering of the first image signal and that of the second image signal may use different time domain filtering weights from the motion estimation result.
  • the temporal noise reduction unit 021 may include a motion estimation unit 0211 and a temporal filtering unit 0212.
  • the motion estimation unit 0211 may be used to generate a first frame difference image according to the first image signal and a first historical noise reduction image, and determine the first time-domain filtering strength of each pixel in the first image signal according to the first frame difference image and multiple first frame difference thresholds,
  • where the first historical noise reduction image refers to an image obtained by performing noise reduction on any one of the first N frames of the first image signal; the time domain
  • filtering unit 0212 is used to perform time-domain filtering on the first image signal according to the first time-domain filtering strength of each pixel to obtain a near-infrared light noise reduction image, and to perform time-domain filtering on the second
  • image signal according to the first time-domain filtering strength of each pixel to obtain a visible light noise reduction image.
  • the motion estimation unit 0211 may perform difference processing between the pixel value of each pixel in the first image signal and that of the corresponding pixel in the first historical noise reduction image to obtain an original frame difference image, and use the original frame difference image as the first frame difference image.
  • the spatial positions of the pixels in the first frame difference image, the first image signal, the first historical noise reduction image, and the second image signal correspond to one another; that is, pixels at the same pixel coordinates correspond.
  • the motion estimation unit 0211 may perform difference processing on each pixel in the first image signal and the pixel value of the corresponding pixel in the first historical noise reduction image to obtain the original frame difference image. Afterwards, the original frame difference image is processed to obtain the first frame difference image.
  • processing the original frame difference image may refer to performing spatial smoothing processing or block quantization processing on the original frame difference image.
  • the motion estimation unit 0211 can determine the first time domain filtering strength of each pixel in the first image signal according to each pixel in the first frame difference image and multiple first frame difference thresholds.
  • each pixel in the first frame difference image corresponds to a first frame difference threshold.
  • the first frame difference thresholds corresponding to different pixels may be the same or different; that is, the multiple first frame difference
  • thresholds correspond one-to-one to the multiple pixels in the first frame difference image.
  • the first frame difference threshold corresponding to each pixel can be set by an external user.
  • the motion estimation unit 0211 may perform difference processing between the first historical noise-reduction image and the image before noise reduction corresponding to the first historical noise-reduction image to obtain the first noise intensity image.
  • the first frame difference threshold value of the pixel point at the corresponding position in the first frame difference image is determined according to the noise intensity of each pixel point in the first noise intensity image.
  • the first frame difference threshold corresponding to each pixel point can also be determined in other ways, which is not limited in the embodiment of the present application.
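The threshold derivation above can be sketched in code. This is an illustrative sketch only, not the patent's formula: the noise intensity is taken as the per-pixel absolute difference between the historical noise reduction image and its pre-noise-reduction counterpart, and the scaling into a threshold (the `gain` and `floor` parameters) is an assumption.

```python
def frame_diff_thresholds(denoised, noisy, gain=2.0, floor=1.0):
    """Derive a per-pixel first frame difference threshold from a noise
    intensity image. denoised is a historical noise reduction image,
    noisy is the corresponding image before noise reduction; both are
    2-D lists of equal size. gain and floor are illustrative assumptions,
    not values from the patent."""
    h, w = len(denoised), len(denoised[0])
    thresholds = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            noise = abs(denoised[y][x] - noisy[y][x])  # noise intensity image
            thresholds[y][x] = max(floor, gain * noise)
    return thresholds
```

A pixel with more residual noise thus tolerates a larger frame difference before being treated as motion.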
  • for any pixel in the first frame difference image, the motion estimation unit 0211 can determine the first temporal filtering strength of the corresponding pixel by the following formula (1), according to the frame difference of the pixel and the first frame difference threshold corresponding to the pixel.
  • the frame difference of each pixel in the first frame difference image is the pixel value of the corresponding pixel.
  • in formula (1), (x, y) is the position of the pixel in the image; α_nir(x, y) refers to the first temporal filtering strength of the pixel with coordinates (x, y); dif_nir(x, y) refers to the frame difference of the pixel; and dif_thr_nir(x, y) refers to the first frame difference threshold corresponding to the pixel.
  • a frame difference smaller than the first frame difference threshold means that the pixel tends to be stationary, that is, the motion level
  • corresponding to the pixel is lower. From the above formula (1), it can be seen that for any pixel, the smaller the frame difference of the pixel relative to the first frame difference threshold, the larger the first temporal filtering strength of the pixel at the same position.
  • the motion level is used to indicate the intensity of motion: the higher the motion level, the more intense the motion.
  • the value of the first temporal filtering strength can be between 0 and 1.
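The mapping from frame difference to first temporal filtering strength can be sketched as follows. The patent's formula (1) is not reproduced in the text, so the clamped linear roll-off below is an assumed stand-in that matches the stated behaviour: a small frame difference relative to the threshold yields a strength near 1 (static pixel), and the strength lies in [0, 1].

```python
def temporal_strength(frame, history, thresholds):
    """First temporal filtering strength per pixel (illustrative stand-in
    for formula (1), which is not reproduced in the patent text).

    frame, history, thresholds are 2-D lists of equal size: the current
    first image signal, the first historical noise reduction image, and
    the per-pixel first frame difference thresholds."""
    h, w = len(frame), len(frame[0])
    strengths = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dif = abs(frame[y][x] - history[y][x])   # frame difference
            thr = thresholds[y][x]                   # first frame difference threshold
            # assumed roll-off: strength 1 at dif == 0, falling to 0 at 2 * thr
            strengths[y][x] = max(0.0, min(1.0, 1.0 - dif / (2.0 * thr)))
    return strengths
```

Any monotone decreasing map of the frame difference with the same endpoints would fit the description equally well.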
  • the time domain filtering unit 0212 may perform time domain filtering on the first image signal and the second image signal directly according to the first time domain filtering strength, so as to obtain the near-infrared light noise reduction image and the visible light noise reduction image.
  • since the first image signal is a near-infrared light image signal with a high signal-to-noise ratio,
  • using the first time-domain filtering strength of each pixel in the first image signal
  • to perform time-domain filtering on the second image signal can more accurately distinguish the noise and effective information in the image, thereby avoiding the loss of image detail and image smearing in the denoised image.
  • the motion estimation unit 0211 may generate at least one first frame difference image according to the first image signal and at least one first historical noise reduction image, and determine the first temporal filtering strength of each pixel in the first image signal according to the at least one first frame difference image and the multiple first frame difference thresholds corresponding to each first frame difference image.
  • the at least one first historical noise reduction image refers to images obtained by performing noise reduction on the first N frames of the first image signal.
  • for each first historical noise reduction image, the motion estimation unit 0211 may determine the corresponding first frame difference image according to that first historical noise reduction image and the first image signal, with reference to the related implementation described above. After that, with reference to the aforementioned related implementations, the motion estimation unit 0211 can determine, for each first frame difference image, the temporal filtering strength of each pixel according to that first frame difference image and its corresponding multiple first frame difference thresholds.
  • the motion estimation unit 0211 may fuse the temporal filtering intensities of the corresponding pixels in each first frame difference image, so as to obtain the first time domain corresponding to the pixel at the corresponding pixel position in the first image signal Filter strength.
  • the corresponding pixels in each first frame difference image refer to pixels located at the same position.
  • for any pixel, the motion estimation unit 0211 may select, from the at least one temporal filtering strength corresponding to the pixel, the temporal filtering strength with the highest motion level, and then use the selected temporal filtering strength as the first temporal filtering strength of the pixel at the corresponding pixel position in the first image signal.
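Picking the strength with the highest motion level can be sketched as a per-pixel minimum, on the assumption (from the formula (1) discussion) that a smaller temporal filtering strength corresponds to a higher motion level:

```python
def fuse_max_motion(strength_maps):
    """Fuse the temporal filtering strengths obtained from several first
    frame difference images by keeping, per pixel, the strength with the
    highest motion level. Since a smaller strength corresponds to more
    motion, this amounts to a per-pixel minimum; the minimum itself is an
    interpretation assumed from the text, not the patent's stated rule."""
    h, w = len(strength_maps[0]), len(strength_maps[0][0])
    return [[min(m[y][x] for m in strength_maps) for x in range(w)]
            for y in range(h)]
```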
  • the motion estimation unit 0211 may generate a first frame difference image according to the first image signal and the first historical noise reduction image, and determine the first time-domain filtering strength of each pixel in the first image signal according to the first frame difference image and multiple first frame difference thresholds,
  • where the first historical noise reduction image refers to the image after noise reduction is performed on any one of the first N frames of the first image signal; the motion estimation unit 0211 is also used
  • to generate a second frame difference image according to the second image signal and a second historical noise reduction image, and determine the second time-domain filtering strength of each pixel in the second image signal according to the second frame difference image and multiple second frame difference thresholds,
  • where the second historical noise reduction image refers to the image after noise reduction is performed on any one of the first N frames of the second image signal; the motion estimation unit 0211 is also used to determine the joint time-domain filtering strength according to the first time-domain filtering strength of each pixel in the first image signal and the second time-domain filtering strength of each pixel in the second image signal.
  • the motion estimation unit 0211 can not only determine the first time domain filtering strength of each pixel in the first image signal through the implementation described above, but also determine the second time domain filtering strength of each pixel in the second image signal.
  • the motion estimation unit 0211 may first perform difference processing between the pixel value of each pixel in the second image signal and that of the corresponding pixel in the second historical noise reduction image to obtain the second frame difference image.
  • the spatial positions of the pixels in the first image signal, the second image signal, and the second historical noise reduction image correspond to one another; that is, pixels at the same pixel coordinates correspond.
  • the motion estimation unit 0211 may determine the second time domain filtering strength of each pixel in the second image signal according to each pixel in the second frame difference image and multiple second frame difference thresholds.
  • each pixel in the second frame difference image corresponds to a second frame difference threshold, that is, multiple second frame difference thresholds correspond to multiple pixels in the second frame difference image one-to-one.
  • the second frame difference threshold corresponding to each pixel may be the same or different.
  • the second frame difference threshold corresponding to each pixel can be set by an external user.
  • the motion estimation unit 0211 may perform difference processing between the second historical noise reduction image and the image before noise reduction corresponding to the second historical noise reduction image, so as to obtain the second noise intensity image.
  • the second frame difference threshold of the pixel at the corresponding position in the second frame difference image is determined according to the noise strength of each pixel in the second noise intensity image.
  • the second frame difference threshold corresponding to each pixel can also be determined in other ways, which is not limited in the embodiment of the present application.
  • for any pixel in the second frame difference image, the motion estimation unit 0211 can determine the second time domain filtering strength of the corresponding pixel by the following formula (2), according to the frame difference of the pixel and the second frame difference threshold corresponding to the pixel.
  • the frame difference of each pixel in the second frame difference image is the pixel value of the corresponding pixel.
  • in formula (2), α_vis(x, y) refers to the second time domain filtering strength of the pixel with coordinates (x, y);
  • dif_vis(x, y) represents the frame difference of the pixel;
  • and dif_thr_vis(x, y) represents the second frame difference threshold corresponding to the pixel.
  • for any pixel, the smaller the frame difference of the pixel relative to the second frame difference threshold, the larger the second temporal filtering strength of the pixel at the same position.
  • the value of the second time domain filtering strength can be between 0 and 1.
  • the motion estimation unit 0211 may weight the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain the joint time domain weight of each pixel.
  • the determined joint time domain weight of each pixel is the motion estimation result of the first image signal and the second image signal.
  • the first time domain filtering strength and the second time domain filtering strength of any pixel refer to the temporal filtering strengths of the pixels at the same pixel position in the first image signal and the second image signal.
  • the motion estimation unit 0211 may weight the first temporal filtering strength and the second temporal filtering strength of each pixel by the following formula (3), thereby obtaining the joint temporal filtering strength of each pixel.
  • in formula (3), Ω refers to the neighborhood range centered on the pixel with coordinates (x, y), that is, the local image area centered on the pixel with coordinates (x, y); (x+i, y+j) refers to pixel coordinates in the local image area; α_nir(x+i, y+j) refers to the first temporal filtering strength in the local image area centered on the pixel with coordinates (x, y); α_vis(x+i, y+j) refers to the second temporal filtering strength in that local image area; and α_fus(x, y) refers to the joint temporal filtering strength of the pixel with coordinates (x, y).
  • the proportions of the first time domain filtering strength and the second time domain filtering strength in the joint time domain filtering strength are adjusted by the first and second time domain filtering strengths in the local image area; that is, the higher the local motion level of a signal, the larger the proportion of that signal's time-domain filtering strength.
  • the first temporal filtering strength can be used to indicate the motion level of pixels in the first image signal
  • the second temporal filtering strength can be used to indicate the motion level of pixels in the second image signal.
  • the joint time-domain filtering strength determined in the above-mentioned manner fuses the first time-domain filtering strength and the second time-domain filtering strength; that is, the joint time-domain filtering strength takes into account both the motion trend the pixel exhibits in the first image signal and the motion trend it exhibits in the second image signal.
  • the joint time domain filtering strength can more accurately characterize the motion trend of the pixel points.
  • when subsequent time domain filtering is performed with the joint time domain filtering strength, image noise can be removed more effectively, and problems such as image smearing caused by misjudging the motion level of pixels can be alleviated.
  • when the joint time-domain filtering strength is determined by the above formula (3),
  • different parameters can be used to fuse the first time-domain filtering strength and the second time-domain filtering strength, so as to obtain two different filtering strengths,
  • namely a first filtering strength and a second filtering strength.
  • in this case, the joint time-domain filtering strength includes the first filtering strength and the second filtering strength.
  • alternatively, for any pixel, the motion estimation unit may select one of the first temporal filtering strength and the second temporal filtering strength of the pixel
  • as the joint time domain filtering weight of the pixel.
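The joint strength computation can be sketched as follows. Formula (3) is not reproduced in the text, so this sketch assumes the local motion level of each signal (1 minus the mean strength over the neighborhood) sets the mixing weight, matching the stated behaviour that the signal with the higher local motion level contributes a larger proportion:

```python
def joint_temporal_strength(a_nir, a_vis, radius=1):
    """Joint temporal filtering strength per pixel (assumed stand-in for
    formula (3)). a_nir and a_vis are 2-D lists of first and second
    temporal filtering strengths; radius sets the neighborhood size."""
    h, w = len(a_nir), len(a_nir[0])

    def local_mean(m, y, x):
        vals = [m[j][i]
                for j in range(max(0, y - radius), min(h, y + radius + 1))
                for i in range(max(0, x - radius), min(w, x + radius + 1))]
        return sum(vals) / len(vals)

    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m_nir = 1.0 - local_mean(a_nir, y, x)   # local motion level, NIR
            m_vis = 1.0 - local_mean(a_vis, y, x)   # local motion level, visible
            total = m_nir + m_vis
            if total == 0.0:                        # both fully static: plain average
                fused[y][x] = 0.5 * (a_nir[y][x] + a_vis[y][x])
            else:
                fused[y][x] = (m_nir * a_nir[y][x] + m_vis * a_vis[y][x]) / total
    return fused
```

With this weighting, a region where only the near-infrared signal shows motion is dominated by the (small) near-infrared strength, reducing the risk of smearing.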
  • the time-domain filtering unit 0212 may perform time-domain filtering processing on the first image signal and the second image signal respectively according to the joint time-domain filtering strength, so as to obtain near-infrared light drop Noise image and visible light noise reduction image.
  • the time-domain filtering unit 0212 may, according to the joint time-domain filtering strength of each pixel, perform time-domain weighting on each pixel in the first image signal and the first historical noise reduction image by the following formula (4) to obtain a near-infrared light noise reduction image, and perform time-domain weighting on each pixel in the second image signal and the second historical noise reduction image by the following formula (5) to obtain a visible light noise reduction image.
  • in formulas (4) and (5), α_fus(x, y) refers to the joint temporal filtering strength of the pixel with coordinates (x, y);
  • I_nir(x, y, t) refers to the pixel with coordinates (x, y) in the first image signal;
  • and I_vis(x, y, t) refers to the pixel with coordinates (x, y) in the second image signal.
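The temporal weighting of formulas (4) and (5) can be sketched as a recursive blend. The exact formulas are not reproduced in the text; the assumption below is that a larger (more static) joint strength gives more weight to the historical noise reduction image:

```python
def temporal_filter(current, history, strength):
    """Assumed per-pixel blend in the spirit of formulas (4)/(5):
    current is the image signal at time t, history the corresponding
    historical noise reduction image, strength the joint temporal
    filtering strength map with values in [0, 1]."""
    h, w = len(current), len(current[0])
    return [[strength[y][x] * history[y][x]
             + (1.0 - strength[y][x]) * current[y][x]
             for x in range(w)] for y in range(h)]
```

Applying it once with the first image signal yields the near-infrared light noise reduction image, and once with the second image signal the visible light noise reduction image.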
  • the time domain filtering unit 0212 may also perform time domain filtering on the first image signal according to the first time domain filtering strength of each pixel to obtain the near-infrared light noise reduction image, and perform time domain filtering on the second image signal according to the joint time domain filtering strength of each pixel to obtain the visible light noise reduction image.
  • in other words, for a pixel whose motion level is weaker, a stronger
  • time domain filtering strength may be used to filter it.
  • the time-domain filtering unit 0212 may use the first filtering strength to perform time-domain filtering on the first image signal to obtain the near-infrared light noise reduction image,
  • and use the second filtering strength to perform time-domain filtering on the second image signal to obtain the visible light noise reduction image. That is, in the embodiment of the present application, when the motion estimation unit 0211 uses different parameters to fuse the first time domain filtering strength and the second time domain filtering strength to obtain two different joint time-domain filtering strengths, the time domain filtering unit 0212 may use the two different joint time-domain filtering strengths to perform time-domain filtering on the first image signal and the second image signal respectively.
  • the image noise reduction unit 02 may include a spatial noise reduction unit 022.
  • the spatial noise reduction unit 022 is configured to perform edge estimation according to the first image signal and the second image signal to obtain an edge estimation result, perform spatial filtering on the first image signal according to the edge estimation result to obtain a near-infrared light noise reduction image, and perform spatial filtering on the second image signal according to the edge estimation result to obtain a visible light noise reduction image.
  • the spatial noise reduction unit 022 may include an edge estimation unit 0221 and a spatial filtering unit 0222.
  • the edge estimation unit 0221 is used to determine the first spatial filtering strength of each pixel in the first image signal; the spatial filtering unit 0222 is used to perform spatial filtering on the first
  • image signal according to the first spatial filtering strength corresponding to each pixel to obtain a near-infrared light noise reduction image, and to perform spatial filtering on the second image signal according to the first spatial filtering strength corresponding to each pixel to obtain a visible light noise reduction image.
  • the edge estimation unit 0221 may determine the first spatial filtering strength of the corresponding pixel according to the difference between each pixel of the first image signal and other pixels in its neighborhood. Specifically, the edge estimation unit 0221 can generate the first spatial filtering strength of each pixel through the following formula (6).
  • in formula (6), Ω refers to a neighborhood range centered on the pixel with coordinates (x, y), that is, a local image area centered on the pixel with coordinates (x, y);
  • (x+i, y+j) refers to pixel coordinates in the local image area;
  • img_nir(x, y) refers to the pixel value of the pixel with coordinates (x, y) in the first image signal;
  • σ1 and σ2 refer to the standard deviations of Gaussian distributions;
  • and the output of formula (6) is the first spatial filtering strength determined according to the difference between the pixel (x, y) and the pixel (x+i, y+j) in the local image area.
  • the neighborhood of each pixel includes multiple pixels. In this way, for any pixel, the difference between that pixel and each pixel in its local image area can be determined, so that multiple first spatial filtering strengths corresponding to the pixel are obtained. After determining the multiple first spatial filtering strengths of each pixel, the spatial filtering unit 0222 may perform spatial filtering processing on the first image signal and the second image signal according to the multiple first spatial filtering strengths of each pixel, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
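Formula (6) itself appears only as an image in the original, but the symbol definitions above (a spatial offset (i, j), a pixel-value difference, and two Gaussian standard deviations σ1 and σ2) match the structure of a bilateral filter weight. The following Python sketch illustrates that assumed form; the function name, the default σ values, and the exact exponential expression are illustrative assumptions, not the patent's literal formula.

```python
import math

def spatial_filter_strengths(img, x, y, radius=1, sigma1=1.0, sigma2=10.0):
    """Sketch of formula (6): one strength per neighbor of pixel (x, y).

    The weight combines a spatial Gaussian on the offset (i, j) with a
    range Gaussian on the pixel-value difference, as in a bilateral
    filter; sigma1 and sigma2 are the two Gaussian standard deviations.
    """
    center = img[y][x]
    weights = {}
    for j in range(-radius, radius + 1):
        for i in range(-radius, radius + 1):
            ny, nx = y + j, x + i
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                spatial = math.exp(-(i * i + j * j) / (2 * sigma1 ** 2))
                rng = math.exp(-((center - img[ny][nx]) ** 2) / (2 * sigma2 ** 2))
                weights[(nx, ny)] = spatial * rng
    return weights
```

A pixel identical to the center keeps a large weight, while a pixel across an edge (large value difference) is suppressed, which is what makes the strength edge-preserving.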
  • the edge estimation unit 0221 is used to determine the first spatial filtering strength of each pixel in the first image signal and the second spatial filtering strength of each pixel in the second image signal; to extract local information from the first image signal to obtain first local information, and to extract local information from the second image signal to obtain second local information; and to determine the joint spatial filtering strength corresponding to each pixel according to the first spatial filtering strength, the second spatial filtering strength, the first local information and the second local information. The spatial filtering unit 0222 is used to perform spatial filtering on the first image signal according to the first spatial filtering strength corresponding to each pixel to obtain a near-infrared light noise reduction image, and to perform spatial filtering processing on the second image signal according to the joint spatial filtering strength corresponding to each pixel to obtain a visible light noise reduction image.
  • the first local information and the second local information include at least one of local gradient information, local brightness information, and local information entropy.
  • the edge estimation unit 0221 can not only determine the first spatial filtering strength of each pixel in the first image signal through the implementation described above, but can also determine the second spatial filtering strength of each pixel in the second image signal.
  • the edge estimation unit 0221 may determine the second spatial filtering strength of the corresponding pixel according to the difference between each pixel of the second image signal and the other pixels in its neighborhood. Wherein, the edge estimation unit 0221 can generate the second spatial filtering strength of each pixel through the following formula (7).
  • refers to a neighborhood range centered on a pixel with coordinates (x, y), that is, a local image area centered on a pixel with coordinates (x, y).
  • (x+i,y+j) refers to the pixel coordinates in the local image area
  • img vis (x,y) refers to the pixel value of the pixel with coordinates (x,y) in the second image signal
  • σ 1 and σ 2 refer to the standard deviations of the two Gaussian distributions
  • It refers to the second spatial filtering strength determined according to the difference between the pixel with coordinates (x, y) and the pixel (x+i, y+j) in the local image area.
  • Since the neighborhood of each pixel includes multiple pixels, multiple second spatial filtering strengths of each pixel can be obtained through the method described above.
  • the edge estimation unit 0221 can use the Sobel edge detection operator to perform convolution processing on the first image signal and the second image signal respectively to obtain the first texture image and the second texture image, and use these as weights to weight the multiple first spatial filtering strengths and the multiple second spatial filtering strengths of each pixel, thereby generating the multiple joint spatial filtering strengths of each pixel in the local image area.
  • the first texture image is the first local information
  • the second texture image is the second local information.
  • the Sobel edge detection operator is shown in the following equation (8).
  • the edge estimation unit 0221 can generate the joint spatial filtering strength through the following equation (9).
  • sobel H refers to the Sobel edge detection operator in the horizontal direction
  • sobel V refers to the Sobel edge detection operator in the vertical direction
  • ω fus (x+i,y+j) refers to any joint spatial filtering strength of a pixel in the neighborhood Ω centered on coordinates (x, y); the remaining two terms refer to the texture information of the pixel with coordinates (x, y) in the first texture image and in the second texture image, respectively.
  • In non-edge areas, the joint spatial filtering strength is relatively larger. That is, in the embodiment of the present application, when performing spatial filtering, a weaker filtering strength is used at edges and a stronger filtering strength is used at non-edges, thereby improving the noise reduction effect.
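Formulas (8) and (9) are likewise given only as images. Assuming the standard 3×3 Sobel kernels for formula (8), and assuming formula (9) fuses the two filtering strengths as a texture-weighted average (the exact weighting expression is an assumption), a minimal sketch:

```python
import math

# Standard 3x3 Sobel kernels; the patent's formula (8) names a horizontal
# and a vertical operator, assumed here to be the usual ones.
SOBEL_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_V = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y), used as the per-pixel
    texture information of the texture image."""
    gh = gv = 0.0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gh += SOBEL_H[j][i] * v
            gv += SOBEL_V[j][i] * v
    return math.hypot(gh, gv)

def joint_strength(w_nir, w_vis, t_nir, t_vis):
    """Sketch of formula (9): fuse the first and second spatial filtering
    strengths, weighting each by the texture information of its own image."""
    total = t_nir + t_vis
    if total == 0:          # flat area in both images: plain average
        return 0.5 * (w_nir + w_vis)
    return (t_nir * w_nir + t_vis * w_vis) / total
```

Where one image has strong texture and the other is flat, the joint strength follows the textured image's strength, which matches the stated goal of weaker filtering at edges.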
  • the edge estimation unit 0221 may perform different processing on the first image signal to obtain different first local information, or perform different processing on the second image signal to obtain different second local information. Different sets of first local information and second local information are used as weights, and different spatial filtering strengths, namely the third filtering strength and the fourth filtering strength, are obtained through the aforementioned formula (9).
  • the joint spatial filtering strength may include the third filtering strength and the fourth filtering strength.
  • the spatial filtering unit 0222 may perform spatial filtering processing on the first image signal and the second image signal respectively according to the joint spatial filtering strength to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
  • the spatial filtering unit 0222 may perform spatial filtering processing on the first image signal according to the first spatial filtering intensity of each pixel. Perform spatial filtering processing on the second image signal according to the joint spatial filtering strength of each pixel.
  • the spatial filtering unit 0222 may perform spatial weighting processing on each pixel in the first image signal according to the first spatial filtering strength of each pixel through the following formula (10), thereby obtaining the near-infrared light noise reduction image, and may perform weighting processing on each pixel in the second image signal through the following formula (11) to obtain the visible light noise reduction image.
  • I nir (x+i, y+j) refers to a pixel in the neighborhood of the pixel with coordinates (x, y) in the first image signal
  • ω nir (x+i, y+j) is the first spatial filtering strength of that pixel in the neighborhood
  • Ω refers to the neighborhood centered on the pixel with coordinates (x, y)
  • I vis (x+i, y+j) refers to a pixel in the neighborhood of the pixel with coordinates (x, y) in the second image signal
  • ω fus (x+i,y+j) is the joint spatial filtering strength of that pixel in the neighborhood.
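Formulas (10) and (11) are shown only as images, but the symbol definitions above describe a weighted average over the neighborhood. The usual form of such a filter normalizes by the sum of the filtering strengths; the sketch below assumes that form, with the strengths supplied as a coordinate-to-weight map.

```python
def spatial_filter_pixel(img, weights):
    """Sketch of formulas (10)/(11): the filtered value is the weighted
    average of the neighborhood pixels, normalized by the weight sum.
    `weights` maps neighbor coordinates (nx, ny) to a filtering strength
    (the first spatial filtering strength for formula (10), the joint
    spatial filtering strength for formula (11))."""
    num = den = 0.0
    for (nx, ny), w in weights.items():
        num += w * img[ny][nx]   # weighted pixel values
        den += w                 # normalization term
    return num / den
```

With uniform weights this reduces to a plain neighborhood mean; edge-aware weights pull the result toward pixels on the same side of an edge as the center.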
  • the spatial filtering unit 0222 may perform spatial filtering processing on the first image signal according to the third filtering strength to obtain the near-infrared light noise reduction image, and perform spatial filtering processing on the second image signal according to the fourth filtering strength to obtain the visible light noise reduction image. That is, when different local information is used to fuse the first spatial filtering strength and the second spatial filtering strength into two different joint spatial filtering strengths, the spatial filtering unit 0222 may use the two different joint spatial filtering strengths to perform spatial filtering on the two image signals respectively.
  • the image noise reduction unit 02 may also include both the above-mentioned temporal noise reduction unit 021 and the spatial noise reduction unit 022. For example, the temporal noise reduction unit 021 may first perform temporal filtering on the first image signal and the second image signal to obtain a first temporal noise reduction image and a second temporal noise reduction image; after that, the spatial noise reduction unit 022 performs spatial filtering on the obtained first temporal noise reduction image and second temporal noise reduction image, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
  • the spatial noise reduction unit 022 may first perform spatial filtering on the first image signal and the second image signal to obtain the first spatial noise reduction image and the second spatial noise reduction image.
  • the temporal noise reduction unit 021 then performs temporal filtering on the obtained first spatial noise reduction image and second spatial noise reduction image, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image. That is, the order of spatial filtering and temporal filtering is not limited in the embodiment of the present application.
  • the image fusion unit 03 in FIG. 1 can fuse the near-infrared light noise reduction image and the visible light noise reduction image to obtain a fused image.
  • the image fusion unit 03 may include a first fusion unit.
  • the first fusion unit is used for fusing the near-infrared light noise reduction image and the visible light noise reduction image through the first fusion process to obtain a fused image.
  • the possible implementation of the first fusion processing may include the following:
  • the first fusion unit can traverse each pixel position and fuse the RGB color vectors at the same pixel position of the near-infrared light noise reduction image and the visible light noise reduction image according to the preset fusion weight of each pixel position.
  • the first fusion unit may generate a fusion image through the following model (12).
  • img refers to the fusion image
  • img 1 refers to the near-infrared light noise reduction image
  • img 2 refers to the visible light noise reduction image
  • w refers to the fusion weight. It should be noted that the value range of the fusion weight is (0, 1), for example, the fusion weight can be 0.5.
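Model (12), with a fusion weight w in (0, 1), suggests a per-pixel alpha blend of the two images. The exact expression is only shown as an image in the original, and which image receives w versus (1 − w) is an assumption here:

```python
def fuse(img1, img2, w=0.5):
    """Sketch of model (12): per-pixel blend of the near-infrared light
    noise reduction image img1 and the visible light noise reduction
    image img2.  Assigning w to img1 and (1 - w) to img2 is an
    assumption; the text only states that w lies in (0, 1)."""
    return [[w * a + (1 - w) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]
```

With the example weight w = 0.5 from the text, the fused image is simply the per-pixel average of the two inputs.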
  • the fusion weight can also be obtained by processing the near-infrared light noise reduction image and the visible light noise reduction image.
  • the first fusion unit may perform edge extraction on the near-infrared noise reduction image to obtain the first edge image.
  • Edge extraction is performed on the visible light noise reduction image to obtain a second edge image.
  • the fusion weight of each pixel position is determined according to the first edge image and the second edge image.
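The text does not specify how the two edge images map to a per-pixel fusion weight; a normalized edge-strength ratio, giving more weight to the image with the stronger local edge response, is one plausible realization (the function name, the eps term, and the ratio form are all assumptions):

```python
def edge_based_weights(edge1, edge2, eps=1e-6):
    """Hypothetical sketch: derive the per-pixel fusion weight from the
    first and second edge images.  Pixels where edge1 dominates get a
    weight near 1, pixels where edge2 dominates get a weight near 0, and
    eps keeps the ratio defined in flat areas where both edges are 0."""
    return [[(e1 + eps) / (e1 + e2 + 2 * eps)
             for e1, e2 in zip(r1, r2)] for r1, r2 in zip(edge1, edge2)]
```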
  • the first fusion unit may process the brightness signal in the visible light noise reduction image through a low-pass filter to obtain a low-frequency signal.
  • the near-infrared light noise reduction image is processed by a high-pass filter to obtain a high-frequency signal.
  • the low-frequency signal and the high-frequency signal are added to obtain a fused brightness signal.
  • the fused luminance signal and the chrominance signal in the visible light noise reduction image are synthesized to obtain the fused image.
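The low-pass/high-pass combination described above can be sketched as follows. The box blur stands in for the unspecified low-pass filter, and the high-frequency signal is taken as the near-infrared image minus its own low-pass version; both are assumptions about filters the text leaves open.

```python
def box_blur(img, radius=1):
    """Simple low-pass stand-in for the patent's unspecified filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for j in range(-radius, radius + 1):
                for i in range(-radius, radius + 1):
                    if 0 <= y + j < h and 0 <= x + i < w:
                        acc += img[y + j][x + i]
                        n += 1
            out[y][x] = acc / n
    return out

def fuse_luma(vis_luma, nir_luma):
    """Fused brightness = low frequencies of the visible-light luma plus
    high frequencies (image minus its low pass) of the near-infrared image."""
    low = box_blur(vis_luma)
    nir_low = box_blur(nir_luma)
    return [[low[y][x] + (nir_luma[y][x] - nir_low[y][x])
             for x in range(len(low[0]))] for y in range(len(low))]
```

The fused brightness signal would then be recombined with the chrominance of the visible light noise reduction image, as the text describes.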
  • the first fusion unit may perform color space conversion on the near-infrared light noise reduction image to separate a first brightness image and a first color image, and perform color space conversion on the visible light noise reduction image to separate a second brightness image and a second color image. Pyramid decomposition is performed on the first brightness image and the second brightness image to obtain multiple basic images and detail images with different scale information, and the multiple basic images and detail images are weighted and reconstructed according to the relative magnitude of the information entropy or gradient between the first brightness image and the second brightness image, to obtain a fused brightness image.
  • the first fusion unit may select, from the first color image and the second color image, the color image with the higher color accuracy as the color component of the fused image, and then adjust the color components of the fused image according to the fused brightness image and the brightness image corresponding to the selected color image, so as to improve color accuracy.
  • the image fusion unit 03 may include a second fusion unit and a third fusion unit.
  • the second fusion unit is used for fusing the near-infrared light noise reduction image and the visible light noise reduction image through the second fusion process to obtain the first target image.
  • the third fusion unit is used for fusing the near-infrared light noise reduction image and the visible light noise reduction image through the third fusion process to obtain the second target image.
  • the fusion image includes a first target image and a second target image.
  • the second fusion process and the third fusion process may be different.
  • the second fusion process and the third fusion process may be any two of the three possible implementations of the first fusion process described above.
  • the second fusion process is any one of the three possible implementation manners of the first fusion process described above, and the third fusion process is a processing manner other than the foregoing three possible implementation manners.
  • the third fusion process is any one of the three possible implementation manners of the first fusion process described above, and the second fusion process is a processing manner other than the foregoing three possible implementation manners.
  • the second fusion process and the third fusion process may also be the same.
  • the fusion parameter of the second fusion process is the first fusion parameter
  • the fusion parameter of the third fusion process is the second fusion parameter
  • the first fusion parameter and the second fusion parameter are different.
  • the second fusion processing and the third fusion processing may both be the first possible implementation manner in the first fusion processing described above.
  • the fusion weights in the second fusion process and the third fusion process may be different.
  • When different fusion units fuse the two preprocessed images using different fusion processes, or using the same fusion process with different fusion parameters, two fused images with different image styles can be obtained.
  • the image fusion unit can respectively output different fused images to different subsequent units, so as to meet the requirements of different subsequent operations on the fused images.
  • the unprocessed image after acquisition may be referred to as an image
  • the processed image may also be referred to as an image
  • the image acquisition unit 01 may be an image acquisition device, or the image acquisition unit 01 may also be a receiving device for receiving images transmitted by other devices.
  • the image acquisition unit 01 may include an image sensor 011, a light supplement 012, and a filter component 013.
  • the image sensor 011 is located on the light exit side of the filter component 013.
  • the image sensor 011 is used to generate and output a first image signal and a second image signal through multiple exposures.
  • the first image signal is an image generated according to the first preset exposure
  • the second image signal is an image generated according to the second preset exposure
  • the first preset exposure and the second preset exposure are two of the multiple exposures.
  • the light supplement 012 includes a first light supplement device 0121, and the first light supplement device 0121 is used to perform near-infrared supplementary light, wherein near-infrared supplementary light is present during at least part of the exposure time period of the first preset exposure, and no near-infrared supplementary light is present during the exposure time period of the second preset exposure.
  • the filter assembly 013 includes a first filter 0131.
  • the first filter 0131 passes visible light and part of the near-infrared light.
  • the intensity of the near-infrared light passing through the first filter 0131 when the first light supplement device 0121 performs near-infrared supplementary light is higher than the intensity of the near-infrared light passing through the first filter 0131 when the first light supplement device 0121 does not perform near-infrared supplementary light.
  • the image acquisition unit 01 may further include a lens 014.
  • the filter assembly 013 may be located between the lens 014 and the image sensor 011, and the image sensor 011 is located on the light exit side of the filter assembly 013.
  • the lens 014 is located between the filter component 013 and the image sensor 011, and the image sensor 011 is located on the light exit side of the lens 014.
  • the first filter 0131 can be a filter film. In this way, when the filter assembly 013 is located between the lens 014 and the image sensor 011, the first filter 0131 can be attached to the surface of the lens 014 on the light exit side; or, when the lens 014 is located between the filter assembly 013 and the image sensor 011, the first filter 0131 may be attached to the surface of the lens 014 on the light incident side.
  • the image acquisition unit 01 may be an image acquisition device such as a video camera, a snapshot camera, a face recognition camera, a code-reading camera, a vehicle-mounted camera, or a panoramic detail camera, and the light supplement 012 may be inside the image acquisition device as part of the image acquisition device.
  • the image acquisition unit 01 may also include an image acquisition device and a light supplement 012, where the light supplement 012 is located outside the image acquisition device and is connected to the image acquisition device, so as to ensure that the supplementary light timing matches the exposure timing of the image sensor in the image acquisition unit 01.
  • the first supplementary light device 0121 is a device that can emit near-infrared light, such as a near-infrared supplementary light.
  • The manner in which the near-infrared supplementary light is performed is not limited in the embodiment of the application.
  • the first light supplement device 0121 when the first light supplement device 0121 performs near-infrared supplement light in a stroboscopic manner, the first light supplement device 0121 can be manually controlled to perform near-infrared supplement light in a stroboscopic manner, or through a software program Or a specific device controls the first light supplement device 0121 to perform near-infrared supplement light in a strobe mode, which is not limited in the embodiment of the present application.
  • the time period during which the first light supplement device 0121 performs near-infrared supplement light may coincide with the exposure time period of the first preset exposure, or may be greater than the exposure time period of the first preset exposure or less than the exposure time period of the first preset exposure The time period, as long as there is near-infrared supplement light in the entire exposure time period or part of the exposure time period of the first preset exposure, and there is no near-infrared supplement light in the exposure time period of the second preset exposure.
  • For the global exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time and the end exposure time; for the rolling shutter exposure mode, the exposure time period of the second preset exposure may be the time period between the start exposure time of the first row of effective images of the second image signal and the end exposure time of the last row of effective images, but it is not limited to this.
  • the exposure time period of the second preset exposure may also be the exposure time period corresponding to the target image in the second image signal, and the target image is several rows of effective images corresponding to the target object or target area in the second image signal.
  • the time period between the start exposure time and the end exposure time of the several rows of effective images can be regarded as the exposure time period of the second preset exposure.
  • When the first light supplement device 0121 performs near-infrared light supplementation on an external scene, the near-infrared light incident on the surface of an object may be reflected by the object and thereby enter the first filter 0131.
  • Since ambient light may include visible light and near-infrared light, the near-infrared light in the ambient light is also reflected by an object when incident on its surface, and thereby enters the first filter 0131.
  • the near-infrared light that passes through the first filter 0131 when there is near-infrared supplementary light may include the near-infrared light reflected by the object and enters the first filter 0131 when the first supplementary light device 0121 performs near-infrared supplementary light.
  • the near-infrared light passing through the first filter 0131 when there is no near-infrared supplement light may include the near-infrared light reflected by the object and entering the first filter 0131 when the first supplementary light device 0121 does not perform near-infrared supplementary light.
  • the near-infrared light passing through the first filter 0131 when there is near-infrared supplementary light includes the near-infrared light that is emitted by the first light supplement device 0121 and reflected by the object, and the near-infrared light in the ambient light that is reflected by the object
  • the near-infrared light passing through the first filter 0131 when there is no near-infrared supplementary light includes near-infrared light reflected by an object in the ambient light.
  • The process by which the image acquisition unit 01 acquires the first image signal and the second image signal is as follows: when the image sensor 011 performs the first preset exposure, the first light supplement device 0121 performs near-infrared supplementary light; after the near-infrared light reflected by objects in the scene from the ambient light and from the supplementary light passes through the lens 014 and the first filter 0131, the image sensor 011 generates the first image signal through the first preset exposure. When the image sensor 011 performs the second preset exposure, the first light supplement device 0121 does not perform near-infrared supplementary light; at this time, after the ambient light in the shooting scene passes through the lens 014 and the first filter 0131, the image sensor 011 generates the second image signal through the second preset exposure.
  • the first filter 0131 can pass part of the near-infrared light band.
  • the near-infrared light band passing through the first filter 0131 may be part of the near-infrared light band, or it may be the entire near-infrared light band, which is not limited in the embodiment of the present application.
  • the intensity of the near-infrared light passing through the first filter 0131 when the first light supplement device 0121 performs near-infrared supplementary light is higher than the intensity of the near-infrared light passing through the first filter 0131 when the first light supplement device 0121 does not perform near-infrared supplementary light.
  • the wavelength range of the near-infrared supplementary light of the first light supplement device 0121 can be the second reference wavelength range, and the second reference wavelength range can be 700 nanometers to 800 nanometers, or 900 nanometers to 1000 nanometers, etc., which can reduce the interference caused by common 850 nm near-infrared lamps; this is not limited in the embodiment of this application.
  • the wavelength range of the near-infrared light incident on the first filter 0131 may be the first reference wavelength range, and the first reference wavelength range is 650 nanometers to 1100 nanometers.
  • the near-infrared light passing through the first filter 0131 may include the near-infrared light that is reflected by the object and enters the first filter 0131 when the first light supplement device 0121 performs near-infrared supplementary light, and the near-infrared light in the ambient light reflected by the object. Therefore, at this time, the intensity of the near-infrared light entering the filter assembly 013 is relatively strong. However, when there is no near-infrared supplementary light, the near-infrared light passing through the first filter 0131 only includes the near-infrared light in the ambient light reflected by the object into the filter assembly 013.
  • the intensity of the near-infrared light passing through the first filter 0131 is weak at this time. Therefore, the intensity of the near infrared light included in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near infrared light included in the second image signal generated and output according to the second preset exposure.
  • There are multiple choices for the center wavelength and/or wavelength range of the near-infrared supplementary light performed by the first light supplement device 0121.
  • In the embodiment of the present application, the center wavelength of the near-infrared supplementary light of the first light supplement device 0121 can be designed, and the characteristics of the first filter 0131 can be selected, such that when the center wavelength of the near-infrared supplementary light performed by the first light supplement device 0121 is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 0131 meet the constraint conditions.
  • This constraint is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 0131 is as accurate as possible and that the band width of the near-infrared light passing through the first filter 0131 is as narrow as possible, so as to avoid wavelength interference caused by an overly wide near-infrared light band.
  • the center wavelength of the near-infrared supplementary light performed by the first light supplement device 0121 may be the average value of the wavelength range with the highest energy in the spectrum of the near-infrared light emitted by the first light supplement device 0121, or it may be understood as the wavelength at the middle position of the range in that spectrum whose energy exceeds a certain threshold.
  • the set characteristic wavelength or the set characteristic wavelength range can be preset.
  • the center wavelength of the first light supplement device 0121 for near-infrared supplement light may be any wavelength within the wavelength range of 750 ⁇ 10 nanometers; or, the center wavelength of the first light supplement device 0121 for near-infrared supplement light It is any wavelength within the wavelength range of 780 ⁇ 10 nanometers; or, the center wavelength of the near-infrared supplement light performed by the first light supplement device 0121 is any wavelength within the wavelength range of 940 ⁇ 10 nanometers.
  • the set characteristic wavelength range may be a wavelength range of 750 ⁇ 10 nanometers, or a wavelength range of 780 ⁇ 10 nanometers, or a wavelength range of 940 ⁇ 10 nanometers.
  • the center wavelength of the near-infrared supplement light performed by the first light supplement device 0121 is 940 nanometers, and the relationship between the wavelength and the relative intensity of the near-infrared supplement light performed by the first light supplement device 0121 is shown in FIG. 6. It can be seen from FIG. 6 that the wavelength range of the first light supplement device 0121 for near-infrared supplement light is 900 nm to 1000 nm, and the relative intensity of near-infrared light is the highest at 940 nm.
  • the above-mentioned constraint conditions may include: the difference between the center wavelength of the near-infrared light passing through the first filter 0131 and the center wavelength of the near-infrared supplementary light of the first light supplement device 0121 lies within a wavelength fluctuation range; as an example, the wavelength fluctuation range may be 0 to 20 nanometers.
  • the center wavelength of the near-infrared light passing through the first filter 0131 can be the wavelength at the peak position in the near-infrared band of the near-infrared light pass rate curve of the first filter 0131, or it can be understood as the wavelength at the middle position of the near-infrared waveband, in that pass rate curve, whose pass rate exceeds a certain threshold.
  • the above constraint conditions may include: the first band width may be smaller than the second band width.
  • the first waveband width refers to the waveband width of the near-infrared light passing through the first filter 0131
  • the second waveband width refers to the waveband width of the near-infrared light blocked by the first filter 0131.
  • the wavelength band width refers to the width of the wavelength range in which the wavelength of light lies.
  • For example, if the near-infrared light passing through the first filter 0131 has a wavelength range of 700 nanometers to 800 nanometers, the first wavelength band width is 800 nanometers minus 700 nanometers, that is, 100 nanometers.
  • the wavelength band width of the near-infrared light passing through the first filter 0131 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 0131.
  • FIG. 7 is a schematic diagram of the relationship between the wavelength of light that can pass through the first filter 0131 and the pass rate.
  • the wavelength range of the near-infrared light incident on the first filter 0131 is 650 nanometers to 1100 nanometers.
  • the first filter 0131 passes visible light with a wavelength of 380 nanometers to 650 nanometers and near-infrared light with a wavelength of 900 nanometers to 1000 nanometers, and blocks near-infrared light with a wavelength between 650 nanometers and 900 nanometers and between 1000 nanometers and 1100 nanometers. That is, the first band width is 1000 nanometers minus 900 nanometers, that is, 100 nanometers.
  • the second band width is 900 nanometers minus 650 nanometers, plus 1100 nanometers minus 1000 nanometers, or 350 nanometers. 100 nanometers are smaller than 350 nanometers, that is, the wavelength band width of the near-infrared light passing through the first filter 0131 is smaller than the wavelength band width of the near-infrared light blocked by the first filter 0131.
  • the above relationship curve is just an example. For different filters, the wavelength range of the near-infrared light that can pass through the filter can be different, and the wavelength range of the near-infrared light blocked by the filter can also be different.
  • the above-mentioned constraint conditions may include: the half bandwidth of the near-infrared light passing through the first filter 0131 is less than or equal to 50 nanometers.
  • the half bandwidth refers to the band width of near-infrared light with a pass rate greater than 50%.
  • the above constraint conditions may include: the third band width may be smaller than the reference band width.
  • the third waveband width refers to the waveband width of near-infrared light with a pass rate greater than a set ratio.
  • the reference waveband width may be any waveband width in the range of 50 nanometers to 100 nanometers.
  • the set ratio can be any ratio from 30% to 50%.
  • the set ratio can also be set to other ratios according to usage requirements, which is not limited in the embodiment of the application.
  • the band width of the near-infrared light whose pass rate is greater than the set ratio may be smaller than the reference band width.
  • for example, assume that the wavelength range of the near-infrared light incident on the first filter 0131 is 650 nanometers to 1100 nanometers, the set ratio is 30%, and the reference band width is 100 nanometers. It can be seen from FIG. 7 that, in the near-infrared band from 650 nanometers to 1100 nanometers, the band width of the near-infrared light with a pass rate greater than 30% is significantly less than 100 nanometers.
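The half-bandwidth and third-band-width constraints can be illustrated with a short sketch; the sampled transmission curve below is made up for illustration and does not reproduce FIG. 7:

```python
def band_width_above(wavelengths, pass_rates, ratio):
    """Width (nm) of the region whose pass rate exceeds `ratio`,
    approximated by counting samples on a uniform wavelength grid."""
    step = wavelengths[1] - wavelengths[0]
    return sum(step for r in pass_rates if r > ratio)

# Made-up transmission samples every 10 nm from 650 nm to 1100 nm:
# high pass rate only in a narrow 900-940 nm near-infrared window.
wl = list(range(650, 1101, 10))
rate = [0.9 if 900 <= w <= 940 else 0.05 for w in wl]

half_bandwidth = band_width_above(wl, rate, 0.50)  # pass rate > 50%
third_band = band_width_above(wl, rate, 0.30)      # set ratio of 30%
assert half_bandwidth <= 50   # half bandwidth constraint (<= 50 nm)
assert third_band < 100       # third band width < reference band width
```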
  • the first light supplement device 0121 provides near-infrared supplementary light at least during part of the exposure time period of the first preset exposure and does not provide near-infrared supplementary light during the entire exposure time period of the second preset exposure, where the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 011. That is, the first light supplement device 0121 provides near-infrared supplementary light during the exposure time period of part of the exposures of the image sensor 011, and does not provide near-infrared supplementary light during the exposure time period of another part of the exposures of the image sensor 011.
  • the number of supplementary light operations per unit time of the first light supplement device 0121 can be lower than the number of exposures per unit time of the image sensor 011, wherein one or more exposures take place within the interval between two adjacent supplementary light operations.
  • the light supplement device 012 may also include a second light supplement device 0122, and the second light supplement device 0122 is used for visible light supplementary light.
  • the second light supplement device 0122 provides visible light supplement light at least during part of the exposure time of the first preset exposure, that is, at least the near-infrared supplement light and visible light supplement light are present during the partial exposure time period of the first preset exposure.
  • the mixed color of the two lights can be distinguished from the color of the red light in the traffic light, so as to prevent the human eye from confusing the color of the light supplement device 012 performing near-infrared supplementary light with the color of the red light in the traffic light.
  • when the second light supplement device 0122 provides visible light supplementary light during the exposure time period of the second preset exposure, since the intensity of visible light during that period is not particularly high, performing visible light supplementary light during the exposure time period of the second preset exposure can also increase the brightness of the visible light in the second image signal, thereby ensuring the quality of image collection.
  • the second light supplement device 0122 can be used to perform visible light supplement light in a constant light mode; or, the second light supplement device 0122 can be used to perform visible light supplement light in a stroboscopic manner, wherein, at least in the first Visible light supplement light exists in part of the exposure time period of the preset exposure, and there is no visible light supplement light during the entire exposure time period of the second preset exposure; or, the second light supplement device 0122 can be used to perform visible light supplement light in a strobe mode There is no visible light supplementary light at least during the entire exposure time period of the first preset exposure, and visible light supplementary light exists during the partial exposure time period of the second preset exposure.
  • when the second light supplement device 0122 performs visible light supplementary light in a constant light mode, it can not only prevent the human eye from confusing the color of the near-infrared supplementary light of the first light supplement device 0121 with the color of the red light in the traffic light, but also improve the brightness of the visible light in the second image signal, ensuring the quality of image collection.
  • when the second light supplement device 0122 performs visible light supplementary light in a stroboscopic manner, it can either prevent the human eye from confusing the color of the near-infrared supplementary light of the first light supplement device 0121 with the color of the red light in the traffic light, or improve the brightness of the visible light in the second image signal and thus ensure the quality of image collection; it can also reduce the number of supplementary light operations of the second light supplement device 0122, thereby prolonging the service life of the second light supplement device 0122.
  • the aforementioned multiple exposure refers to multiple exposures within one frame period, that is, the image sensor 011 performs multiple exposures within one frame period, thereby generating and outputting at least one frame of the first image signal and At least one frame of the second image signal.
  • for example, one second includes 25 frame periods, and the image sensor 011 performs multiple exposures in each frame period, thereby generating at least one frame of the first image signal and at least one frame of the second image signal; the first image signal and the second image signal generated within one frame period are called a set of images, so that 25 sets of images are generated within 25 frame periods.
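The example above can be sketched as follows, assuming (purely for illustration) exactly two exposures per frame period:

```python
# A minimal sketch: 25 frame periods per second; each frame period performs
# one first preset exposure and one second preset exposure, yielding one set
# of images (a first image signal and a second image signal) per period.
FRAME_PERIODS_PER_SECOND = 25
EXPOSURES_PER_FRAME = ("first_preset", "second_preset")

sets_of_images = []
for _ in range(FRAME_PERIODS_PER_SECOND):
    signals = {kind: f"{kind}_image_signal" for kind in EXPOSURES_PER_FRAME}
    sets_of_images.append(signals)

# 25 sets of images are generated within 25 frame periods.
assert len(sets_of_images) == 25
```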
  • the first preset exposure and the second preset exposure can be two adjacent exposures in multiple exposures in one frame period, or two non-adjacent exposures in multiple exposures in one frame period. The application embodiment does not limit this.
  • the first image signal is generated and output by the first preset exposure
  • the second image signal is generated and output by the second preset exposure.
  • the first image signal and the second image signal can then be processed.
  • the purposes of the first image signal and the second image signal may be different, so in some embodiments, at least one exposure parameter of the first preset exposure and the second preset exposure may be different.
  • the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain, and aperture size. Wherein, the exposure gain includes analog gain and/or digital gain.
  • when the first light supplement device 0121 performs near-infrared supplementary light, the intensity of the near-infrared light sensed by the image sensor 011 is stronger, and the brightness of the near-infrared light included in the first image signal generated and output accordingly will also be higher.
  • near-infrared light with higher brightness is not conducive to the acquisition of external scene information.
  • the exposure gain of the first preset exposure may be smaller than the exposure gain of the second preset exposure.
  • the exposure time of the first preset exposure may be less than the exposure time of the second preset exposure.
  • in this way, when the first light supplement device 0121 performs near-infrared supplementary light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 011 will not be too high.
  • the shorter exposure time makes the motion trailing of the moving object in the external scene appear shorter in the first image signal, thereby facilitating the recognition of the moving object.
  • for example, the exposure time of the first preset exposure is 40 milliseconds and the exposure time of the second preset exposure is 60 milliseconds, and so on.
  • the exposure time of the first preset exposure may be less than, or may be equal to, the exposure time of the second preset exposure.
  • the exposure gain of the first preset exposure may be less than, or may be equal to, the exposure gain of the second preset exposure.
  • the purposes of the first image signal and the second image signal may be the same.
  • the exposure time of the first preset exposure may be equal to the exposure time of the second preset exposure. If the exposure times of the first preset exposure and the second preset exposure are different, the channel with the longer exposure time will exhibit motion trailing, resulting in different definitions of the two channels.
  • the exposure gain of the first preset exposure may be equal to the exposure gain of the second preset exposure.
  • the exposure gain of the first preset exposure may be less than, or may be equal to, the exposure gain of the second preset exposure.
  • the exposure time of the first preset exposure may be less than, or may be equal to, the exposure time of the second preset exposure.
  • the image sensor 011 may include multiple photosensitive channels, and each photosensitive channel may be used to sense at least one type of light in the visible light band as well as light in the near-infrared band. That is, each photosensitive channel can sense both at least one kind of light in the visible light band and light in the near-infrared band. In this way, it can be ensured that the first image signal and the second image signal have complete resolution without missing pixel values.
  • the multiple photosensitive channels can be used to sense at least two different visible light wavelength bands.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, Y photosensitive channels, W photosensitive channels, and C photosensitive channels.
  • the R photosensitive channel is used to sense the light in the red and near-infrared bands
  • the G photosensitive channel is used to sense the green and near-infrared light
  • the B photosensitive channel is used to sense the blue and near-infrared light.
  • the Y photosensitive channel is used to sense light in the yellow band and the near-infrared band.
  • W can be used to represent the photosensitive channel used to sense full-waveband light, and C can likewise be used to represent the photosensitive channel used to sense full-waveband light; that is, when a full-waveband photosensitive channel is present, it may be a W photosensitive channel or a C photosensitive channel. In practical applications, the photosensitive channel for sensing full-waveband light can be selected according to usage requirements.
  • the image sensor 011 may be an RGB sensor, RGBW sensor, or RCCB sensor, or RYYB sensor.
  • the distribution of the R photosensitive channel, the G photosensitive channel and the B photosensitive channel in the RGB sensor can be seen in Figure 9.
  • the distribution of the R photosensitive channel, the G photosensitive channel, the B photosensitive channel and the W photosensitive channel in the RGBW sensor can be seen in FIG. 10.
  • the distribution of the R photosensitive channel, the C photosensitive channel and the B photosensitive channel in the RCCB sensor can be seen in Figure 11, and the distribution of the R photosensitive channel, the Y photosensitive channel and the B photosensitive channel in the RYYB sensor can be seen in Figure 12.
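The channel layouts can be sketched as small repeating units; the 2x2 units below are hypothetical placeholders, and the actual distributions are those shown in FIGS. 9 to 12:

```python
# Hypothetical 2x2 repeating units for the sensor layouts mentioned above.
PATTERNS = {
    "RGB (Bayer)": [["R", "G"],
                    ["G", "B"]],
    "RGBW":        [["R", "G"],
                    ["B", "W"]],
    "RCCB":        [["R", "C"],
                    ["C", "B"]],
    "RYYB":        [["R", "Y"],
                    ["Y", "B"]],
}

def tile(pattern, rows, cols):
    """Tile a repeating unit into a rows x cols mosaic of channel labels."""
    return [[pattern[r % len(pattern)][c % len(pattern[0])]
             for c in range(cols)] for r in range(rows)]

mosaic = tile(PATTERNS["RGB (Bayer)"], 4, 4)
assert mosaic[0] == ["R", "G", "R", "G"]
assert mosaic[1] == ["G", "B", "G", "B"]
```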
  • some photosensitive channels may only sense light in the near-infrared waveband, but not light in the visible light waveband.
  • the plurality of photosensitive channels may include at least two of R photosensitive channels, G photosensitive channels, B photosensitive channels, and IR photosensitive channels.
  • the R photosensitive channel is used to sense red light and near-infrared light
  • the G photosensitive channel is used to sense green light and near-infrared light
  • the B photosensitive channel is used to sense blue light and near-infrared light.
  • the IR photosensitive channel is used to sense light in the near-infrared band.
  • the image sensor 011 may be an RGBIR sensor, where each IR photosensitive channel in the RGBIR sensor can sense light in the near-infrared waveband, but not light in the visible light waveband.
  • when the image sensor 011 is an RGB sensor, compared with an RGBIR sensor, the RGB information collected by the RGB sensor is more complete. Some of the photosensitive channels of an RGBIR sensor cannot collect visible light, so the color details of the image collected by the RGB sensor are more accurate.
  • the multiple photosensitive channels included in the image sensor 011 may correspond to multiple sensing curves.
  • the R curve in FIG. 13 represents the sensing curve of the image sensor 011 to light in the red light band
  • the G curve represents the sensing curve of the image sensor 011 to light in the green light band
  • the B curve represents the sensing curve of the image sensor 011 to light in the blue light band.
  • the W (or C) curve represents the sensing curve of the image sensor 011 for sensing the light in the full band
  • the NIR (near-infrared) curve represents the sensing curve of the image sensor 011 for sensing light in the near-infrared band.
  • the image sensor 011 may adopt a global exposure method or a rolling shutter exposure method.
  • the global exposure mode means that the exposure start time of each row of effective images is the same, and the exposure end time of each row of effective images is the same.
  • the global exposure mode is an exposure mode in which all rows of effective images are exposed at the same time and the exposure ends at the same time.
  • the rolling shutter exposure mode means that the exposure time periods of different lines of the effective image do not completely coincide, that is, the exposure start time of a line of the effective image is later than the exposure start time of the previous line, and the exposure end time of a line of the effective image is later than the exposure end time of the previous line.
  • data can be output after each line of the effective image is exposed; therefore, the time from the start of output of the first line of the effective image to the end of output of the last line of the effective image can be expressed as the readout time.
  • FIG. 14 is a schematic diagram of a rolling shutter exposure method. It can be seen from FIG. 14 that the first line of the effective image starts exposure at time T1 and ends exposure at time T3, while the second line starts exposure at time T2 and ends exposure at time T4; time T2 lags time T1 by a period of time, and time T4 lags time T3 by a period of time. In addition, the first line of the effective image ends exposure at time T3 and begins to output data, with the output ending at time T5; the n-th line ends exposure at time T6 and begins to output data, with the output ending at time T7. The time between T3 and T7 is then the readout time.
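A minimal timing sketch of the rolling shutter description above (all times are hypothetical; `line_delay` models the per-line lag between T1 and T2):

```python
# Rolling-shutter sketch: each line of the effective image starts exposing
# `line_delay` after the previous one. All times are hypothetical.
def rolling_shutter_times(n_lines, exposure, line_delay, t1=0.0):
    lines = []
    for i in range(n_lines):
        start = t1 + i * line_delay   # T1, T2, ... for successive lines
        end = start + exposure        # T3, T4, ... exposure end per line
        lines.append((start, end))
    return lines

lines = rolling_shutter_times(n_lines=4, exposure=10.0, line_delay=1.0)
t3 = lines[0][1]    # first line ends exposure and begins outputting data
t6 = lines[-1][1]   # last line ends exposure
# Each line starts and ends exposure later than the previous line.
assert lines[1][0] > lines[0][0] and lines[1][1] > lines[0][1]
# The exposure-end spread between first and last lines is part of the
# T3-T7 readout span.
assert t6 - t3 == 3.0
```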
  • the time period of the near-infrared supplementary light and the exposure time period of the nearest second preset exposure have no intersection.
  • the time period of the near-infrared fill light is a subset of the exposure time period of the first preset exposure, or the time period of the near-infrared fill light and the exposure time period of the first preset exposure overlap, or the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of near-infrared fill light is the first preset A subset of the exposure time period for exposure.
  • the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the time period of the near-infrared fill light intersects the exposure time period of the first preset exposure.
  • the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the exposure time period of the first preset exposure is a subset of the time period of the near-infrared fill light.
  • FIGS. 15 to 17 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
  • the time period of the near-infrared fill light is the same as the exposure time period of the nearest second preset exposure There is no intersection.
  • the start time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure; the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the time period of near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure, and the start time of near-infrared fill light is no earlier than The exposure start time of the last line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first line of the effective image in the first preset exposure.
  • the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • the time period of the near-infrared fill light does not overlap with the exposure time period of the nearest second preset exposure; the start time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first line of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last line of the effective image in the first preset exposure and no later than the exposure start time of the first line of the effective image of the nearest second preset exposure after the first preset exposure.
  • FIGS. 18 to 20 are only examples, and the ordering of the first preset exposure and the second preset exposure is not limited to these examples.
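The interval constraints above can be sketched as simple checks on hypothetical (start, end) time pairs on a common time axis:

```python
# Illustrative helpers for the timing constraints: the near-infrared
# fill-light interval must not intersect the nearest second preset
# exposure, and here is taken as a subset of the first preset exposure.
def intersects(a, b):
    """True if half-open intervals a and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def is_subset(inner, outer):
    """True if interval `inner` lies entirely within `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

first_exposure = (10.0, 50.0)    # first preset exposure window (hypothetical)
second_exposure = (60.0, 120.0)  # nearest second preset exposure window
fill_light = (15.0, 45.0)        # near-infrared fill-light window

assert is_subset(fill_light, first_exposure)
assert not intersects(fill_light, second_exposure)
```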
  • the multiple exposures may include odd-numbered exposures and even-numbered exposures.
  • the arrangement of the first preset exposure and the second preset exposure may include, but is not limited to, the following manners:
  • the first preset exposure is one exposure in an odd number of exposures
  • the second preset exposure is one exposure in an even number of exposures.
  • the multiple exposures may include the first preset exposure and the second preset exposure arranged in a parity order.
  • odd-numbered exposures such as the first exposure, the third exposure, and the fifth exposure in the multiple exposures are all first preset exposures, and even-numbered exposures such as the second exposure, the fourth exposure, and the sixth exposure are all second preset exposures.
  • the first preset exposure is one exposure in an even number of exposures
  • the second preset exposure is one exposure in an odd number of exposures.
  • the multiple exposures may include the first preset exposure and the second preset exposure arranged in a parity order. For example, odd-numbered exposures such as the first exposure, the third exposure, and the fifth exposure in the multiple exposures are all second preset exposures, and even-numbered exposures such as the second exposure, the fourth exposure, and the sixth exposure are all first preset exposures.
  • the first preset exposure is one of the specified odd-numbered exposures
  • the second preset exposure is one of the exposures other than the specified odd-numbered exposures, that is, The second preset exposure may be an odd number of exposures in multiple exposures, or an even number of exposures in multiple exposures.
  • the first preset exposure is one exposure in the specified even number of exposures
  • the second preset exposure is one exposure in the other exposures except the specified even number of exposures, that is, The second preset exposure may be an odd number of exposures in multiple exposures, or an even number of exposures in multiple exposures.
  • the first preset exposure is one exposure in the first exposure sequence
  • the second preset exposure is one exposure in the second exposure sequence.
  • the first preset exposure is one exposure in the second exposure sequence
  • the second preset exposure is one exposure in the first exposure sequence
  • the above multiple exposure includes multiple exposure sequences
  • the first exposure sequence and the second exposure sequence are the same exposure sequence or two different exposure sequences in the multiple exposure sequences
  • each exposure sequence includes N exposures
  • the N exposures include 1 first preset exposure and N-1 second preset exposures, or the N exposures include 1 second preset exposure and N-1 first preset exposures, where N is a positive integer greater than 2.
  • each exposure sequence includes 3 exposures, and these 3 exposures can include 1 first preset exposure and 2 second preset exposures.
  • the first exposure of each exposure sequence can be the first preset exposure, and the second and third exposures are the second preset exposure. That is, each exposure sequence can be expressed as: first preset exposure, second preset exposure, second preset exposure.
  • these 3 exposures can include 1 second preset exposure and 2 first preset exposures, so that the first exposure of each exposure sequence can be the second preset exposure, the second and the third The exposure is the first preset exposure. That is, each exposure sequence can be expressed as: the second preset exposure, the first preset exposure, and the first preset exposure.
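The sequence arrangement just described can be sketched as follows (the names are illustrative):

```python
# Sketch of the exposure-sequence arrangement above: each sequence of N
# exposures contains one exposure of one preset type and N-1 of the other.
def exposure_sequence(n, lead="first"):
    """Return a length-n sequence starting with the `lead` preset exposure,
    followed by n-1 exposures of the other preset type (n > 2)."""
    other = "second" if lead == "first" else "first"
    return [lead] + [other] * (n - 1)

# N = 3: first preset exposure, second preset exposure, second preset exposure.
assert exposure_sequence(3, "first") == ["first", "second", "second"]
# Or: second preset exposure, first preset exposure, first preset exposure.
assert exposure_sequence(3, "second") == ["second", "first", "first"]
```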
  • the filter assembly 013 further includes a second filter and a switching component, and both the first filter 0131 and the second filter are connected to the switching component.
  • the switching component is used to switch the second filter to the light incident side of the image sensor 011. After the second filter is switched to the light incident side of the image sensor 011, the second filter allows light in the visible light band to pass through, Blocking light in the near-infrared light band, the image sensor 011 is used to generate and output a third image signal through exposure.
  • the switching component is used to switch the second filter to the light incident side of the image sensor 011. It can also be understood that the second filter replaces the first filter 0131 on the light incident side of the image sensor 011. position. After the second filter is switched to the light incident side of the image sensor 011, the first light supplement device 0121 may be in a closed state or an open state.
  • through the stroboscopic supplementary light of the first light supplement device 0121, the image sensor 011 can generate and output the first image signal containing near-infrared brightness information and the second image signal containing visible light brightness information. Because the first image signal and the second image signal are both acquired by the same image sensor 011, the viewpoint of the first image signal is the same as the viewpoint of the second image signal, so that complete information of the external scene can be obtained through the first image signal and the second image signal.
  • when the intensity of visible light is strong, for example during the daytime, the proportion of near-infrared light is relatively high, and the color reproduction of the collected image is not good.
  • in this case, the third image signal containing visible light brightness information can be generated and output by the image sensor 011, so that images with better color reproduction can be collected even during the day. Thus, regardless of the intensity of visible light, or whether it is day or night, the true color information of the external scene can be obtained efficiently and simply.
  • the image acquisition unit may also include a dual-sensor image acquisition device.
  • the first image signal sensor is used to generate and output a first image signal
  • the second image signal sensor is used to generate and output a second image signal.
  • the image acquisition unit may also be a binocular camera, including two cameras.
  • the first camera is a near-infrared light camera for generating and outputting a first image signal
  • the second camera is a visible light camera for generating and outputting a second image signal.
  • the image acquisition unit 01 may directly output the collected first image signal and second image signal to the image noise reduction unit 02.
  • the image acquisition unit 01 may output the collected first image signal and the second image signal to the image preprocessing unit 04, and the image preprocessing unit 04 performs the respective processing on the first image signal And the second image signal are processed to obtain the first image and the second image.
  • the image noise reduction device of the present application may further include an image preprocessing unit 04, where the image preprocessing unit 04 is used to process the first image signal and the second image signal respectively to obtain The first image and the second image are outputted to the image noise reduction unit.
  • the first image signal and the second image signal may be mosaic images collected by image sensors arranged in a Bayer pattern. Therefore, the image preprocessing unit 04 may further perform repair processing such as demosaicing on the first image signal and the second image signal; the first image signal after the repair processing is a grayscale image, and the second image signal after the repair processing is a color image.
  • when the first image signal and the second image signal are demosaiced, methods such as bilinear interpolation or adaptive interpolation can be used, which will not be described in detail in this embodiment of the application.
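As an illustration of bilinear interpolation in demosaicing, the sketch below fills in only the green channel of a hypothetical RGGB mosaic; a real pipeline interpolates all channels and handles borders more carefully:

```python
# Bilinear-interpolation sketch for one step of demosaicing: fill in the
# green channel at non-green sites of an RGGB Bayer mosaic by averaging
# the available green neighbors. Illustrative only.
def interpolate_green(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    def is_green(r, c):
        return (r + c) % 2 == 1          # G sites in an RGGB layout
    green = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if is_green(r, c):
                green[r][c] = mosaic[r][c]
            else:
                nbrs = [mosaic[rr][cc]
                        for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                        if 0 <= rr < h and 0 <= cc < w]
                green[r][c] = sum(nbrs) / len(nbrs)
    return green

mosaic = [[10, 80, 10, 80],
          [80, 10, 80, 10],
          [10, 80, 10, 80],
          [80, 10, 80, 10]]
green = interpolate_green(mosaic)
assert green[1][1] == 80.0   # average of the four green neighbors, all 80
```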
  • processing such as black level correction, dead pixel correction, and gamma correction can also be performed on the first image signal and the second image signal, which is not limited in this embodiment.
  • the image preprocessing unit 04 may output the first image and the second image to the image noise reduction unit 02.
  • the image preprocessing unit 04 can also register the first image signal and the second image signal.
  • an embodiment of the present application also provides an image noise reduction method.
  • the image noise reduction method will be described based on the image noise reduction device provided in the embodiments shown in FIGS. 1-20. Referring to FIG. 21, the method includes:
  • Step 2101 Acquire a first image signal and a second image signal, where the first image signal is a near-infrared light image signal, and the second image signal is a visible light image signal;
  • Step 2102 Perform joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
  • performing joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image includes: performing motion estimation according to the first image signal and the second image signal to obtain a motion estimation result; performing time-domain filtering on the first image signal according to the motion estimation result to obtain a near-infrared light noise reduction image; and performing time-domain filtering on the second image signal according to the motion estimation result to obtain a visible light noise reduction image.
  • performing motion estimation according to the first image signal and the second image signal to obtain the motion estimation result includes:
  • the first historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of the first image signal, where N is greater than or equal to 1, and multiple first frame difference thresholds are in one-to-one correspondence with multiple pixels in the first frame difference image;
  • the second historical noise reduction image refers to an image obtained after noise reduction is performed on any one of the first N frames of the second image signal, and multiple second frame difference thresholds are in one-to-one correspondence with multiple pixels in the second frame difference image;
  • the first time-domain filtering strength and the second time-domain filtering strength of each pixel are fused to obtain the joint time-domain filtering strength of each pixel; or, one of the first time-domain filtering strength and the second time-domain filtering strength of each pixel is selected as the joint time-domain filtering strength of the corresponding pixel;
  • the motion estimation result includes the first time domain filtering strength of each pixel and/or the joint time domain filtering strength of each pixel.
  • fusing the first temporal filtering strength and the second temporal filtering strength of each pixel to obtain the joint temporal filtering strength of each pixel includes:
  • the joint time-domain filtering strength includes a first filtering strength and a second filtering strength.
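One plausible sketch of these steps (not necessarily the exact scheme of this application): each pixel's frame difference is compared with its frame difference threshold to yield a time-domain filtering strength, and the two per-channel strengths are fused per pixel:

```python
# Illustrative motion estimation for one pixel: a frame difference is
# compared against a frame-difference threshold to produce a time-domain
# filtering strength in [0, 1] (larger = more static = stronger temporal
# filtering). The thresholds and fusion rule here are hypothetical.
def temporal_strength(frame_diff, threshold):
    # Fully static pixels (diff 0) get strength 1; differences at or
    # above the threshold get strength 0.
    return max(0.0, 1.0 - frame_diff / threshold)

def fuse(s1, s2):
    # Conservative fusion: treat the pixel as moving if either the
    # near-infrared or the visible channel indicates motion.
    return min(s1, s2)

s_nir = temporal_strength(frame_diff=5.0, threshold=20.0)   # 0.75
s_vis = temporal_strength(frame_diff=30.0, threshold=20.0)  # 0.0
joint = fuse(s_nir, s_vis)
assert joint == 0.0   # motion detected in the visible channel dominates
```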
  • the first image signal is subjected to time-domain filtering processing according to the motion estimation result to obtain a near-infrared light noise reduction image
  • the second image signal is subjected to time-domain filtering processing according to the motion estimation result to obtain the visible light noise reduction image, including:
  • the first frame difference image refers to the original frame difference image obtained by performing difference processing on the first image signal and the first historical noise reduction image; or, the first frame difference image refers to the frame difference image obtained after processing the original frame difference image.
  • the second frame difference image refers to the original frame difference image obtained by performing difference processing on the second image signal and the second historical noise reduction image; or, the second frame difference image refers to the frame obtained after processing the original frame difference image Poor image.
  • the first frame difference thresholds corresponding to different pixels are different, or the first frame difference threshold corresponding to each pixel is the same;
  • the second frame difference thresholds corresponding to different pixels are different, or the second frame difference threshold corresponding to each pixel is the same;
  • the multiple first frame difference thresholds are determined according to the noise intensity of multiple pixels in a first noise intensity image, the first noise intensity image being determined from the first historical noise reduction image and its corresponding image before noise reduction;
  • the multiple second frame difference thresholds are determined according to the noise intensity of multiple pixels in a second noise intensity image, the second noise intensity image being determined from the second historical noise reduction image and its corresponding image before noise reduction.
  • performing edge estimation according to the first image signal and the second image signal to obtain an edge estimation result includes:
  • the edge estimation result includes the first spatial filtering strength of each pixel and/or the joint spatial filtering strength of each pixel;
  • the joint spatial filtering strength includes a third filtering strength and a fourth filtering strength, and the third filtering strength and the fourth filtering strength are different;
  • performing spatial filtering on the first image signal according to the edge estimation result to obtain a near-infrared light noise reduction image, and performing spatial filtering on the second image signal according to the edge estimation result to obtain a visible light noise reduction image;
  • when the joint spatial filtering strength includes the third filtering strength and the fourth filtering strength, the first image signal is spatially filtered according to the third filtering strength corresponding to each pixel to obtain the near-infrared light noise reduction image, and the second image signal is spatially filtered according to the fourth filtering strength corresponding to each pixel to obtain the visible light noise reduction image;
  • the first local information and the second local information include at least one of local gradient information, local brightness information, and local information entropy.
  • performing joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image includes:
  • performing edge estimation according to the first temporal noise reduction image and the second temporal noise reduction image to obtain the edge estimation result;
  • performing spatial filtering on the first temporal noise reduction image according to the edge estimation result to obtain the near-infrared light noise reduction image, and performing spatial filtering on the second temporal noise reduction image according to the edge estimation result to obtain the visible light noise reduction image;
  • or, performing temporal filtering on the second spatial noise reduction image to obtain the visible light noise reduction image.
  • the method further includes:
  • the near-infrared light noise reduction image and the visible light noise reduction image are fused to obtain a fused image.
  • fusing the near-infrared light noise reduction image and the visible light noise reduction image to obtain a fused image includes:
  • fusing the near-infrared light noise reduction image and the visible light noise reduction image through a second fusion process to obtain a first target image, and fusing them through a third fusion process to obtain a second target image, where the fused image includes the first target image and the second target image;
  • the second fusion process and the third fusion process are different; or,
  • the second fusion process is the same as the third fusion process, but the fusion parameter of the second fusion process is a first fusion parameter, the fusion parameter of the third fusion process is a second fusion parameter, and the first fusion parameter and the second fusion parameter are different.
  • the method further includes:
  • pre-processing the first image signal and the second image signal to obtain a processed first image signal and a processed second image signal.
  • acquiring the first image signal and the second image signal includes:
  • performing near-infrared supplementary lighting through a first light supplement device, where the near-infrared supplementary lighting is performed at least during part of the exposure period of a first preset exposure and is not performed during the exposure period of a second preset exposure,
  • the first preset exposure and the second preset exposure being two of the multiple exposures of an image sensor;
  • performing multiple exposures through the image sensor to generate and output the first image signal and the second image signal, the first image signal being an image generated according to the first preset exposure and the second image signal being an image generated according to the second preset exposure.
  • acquiring the first image signal and the second image signal includes:
  • registering the first image signal and the second image signal.
  • acquiring the first image signal and the second image signal includes:
  • generating and outputting the first image signal by a first camera, and generating and outputting the second image signal by a second camera;
  • registering the first image signal and the second image signal.
  • the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image noise reduction method of the foregoing embodiments.
  • the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, or an optical data storage device.
  • the computer-readable storage medium mentioned in this application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
  • a computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the steps of the image noise reduction method described above.


Abstract

This application discloses an image noise reduction device and an image noise reduction method, belonging to the field of computer vision technology. In this application, the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal. Because the near-infrared light image signal has a high signal-to-noise ratio, the image noise reduction unit introduces the first image signal and performs joint noise reduction on the first image signal and the second image signal, so that noise and valid information in the image can be distinguished more accurately, thereby effectively reducing image smearing and loss of image detail.

Description

Image noise reduction device and image noise reduction method
This application claims priority to Chinese Patent Application No. 201910472708.X, filed on May 31, 2019 and entitled "Image noise reduction device and image noise reduction method", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision technology, and in particular to an image noise reduction device and an image noise reduction method.
Background
At present, various kinds of capture devices are widely used in fields such as intelligent transportation and security. Because captured images contain considerable noise, noise reduction may be applied to them to improve the quality of the captured images. The related art provides a method for temporal noise reduction of visible light images, in which a moving/static decision is made on the quantization noise of the block containing each pixel of the visible light image, so that different noise reduction strengths are applied in moving and static image regions.
However, when the ambient light is weak, the noise in a visible light image is likely to drown out the valid information, so the moving/static decision cannot be made accurately, which leads to image smearing and loss of image detail in the noise-reduced image.
Summary
Embodiments of this application provide an image noise reduction device and an image noise reduction method, which can improve the quality of captured images. The technical solutions are as follows:
In one aspect, an image noise reduction device is provided, including an image acquisition unit and an image noise reduction unit;
the image acquisition unit is configured to acquire a first image signal and a second image signal, where the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal;
the image noise reduction unit is configured to perform joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
In another aspect, an image noise reduction method is provided, including:
acquiring a first image signal and a second image signal, where the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal;
performing joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
In another aspect, an electronic device is provided, including a processor, a communication interface, a memory and a communication bus; the processor, the communication interface and the memory communicate with each other through the communication bus; the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to implement the foregoing image noise reduction method.
In another aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the foregoing image noise reduction method.
In another aspect, a computer program product is provided, including instructions which, when run on a computer, cause the computer to execute the foregoing image noise reduction method.
The beneficial effects of the technical solutions provided by the embodiments of this application are:
In this application, the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal. Because the near-infrared light image signal has a high signal-to-noise ratio, the image noise reduction unit introduces the first image signal and performs joint noise reduction on the first image signal and the second image signal, so that noise and valid information in the image can be distinguished more accurately, thereby effectively reducing image smearing and loss of image detail and improving image quality.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic structural diagram of an image noise reduction device according to an embodiment of this application.
Fig. 2 is a schematic structural diagram of a first image noise reduction unit according to an embodiment of this application.
Fig. 3 is a schematic structural diagram of a second image noise reduction unit according to an embodiment of this application.
Fig. 4 is a schematic structural diagram of a third image noise reduction unit according to an embodiment of this application.
Fig. 5 is a schematic structural diagram of an image acquisition unit according to an embodiment of this application.
Fig. 6 is a schematic diagram of the relationship between the wavelength and relative intensity of near-infrared supplementary lighting performed by a first light supplement device according to an embodiment of this application.
Fig. 7 is a schematic diagram of the relationship between the wavelength and pass rate of light passing through a first filter according to an embodiment of this application.
Fig. 8 is a schematic structural diagram of a second image capture device according to an embodiment of this application.
Fig. 9 is a schematic diagram of an RGB sensor according to an embodiment of this application.
Fig. 10 is a schematic diagram of an RGBW sensor according to an embodiment of this application.
Fig. 11 is a schematic diagram of an RCCB sensor according to an embodiment of this application.
Fig. 12 is a schematic diagram of an RYYB sensor according to an embodiment of this application.
Fig. 13 is a schematic diagram of the sensing curves of an image sensor according to an embodiment of this application.
Fig. 14 is a schematic diagram of a rolling shutter exposure mode according to an embodiment of this application.
Fig. 15 is a schematic diagram of a first timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode according to an embodiment of this application.
Fig. 16 is a schematic diagram of a second timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode according to an embodiment of this application.
Fig. 17 is a schematic diagram of a third timing relationship between near-infrared supplementary lighting and the first and second preset exposures in global exposure mode according to an embodiment of this application.
Fig. 18 is a schematic diagram of a first timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode according to an embodiment of this application.
Fig. 19 is a schematic diagram of a second timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode according to an embodiment of this application.
Fig. 20 is a schematic diagram of a third timing relationship between near-infrared supplementary lighting and the first and second preset exposures in rolling shutter exposure mode according to an embodiment of this application.
Fig. 21 is a flowchart of an image noise reduction method according to an embodiment of this application.
Reference numerals:
011: image sensor, 012: light supplement, 013: filter assembly; 014: lens;
0121: first light supplement device, 0122: second light supplement device, 0131: first filter.
Detailed Description
To make the objectives, technical solutions and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the drawings.
Fig. 1 is a schematic structural diagram of an image noise reduction device according to an embodiment of this application. As shown in Fig. 1, the image noise reduction device includes an image acquisition unit 01 and an image noise reduction unit 02;
the image acquisition unit 01 is configured to acquire a first image signal and a second image signal, the first image signal being a near-infrared light image signal and the second image signal being a visible light image signal; the image noise reduction unit 02 is configured to perform joint noise reduction on the first image signal and the second image signal to obtain a near-infrared light noise reduction image and a visible light noise reduction image.
In this application, the first image signal is a near-infrared light image signal and the second image signal is a visible light image signal. Because the near-infrared light image signal has a high signal-to-noise ratio, the image noise reduction unit introduces the first image signal and performs joint noise reduction on the first image signal and the second image signal, so that noise and valid information in the image can be distinguished more accurately, thereby effectively reducing image smearing and loss of image detail.
It should be noted that, referring to Fig. 1, in some possible cases the image noise reduction device may further include an image fusion unit 03 and an image pre-processing unit 04.
The image acquisition unit 01, image noise reduction unit 02, image fusion unit 03 and image pre-processing unit 04 included in the device are described below in turn.
1. Image noise reduction unit 02
In the embodiments of this application, after the image acquisition unit 01 acquires the first image signal and the second image signal, the image noise reduction unit 02 performs joint noise reduction on them to obtain the near-infrared light noise reduction image and the visible light noise reduction image. Alternatively, when the image pre-processing unit outputs a first image and a second image, the image noise reduction unit 02 processes the first image and the second image. The embodiments of this application take joint noise reduction of the first image signal and the second image signal as the example for explanation; the first image and the second image can be processed with reference to the same approach.
In some possible implementations, referring to Fig. 2, the image noise reduction unit 02 may include a temporal noise reduction unit 021. The temporal noise reduction unit 021 is configured to perform motion estimation according to the first image signal and the second image signal to obtain a motion estimation result, to perform temporal filtering on the first image signal according to the motion estimation result to obtain the near-infrared light noise reduction image, and to perform temporal filtering on the second image signal according to the motion estimation result to obtain the visible light noise reduction image. The motion estimation result may include one temporal filtering weight or multiple different temporal filtering weights; in the latter case, different temporal filtering weights in the motion estimation result may be used to temporally filter the first image signal and the second image signal respectively.
It should be noted that, referring to Fig. 3, the temporal noise reduction unit 021 may include a motion estimation unit 0211 and a temporal filtering unit 0212.
In some examples, the motion estimation unit 0211 may be configured to generate a first frame difference image according to the first image signal and a first historical noise reduction image, and to determine the first temporal filtering strength of each pixel in the first image signal according to the first frame difference image and multiple first frame difference thresholds, where the first historical noise reduction image is an image obtained by performing noise reduction on any one of the first N frames of the first image signal. The temporal filtering unit 0212 is configured to perform temporal filtering on the first image signal according to the first temporal filtering strength of each pixel to obtain the near-infrared light noise reduction image, and to perform temporal filtering on the second image signal according to the first temporal filtering strength of each pixel to obtain the visible light noise reduction image.
Exemplarily, the motion estimation unit 0211 may take the difference between the pixel value of each pixel in the first image signal and that of the corresponding pixel in the first historical noise reduction image to obtain an original frame difference image, and use this original frame difference image as the first frame difference image. The pixels of the first frame difference image, the first image signal, the first historical noise reduction image and the second image signal correspond spatially; that is, pixels at the same pixel coordinates correspond to each other.
Alternatively, the motion estimation unit 0211 may take the difference between each pixel of the first image signal and the corresponding pixel of the first historical noise reduction image to obtain the original frame difference image, and then process the original frame difference image to obtain the first frame difference image, where the processing may be spatial smoothing or block quantization of the original frame difference image.
After the first frame difference image is obtained, the motion estimation unit 0211 may determine the first temporal filtering strength of each pixel in the first image signal according to each pixel of the first frame difference image and multiple first frame difference thresholds. Each pixel of the first frame difference image corresponds to one first frame difference threshold, and the thresholds of different pixels may be the same or different; that is, the multiple first frame difference thresholds correspond one-to-one to the multiple pixels of the first frame difference image. In one possible implementation, the first frame difference threshold of each pixel can be set by an external user. In another possible implementation, the motion estimation unit 0211 may take the difference between the first historical noise reduction image and its corresponding image before noise reduction to obtain a first noise intensity image, and determine the first frame difference threshold of the pixel at the corresponding position of the first frame difference image according to the noise intensity of each pixel in the first noise intensity image. Of course, the first frame difference threshold of each pixel may also be determined in other ways, which is not limited in the embodiments of this application.
For each pixel of the first frame difference image, the motion estimation unit 0211 may determine the first temporal filtering strength of the corresponding pixel through formula (1) below, according to the frame difference of the pixel and the first frame difference threshold corresponding to it, where the frame difference of each pixel of the first frame difference image is the pixel value of that pixel.
(Formula (1) appears as an equation image, PCTCN2020092656-appb-000001, in the original document.)
Here (x, y) is the position of the pixel in the image; α_nir(x, y) is the first temporal filtering strength of the pixel with coordinates (x, y), dif_nir(x, y) is the frame difference of that pixel, and dif_thr_nir(x, y) is the first frame difference threshold corresponding to that pixel.
It should be noted that, for each pixel of the first frame difference image, the smaller the frame difference relative to the first frame difference threshold, the more the pixel tends to be static, that is, the lower the motion level of the pixel. From formula (1), for any pixel, the smaller its frame difference relative to the first frame difference threshold, the larger the first temporal filtering strength of the pixel at the same position. The motion level indicates the intensity of motion: the higher the motion level, the more intense the motion. The first temporal filtering strength may take values between 0 and 1.
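The monotonic behaviour just described (strength in [0, 1], larger when the frame difference is small relative to its threshold) can be sketched as follows. Formula (1) itself is only available as an equation image in the original, so the linear ramp used here is an assumed form that merely satisfies the stated behaviour, not the patent's exact formula.

```python
import numpy as np

def first_temporal_strength(frame_diff, diff_thr):
    """Per-pixel first temporal filtering strength alpha_nir in [0, 1].

    A pixel whose frame difference is small relative to its threshold is
    treated as static and gets a strength close to 1. The linear ramp is
    an assumption, since formula (1) is an equation image in the source.
    """
    d = np.abs(np.asarray(frame_diff, dtype=np.float64))
    t = np.maximum(np.asarray(diff_thr, dtype=np.float64), 1e-6)
    return np.clip(1.0 - d / t, 0.0, 1.0)
```

Any decreasing map of `d / t` onto [0, 1] would satisfy the description equally well; the clip keeps pixels whose difference exceeds the threshold at strength 0 (fully moving).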
After the first temporal filtering strength of each pixel in the first image signal is determined, the temporal filtering unit 0212 may perform temporal filtering directly on the first image signal and the second image signal according to the first temporal filtering strength, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
It should be noted that, when the image quality of the first image signal is clearly better than that of the second image signal, since the first image signal is a near-infrared light image signal with a high signal-to-noise ratio, performing temporal filtering on the second image signal with the first temporal filtering strength of each pixel of the first image signal distinguishes noise from valid information in the image more accurately, thus avoiding the loss of image detail and image smearing in the noise-reduced image.
It should also be noted that, in some possible cases, the motion estimation unit 0211 may generate at least one first frame difference image according to the first image signal and at least one first historical noise reduction image, and determine the first temporal filtering strength of each pixel in the first image signal according to the at least one frame difference image and the multiple first frame difference thresholds corresponding to each frame difference image.
The at least one historical noise reduction image refers to images obtained by performing noise reduction on the first N frames of the first image signal. For each first historical noise reduction image among them, the motion estimation unit 0211 may determine the corresponding first frame difference image from that historical image and the first image signal with reference to the implementations described above; then, from each first frame difference image and its multiple first frame difference thresholds, determine the temporal filtering strength of each pixel of that frame difference image with reference to the implementations above; and then fuse the temporal filtering strengths of corresponding pixels (pixels at the same position) across the first frame difference images to obtain the first temporal filtering strength of the pixel at the corresponding position in the first image signal. Alternatively, for the pixels at the same position in the at least one first frame difference image, the motion estimation unit 0211 may select, among the corresponding temporal filtering strengths, the one representing the highest motion level, and take the selected strength as the first temporal filtering strength of the pixel at that position in the first image signal.
In other examples, the motion estimation unit 0211 may generate a first frame difference image from the first image signal and the first historical noise reduction image and determine the first temporal filtering strength of each pixel in the first image signal from the first frame difference image and multiple first frame difference thresholds, the first historical noise reduction image being an image obtained by performing noise reduction on any one of the first N frames of the first image signal. The motion estimation unit 0211 is further configured to generate a second frame difference image from the second image signal and a second historical noise reduction image and determine the second temporal filtering strength of each pixel in the second image signal from the second frame difference image and multiple second frame difference thresholds, the second historical noise reduction image being an image obtained by performing noise reduction on any one of the first N frames of the second image signal. The motion estimation unit 0211 is further configured to determine the joint temporal filtering strength of each pixel from the first temporal filtering strength of each pixel in the first image signal and the second temporal filtering strength of each pixel in the second image signal. The temporal filtering unit 0212 is configured to perform temporal filtering on the first image signal according to the first temporal filtering strength or the joint temporal filtering strength of each pixel to obtain the near-infrared light noise reduction image, and on the second image signal according to the joint temporal filtering strength of each pixel to obtain the visible light noise reduction image.
That is, the motion estimation unit 0211 can determine not only the first temporal filtering strength of each pixel in the first image signal, in the way described above, but also the second temporal filtering strength of each pixel in the second image signal.
When determining the second temporal filtering strength of each pixel, the motion estimation unit 0211 may first take the difference between the pixel value of each pixel of the second image signal and that of the corresponding pixel of the second historical noise reduction image to obtain the second frame difference image. The pixels of the first image signal, the second image signal and the second historical noise reduction image correspond spatially; that is, pixels at the same pixel coordinates correspond to each other.
After the second frame difference image is obtained, the motion estimation unit 0211 may determine the second temporal filtering strength of each pixel in the second image signal from each pixel of the second frame difference image and multiple second frame difference thresholds. Each pixel of the second frame difference image corresponds to one second frame difference threshold, that is, the multiple second frame difference thresholds correspond one-to-one to the multiple pixels of the second frame difference image, and the thresholds of different pixels may be the same or different. In one possible implementation, the second frame difference threshold of each pixel can be set by an external user. In another possible implementation, the motion estimation unit 0211 may take the difference between the second historical noise reduction image and its corresponding image before noise reduction to obtain a second noise intensity image, and determine the second frame difference threshold of the pixel at the corresponding position of the second frame difference image from the noise intensity of each pixel in the second noise intensity image. Of course, the second frame difference threshold of each pixel may also be determined in other ways, which is not limited in the embodiments of this application.
For each pixel of the second frame difference image, the motion estimation unit 0211 may determine the second temporal filtering strength of the corresponding pixel through formula (2) below, according to the frame difference of the pixel and the second frame difference threshold corresponding to it, where the frame difference of each pixel of the second frame difference image is the pixel value of that pixel.
(Formula (2) appears as an equation image, PCTCN2020092656-appb-000002, in the original document.)
Here α_vis(x, y) is the second temporal filtering strength of the pixel with coordinates (x, y), dif_vis(x, y) is the frame difference of that pixel, and dif_thr_vis(x, y) is the second frame difference threshold corresponding to that pixel.
It should be noted that, for each pixel of the second frame difference image, the smaller the frame difference relative to the second frame difference threshold, the more the pixel tends to be static, that is, the lower its motion level; and from formula (2), for any pixel, the smaller its frame difference relative to the second frame difference threshold, the larger the second temporal filtering strength of the pixel at the same position. In summary, in the embodiments of this application, the lower the motion level of a pixel, the larger its second temporal filtering strength, which may take values between 0 and 1.
After the first temporal filtering strength and second temporal filtering strength of each pixel are determined, the motion estimation unit 0211 may weight them to obtain the joint temporal weight of each pixel. The joint temporal weight of each pixel thus determined is the motion estimation result of the first image signal and the second image signal. Since pixels at the same pixel coordinates in the first and second image signals correspond to each other, the first and second temporal filtering strengths of any pixel refer to the temporal filtering strengths of the pixel at the same pixel position in the two signals.
Exemplarily, the motion estimation unit 0211 may weight the first and second temporal filtering strengths of each pixel through formula (3) below to obtain the joint temporal filtering strength of each pixel.
(Formula (3) appears as equation images, PCTCN2020092656-appb-000003 to -000005, in the original document.) Here Ω is the neighborhood centered on the pixel with coordinates (x, y), i.e. a local image region centered on that pixel; (x+i, y+j) are the pixel coordinates within that local region; the quantities involved are the first temporal filtering strengths within the local image region centered on (x, y) and the second temporal filtering strengths within that region; and α_fus(x, y) is the joint temporal filtering strength of the pixel with coordinates (x, y). The first and second temporal filtering strengths within the local image region are used to adjust the proportions of the first and second strengths in the joint strength: the side with the higher local motion level takes the larger share.
It should be noted that the first temporal filtering strength can represent the motion level of a pixel in the first image signal, and the second its motion level in the second image signal, while the joint temporal filtering strength determined in the above way fuses both; that is, it takes into account the motion tendency the pixel exhibits in the first image signal as well as in the second image signal. Compared with the first or second temporal filtering strength alone, the joint temporal filtering strength therefore characterizes the motion tendency of a pixel more accurately, so that subsequent temporal filtering with the joint strength removes image noise more effectively and mitigates problems such as image smearing caused by misjudging the motion level of a pixel.
In some examples, when determining the joint temporal filtering strength through formula (3), different parameters may be used to fuse the first and second temporal filtering strengths, yielding two different filtering strengths, namely a first filtering strength and a second filtering strength; in this case the joint temporal filtering strength includes the first filtering strength and the second filtering strength.
In some examples, after the first and second temporal filtering strengths of each pixel are determined, for any pixel the motion estimation unit may select one of the two as the joint temporal filtering weight of that pixel; when selecting, the one of the two strengths representing the higher motion level of the pixel may be chosen as the joint temporal filtering strength.
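The neighborhood-weighted fusion described above can be sketched as follows. Formula (3) is an equation image in the original, so the weighting below — in which the signal showing the larger local motion level (i.e. the smaller local mean strength) gets the larger share — is only one plausible form consistent with the text, and the window radius `r` is a hypothetical parameter.

```python
import numpy as np

def box_mean(img, r=1):
    """Mean over a (2r+1) x (2r+1) neighborhood Omega, with edge replication."""
    p = np.pad(np.asarray(img, dtype=np.float64), r, mode="edge")
    h, w = img.shape
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def joint_temporal_strength(alpha_nir, alpha_vis, r=1):
    """Fuse per-pixel strengths: the signal with the higher local motion
    level (lower local mean strength) contributes the larger share.
    The weighting itself is an assumption about the imaged formula (3)."""
    m_nir = 1.0 - box_mean(alpha_nir, r)   # local motion level, NIR side
    m_vis = 1.0 - box_mean(alpha_vis, r)   # local motion level, visible side
    w = m_nir / np.maximum(m_nir + m_vis, 1e-6)
    return w * alpha_nir + (1.0 - w) * alpha_vis
```

Because `w` lies in [0, 1], the joint strength is a per-pixel convex combination of the two input strengths, so it stays within [0, 1] as the text requires.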
After the joint temporal filtering strength of each pixel is determined, the temporal filtering unit 0212 may perform temporal filtering on the first image signal and the second image signal respectively according to the joint temporal filtering strength, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
Exemplarily, according to the joint temporal filtering strength of each pixel, the temporal filtering unit 0212 may perform temporal weighting of each pixel of the first image signal and the first historical noise reduction image through formula (4) below to obtain the near-infrared light noise reduction image, and temporal weighting of each pixel of the second image signal and the second historical noise reduction image through formula (5) below to obtain the visible light noise reduction image.
(Formulas (4) and (5) appear as equation images, PCTCN2020092656-appb-000006 and -000007, in the original document.) The quantities involved are: the pixel with coordinates (x, y) in the near-infrared light noise reduction image; the pixel with coordinates (x, y) in the first historical noise reduction image; α_fus(x, y), the joint temporal filtering strength of the pixel with coordinates (x, y); I_nir(x, y, t), the pixel with coordinates (x, y) in the first image signal; the pixel with coordinates (x, y) in the visible light noise reduction image; the pixel with coordinates (x, y) in the second historical noise reduction image; and I_vis(x, y, t), the pixel with coordinates (x, y) in the second image signal.
Alternatively, considering that the first image signal is a near-infrared light image signal with a high signal-to-noise ratio, the temporal filtering unit 0212 may also perform temporal filtering on the first image signal according to the first temporal filtering strength of each pixel to obtain the near-infrared light noise reduction image, and on the second image signal according to the joint temporal filtering strength of each pixel to obtain the visible light noise reduction image.
It should be noted that, from the relationship between temporal filtering strength and motion level described above, in the embodiments of this application the regions of the first and second image signals with more intense motion can be filtered with weaker temporal filtering strength.
In some examples, when the joint temporal filtering strength includes the first filtering strength and the second filtering strength, the temporal filtering unit 0212 may temporally filter the first image signal with the first filtering strength to obtain the near-infrared light noise reduction image, and the second image signal with the second filtering strength to obtain the visible light noise reduction image. That is, in the embodiments of this application, when the motion estimation unit 0211 fuses the first and second temporal filtering strengths with different parameters to obtain two different joint temporal filtering strengths, the temporal filtering unit 0212 may use the two different joint strengths to temporally filter the first and second image signals respectively.
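The temporal weighting of formulas (4) and (5) can be sketched as a standard recursive average of the current frame and the historical noise-reduced frame. Since both formulas are equation images in the original, treating the filtering strength as the weight of the history (so that static pixels, with strength near 1, are smoothed most) is an assumed reading, not the patent's exact expression.

```python
import numpy as np

def temporal_filter(current, history, alpha):
    """Temporal weighted average of the current frame and the historical
    noise-reduced frame; alpha is the per-pixel filtering strength in [0, 1].
    Weighting the history by alpha is an assumption about the imaged
    formulas (4)/(5): larger strength -> stronger temporal smoothing.
    """
    current = np.asarray(current, dtype=np.float64)
    history = np.asarray(history, dtype=np.float64)
    return alpha * history + (1.0 - alpha) * current
```

The same helper serves both signals: formula (4) would apply it to `I_nir` with the NIR history, formula (5) to `I_vis` with the visible history, each with its chosen strength map.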
In other possible implementations, referring to Fig. 2, the image noise reduction unit 02 may include a spatial noise reduction unit 022. The spatial noise reduction unit 022 is configured to perform edge estimation according to the first image signal and the second image signal to obtain an edge estimation result, to perform spatial filtering on the first image signal according to the edge estimation result to obtain the near-infrared light noise reduction image, and to perform spatial filtering on the second image signal according to the edge estimation result to obtain the visible light noise reduction image.
It should be noted that, referring to Fig. 4, the spatial noise reduction unit 022 may include an edge estimation unit 0221 and a spatial filtering unit 0222.
In some examples, the edge estimation unit 0221 is configured to determine the first spatial filtering strength of each pixel in the first image signal; the spatial filtering unit 0222 is configured to perform spatial filtering on the first image signal according to the first spatial filtering strength corresponding to each pixel to obtain the near-infrared light noise reduction image, and on the second image signal according to the first spatial filtering strength corresponding to each pixel to obtain the visible light noise reduction image.
Exemplarily, the edge estimation unit 0221 may determine the first spatial filtering strength of each pixel according to the differences between that pixel of the first image signal and the other pixels in its neighborhood, and may generate the first spatial filtering strengths of each pixel through formula (6) below.
(Formula (6) appears as equation images, PCTCN2020092656-appb-000012 and -000013, in the original document.) Here Ω is the neighborhood centered on the pixel with coordinates (x, y), i.e. a local image region centered on that pixel; (x+i, y+j) are the pixel coordinates within that local region; img_nir(x, y) is the pixel value of the pixel with coordinates (x, y) in the first image signal; δ₁ and δ₂ are Gaussian standard deviations; and the result is the first spatial filtering strength determined, within the local image region, from the difference between pixel (x, y) and pixel (x+i, y+j).
It should be noted that the neighborhood of each pixel contains multiple pixels, so for any pixel, multiple first spatial filtering strengths can be determined from the differences between that pixel and the pixels of its local image region. After the multiple first spatial filtering strengths of each pixel are determined, the spatial filtering unit 0222 may perform spatial filtering on the first and second image signals according to them, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
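The neighborhood weights of formula (6) can be sketched as follows. The formula itself is an equation image in the original; given the two "Gaussian standard deviations" it mentions, a bilateral-style product of a spatial Gaussian and a range Gaussian is a natural assumption, with `sigma_s` and `sigma_r` standing in for δ₁ and δ₂.

```python
import numpy as np

def spatial_strengths(img, x, y, r, sigma_s, sigma_r):
    """First spatial filtering strengths of pixel (x, y) over its
    (2r+1) x (2r+1) neighborhood Omega. The bilateral-style Gaussian
    product is an assumed form of the imaged formula (6)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    betas = np.empty((2 * r + 1, 2 * r + 1))
    for j in range(-r, r + 1):
        for i in range(-r, r + 1):
            yy = min(max(y + j, 0), h - 1)   # clamp to the image border
            xx = min(max(x + i, 0), w - 1)
            dv = img[y, x] - img[yy, xx]     # intensity difference
            betas[j + r, i + r] = (np.exp(-(i * i + j * j) / (2 * sigma_s ** 2))
                                   * np.exp(-(dv * dv) / (2 * sigma_r ** 2)))
    return betas
```

As the surrounding text requires, the weight shrinks as the difference between the center pixel and a neighbor grows, so edges receive weaker smoothing than flat regions.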
In other examples, the edge estimation unit 0221 is configured to determine the first spatial filtering strength of each pixel in the first image signal and the second spatial filtering strength of each pixel in the second image signal; to extract local information from the first image signal to obtain first local information and from the second image signal to obtain second local information; and to determine the joint spatial filtering strength corresponding to each pixel from the first spatial filtering strength, the second spatial filtering strength, the first local information and the second local information. The spatial filtering unit 0222 is configured to perform spatial filtering on the first image signal according to the first spatial filtering strength corresponding to each pixel to obtain the near-infrared light noise reduction image, and on the second image signal according to the joint spatial filtering strength corresponding to each pixel to obtain the visible light noise reduction image. The first local information and the second local information include at least one of local gradient information, local brightness information and local information entropy.
That is, the edge estimation unit 0221 can determine not only the first spatial filtering strength of each pixel in the first image signal, in the way described above, but also the second spatial filtering strength of each pixel in the second image signal.
When determining the second spatial filtering strength of each pixel, the edge estimation unit 0221 may do so according to the differences between that pixel of the second image signal and the other pixels in its neighborhood, and may generate the second spatial filtering strengths of each pixel through formula (7) below.
(Formula (7) appears as equation images, PCTCN2020092656-appb-000014 and -000015, in the original document.) Here Ω is the neighborhood centered on the pixel with coordinates (x, y), i.e. a local image region centered on that pixel; (x+i, y+j) are the pixel coordinates within that local region; img_vis(x, y) is the pixel value of the pixel with coordinates (x, y) in the second image signal; δ₁ and δ₂ are Gaussian standard deviations; and the result is the second spatial filtering strength of the pixel at (x, y) determined, within the local image region, from its difference with pixel (x+i, y+j). Likewise, since the neighborhood of each pixel contains multiple pixels, multiple second spatial filtering strengths are obtained for each pixel by the method described above.
From formulas (6) and (7), for the local image region centered on the pixel at (x, y), the smaller the differences between that pixel and the pixels within the region, the larger the multiple spatial filtering strengths corresponding to that pixel; that is, the magnitude of a pixel's spatial filtering strength is negatively correlated with the magnitude of the differences between that pixel and the pixels of the corresponding local image region.
After the first and second spatial filtering strengths of each pixel are determined, the edge estimation unit 0221 may convolve the first image signal and the second image signal with the Sobel edge detection operator to obtain a first texture image and a second texture image, and use them as weights to weight each pixel's multiple first spatial filtering strengths and multiple second spatial filtering strengths, generating each pixel's multiple joint spatial filtering strengths within the local image region. The first texture image is the first local information, and the second texture image is the second local information.
Exemplarily, the Sobel edge detection operator is shown in formula (8) below, and the edge estimation unit 0221 may generate the joint spatial filtering strength through formula (9) below.
(Formulas (8) and (9) appear as equation images, PCTCN2020092656-appb-000016 and -000017, in the original document; formula (8) gives the horizontal and vertical Sobel kernels.) Here sobel_H is the Sobel edge detection operator in the horizontal direction and sobel_V the Sobel edge detection operator in the vertical direction; β_fus(x+i, y+j) is any joint spatial filtering strength of the pixel with coordinates (x, y) within its neighborhood Ω; and the remaining quantities are the texture information of the pixel with coordinates (x, y) in the first texture image and in the second texture image respectively.
It should be noted that, because the edge detection operator is applied when determining the joint spatial filtering strength, the smaller the multiple joint spatial filtering strengths finally obtained for a pixel, the larger the differences between that pixel and the other pixels in its local image region. Hence, in the embodiments of this application, the joint spatial filtering strength is smaller in regions of the image where the brightness differences between adjacent pixels are larger, and relatively larger in regions where those differences are small. That is, during spatial filtering, a weaker filtering strength is applied at edges and a stronger spatial filtering strength in non-edge regions, which improves the noise reduction effect.
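The Sobel texture extraction and the texture-weighted combination described above can be sketched as follows. The Sobel kernels are the standard ones named by formula (8); the weighting in `joint_spatial_strength`, in which the image with the stronger local texture dominates, is an assumption about the imaged formula (9), not the patent's exact expression.

```python
import numpy as np

SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_V = SOBEL_H.T  # standard horizontal/vertical Sobel kernels (formula (8))

def sobel_texture(img):
    """Gradient magnitude from the horizontal and vertical Sobel responses,
    used as per-pixel texture information."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    gh = np.zeros((h, w))
    gv = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]
            gh += SOBEL_H[dy, dx] * win
            gv += SOBEL_V[dy, dx] * win
    return np.hypot(gh, gv)

def joint_spatial_strength(beta_nir, beta_vis, tex_nir, tex_vis):
    """Texture-weighted mix of the two spatial strengths: the signal with
    the stronger local texture takes the larger share (assumed formula (9))."""
    w = tex_nir / np.maximum(tex_nir + tex_vis, 1e-6)
    return w * beta_nir + (1.0 - w) * beta_vis
```

On a flat image the texture is zero everywhere, so neither signal dominates; near an edge the better-contrasted signal's (smaller, edge-preserving) strengths carry more weight, matching the behaviour the text describes.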
In some examples, the edge estimation unit 0221 may process the first image signal differently to obtain different first local information, or process the second image signal differently to obtain different second local information, and use the two different pairs of first and second local information as weights in formula (9) to obtain two different spatial filtering strengths, namely a third filtering strength and a fourth filtering strength. In this case the joint spatial filtering strength may include the third filtering strength and the fourth filtering strength.
After the joint spatial filtering strength is obtained, the spatial filtering unit 0222 may perform spatial filtering on the first image signal and the second image signal respectively according to it, thereby obtaining the near-infrared light noise reduction image and the visible light noise reduction image.
Alternatively, considering that the first image signal is a near-infrared light image signal with a high signal-to-noise ratio, when its quality is clearly better than that of the second image signal there is no need to use the edge information of the second image signal to assist the spatial filtering of the first. In that case the spatial filtering unit 0222 may spatially filter the first image signal according to the first spatial filtering strength of each pixel, and the second image signal according to the joint spatial filtering strength of each pixel.
Exemplarily, the spatial filtering unit 0222 may perform spatial weighting of each pixel of the first image signal through formula (10) below according to each pixel's first spatial filtering strength to obtain the near-infrared light noise reduction image, and weighting of each pixel of the second image signal through formula (11) below according to each pixel's joint spatial filtering strength to obtain the visible light noise reduction image.
(Formulas (10) and (11) appear as equation images, PCTCN2020092656-appb-000020 to -000023, in the original document.) The quantities involved are: the pixel with coordinates (x, y) in the near-infrared light noise reduction image; I_nir(x+i, y+j), the pixels within the neighborhood of the pixel at (x, y) in the first image signal; β_nir(x+i, y+j), the first spatial filtering strength of the pixel at (x, y) within that neighborhood; Ω, the neighborhood centered on the pixel at (x, y); the pixel with coordinates (x, y) in the visible light noise reduction image; I_vis(x+i, y+j), the pixels within the neighborhood of the pixel at (x, y) in the second image signal; and β_fus(x+i, y+j), the joint spatial filtering strength of the pixel at (x, y) within that neighborhood.
In some examples, when the joint spatial filtering strength includes the third filtering strength and the fourth filtering strength, the spatial noise reduction unit 0222 may spatially filter the first image signal according to the third filtering strength to obtain the near-infrared light noise reduction image, and the second image signal according to the fourth filtering strength to obtain the visible light noise reduction image. That is, when two different joint spatial filtering strengths are obtained by fusing the first and second spatial filtering strengths with different local information, the spatial noise reduction unit 0222 may use the two different joint strengths to spatially filter the two image signals respectively.
It is worth noting that, in the embodiments of this application, the image noise reduction unit 02 may also include both the temporal noise reduction unit 021 and the spatial noise reduction unit 022 described above. In that case, with reference to the implementations described above, the temporal noise reduction unit 021 may first perform temporal filtering on the first and second image signals to obtain a first temporal noise reduction image and a second temporal noise reduction image, and the spatial noise reduction unit 022 may then perform spatial filtering on them to obtain the near-infrared light noise reduction image and the visible light noise reduction image. Alternatively, the spatial noise reduction unit 022 may first perform spatial filtering on the first and second image signals to obtain a first spatial noise reduction image and a second spatial noise reduction image, and the temporal noise reduction unit 021 may then perform temporal filtering on them to obtain the near-infrared light noise reduction image and the visible light noise reduction image. That is, the embodiments of this application do not limit the order of spatial filtering and temporal filtering.
2. Image fusion unit
In a possible embodiment, after the image noise reduction unit 02 obtains the near-infrared light noise reduction image and the visible light noise reduction image, the image fusion unit 03 in Fig. 1 may fuse them to obtain a fused image.
In a possible implementation, the image fusion unit 03 may include a first fusion unit, configured to fuse the near-infrared light noise reduction image and the visible light noise reduction image through a first fusion process to obtain the fused image.
It should be noted that possible implementations of the first fusion process include the following.
In a first possible implementation, the first fusion unit may traverse every pixel position and fuse the RGB color vectors at the same pixel position of the near-infrared light noise reduction image and the visible light noise reduction image according to a preset fusion weight for each pixel position.
Exemplarily, the first fusion unit may generate the fused image through model (12) below.
img = img₁ × (1 − w) + img₂ × w  (12)
where img is the fused image, img₁ the near-infrared light noise reduction image, img₂ the visible light noise reduction image, and w the fusion weight. It should be noted that the fusion weight takes values in the range (0, 1); for example, the fusion weight may be 0.5.
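Model (12) is stated explicitly in the text, so it can be written down directly; only the function name and the NumPy framing below are incidental choices.

```python
import numpy as np

def fuse(img_nir, img_vis, w=0.5):
    """Pixel-wise fusion per model (12): img = img1*(1 - w) + img2*w,
    with the fusion weight w in (0, 1)."""
    assert 0.0 < w < 1.0, "fusion weight must lie in (0, 1)"
    return np.asarray(img_nir, dtype=np.float64) * (1.0 - w) \
         + np.asarray(img_vis, dtype=np.float64) * w
```

With the default `w = 0.5` the fused image is simply the per-pixel average of the two noise-reduced images; a larger `w` shifts the result toward the visible light image.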
It should be noted that, in model (12), the fusion weight may also be obtained by processing the near-infrared light noise reduction image and the visible light noise reduction image.
Exemplarily, the first fusion unit may perform edge extraction on the near-infrared noise reduction image to obtain a first edge image, perform edge extraction on the visible light noise reduction image to obtain a second edge image, and determine the fusion weight of each pixel position from the first and second edge images.
In a second possible implementation, the first fusion unit may pass the luminance signal of the visible light noise reduction image through a low-pass filter to obtain a low-frequency signal, pass the near-infrared light noise reduction image through a high-pass filter to obtain a high-frequency signal, add the low-frequency and high-frequency signals to obtain a fused luminance signal, and finally combine the fused luminance signal with the chrominance signal of the visible light noise reduction image to obtain the fused image.
In a third possible implementation, the first fusion unit may perform color space conversion on the near-infrared light noise reduction image to separate a first luminance image and a first color image, and on the visible light noise reduction image to separate a second luminance image and a second color image; perform pyramid decomposition on the first and second luminance images to obtain multiple base images and detail images with different scale information; weight the multiple base and detail images according to the relative magnitude of the information entropy or gradient between the first and second luminance images; and reconstruct a fused luminance image. The first fusion unit may then select, of the first and second color images, the one with the higher color accuracy as the color component of the fused image, and adjust the color component of the fused image according to the difference between the fused luminance image and the luminance image corresponding to the selected color image, improving color accuracy.
In other possible implementations, the image fusion unit 03 may include a second fusion unit and a third fusion unit. The second fusion unit is configured to fuse the near-infrared light noise reduction image and the visible light noise reduction image through a second fusion process to obtain a first target image; the third fusion unit is configured to fuse them through a third fusion process to obtain a second target image; the fused image includes the first target image and the second target image.
It should be noted that the second fusion process and the third fusion process may be different. For example, they may be any two of the three possible implementations of the first fusion process described above; or the second fusion process may be any one of those three possible implementations and the third fusion process some other processing method; or the third fusion process may be any one of those three possible implementations and the second fusion process some other processing method.
The second fusion process and the third fusion process may also be the same. In that case, the fusion parameter of the second fusion process is a first fusion parameter, that of the third fusion process is a second fusion parameter, and the first and second fusion parameters are different. For example, both may be the first possible implementation of the first fusion process described above, but with different fusion weights.
By having different fusion units fuse the two pre-processed images with different fusion processes, or with the same fusion process but different fusion parameters, two fused images with different image styles can be obtained. The image fusion unit can output the different fused images to different subsequent units, thereby satisfying the requirements that different subsequent operations place on the fused image.
It should be noted that, in the above embodiments, a captured image that has not been processed may be called an image, and a processed image may also be called an image.
3. Image acquisition unit
The image acquisition unit 01 may be an image capture device, or it may be a receiving device for receiving images transmitted by other devices.
When the image acquisition unit 01 is an image capture device, referring to Fig. 5, it may include an image sensor 011, a light supplement 012 and a filter assembly 013, the image sensor 011 being located on the light exit side of the filter assembly 013. The image sensor 011 is configured to generate and output a first image signal and a second image signal through multiple exposures, where the first image signal is an image generated according to a first preset exposure, the second image signal is an image generated according to a second preset exposure, and the first and second preset exposures are two of the multiple exposures. The light supplement 012 includes a first light supplement device 0121 for near-infrared supplementary lighting, where near-infrared supplementary lighting is present at least during part of the exposure period of the first preset exposure and absent during the exposure period of the second preset exposure. The filter assembly 013 includes a first filter 0131, which passes visible light and part of the near-infrared light, where the intensity of near-infrared light passing through the first filter 0131 while the first light supplement device 0121 performs near-infrared supplementary lighting is higher than the intensity passing through it when the first light supplement device 0121 does not.
In the embodiments of this application, referring to Fig. 5, the image acquisition unit 01 may further include a lens 014. The filter assembly 013 may be located between the lens 014 and the image sensor 011 with the image sensor 011 on the light exit side of the filter assembly 013, or the lens 014 may be located between the filter assembly 013 and the image sensor 011 with the image sensor 011 on the light exit side of the lens 014. As an example, the first filter 0131 may be a filter film: when the filter assembly 013 is between the lens 014 and the image sensor 011, the first filter 0131 may be attached to the surface of the light exit side of the lens 014; when the lens 014 is between the filter assembly 013 and the image sensor 011, it may be attached to the surface of the light entry side of the lens 014.
As an example, the image acquisition unit 01 may be an image capture device such as a video camera, a capture machine, a face recognition camera, a code reading camera, a vehicle-mounted camera or a panoramic detail camera, and the light supplement 012 may be inside the image capture device as a part of it.
As another example, the image acquisition unit 01 may include an image capture device and a light supplement 012 located outside the image capture device and connected to it, which can ensure that a certain relationship holds between the exposure timing of the image sensor 011 in the image acquisition unit 01 and the near-infrared supplementary lighting timing of the first light supplement device 0121 of the light supplement 012, for example that near-infrared supplementary lighting is present at least during part of the exposure period of the first preset exposure and absent during the exposure period of the second preset exposure.
In addition, the first light supplement device 0121 is a device that can emit near-infrared light, such as a near-infrared supplementary light lamp. It may perform near-infrared supplementary lighting in a strobe manner or in another strobe-like manner, which is not limited in the embodiments of this application. In some examples, when the first light supplement device 0121 performs near-infrared supplementary lighting in a strobe manner, the strobing may be controlled manually, or by a software program or a specific device; this is not limited either. The period during which the first light supplement device 0121 performs near-infrared supplementary lighting may coincide with the exposure period of the first preset exposure, or may be longer or shorter than it, as long as near-infrared supplementary lighting is present during the whole or part of the exposure period of the first preset exposure and absent during the exposure period of the second preset exposure.
It should be noted that the absence of near-infrared supplementary lighting during the exposure period of the second preset exposure means, for the global exposure mode, the period between the exposure start time and the exposure end time, and, for the rolling shutter exposure mode, the period between the exposure start time of the first effective image row and the exposure end time of the last effective image row of the second image signal, but is not limited to this. For example, the exposure period of the second preset exposure may also be the exposure period corresponding to a target image in the second image signal, the target image being the several effective image rows of the second image signal corresponding to a target object or target region; the period between the exposure start time and exposure end time of these rows can be regarded as the exposure period of the second preset exposure.
Another point to note is that, when the first light supplement device 0121 performs near-infrared supplementary lighting on an external scene, near-infrared light incident on object surfaces may be reflected by the objects into the first filter 0131; and since ambient light normally includes visible light and near-infrared light, the near-infrared light in the ambient light is likewise reflected by objects into the first filter 0131 when incident on their surfaces. Therefore, the near-infrared light passing through the first filter 0131 during near-infrared supplementary lighting includes the near-infrared light emitted by the first light supplement device 0121 and reflected by objects, plus the near-infrared light in the ambient light reflected by objects; without near-infrared supplementary lighting, it includes only the near-infrared light in the ambient light reflected by objects. Taking the structure in which the filter assembly 013 is between the lens 014 and the image sensor 011 with the sensor on the light exit side of the filter assembly as an example, the image acquisition unit 01 captures the two signals as follows: during the first preset exposure of the image sensor 011, near-infrared supplementary lighting is present, and the ambient light of the shooting scene together with the near-infrared light reflected by objects in the scene during the supplementary lighting passes through the lens 014 and the first filter 0131, after which the image sensor 011 generates the first image signal through the first preset exposure; during the second preset exposure, no near-infrared supplementary lighting is present, the ambient light of the shooting scene passes through the lens 014 and the first filter 0131, and the image sensor 011 generates the second image signal through the second preset exposure. Within one frame period of image capture there may be M first preset exposures and N second preset exposures, with various possible orderings of the two; the values of M and N and their relative magnitude may be set according to actual requirements, e.g. M and N may be equal or unequal.
It should be noted that the first filter 0131 may pass light in part of the near-infrared band; in other words, the near-infrared band passing through the first filter 0131 may be part of the near-infrared band or the whole near-infrared band, which is not limited in the embodiments of this application.
In addition, since the intensity of near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 0121, the intensity of near-infrared light passing through the first filter 0131 during near-infrared supplementary lighting by the first light supplement device 0121 is higher than when it performs no near-infrared supplementary lighting.
The band range of the near-infrared supplementary lighting performed by the first light supplement device 0121 may be a second reference band range, which may be 700 nm–800 nm or 900 nm–1000 nm, etc.; in this way the interference caused by common 850 nm near-infrared lamps can be reduced, and this is not limited in the embodiments of this application. The band range of the near-infrared light incident on the first filter 0131 may be a first reference band range of 650 nm–1100 nm.
During near-infrared supplementary lighting, the near-infrared light passing through the first filter 0131 includes the near-infrared light reflected by objects from the first light supplement device 0121 and the near-infrared light in the ambient light reflected by objects, so the intensity of the near-infrared light entering the filter assembly 013 is then relatively strong. Without near-infrared supplementary lighting, the near-infrared light passing through the first filter 0131 includes only the near-infrared light in the ambient light reflected by objects into the filter assembly 013; since there is no near-infrared light supplied by the first light supplement device 0121, the intensity of the near-infrared light passing through the first filter 0131 is then relatively weak. Therefore, the intensity of near-infrared light in the first image signal generated and output according to the first preset exposure is higher than the intensity of near-infrared light in the second image signal generated and output according to the second preset exposure.
The center wavelength and/or band range of the near-infrared supplementary lighting performed by the first light supplement device 0121 can be chosen in various ways. In the embodiments of this application, to obtain a better match between the first light supplement device 0121 and the first filter 0131, the center wavelength of the near-infrared supplementary lighting can be designed and the characteristics of the first filter 0131 selected, so that when the center wavelength of the near-infrared supplementary lighting is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 0131 can meet a constraint condition. The constraint condition mainly serves to keep the center wavelength of the near-infrared light passing through the first filter 0131 as accurate as possible and its band width as narrow as possible, avoiding the wavelength interference introduced by an excessively wide near-infrared band.
The center wavelength of the near-infrared supplementary lighting performed by the first light supplement device 0121 may be the average over the wavelength range with the highest energy in the spectrum of the near-infrared light it emits, or it may be understood as the wavelength at the middle of the wavelength range whose energy exceeds a certain threshold in that spectrum.
The set characteristic wavelength or set characteristic wavelength range may be preset. As an example, the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device 0121 may be any wavelength within 750±10 nm, or within 780±10 nm, or within 940±10 nm; that is, the set characteristic wavelength range may be the 750±10 nm, 780±10 nm or 940±10 nm wavelength range. Exemplarily, with a center wavelength of 940 nm, the relationship between the wavelength and relative intensity of the near-infrared supplementary lighting is shown in Fig. 6: the band range of the near-infrared supplementary lighting is 900 nm–1000 nm, and the relative intensity of the near-infrared light is highest at 940 nm.
Since, during near-infrared supplementary lighting, most of the near-infrared light passing through the first filter 0131 is the light emitted by the first light supplement device 0121 and reflected by objects into the first filter 0131, in some embodiments the constraint condition may include: the difference between the center wavelength of the near-infrared light passing through the first filter 0131 and the center wavelength of the near-infrared supplementary lighting performed by the first light supplement device 0121 lies within a wavelength fluctuation range, which, as an example, may be 0–20 nm.
The center wavelength of the near-infrared light passing through the first filter 0131 may be the wavelength at the peak position within the near-infrared band of the near-infrared pass-rate curve of the first filter 0131, or it may be understood as the wavelength at the middle of the near-infrared band whose pass rate exceeds a certain threshold in that curve.
To avoid the wavelength interference introduced by an excessively wide near-infrared band passing through the first filter 0131, in some embodiments the constraint condition may include: the first band width is smaller than the second band width, where the first band width is the band width of the near-infrared light passing through the first filter 0131 and the second band width is the band width of the near-infrared light blocked by it. It should be understood that the band width is the width of the wavelength range of the light. For example, if the near-infrared light passing through the first filter 0131 lies in the range 700 nm–800 nm, the first band width is 800 nm minus 700 nm, i.e. 100 nm. In other words, the band width of the near-infrared light passing through the first filter 0131 is smaller than the band width of the near-infrared light it blocks.
For example, see Fig. 7, a schematic diagram of the relationship between the wavelength of the light that the first filter 0131 can pass and the pass rate. The near-infrared light incident on the first filter 0131 spans 650 nm–1100 nm; the first filter 0131 passes visible light of 380 nm–650 nm and near-infrared light of 900 nm–1000 nm, and blocks near-infrared light of 650 nm–900 nm and 1000 nm–1100 nm. That is, the first band width is 1000 nm minus 900 nm, i.e. 100 nm, and the second band width is 900 nm minus 650 nm plus 1100 nm minus 1000 nm, i.e. 350 nm; 100 nm is smaller than 350 nm, that is, the band width of the near-infrared light passing through the first filter 0131 is smaller than the band width it blocks. This curve is only an example; for different filters, the passed and blocked near-infrared band ranges may differ.
To avoid the wavelength interference introduced by an excessively wide near-infrared band passing through the first filter 0131 during periods without near-infrared supplementary lighting, in some embodiments the constraint condition may include: the half band width of the near-infrared light passing through the first filter 0131 is less than or equal to 50 nm, where the half band width is the band width of the near-infrared light whose pass rate exceeds 50%.
To avoid the wavelength interference introduced by an excessively wide near-infrared band passing through the first filter 0131, in some embodiments the constraint condition may also include: a third band width is smaller than a reference band width, where the third band width is the band width of the near-infrared light whose pass rate exceeds a set proportion. As an example, the reference band width may be any band width within the range 50 nm–100 nm, and the set proportion may be any proportion in 30%–50% (or another proportion set according to usage requirements, which is not limited in the embodiments of this application). In other words, the band width of the near-infrared light whose pass rate exceeds the set proportion may be smaller than the reference band width. For example, in Fig. 7, with near-infrared light incident on the first filter 0131 spanning 650 nm–1100 nm, a set proportion of 30% and a reference band width of 100 nm, it can be seen that the band width of the near-infrared light with a pass rate above 30% is clearly smaller than 100 nm.
Since the first light supplement device 0121 provides near-infrared supplementary lighting at least during part of the exposure period of the first preset exposure and provides none during the whole exposure period of the second preset exposure, and the first and second preset exposures are two of the multiple exposures of the image sensor 011, i.e. the first light supplement device 0121 provides near-infrared supplementary lighting during the exposure periods of some exposures of the image sensor 011 and not during others, the number of supplementary lighting operations of the first light supplement device 0121 per unit time may be lower than the number of exposures of the image sensor 011 in that unit time, with one or more exposures in the interval between every two adjacent supplementary lighting operations.
Optionally, since the human eye easily confuses the color of the near-infrared supplementary lighting of the first light supplement device 0121 with the color of the red light of a traffic light, referring to Fig. 8, the light supplement 012 may further include a second light supplement device 0122 for visible supplementary lighting. If the second light supplement device 0122 provides visible supplementary lighting at least during part of the exposure period of the first preset exposure, i.e. near-infrared and visible supplementary lighting are both present during at least part of that exposure period, the mixed color of the two kinds of light can be distinguished from the color of the red light of a traffic light, avoiding the human eye confusing the color of the near-infrared supplementary lighting of the light supplement 012 with that red light. In addition, if the second light supplement device 0122 provides visible supplementary lighting during the exposure period of the second preset exposure, since the intensity of visible light during that period is not particularly high, visible supplementary lighting during the exposure period of the second preset exposure can also increase the brightness of the visible light in the second image signal and thereby guarantee the quality of image capture.
In some embodiments, the second light supplement device 0122 may provide visible supplementary lighting in an always-on manner; or in a strobe manner in which visible supplementary lighting is present at least during part of the exposure period of the first preset exposure and absent during the whole exposure period of the second preset exposure; or in a strobe manner in which visible supplementary lighting is absent at least during the whole exposure period of the first preset exposure and present during part of the exposure period of the second preset exposure. Always-on visible supplementary lighting both avoids the human eye confusing the color of the near-infrared supplementary lighting of the first light supplement device 0121 with the red light of a traffic light, and increases the brightness of the visible light in the second image signal, guaranteeing capture quality. Strobe visible supplementary lighting avoids that color confusion, or increases the brightness of the visible light in the second image signal and thus guarantees capture quality, and also reduces the number of supplementary lighting operations of the second light supplement device 0122, extending its service life.
In some embodiments, the multiple exposures refer to multiple exposures within one frame period; that is, the image sensor 011 performs multiple exposures within one frame period, generating and outputting at least one frame of the first image signal and at least one frame of the second image signal. For example, one second contains 25 frame periods; the image sensor 011 performs multiple exposures in each frame period, producing at least one frame of the first image signal and at least one frame of the second image signal; the first and second image signals produced within one frame period are called a group of images, so 25 groups of images are produced in 25 frame periods. The first and second preset exposures may be adjacent or non-adjacent exposures among the multiple exposures within one frame period, which is not limited in the embodiments of this application.
The first image signal is generated and output by the first preset exposure and the second image signal by the second preset exposure; after they are generated and output, they may be processed. In some cases the first and second image signals may serve different purposes, so in some embodiments at least one exposure parameter of the first and second preset exposures may differ. As an example, the at least one exposure parameter may include but is not limited to one or more of exposure time, analog gain, digital gain and aperture size, where the exposure gain includes analog gain and/or digital gain.
In some embodiments, it can be understood that, compared with the second preset exposure, the intensity of near-infrared light sensed by the image sensor 011 during near-infrared supplementary lighting is stronger, so the brightness of the near-infrared light in the first image signal generated and output accordingly is also higher; but overly bright near-infrared light is not conducive to acquiring external scene information. Moreover, in some embodiments, the larger the exposure gain, the brighter the image output by the image sensor 011, and the smaller the gain, the darker the image. Therefore, to keep the brightness of the near-infrared light in the first image signal within a suitable range when at least one exposure parameter of the two preset exposures differs, as an example the exposure gain of the first preset exposure may be smaller than that of the second. In this way, during near-infrared supplementary lighting by the first light supplement device 0121, the brightness of the near-infrared light in the first image signal generated and output by the image sensor 011 will not be excessively high because of the supplementary lighting.
In other embodiments, the longer the exposure time, the brighter the image obtained by the image sensor 011 and the longer the motion smear of moving objects in the external scene in the image; the shorter the exposure time, the darker the image and the shorter the motion smear. Therefore, to keep the brightness of the near-infrared light in the first image signal within a suitable range while keeping the motion smear of moving objects in the first image signal short, when at least one exposure parameter of the two preset exposures differs, as an example the exposure time of the first preset exposure may be shorter than that of the second. In this way, during near-infrared supplementary lighting by the first light supplement device 0121, the brightness of the near-infrared light in the first image signal will not be excessively high because of the supplementary lighting, and the shorter exposure time makes the motion smear of moving objects in the first image signal shorter, which benefits the recognition of moving objects. Exemplarily, the exposure time of the first preset exposure is 40 milliseconds and that of the second preset exposure 60 milliseconds, etc.
It is worth noting that, in some embodiments, when the exposure gain of the first preset exposure is smaller than that of the second, the exposure time of the first may be smaller than or equal to that of the second; likewise, when the exposure time of the first is smaller than that of the second, the exposure gain of the first may be smaller than or equal to that of the second.
In other embodiments, the first and second image signals may serve the same purpose; for example, when both are used for intelligent analysis, so that a face or target undergoing intelligent analysis has the same definition while moving, at least one exposure parameter of the first and second preset exposures may be the same. As an example, the exposure time of the first preset exposure may equal that of the second; if the two exposure times differ, the image with the longer exposure time will show motion smear, making the definition of the two images different. Similarly, as another example, the exposure gain of the first preset exposure may equal that of the second.
It is worth noting that, in some embodiments, when the exposure times of the two preset exposures are equal, the exposure gain of the first may be smaller than or equal to that of the second; likewise, when the exposure gains are equal, the exposure time of the first may be smaller than or equal to that of the second.
其中,图像传感器011可以包括多个感光通道,每个感光通道可以用于感应至少一种可见光波段的光,以及感应近红外波段的光。也即是,每个感光通道既能感应至少一种可见光波段的光,又能感应近红外波段的光,这样,可以保证第一图像信号和第二图像信号具有完整的分辨率,不缺失像素值。可选地,该多个感光通道可以用于感应至少两种不同的可见光波段的光。
在一些实施例中,该多个感光通道可以包括R感光通道、G感光通道、B感光通道、Y感光通道、W感光通道和C感光通道中的至少两种。其中,R感光通道用于感应红光波段和近红外波段的光,G感光通道用于感应绿光波段和近红外波段的光,B感光通道用于感应蓝光波段和近红外波段的光,Y感光通道用于感应黄光波段和近红外波段的光。由于在一些实施例中,可以用W来表示用于感应全波段的光的感光通道,在另一些实施例中,可以用C来表示用于感应全波段的光的感光通道,所以当该多个感光通道包括用于感应全波段的光的感光通道时,这个感光通道可以是W感光通道,也可以是C感光通道。也即是,在实际应用中,可以根据使用需求来选择用于感应全波段的光的感光通道。示例性地,图像传感器011可以为RGB传感器、RGBW传感器,或RCCB传感器,或RYYB传感器。其中,RGB传感器中的R感光通道、G感光通道和B感光通道的分布方式可以参见图9,RGBW传感器中的R感光通道、G 感光通道、B感光通道和W感光通道的分布方式可以参见图10,RCCB传感器中的R感光通道、C感光通道和B感光通道分布方式可以参见图11,RYYB传感器中的R感光通道、Y感光通道和B感光通道分布方式可以参见图12。
在另一些实施例中,有些感光通道也可以仅感应近红外波段的光,而不感应可见光波段的光。作为一种示例,该多个感光通道可以包括R感光通道、G感光通道、B感光通道、IR感光通道中的至少两种。其中,R感光通道用于感应红光波段和近红外波段的光,G感光通道用于感应绿光波段和近红外波段的光,B感光通道用于感应蓝光波段和近红外波段的光,IR感光通道用于感应近红外波段的光。
示例地,图像传感器011可以为RGBIR传感器,其中,RGBIR传感器中的每个IR感光通道都可以感应近红外波段的光,而不感应可见光波段的光。
其中,当图像传感器011为RGB传感器时,相比于其他图像传感器(如RGBIR传感器等),RGB传感器采集的RGB信息更完整;由于RGBIR传感器有一部分感光通道采集不到可见光,所以RGB传感器采集的图像的色彩细节更准确。
值得注意的是,图像传感器011包括的多个感光通道可以对应多条感应曲线。示例性地,参见图13,图13中的R曲线代表图像传感器011对红光波段的光的感应曲线,G曲线代表图像传感器011对绿光波段的光的感应曲线,B曲线代表图像传感器011对蓝光波段的光的感应曲线,W(或者C)曲线代表图像传感器011感应全波段的光的感应曲线,NIR(Near infrared,近红外光)曲线代表图像传感器011感应近红外波段的光的感应曲线。
作为一种示例,图像传感器011可以采用全局曝光方式,也可以采用卷帘曝光方式。其中,全局曝光方式是指每一行有效图像的曝光开始时刻均相同,且每一行有效图像的曝光结束时刻均相同。换句话说,全局曝光方式是所有行有效图像同时进行曝光并且同时结束曝光的一种曝光方式。卷帘曝光方式是指不同行有效图像的曝光时间不完全重合,也即是,一行有效图像的曝光开始时刻晚于该行有效图像的上一行有效图像的曝光开始时刻,且一行有效图像的曝光结束时刻晚于该行有效图像的上一行有效图像的曝光结束时刻。另外,卷帘曝光方式中每一行有效图像结束曝光后可以进行数据输出,因此,从第一行有效图像的数据开始输出时刻到最后一行有效图像的数据结束输出时刻之间的时间可以表示为读出时间。
示例性地,参见图14,图14为一种卷帘曝光方式的示意图。从图14可以看出,第1行有效图像在T1时刻开始曝光,在T3时刻结束曝光,第2行有效图像在T2时刻开始曝光,在T4时刻结束曝光,T2时刻相比于T1时刻向后推移了一个时间段,T4时刻相比于T3时刻向后推移了一个时间段。另外,第1行有效图像在T3时刻结束曝光并开始输出数据,在T5时刻结束数据的输出,第n行有效图像在T6时刻结束曝光并开始输出数据,在T7时刻结束数据的输出,则T3~T7时刻之间的时间即为读出时间。
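为便于理解,上述逐行顺延的曝光时刻与读出时间的关系可以用如下 Python 示意代码刻画(其中行间延迟 line_delay、每行数据输出耗时 readout 等参数均为示例假设,并非对实际传感器时序的限定):

```python
def rolling_shutter_times(num_rows, t_start, exposure, line_delay, readout):
    """卷帘曝光各行有效图像的时刻示意:每一行的曝光开始/结束时刻都比
    上一行顺延 line_delay,一行结束曝光后再经过 readout 完成该行数据输出。"""
    rows = []
    for i in range(num_rows):
        start = t_start + i * line_delay          # 第 i 行曝光开始时刻
        end = start + exposure                    # 第 i 行曝光结束时刻
        rows.append((start, end, end + readout))  # 第三项为该行数据输出结束时刻
    return rows

def readout_time(rows):
    """读出时间:第一行开始输出数据到最后一行结束输出数据之间的时间。"""
    first_out_start = rows[0][1]   # 第一行结束曝光即开始输出数据
    last_out_end = rows[-1][2]
    return last_out_end - first_out_start
```

例如对 3 行有效图像,第 2 行的曝光开始、结束时刻都比第 1 行顺延一个 line_delay,与图14所示的时序关系一致。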
在一些实施例中,当图像传感器011采用全局曝光方式进行多次曝光时,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集,或者,近红外补光的时间段与第一预设曝光的曝光时间段存在交集,或者第一预设曝光的曝光时间段是近红外补光的时间段的子集。这样,即可实现至少在第一预设曝光的部分曝光时间段内存在近红外补光,在第二预设曝光的整个曝光时间段内不存在近红外补光,从而不会对第二预设曝光造成影响。
例如,参见图15,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段是第一预设曝光的曝光时间段的子集。参见图16,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,近红外补光的时间段与第一预设曝光的曝光时间段存在交集。参见图17,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,第一预设曝光的曝光时间段是近红外补光的时间段的子集。图15至图17仅是示例,第一预设曝光和第二预设曝光的排序可以不限于这些示例。
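上述近红外补光时间段与两种预设曝光时间段之间的约束,可以用如下区间判断的 Python 示意代码来表达(函数名与"(开始, 结束)"的区间表示均为说明用的假设,并非本申请限定的实现):

```python
def _overlap(a, b):
    """判断两个时间段 (开始, 结束) 是否存在交集。"""
    return a[0] < b[1] and b[0] < a[1]

def _subset(a, b):
    """判断时间段 a 是否为时间段 b 的子集。"""
    return b[0] <= a[0] and a[1] <= b[1]

def nir_fill_valid(nir, first_exp, second_exps):
    """校验一次近红外补光是否满足全局曝光方式下的约束:
    与各个最邻近的第二预设曝光时间段不存在交集,且与第一预设曝光
    时间段满足"子集 / 存在交集 / 反向包含"三种关系之一(示意实现)。"""
    if any(_overlap(nir, s) for s in second_exps):
        return False
    return _subset(nir, first_exp) or _overlap(nir, first_exp) or _subset(first_exp, nir)
```

其中,图15对应近红外补光时间段是第一预设曝光时间段的子集,图17对应第一预设曝光时间段是近红外补光时间段的子集。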
在另一些实施例中,当图像传感器011采用卷帘曝光方式进行多次曝光时,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集。并且,近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。或者,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。或者,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。
例如,参见图18,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻,近红外补光的结束时刻不晚于第一预设曝光中第一行有效图像的曝光结束时刻。参见图19,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光结束时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光开始时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。参见图20,对于任意一次近红外补光,近红外补光的时间段与最邻近的第二预设曝光的曝光时间段不存在交集,并且,近红外补光的开始时刻不早于第一预设曝光之前的最邻近的第二预设曝光的最后一行有效图像的曝光结束时刻且不晚于第一预设曝光中第一行有效图像的曝光开始时刻,近红外补光的结束时刻不早于第一预设曝光中最后一行有效图像的曝光结束时刻且不晚于第一预设曝光之后的最邻近的第二预设曝光的第一行有效图像的曝光开始时刻。图18至图20仅是一种示例,第一预设曝光和第二预设曝光的排序可以不限于这些示例。
其中,多次曝光可以包括奇数次曝光和偶数次曝光,这样,第一预设曝光和第二预设曝光可以包括但不限于如下几种方式:
第一种可能的实现方式,第一预设曝光为奇数次曝光中的一次曝光,第二预设曝光为偶数次曝光中的一次曝光。这样,多次曝光可以包括按照奇偶次序排列的第一预设曝光和第二预设曝光。例如,多次曝光中的第1次曝光、第3次曝光、第5次曝光等奇数次曝光均为第一预设曝光,第2次曝光、第4次曝光、第6次曝光等偶数次曝光均为第二预设曝光。
第二种可能的实现方式,第一预设曝光为偶数次曝光中的一次曝光,第二预设曝光为奇数次曝光中的一次曝光,这样,多次曝光可以包括按照奇偶次序排列的第一预设曝光和第二预设曝光。例如,多次曝光中的第1次曝光、第3次曝光、第5次曝光等奇数次曝光均为第二预设曝光,第2次曝光、第4次曝光、第6次曝光等偶数次曝光均为第一预设曝光。
第三种可能的实现方式,第一预设曝光为指定的奇数次曝光中的一次曝光,第二预设曝光为除指定的奇数次曝光之外的其他曝光中的一次曝光,也即是,第二预设曝光可以为多次曝光中的奇数次曝光,也可以为多次曝光中的偶数次曝光。
第四种可能的实现方式,第一预设曝光为指定的偶数次曝光中的一次曝光,第二预设曝光为除指定的偶数次曝光之外的其他曝光中的一次曝光,也即是,第二预设曝光可以为多次曝光中的奇数次曝光,也可以为多次曝光中的偶数次曝光。
第五种可能的实现方式,第一预设曝光为第一曝光序列中的一次曝光,第二预设曝光为第二曝光序列中的一次曝光。
第六种可能的实现方式,第一预设曝光为第二曝光序列中的一次曝光,第二预设曝光为第一曝光序列中的一次曝光。
其中,上述多次曝光包括多个曝光序列,第一曝光序列和第二曝光序列为该多个曝光序列中的同一个曝光序列或者两个不同的曝光序列,每个曝光序列包括N次曝光,该N次曝光包括1次第一预设曝光和N-1次第二预设曝光,或者,该N次曝光包括1次第二预设曝光和N-1次第一预设曝光,N为大于2的正整数。
例如,每个曝光序列包括3次曝光,这3次曝光可以包括1次第一预设曝光和2次第二预设曝光,这样,每个曝光序列的第1次曝光可以为第一预设曝光,第2次和第3次曝光为第二预设曝光。也即是,每个曝光序列可以表示为:第一预设曝光、第二预设曝光、第二预设曝光。或者,这3次曝光可以包括1次第二预设曝光和2次第一预设曝光,这样,每个曝光序列的第1次曝光可以为第二预设曝光,第2次和第3次曝光为第一预设曝光。也即是,每个曝光序列可以表示为:第二预设曝光、第一预设曝光、第一预设曝光。
上述仅提供了六种第一预设曝光和第二预设曝光的可能的实现方式,实际应用中,不限于上述六种可能的实现方式,本申请实施例对此不做限定。
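以前两种实现方式为例,奇偶交替的曝光排布可以用如下 Python 示意代码生成(mode 的取值名称为示例假设,仅用于说明排布规律):

```python
def exposure_pattern(num, mode="odd_first"):
    """生成多次曝光中第一/第二预设曝光的排布示意。
    mode 取 "odd_first"(奇数次为第一预设曝光,对应第一种实现方式)
    或 "even_first"(偶数次为第一预设曝光,对应第二种实现方式)。"""
    pattern = []
    for i in range(1, num + 1):          # 曝光次序从 1 开始计
        is_odd = (i % 2 == 1)
        first = is_odd if mode == "odd_first" else not is_odd
        pattern.append("第一预设曝光" if first else "第二预设曝光")
    return pattern
```

例如,对 6 次曝光取 mode="odd_first",即得到第1、3、5次为第一预设曝光、第2、4、6次为第二预设曝光的排布。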
在一些实施例中,滤光组件013还包括第二滤光片和切换部件,第一滤光片0131和第二滤光片均与切换部件连接。切换部件,用于将第二滤光片切换到图像传感器011的入光侧,在第二滤光片切换到图像传感器011的入光侧之后,第二滤光片使可见光波段的光通过,阻挡近红外光波段的光,图像传感器011,用于通过曝光产生并输出第三图像信号。
需要说明的是,切换部件用于将第二滤光片切换到图像传感器011的入光侧,也可以理解为第二滤光片替换第一滤光片0131在图像传感器011的入光侧的位置。在第二滤光片切换到图像传感器011的入光侧之后,第一补光装置0121可以处于关闭状态也可以处于开启状态。
综上,当环境光中的可见光强度较弱时,例如夜晚,可以通过第一补光装置0121频闪式的补光,使图像传感器011产生并输出包含近红外亮度信息的第一图像信号,以及包含可见光亮度信息的第二图像信号,且由于第一图像信号和第二图像信号均由同一个图像传感器011获取,所以第一图像信号的视点与第二图像信号的视点相同,从而通过第一图像信号和第二图像信号可以获取完整的外部场景的信息。在可见光强度较强时,例如白天,环境光中近红外光的占比较高,采集的图像的色彩还原度不佳,此时可以通过图像传感器011产生并输出包含可见光亮度信息的第三图像信号,这样即使在白天,也可以采集到色彩还原度比较好的图像,从而达到不论可见光强度的强弱,或者说不论白天还是夜晚,均能高效、简便地获取外部场景的真实色彩信息的效果。
值得注意的是,上述仅是本申请实施例给出的一种图像采集设备的实现方式,在一些可能的实现方式中,该图像获取单元也可以为包括双传感器的图像采集设备。其中,第一图像传感器用于产生并输出第一图像信号,第二图像传感器用于产生并输出第二图像信号。
或者,该图像获取单元也可以为双目相机,包括两个摄像头。其中,第一摄像头为近红外光摄像头,用于产生并输出第一图像信号,第二摄像头为可见光摄像头,用于产生并输出第二图像信号。
在一种可能的实现方式中,图像获取单元01可以直接将采集到的第一图像信号和第二图像信号输出至图像降噪单元02。或者,在另一种可能的实现方式中,图像获取单元01可以将采集到的第一图像信号和第二图像信号输出至图像预处理单元04,由图像预处理单元04分别对第一图像信号和第二图像信号进行处理,从而得到第一图像和第二图像。
4、图像预处理单元
在一种可能的实现方式中,本申请的图像降噪装置还可以包括图像预处理单元04,其中,该图像预处理单元04用于分别对第一图像信号和第二图像信号进行处理,得到第一图像和第二图像,输出第一图像和第二图像至图像降噪单元。
考虑到第一图像信号和第二图像信号可能是由以bayer方式排列的图像传感器采集的马赛克图像,因此,该图像预处理单元04还可以进一步地对第一图像信号和第二图像信号进行去马赛克等修复处理,修复处理后的第一图像信号为灰度图像,修复处理后的第二图像信号为彩色图像。
需要说明的是,在对第一图像信号和第二图像信号进行去马赛克处理时,可以采用双线性插值法或自适应插值法等方法来进行处理,本申请实施例对此不再详述。
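作为说明,双线性插值去马赛克的基本思路可以用如下仅恢复G通道的 Python 示意代码表达(假设传感器为 RGGB 排列、边界采用镜像填充,这些均为示例假设,并非本申请限定的实现):

```python
import numpy as np

def interpolate_green(bayer):
    """对以 RGGB 方式排列的马赛克图像,用双线性插值恢复 G 通道(示意实现)。"""
    h, w = bayer.shape
    g = np.zeros((h, w), dtype=np.float64)
    # RGGB 排列中,G 像素位于(行号+列号)为奇数的位置
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    g_mask = ((gy + gx) % 2) == 1
    g[g_mask] = bayer[g_mask]
    # 非 G 位置取上下左右 4 个 G 邻居的均值(边界镜像填充)
    padded = np.pad(g, 1, mode="reflect")
    mask_p = np.pad(g_mask.astype(np.float64), 1, mode="reflect")
    neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
    neigh_cnt = (mask_p[:-2, 1:-1] + mask_p[2:, 1:-1] +
                 mask_p[1:-1, :-2] + mask_p[1:-1, 2:])
    interp = neigh_sum / np.maximum(neigh_cnt, 1)
    return np.where(g_mask, g, interp)
```

R、B 通道可按同样的邻域均值思路补全;实际产品中也可以换用自适应插值等更复杂的方法。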
另外,除了可以对第一图像信号和第二图像信号进行去马赛克处理之外,还可以对第一图像信号和第二图像信号进行诸如黑电平、坏点校正、Gamma校正等处理,本申请实施例对此不做限定。
在对第一图像信号和第二图像信号进行修复处理得到第一图像和第二图像之后,图像预处理单元04可以输出该第一图像和第二图像至图像降噪单元02。
除此之外,当第一图像信号和第二图像信号是由两个传感器或者是两个摄像头采集得到时,图像预处理单元04还可以对第一图像信号和第二图像信号进行配准。
基于上述图像降噪装置,本申请实施例还提供了一种图像降噪方法。接下来,以基于上述图1-20所示的实施例提供的图像降噪装置来对图像降噪方法进行说明。参见图21,该方法包括:
步骤2101:获取第一图像信号和第二图像信号,其中,第一图像信号为近红外光图像信号,第二图像信号为可见光图像信号;
步骤2102:对第一图像信号和第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像。
在一种可能的实现方式中,对第一图像信号和第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像,包括:
根据第一图像信号和第二图像信号进行运动估计,得到运动估计结果,根据运动估计结果对第一图像信号进行时域滤波处理,得到近红外光降噪图像, 根据运动估计结果对第二图像信号进行时域滤波处理,得到可见光降噪图像;
根据第一图像信号和第二图像信号进行边缘估计,得到边缘估计结果,根据边缘估计结果对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据边缘估计结果对第二图像信号进行空域滤波处理,得到可见光降噪图像。
在一种可能的实现方式中,根据第一图像信号和第二图像信号进行运动估计,得到运动估计结果,包括:
根据第一图像信号和第一历史降噪图像生成第一帧差图像,根据第一帧差图像和多个第一帧差阈值确定第一图像信号中每个像素点的第一时域滤波强度,第一历史降噪图像是指对第一图像信号的前N帧图像中的任一帧图像进行降噪后的图像,N大于或等于1,多个第一帧差阈值与第一帧差图像中的多个像素点一一对应;
根据第二图像信号和第二历史降噪图像生成第二帧差图像,根据第二帧差图像和多个第二帧差阈值确定第二图像信号中每个像素点的第二时域滤波强度,第二历史降噪图像是指对第二图像信号的前N帧图像中的任一帧图像进行降噪后的图像,多个第二帧差阈值与第二帧差图像中的多个像素点一一对应;
对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到每个像素点的联合时域滤波强度;或者,从每个像素点的第一时域滤波强度和第二时域滤波强度中选择一个时域滤波强度作为相应像素点的联合时域滤波强度;
其中,运动估计结果包括每个像素点的第一时域滤波强度和/或每个像素点的联合时域滤波强度。
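上述"帧差 + 逐像素帧差阈值 → 时域滤波强度 → 时域滤波"的流程可以用如下 Python 示意代码刻画(其中强度范围 s_min、s_max 与线性映射方式均为示例假设;对第一图像信号和第二图像信号可各自调用一次,再按上文方式融合或选取联合滤波强度):

```python
import numpy as np

def temporal_filter(cur, hist_dn, diff_thresh, s_min=0.2, s_max=1.0):
    """由帧差图像与逐像素帧差阈值估计时域滤波强度并进行时域滤波(示意实现)。
    帧差越小(画面越"静"),滤波强度越大,历史降噪图像的权重也越大。"""
    frame_diff = np.abs(cur - hist_dn)                        # 原始帧差图像
    motion = np.clip(frame_diff / np.maximum(diff_thresh, 1e-6), 0.0, 1.0)
    strength = s_max - (s_max - s_min) * motion               # 每个像素点的时域滤波强度
    denoised = strength * hist_dn + (1.0 - strength) * cur    # 时域滤波:与历史帧加权混合
    return denoised, strength
```

作为一种"从两个时域滤波强度中选取联合滤波强度"的假设做法,可取 joint = np.minimum(strength1, strength2),即在任一路判为运动的像素处都降低历史帧权重。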
在一种可能的实现方式中,对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到每个像素点的联合时域滤波强度,包括:
采用不同的参数对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到第一滤波强度和第二滤波强度,联合时域滤波强度包括第一滤波强度和第二滤波强度。
在一种可能的实现方式中,根据运动估计结果对第一图像信号进行时域滤波处理,得到近红外光降噪图像,根据运动估计结果对第二图像信号进行时域滤波处理,得到可见光降噪图像,包括:
根据每个像素点的第一时域滤波强度对第一图像信号进行时域滤波处理, 得到近红外光降噪图像,根据每个像素点的第一时域滤波强度对第二图像信号进行时域滤波处理,得到可见光降噪图像;或者
根据每个像素点的第一时域滤波强度对第一图像信号进行时域滤波处理,得到近红外光降噪图像,根据每个像素点的联合时域滤波强度对第二图像信号进行时域滤波处理,得到可见光降噪图像;或者
根据每个像素点的联合时域滤波强度对第一图像信号进行时域滤波处理,得到近红外光降噪图像,根据每个像素点的联合时域滤波强度对第二图像信号进行时域滤波处理,得到可见光降噪图像;其中,当联合时域滤波强度包括第一滤波强度和第二滤波强度时,根据每个像素点的第一滤波强度对第一图像信号进行时域滤波处理,得到近红外光降噪图像,根据每个像素点的第二滤波强度对第二图像信号进行时域滤波处理,得到可见光降噪图像。
在一种可能的实现方式中,第一帧差图像是指对第一图像信号和第一历史降噪图像进行作差处理得到的原始帧差图像;或者,第一帧差图像是指对原始帧差图像进行处理后得到的帧差图像。
第二帧差图像是指对第二图像信号和第二历史降噪图像进行作差处理得到的原始帧差图像;或者,第二帧差图像是指对原始帧差图像进行处理后得到的帧差图像。
在一种可能的实现方式中,每个像素点对应的第一帧差阈值不同,或者,每个像素点对应的第一帧差阈值相同;
每个像素点对应的第二帧差阈值不同,或者,每个像素点对应的第二帧差阈值相同。
在一种可能的实现方式中,多个第一帧差阈值是根据第一噪声强度图像中多个像素点的噪声强度确定得到,第一噪声强度图像根据第一历史降噪图像对应的降噪前的图像和第一历史降噪图像确定得到;
多个第二帧差阈值是根据第二噪声强度图像中多个像素点的噪声强度确定得到,第二噪声强度图像根据第二历史降噪图像对应的降噪前的图像和第二历史降噪图像确定得到。
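上述"由历史降噪前后的图像估计噪声强度、再确定逐像素帧差阈值"的做法可以用如下 Python 示意代码表达(base 为假设的基础阈值,k 为假设的比例系数,线性关系也只是一种示例):

```python
import numpy as np

def frame_diff_thresholds(pre_dn, post_dn, k=3.0, base=2.0):
    """由历史降噪前/后图像估计逐像素噪声强度,并据此确定帧差阈值(示意实现)。"""
    noise = np.abs(pre_dn - post_dn)   # 噪声强度图像:降噪前后图像之差
    return base + k * noise            # 噪声越强,对应像素的帧差阈值越大
```

这样,噪声较强的像素需要更大的帧差才会被判为运动,避免把噪声误判为运动。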
在一种可能的实现方式中,根据第一图像信号和第二图像信号进行边缘估计,得到边缘估计结果,包括:
确定第一图像信号中每个像素点的第一空域滤波强度;
确定第二图像信号中每个像素点的第二空域滤波强度;
对第一图像信号进行局部信息提取,得到第一局部信息,对第二图像信号进行局部信息提取,得到第二局部信息;根据第一空域滤波强度、第二空域滤波强度、第一局部信息和第二局部信息确定每个像素点对应的联合空域滤波强度;
其中,边缘估计结果包括每个像素点的第一空域滤波强度和/或联合空域滤波强度。
在一种可能的实现方式中,联合空域滤波强度包括第三滤波强度和第四滤波强度,第三滤波强度和第四滤波强度不同。
在一种可能的实现方式中,根据边缘估计结果对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据边缘估计结果对第二图像信号进行空域滤波处理,得到可见光降噪图像,包括:
根据每个像素点对应的第一空域滤波强度对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据每个像素点对应的第一空域滤波强度对第二图像信号进行空域滤波处理,得到可见光降噪图像;或者
根据每个像素点对应的第一空域滤波强度对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对第二图像信号进行空域滤波处理,得到可见光降噪图像;或者
根据每个像素点对应的联合空域滤波强度对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对第二图像信号进行空域滤波处理,得到可见光降噪图像;其中,当联合空域滤波强度包括第三滤波强度和第四滤波强度时,根据每个像素点对应的第三滤波强度对第一图像信号进行空域滤波处理,得到近红外光降噪图像,根据每个像素点对应的第四滤波强度对第二图像信号进行空域滤波处理,得到可见光降噪图像。
在一种可能的实现方式中,第一局部信息和第二局部信息包括局部梯度信息、局部亮度信息和局部信息熵中的至少一种。
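以局部梯度信息为例,"边缘越显著、空域滤波强度越低"的联合空域滤波强度估计可以用如下 Python 示意代码表达(差分形式与融合权重 alpha 均为示例假设,并非本申请限定的实现):

```python
import numpy as np

def local_gradient(img):
    """局部梯度信息:水平与垂直方向差分幅值之和(示意)。"""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def joint_spatial_strength(nir, vis, alpha=0.5):
    """由两路图像的局部梯度融合出联合空域滤波强度:
    梯度(边缘)越强,滤波强度越低,从而在滤噪的同时保留边缘。"""
    g = alpha * local_gradient(nir) + (1 - alpha) * local_gradient(vis)
    return 1.0 / (1.0 + g)
```

局部亮度信息、局部信息熵也可以按类似方式归一化后参与融合。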
在一种可能的实现方式中,对第一图像信号和第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像,包括:
根据第一图像信号和第二图像信号进行运动估计,得到运动估计结果,根据运动估计结果对第一图像信号进行时域滤波,得到第一时域降噪图像,根据运动估计结果对第二图像信号进行时域滤波,得到第二时域降噪图像;
根据第一时域降噪图像和第二时域降噪图像进行边缘估计,得到边缘估计结果,根据边缘估计结果对第一时域降噪图像进行空域滤波,得到近红外光降噪图像,根据边缘估计结果对第二时域降噪图像进行空域滤波,得到可见光降噪图像;
或者,
根据第一图像信号和第二图像信号进行边缘估计,得到边缘估计结果,根据边缘估计结果对第一图像信号进行空域滤波,得到第一空域降噪图像,根据边缘估计结果对第二图像信号进行空域滤波,得到第二空域降噪图像;
根据第一空域降噪图像和第二空域降噪图像进行运动估计,得到运动估计结果,根据运动估计结果对第一空域降噪图像进行时域滤波,得到近红外光降噪图像,根据运动估计结果对第二空域降噪图像进行时域滤波,得到可见光降噪图像。
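上述两种处理顺序(先时域后空域,或先空域后时域)可以组织成如下 Python 示意代码(其中时域、空域滤波均用极简的占位实现代替,仅用于展示流程组织,并非本申请限定的滤波方式):

```python
import numpy as np

def denoise(nir, vis, nir_hist, vis_hist, order="temporal_first"):
    """联合降噪的两种处理顺序示意。"""
    def temporal(img, hist):
        # 占位的时域滤波:与历史降噪图像等权平均
        return 0.5 * img + 0.5 * hist

    def spatial(img):
        # 占位的空域滤波:3x3 盒式均值,边界复制填充
        p = np.pad(img, 1, mode="edge")
        return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    if order == "temporal_first":
        # 先时域滤波得到时域降噪图像,再空域滤波
        return spatial(temporal(nir, nir_hist)), spatial(temporal(vis, vis_hist))
    # 先空域滤波得到空域降噪图像,再时域滤波
    return temporal(spatial(nir), nir_hist), temporal(spatial(vis), vis_hist)
```

两种顺序的输出分别对应上文的近红外光降噪图像与可见光降噪图像。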
在一种可能的实现方式中,该方法还包括:
对近红外光降噪图像和可见光降噪图像进行融合,得到融合图像。
在一种可能的实现方式中,对近红外光降噪图像和可见光降噪图像进行融合,得到融合图像,包括:
通过第一融合处理对近红外光降噪图像和可见光降噪图像进行融合,得到融合图像。
在一种可能的实现方式中,对近红外光降噪图像和可见光降噪图像进行融合,得到融合图像,包括:
通过第二融合处理对近红外光降噪图像和可见光降噪图像进行融合,得到第一目标图像;
通过第三融合处理对近红外光降噪图像和可见光降噪图像进行融合,得到第二目标图像,融合图像包括第一目标图像和第二目标图像。
在一种可能的实现方式中,第二融合处理和第三融合处理不同;
或者,第二融合处理和第三融合处理相同,但第二融合处理的融合参数为第一融合参数,第三融合处理的融合参数为第二融合参数,第一融合参数和第二融合参数不同。
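上述"相同融合处理、不同融合参数得到两路目标图像"的情形可以用如下 Python 示意代码说明(线性加权只是融合处理的一种假设形式,w 为假设的融合参数):

```python
import numpy as np

def fuse(nir_dn, vis_dn, w=0.5):
    """近红外光降噪图像与可见光降噪图像的加权融合示意。"""
    return w * nir_dn + (1.0 - w) * vis_dn

# 相同的融合处理、不同的融合参数,即可得到第一目标图像和第二目标图像:
# target1 = fuse(nir_dn, vis_dn, w=0.7)
# target2 = fuse(nir_dn, vis_dn, w=0.3)
```

实际的融合处理还可以在亮度、细节等分量上分别加权,此处不再展开。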
在一种可能的实现方式中,该方法还包括:
对第一图像信号和第二图像信号进行预处理,得到处理后的第一图像信号和处理后的第二图像信号。
在一种可能的实现方式中,获取第一图像信号和第二图像信号,包括:
通过第一补光装置进行近红外补光,其中,至少在第一预设曝光的部分曝光时间段内进行近红外补光,在第二预设曝光的曝光时间段内不进行近红外补光,第一预设曝光和第二预设曝光为图像传感器的多次曝光中的其中两次曝光;
通过第一滤光片使可见光和部分近红外光通过;
通过图像传感器进行多次曝光,以产生并输出第一图像信号和第二图像信号,第一图像信号是根据第一预设曝光产生的图像,第二图像信号是根据第二预设曝光产生的图像。
在一种可能的实现方式中,获取第一图像信号和第二图像信号,包括:
通过第一图像传感器产生并输出第一图像信号,通过第二图像传感器产生并输出第二图像信号;
对第一图像信号和第二图像信号进行配准。
在一种可能的实现方式中,获取第一图像信号和第二图像信号,包括:
通过第一摄像头产生并输出第一图像信号,通过第二摄像头产生并输出第二图像信号;
对第一图像信号和第二图像信号进行配准。
在一些实施例中,本申请还提供了一种计算机可读存储介质,该存储介质内存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中图像降噪方法的步骤。例如,该计算机可读存储介质可以是ROM(Read Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、CD-ROM(Compact Disc Read-Only Memory,紧凑型光盘只读储存器)、磁带、软盘和光数据存储设备等。
值得注意的是,本申请提到的计算机可读存储介质可以为非易失性存储介质,换句话说,可以是非瞬时性存储介质。
应当理解的是,实现上述实施例的全部或部分步骤可以通过软件、硬件、固件或者其任意结合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。该计算机指令可以存储在上述计算机可读存储介质中。
也即是,在一些实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述图像降噪方法的步骤。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (34)

  1. 一种图像降噪装置,所述图像降噪装置包括图像获取单元(01)和图像降噪单元(02);
    所述图像获取单元(01),用于获取第一图像信号和第二图像信号,其中,所述第一图像信号为近红外光图像信号,所述第二图像信号为可见光图像信号;
    所述图像降噪单元(02)用于对所述第一图像信号和所述第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像。
  2. 如权利要求1所述的图像降噪装置,所述图像降噪单元(02)包括时域降噪单元(021)或空域降噪单元(022);
    所述时域降噪单元(021)用于根据所述第一图像信号和所述第二图像信号进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据所述运动估计结果对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;
    所述空域降噪单元(022)用于根据所述第一图像信号和所述第二图像信号进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据所述边缘估计结果对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像。
  3. 如权利要求2所述的图像降噪装置,所述时域降噪单元(021)包括运动估计单元(0211);
    所述运动估计单元(0211)用于根据所述第一图像信号和第一历史降噪图像生成第一帧差图像,根据所述第一帧差图像和多个第一帧差阈值确定所述第一图像信号中每个像素点的第一时域滤波强度,所述第一历史降噪图像是指对所述第一图像信号的前N帧图像中的任一帧图像进行降噪后的图像,所述N大于或等于1,所述多个第一帧差阈值与所述第一帧差图像中的多个像素点一一对应;
    所述运动估计单元(0211)还用于根据所述第二图像信号和第二历史降噪图像生成第二帧差图像,根据所述第二帧差图像和多个第二帧差阈值确定所述第二图像信号中每个像素点的第二时域滤波强度,所述第二历史降噪图像是指对所述第二图像信号的前N帧图像中的任一帧图像进行降噪后的图像,所述多个第二帧差阈值与所述第二帧差图像中的多个像素点一一对应;
    所述运动估计单元(0211)还用于对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到相应像素点的联合时域滤波强度;或者,所述运动估计单元(0211)还用于从每个像素点的第一时域滤波强度和第二时域滤波强度中选择一个时域滤波强度作为相应像素点的联合时域滤波强度;
    其中,所述运动估计结果包括每个像素点的第一时域滤波强度和/或每个像素点的联合时域滤波强度。
  4. 如权利要求3所述的图像降噪装置,所述运动估计单元(0211)具体用于采用不同的参数对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到第一滤波强度和第二滤波强度,所述第一滤波强度和所述第二滤波强度不同,所述联合时域滤波强度包括所述第一滤波强度和所述第二滤波强度。
  5. 如权利要求4所述的图像降噪装置,所述时域降噪单元(021)还包括时域滤波单元(0212);
    所述时域滤波单元(0212)用于根据每个像素点的第一时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的第一时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;或者
    所述时域滤波单元(0212)用于根据每个像素点的第一时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的联合时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;或者
    所述时域滤波单元(0212)用于根据每个像素点的联合时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的联合时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;其中,当所述联合时域滤波强度包括第一滤波强度和第二滤波强度时,所述时域滤波单元(0212)用于根据每个像素点的第一滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的第二滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像。
  6. 如权利要求3-5任一所述的图像降噪装置,所述第一帧差图像是指对所述第一图像信号和所述第一历史降噪图像进行作差处理得到的原始帧差图像;或者,所述第一帧差图像是指对所述原始帧差图像进行处理后得到的帧差图像。
  7. 如权利要求3-5任一所述的图像降噪装置,每个像素点对应的第一帧差阈值不同,或者,每个像素点对应的第一帧差阈值相同。
  8. 如权利要求7所述的图像降噪装置,所述多个第一帧差阈值是根据第一噪声强度图像中多个像素点的噪声强度确定得到,所述第一噪声强度图像根据所述第一历史降噪图像对应的降噪前的图像和所述第一历史降噪图像确定得到。
  9. 如权利要求3-5任一所述的图像降噪装置,所述第二帧差图像是指对所述第二图像信号和所述第二历史降噪图像进行作差处理得到的原始帧差图像;或者,所述第二帧差图像是指对所述原始帧差图像进行处理后得到的帧差图像。
  10. 如权利要求3-5任一所述的图像降噪装置,每个像素点对应的第二帧差阈值不同,或者,每个像素点对应的第二帧差阈值相同。
  11. 如权利要求10所述的图像降噪装置,所述多个第二帧差阈值是根据第二噪声强度图像中多个像素点的噪声强度确定得到,所述第二噪声强度图像根据所述第二历史降噪图像对应的降噪前的图像和所述第二历史降噪图像确定得到。
  12. 如权利要求2所述的图像降噪装置,所述空域降噪单元(022)包括边缘估计单元(0221);
    所述边缘估计单元(0221)用于确定所述第一图像信号中每个像素点的第一空域滤波强度;
    所述边缘估计单元(0221)还用于确定所述第二图像信号中每个像素点的第二空域滤波强度;
    所述边缘估计单元(0221)还用于对所述第一图像信号进行局部信息提取,得到第一局部信息,对所述第二图像信号进行局部信息提取,得到第二局部信息;根据所述第一空域滤波强度、所述第二空域滤波强度、所述第一局部信息和所述第二局部信息确定每个像素点对应的联合空域滤波强度;
    其中,所述边缘估计结果包括每个像素点的第一空域滤波强度和/或联合空域滤波强度。
  13. 如权利要求12所述的图像降噪装置,所述联合空域滤波强度包括第三滤波强度和第四滤波强度,所述第三滤波强度和所述第四滤波强度不同。
  14. 如权利要求13所述的图像降噪装置,所述空域降噪单元(022)还包括空域滤波单元(0222);
    所述空域滤波单元(0222)用于根据每个像素点对应的第一空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的第一空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;或者
    所述空域滤波单元(0222)用于根据每个像素点对应的第一空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;或者
    所述空域滤波单元(0222)用于根据每个像素点对应的联合空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;其中,当所述联合空域滤波强度包括第三滤波强度和第四滤波强度时,所述空域滤波单元(0222)用于根据每个像素点对应的第三滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的第四滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像。
  15. 如权利要求14所述的图像降噪装置,所述第一局部信息和所述第二局部信息包括局部梯度信息、局部亮度信息和局部信息熵中的至少一种。
  16. 如权利要求1所述的图像降噪装置,所述图像降噪单元(02)包括时域降噪单元(021)和空域降噪单元(022);
    所述时域降噪单元(021)用于根据所述第一图像信号和所述第二图像信号进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一图像信号进行时域滤波,得到第一时域降噪图像,根据所述运动估计结果对所述第二图像信号进行时域滤波,得到第二时域降噪图像;
    所述空域降噪单元(022)用于根据所述第一时域降噪图像和所述第二时域降噪图像进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一时域降噪图像进行空域滤波,得到所述近红外光降噪图像,根据所述边缘估计结果对所述第二时域降噪图像进行空域滤波,得到所述可见光降噪图像;
    或者,
    所述空域降噪单元(022)用于根据所述第一图像信号和所述第二图像信号进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一图像信号进行空域滤波,得到第一空域降噪图像,根据所述边缘估计结果对所述第二图像信号进行空域滤波,得到第二空域降噪图像;
    所述时域降噪单元(021)用于根据所述第一空域降噪图像和所述第二空域降噪图像进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一空域降噪图像进行时域滤波,得到所述近红外光降噪图像,根据所述运动估计结果对所述第二空域降噪图像进行时域滤波,得到所述可见光降噪图像。
  17. 如权利要求1-16任一所述的图像降噪装置,所述图像降噪装置还包括图像融合单元(03);
    所述图像融合单元(03)用于对所述近红外光降噪图像和所述可见光降噪图像进行融合,得到融合图像。
  18. 如权利要求1-17任一所述的图像降噪装置,所述图像获取单元(01)包括图像传感器(011)、补光器(012)和滤光组件(013),所述图像传感器(011)位于所述滤光组件(013)的出光侧;
    所述图像传感器(011),用于通过多次曝光产生并输出第一图像信号和第二图像信号,其中,所述第一图像信号是根据第一预设曝光产生的图像,所述第二图像信号是根据第二预设曝光产生的图像,所述第一预设曝光和所述第二预设曝光为所述多次曝光中的其中两次曝光;
    所述补光器(012)包括第一补光装置(0121),所述第一补光装置(0121)用于进行近红外补光,其中,至少在所述第一预设曝光的部分曝光时间段内进行近红外补光,在所述第二预设曝光的曝光时间段内不进行近红外补光;
    所述滤光组件(013)包括第一滤光片(0131),所述第一滤光片(0131)使可见光和部分近红外光通过。
  19. 如权利要求1-17任一所述的图像降噪装置,所述图像获取单元包括第一图像传感器和第二图像传感器;
    所述第一图像传感器用于产生并输出所述第一图像信号,所述第二图像传感器用于产生并输出所述第二图像信号;或者
    所述图像获取单元为双目摄像机,所述双目摄像机包括第一摄像头和第二摄像头;
    所述第一摄像头用于产生并输出所述第一图像信号,所述第二摄像头用于产生并输出所述第二图像信号。
  20. 如权利要求1-19任一所述的图像降噪装置,所述图像降噪装置还包括图像预处理单元(04);
    所述图像预处理单元(04)用于分别对所述第一图像信号和所述第二图像信号进行预处理,得到第一图像和第二图像,输出所述第一图像和所述第二图像至所述图像降噪单元(02)。
  21. 如权利要求19所述的图像降噪装置,所述图像降噪装置还包括图像预处理单元(04);
    所述图像预处理单元(04)用于对所述第一图像信号和所述第二图像信号进行配准。
  22. 一种图像降噪方法,所述方法包括:
    获取第一图像信号和第二图像信号,其中,所述第一图像信号为近红外光图像信号,所述第二图像信号为可见光图像信号;
    对所述第一图像信号和所述第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像。
  23. 如权利要求22所述的方法,所述对所述第一图像信号和所述第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像,包括:
    根据所述第一图像信号和所述第二图像信号进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据所述运动估计结果对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;
    根据所述第一图像信号和所述第二图像信号进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据所述边缘估计结果对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像。
  24. 如权利要求23所述的方法,所述根据所述第一图像信号和所述第二图像信号进行运动估计,得到运动估计结果,包括:
    根据所述第一图像信号和第一历史降噪图像生成第一帧差图像,根据所述第一帧差图像和多个第一帧差阈值确定所述第一图像信号中每个像素点的第一时域滤波强度,所述第一历史降噪图像是指对所述第一图像信号的前N帧图像中的任一帧图像进行降噪后的图像,所述N大于或等于1,所述多个第一帧差阈值与所述第一帧差图像中的多个像素点一一对应;
    根据所述第二图像信号和第二历史降噪图像生成第二帧差图像,根据所述第二帧差图像和多个第二帧差阈值确定所述第二图像信号中每个像素点的第二时域滤波强度,所述第二历史降噪图像是指对所述第二图像信号的前N帧图像中的任一帧图像进行降噪后的图像,所述多个第二帧差阈值与所述第二帧差图像中的多个像素点一一对应;
    对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到每个像素点的联合时域滤波强度;或者,从每个像素点的第一时域滤波强度和第二时域滤波强度中选择一个时域滤波强度作为相应像素点的联合时域滤波强度;
    其中,所述运动估计结果包括每个像素点的第一时域滤波强度和/或每个像素点的联合时域滤波强度。
  25. 如权利要求24所述的方法,其特征在于,所述对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到每个像素点的联合时域滤波强度,包括:
    采用不同的参数对每个像素点的第一时域滤波强度和第二时域滤波强度进行融合,得到第一滤波强度和第二滤波强度,所述联合时域滤波强度包括所述第一滤波强度和所述第二滤波强度。
  26. 如权利要求25所述的方法,所述根据所述运动估计结果对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据所述运动估计结果对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像,包括:
    根据每个像素点的第一时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的第一时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;或者
    根据每个像素点的第一时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的联合时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;或者
    根据每个像素点的联合时域滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的联合时域滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像;其中,当所述联合时域滤波强度包括第一滤波强度和第二滤波强度时,根据每个像素点的第一滤波强度对所述第一图像信号进行时域滤波处理,得到所述近红外光降噪图像,根据每个像素点的第二滤波强度对所述第二图像信号进行时域滤波处理,得到所述可见光降噪图像。
  27. 如权利要求23所述的方法,所述根据所述第一图像信号和所述第二图像信号进行边缘估计,得到边缘估计结果,包括:
    确定所述第一图像信号中每个像素点的第一空域滤波强度;
    确定所述第二图像信号中每个像素点的第二空域滤波强度;
    对所述第一图像信号进行局部信息提取,得到第一局部信息,对所述第二图像信号进行局部信息提取,得到第二局部信息;根据所述第一空域滤波强度、所述第二空域滤波强度、所述第一局部信息和所述第二局部信息确定每个像素点对应的联合空域滤波强度;
    其中,所述边缘估计结果包括每个像素点的第一空域滤波强度和/或联合空域滤波强度。
  28. 如权利要求27所述的方法,所述联合空域滤波强度包括第三滤波强度和第四滤波强度,所述第三滤波强度和所述第四滤波强度不同。
  29. 如权利要求28所述的方法,所述根据所述边缘估计结果对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据所述边缘估计结果对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像,包括:
    根据每个像素点对应的第一空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的第一空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;或者
    根据每个像素点对应的第一空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;或者
    根据每个像素点对应的联合空域滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的联合空域滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像;其中,当所述联合空域滤波强度包括第三滤波强度和第四滤波强度时,根据每个像素点对应的第三滤波强度对所述第一图像信号进行空域滤波处理,得到所述近红外光降噪图像,根据每个像素点对应的第四滤波强度对所述第二图像信号进行空域滤波处理,得到所述可见光降噪图像。
  30. 如权利要求22所述的方法,所述对所述第一图像信号和所述第二图像信号进行联合降噪,得到近红外光降噪图像和可见光降噪图像,包括:
    根据所述第一图像信号和所述第二图像信号进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一图像信号进行时域滤波,得到第一时域降噪图像,根据所述运动估计结果对所述第二图像信号进行时域滤波,得到第二时域降噪图像;
    根据所述第一时域降噪图像和所述第二时域降噪图像进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一时域降噪图像进行空域滤波,得到所述近红外光降噪图像,根据所述边缘估计结果对所述第二时域降噪图像进行空域滤波,得到所述可见光降噪图像;
    或者,
    根据所述第一图像信号和所述第二图像信号进行边缘估计,得到边缘估计结果,根据所述边缘估计结果对所述第一图像信号进行空域滤波,得到第一空域降噪图像,根据所述边缘估计结果对所述第二图像信号进行空域滤波,得到第二空域降噪图像;
    根据所述第一空域降噪图像和所述第二空域降噪图像进行运动估计,得到运动估计结果,根据所述运动估计结果对所述第一空域降噪图像进行时域滤波,得到所述近红外光降噪图像,根据所述运动估计结果对所述第二空域降噪图像进行时域滤波,得到所述可见光降噪图像。
  31. 如权利要求22-30任一所述的方法,所述方法还包括:
    对所述近红外光降噪图像和所述可见光降噪图像进行融合,得到融合图像。
  32. 一种电子设备,其特征在于,所述电子设备包括处理器、通信接口、存储器和通信总线,所述处理器、所述通信接口和所述存储器通过所述通信总线完成相互间的通信,所述存储器用于存放计算机程序,所述处理器用于执行所述存储器上存放的所述计算机程序,以实现权利要求22-31任一所述方法的步骤。
  33. 一种计算机可读存储介质,其特征在于,所述存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求22-31任一所述方法的步骤。
  34. 一种计算机程序产品,其特征在于,所述计算机程序产品包括指令,当所述指令在计算机上运行时,使得所述计算机执行权利要求22-31任一所述方法的步骤。
PCT/CN2020/092656 2019-05-31 2020-05-27 图像降噪装置及图像降噪方法 WO2020238970A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910472708.XA CN110490811B (zh) 2019-05-31 2019-05-31 图像降噪装置及图像降噪方法
CN201910472708.X 2019-05-31

Publications (1)

Publication Number Publication Date
WO2020238970A1 true WO2020238970A1 (zh) 2020-12-03

Family

ID=68545885

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092656 WO2020238970A1 (zh) 2019-05-31 2020-05-27 图像降噪装置及图像降噪方法

Country Status (2)

Country Link
CN (1) CN110490811B (zh)
WO (1) WO2020238970A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538255A (zh) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 一种运动融合降噪方法、设备及计算机可读存储介质
CN114821195A (zh) * 2022-06-01 2022-07-29 南阳师范学院 计算机图像智能化识别方法
CN117853365A (zh) * 2024-03-04 2024-04-09 济宁职业技术学院 基于计算机图像处理的艺术成果展示方法

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN110490811B (zh) * 2019-05-31 2022-09-09 杭州海康威视数字技术股份有限公司 图像降噪装置及图像降噪方法
US11689822B2 (en) 2020-09-04 2023-06-27 Altek Semiconductor Corp. Dual sensor imaging system and privacy protection imaging method thereof
CN112258407B (zh) * 2020-10-20 2024-07-26 北京集创北方科技股份有限公司 一种图像采集设备的信噪比获取方法、装置及存储介质
CN112435183B (zh) * 2020-11-17 2024-07-16 浙江大华技术股份有限公司 一种图像降噪方法和装置以及存储介质
CN112950502B (zh) * 2021-02-26 2024-02-13 Oppo广东移动通信有限公司 图像处理方法及装置、电子设备、存储介质
CN114088658B (zh) * 2021-10-09 2024-08-09 池明旻 用于近红外织物纤维成分无损清洁分析的降噪处理方法

Citations (9)

Publication number Priority date Publication date Assignee Title
US20140198992A1 (en) * 2013-01-15 2014-07-17 Apple Inc. Linear Transform-Based Image Processing Techniques
CN109005333A (zh) * 2018-10-19 2018-12-14 天津天地基业科技有限公司 一种红外爆闪卡口相机及图像合成方法
CN109406446A (zh) * 2018-10-12 2019-03-01 四川长虹电器股份有限公司 对近红外数据的预处理方法及其调用方法
CN109410124A (zh) * 2016-12-27 2019-03-01 深圳开阳电子股份有限公司 一种视频图像的降噪方法及装置
CN109618099A (zh) * 2019-01-10 2019-04-12 深圳英飞拓科技股份有限公司 双光谱摄像机图像融合方法及装置
CN110493494A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 图像融合装置及图像融合方法
CN110490187A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 车牌识别设备和方法
CN110490811A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 图像降噪装置及图像降噪方法
CN110505377A (zh) * 2019-05-31 2019-11-26 杭州海康威视数字技术股份有限公司 图像融合设备和方法

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP4501855B2 (ja) * 2005-12-22 2010-07-14 ソニー株式会社 画像信号処理装置、撮像装置、および画像信号処理方法、並びにコンピュータ・プログラム
CN102254309B (zh) * 2011-07-27 2016-03-23 清华大学 一种基于近红外图像的运动模糊图像去模糊方法和装置
CN102769722B (zh) * 2012-07-20 2015-04-29 上海富瀚微电子股份有限公司 时域与空域结合的视频降噪装置及方法
CN202887451U (zh) * 2012-10-26 2013-04-17 青岛海信网络科技股份有限公司 融合红外和可见光补光的电子警察系统
JP2016096430A (ja) * 2014-11-13 2016-05-26 パナソニックIpマネジメント株式会社 撮像装置及び撮像方法
CN107918929B (zh) * 2016-10-08 2019-06-21 杭州海康威视数字技术股份有限公司 一种图像融合方法、装置及系统
CN107977924A (zh) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 一种基于双传感器成像的图像处理方法、系统
CN107566747B (zh) * 2017-09-22 2020-02-14 浙江大华技术股份有限公司 一种图像亮度增强方法及装置

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN113538255A (zh) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 一种运动融合降噪方法、设备及计算机可读存储介质
CN114821195A (zh) * 2022-06-01 2022-07-29 南阳师范学院 计算机图像智能化识别方法
CN114821195B (zh) * 2022-06-01 2022-12-16 南阳师范学院 计算机图像智能化识别方法
CN117853365A (zh) * 2024-03-04 2024-04-09 济宁职业技术学院 基于计算机图像处理的艺术成果展示方法
CN117853365B (zh) * 2024-03-04 2024-05-17 济宁职业技术学院 基于计算机图像处理的艺术成果展示方法

Also Published As

Publication number Publication date
CN110490811B (zh) 2022-09-09
CN110490811A (zh) 2019-11-22

Similar Documents

Publication Publication Date Title
WO2020238970A1 (zh) 图像降噪装置及图像降噪方法
WO2020238807A1 (zh) 图像融合装置及图像融合方法
CN110519489B (zh) 图像采集方法及装置
CN110505377B (zh) 图像融合设备和方法
CN108712608B (zh) 终端设备拍摄方法和装置
CN102077572B (zh) 用于在成像系统中防止运动模糊和重影的方法及装置
KR102266649B1 (ko) 이미지 처리 방법 및 장치
CN102892008B (zh) 双图像捕获处理
JP6492055B2 (ja) 双峰性の画像を取得するための装置
CN108111749B (zh) 图像处理方法和装置
CN110365961B (zh) 图像去马赛克装置及方法
CN110706178B (zh) 图像融合装置、方法、设备及存储介质
CN110490187B (zh) 车牌识别设备和方法
CN110493532B (zh) 一种图像处理方法和系统
EP2088787A1 (en) Image picking-up processing device, image picking-up device, image processing method and computer program
US10410078B2 (en) Method of processing images and apparatus
CN111711755B (zh) 图像处理方法及装置、终端和计算机可读存储介质
CN110493506A (zh) 一种图像处理方法和系统
CN112785534A (zh) 一种动态场景下去鬼影多曝光图像融合方法
US20180075586A1 (en) Ghost artifact removal system and method
CN110493535B (zh) 图像采集装置和图像采集的方法
CN108024057A (zh) 背景虚化处理方法、装置及设备
CN110493493B (zh) 全景细节摄像机及获取图像信号的方法
CN110493531A (zh) 一种图像处理方法和系统
CN107454318A (zh) 图像处理方法、装置、移动终端及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20814450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20814450

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.09.2022)
