WO2021134642A1 - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
WO2021134642A1
WO2021134642A1 (PCT/CN2019/130858)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
motion vector
preset
sensor
Prior art date
Application number
PCT/CN2019/130858
Other languages
English (en)
French (fr)
Inventor
张青涛
曹子晟
龙余斌
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980010510.8A (published as CN111699511A)
Priority to PCT/CN2019/130858 (published as WO2021134642A1)
Publication of WO2021134642A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, device and storage medium.
  • In practice, the collected images often contain noise; in low-light conditions, the noise is particularly obvious.
  • Infrared sensors, due to their low resolution and low signal-to-noise ratio, often produce infrared images containing strong noise, which submerges some low-temperature objects in the noise and reduces the infrared detection capability. Therefore, the image noise reduction method needs to be improved to achieve a better denoising effect.
  • this application provides an image processing method and device.
  • an image processing method including:
  • the first group of images are collected by a first sensor
  • the second group of images are collected by a second sensor
  • the relative position of the first sensor and the second sensor is fixed, and the signal-to-noise ratio of images collected by the first sensor is greater than that of images collected by the second sensor;
  • an image processing device including a processor, a memory, and a computer program stored in the memory, where the processor implements the following steps when executing the computer program:
  • the first group of images are collected by a first sensor
  • the second group of images are collected by a second sensor
  • the relative position of the first sensor and the second sensor is fixed, and the signal-to-noise ratio of images collected by the first sensor is greater than that of images collected by the second sensor;
  • an image processing device including a pan/tilt, a first sensor, a second sensor, a processor, a memory, and a computer program stored in the memory. The first sensor and the second sensor are fixed to the pan/tilt, the signal-to-noise ratio of images collected by the first sensor is greater than that of images collected by the second sensor, the first sensor is used to collect a first group of images, the second sensor is used to collect a second group of images, and the processor is used to:
  • a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, any one of the image processing methods of the present application is implemented.
  • In the present application, a first group of images and a second group of images are separately collected by two sensors whose relative positions are fixed and whose collected images have different signal-to-noise ratios, where the signal-to-noise ratio of the first group of images is greater than that of the second group. The motion vector between a second image in the second group of images and a reference image of the second image is then determined according to the motion vector between a first image in the first group of images and a reference image of the first image, and the second image is denoised according to that motion vector and the reference image of the second image.
  • In this way, the motion vector of the image with the lower signal-to-noise ratio is determined more accurately, thereby improving the denoising effect.
  • Fig. 1 is a flow chart of an image processing method provided by an embodiment of the present invention.
  • Fig. 2 is a block diagram of the logical structure of an image processing device according to an embodiment of the present invention.
  • Fig. 3 is a block diagram of the logical structure of an image processing device according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of the first sensor and the second sensor being fixed on the same pan/tilt according to an embodiment of the present invention.
  • In practice, the collected images often contain noise; in low-light conditions, the noise is particularly obvious.
  • For an infrared sensor, differences in the response characteristics of the photosensitive units on the infrared focal plane array cause the collected infrared images to often contain various stripe noise, ghosts, black spots, and the like, so that some objects with low temperature contrast are submerged in the noise, which reduces the infrared detection capability.
  • Temporal (time-domain) filtering is often used to denoise images: the pixel values of the denoised image are determined by combining pixel values from multiple frames collected by the image sensor.
  • When this method is used, multiple frames collected before or after the current image to be denoised are usually selected as reference images, and motion estimation is then performed to determine the motion vector between the image to be denoised and each frame of reference image.
  • S102 Acquire a first group of images and a second group of images, where the first group of images is collected by a first sensor, and the second group of images is collected by a second sensor.
  • the relative positions of the two sensors are fixed, and the signal-to-noise ratio of the image collected by the first sensor is greater than the signal-to-noise ratio of the image collected by the second sensor;
  • S104 Determine the motion vector between a second image in the second group of images and a reference image of the second image according to the motion vector between a first image in the first group of images and a reference image of the first image;
  • S106 Perform denoising processing on the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and the reference image of the second image .
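  • Steps S102, S104 and S106 can be sketched end to end as below. This is a minimal illustration, not the application's implementation: the helper names (`estimate_motion`, `map_motion_vector`, `temporal_denoise`), the zero motion estimate, and the fixed filter coefficient are all assumptions made for the example.

```python
import numpy as np

def estimate_motion(img, ref):
    # Placeholder global-motion estimate on the high-SNR image (S104 input);
    # a real implementation would use e.g. histogram correlation matching.
    return (0, 0)

def map_motion_vector(mv, transform=lambda v: v):
    # Map the first sensor's motion vector into the second sensor's image
    # coordinates (S104); identity transform by default.
    return transform(mv)

def temporal_denoise(img, ref, mv, s=0.5):
    # Blend the image to be denoised with its motion-compensated reference
    # (S106), using a fixed filter coefficient s for illustration.
    dp, dq = mv
    ref_shifted = np.roll(ref, shift=(dp, dq), axis=(0, 1))
    return (1 - s) * img + s * ref_shifted

# Each "group" reduced to one image plus one reference frame (S102).
first_img, first_ref = np.ones((4, 4)), np.ones((4, 4))
second_img, second_ref = np.full((4, 4), 2.0), np.full((4, 4), 4.0)

mv1 = estimate_motion(first_img, first_ref)               # high-SNR estimate
mv2 = map_motion_vector(mv1)                              # S104
denoised = temporal_denoise(second_img, second_ref, mv2)  # S106
```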
  • The image denoising method of the present application can be used in an image acquisition device that includes at least two sensors whose collected images have different signal-to-noise ratios, for example, a device that includes both an infrared sensor and a visible light sensor.
  • The image processing method of the present application can also be applied to a device obtained by fixedly connecting two image capture devices in a certain manner, where each device includes a sensor and the signal-to-noise ratios of the images collected by the sensors differ.
  • For example, it can be a device obtained by combining an infrared camera and a visible light camera.
  • The image processing method of the present application can also be used on terminal devices or cloud servers that only have image processing functions, which perform subsequent processing after receiving the images collected by an image acquisition device. It should be pointed out that the image processing method of this application is not limited to two-sensor scenarios: it applies equally to scenarios where more sensors, for example three or more, are fixedly connected and the images collected by each sensor have different signal-to-noise ratios, with processing steps similar to the two-sensor scenario.
  • When temporal filtering is used to denoise an image, in order to determine the motion vector between the image to be denoised and its reference image more accurately, this application uses the image collected by the sensor with the higher signal-to-noise ratio to guide motion-vector estimation for the image collected by the sensor with the lower signal-to-noise ratio, so as to obtain a more accurate motion vector. Therefore, the first sensor and the second sensor in the present application may be rigidly connected.
  • the first sensor and the second sensor are two sensors with relatively fixed positions, and may be on the same device or on different devices.
  • For example, the first sensor and the second sensor may be fixed on the same pan/tilt and rotate with it, so that their rotation directions and angles are the same.
  • the images collected by the first sensor are collectively referred to as the first group of images
  • the images collected by the second sensor are collectively referred to as the second group of images.
  • The signal-to-noise ratio of the image collected by the first sensor is higher than that of the image collected by the second sensor, that is, the signal-to-noise ratio of the first group of images is higher than that of the second group of images.
  • the first sensor is a visible light sensor.
  • the first group of images can be visible light images
  • Accordingly, the second sensor is, for example, an infrared sensor or an ultraviolet light sensor.
  • The second group of images can be infrared images, ultraviolet images, or Time-Of-Flight (TOF) images.
  • the first set of images may be infrared images
  • the second set of images may be visible light images.
  • The types of the first sensor and the second sensor can be determined according to actual conditions. For example, when the preset conditions are met, the first sensor is a visible light sensor and the second sensor is an infrared sensor or an ultraviolet light sensor; when the preset conditions are not met, correspondingly, the first sensor is an infrared sensor or an ultraviolet light sensor and the second sensor is a visible light sensor.
  • The preset condition may be that the current moment is within a preset daytime period, or that the visibility of the current environment is greater than a preset visibility threshold. For example, during the daytime the light is brighter, and the signal-to-noise ratio of the visible light image collected by the visible light sensor is significantly higher than that of the infrared, ultraviolet, or TOF image; the visible light image can then be used to guide the infrared, ultraviolet, or TOF image in determining the motion vector.
  • At night, the signal-to-noise ratio of the image collected by the visible light sensor will be lower than that of the infrared image, and the infrared image can then be used to guide the visible light image in determining the motion vector.
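  • The preset condition described above can be pictured as a simple selection rule. In the sketch below, the daytime window and the visibility threshold are invented values for illustration only; the application does not specify them.

```python
from datetime import time

# Assumed preset daytime period and visibility threshold (illustrative).
DAY_START, DAY_END = time(6, 0), time(18, 0)
VISIBILITY_THRESHOLD_M = 1000

def guide_sensor(now, visibility_m):
    """Return which sensor's images guide motion estimation."""
    daytime = DAY_START <= now <= DAY_END
    if daytime or visibility_m > VISIBILITY_THRESHOLD_M:
        return "visible"   # visible-light image guides the low-SNR image
    return "infrared"      # infrared image guides the visible-light image
```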
  • In some embodiments, the motion vector between the second image in the second group of images and the reference image of the second image can be determined according to the motion vector between the first image in the first group of images and the reference image of the first image, where the second image is the image to be denoised in the second group of images; the second image can be one or more frames, and the reference image of the second image can also be one or more frames of images in the second group of images.
  • The reference image can be several frames collected by the second sensor before collecting the second image, or several frames collected after it, and of course it can also include frames both before and after the second image.
  • the first image and the second image may be images respectively collected by the first sensor and the second sensor at the same relative position.
  • the reference image of the first image and the reference image of the second image are also images respectively collected by the first sensor and the second sensor at the same relative position.
  • the first sensor and the second sensor may be collected at the same relative position at the same time, or they may be collected sequentially.
  • For example, the first sensor and the second sensor are fixed on a pan/tilt. When the pan/tilt rotates to angle A, the image captured by the first sensor is the first image, and the image captured by the second sensor at that position is the second image.
  • the reference image of the first image and the reference image of the second image may be the images respectively collected by the two sensors when the pan/tilt is rotated to angle B, angle C and other angles.
  • The first image and its reference image can be used to determine a motion vector, and the motion vector between the second image and its reference image can then be determined based on it.
  • After the motion vector between the second image and its reference image is determined, the second image can be denoised according to that motion vector and the reference image.
  • In some embodiments, a gray-scale histogram correlation matching method may be used to determine the motion vector. Multiple image regions can be selected from the middle positions of the first image and of the reference image of the first image, the grayscale histograms of these regions can then be computed, and correlation matching can be performed on the histograms to determine the motion vector between the first image and the reference image of the first image. The image regions can be taken from the middle because image quality there is often better; of course, regions of good quality can also be taken from other positions in the image.
  • Specifically, horizontal and vertical histogram statistics can be computed for these image regions to obtain the horizontal and vertical histogram statistic values of the first image and its reference image. Correlation matching is performed on these statistic values to determine the correlation coefficient of each small image block in the region, and the motion vector between the first image and its reference image, together with the confidence of that motion vector, is determined from the correlation coefficients.
  • Gray-scale histogram correlation matching is only one method of determining the motion vector between the first image and its reference image; the method can be flexibly selected according to the actual application scenario, and this application does not limit it.
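  • The histogram correlation idea above can be sketched as follows. Under stated assumptions, the "histogram statistics" in each direction are approximated by row/column gray-value projections, the search range is fixed, and the confidence is taken as the smaller of the two peak correlations; the function names are illustrative, not from the application.

```python
import numpy as np

def row_col_projections(img):
    # Gray statistics in the horizontal and vertical directions,
    # approximated by summing gray values along rows and columns.
    return img.sum(axis=1), img.sum(axis=0)

def best_offset(sig, ref_sig, max_shift=3):
    # Search shifts in [-max_shift, max_shift]; the shift with the highest
    # correlation is the motion component in that direction.
    best, best_corr = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        corr = float(np.dot(np.roll(ref_sig, d), sig))
        if corr > best_corr:
            best, best_corr = d, corr
    return best, best_corr

def global_motion_vector(img, ref):
    rows, cols = row_col_projections(img)
    ref_rows, ref_cols = row_col_projections(ref)
    dy, cy = best_offset(rows, ref_rows)
    dx, cx = best_offset(cols, ref_cols)
    return (dy, dx), min(cy, cx)  # offset plus a crude confidence value

# Reference frame shifted by (1, 1) relative to the current image.
img = np.zeros((8, 8))
img[2, 5] = 1.0
ref = np.roll(img, (1, 1), axis=(0, 1))
mv, conf = global_motion_vector(img, ref)
```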
  • the global motion vector of the image collected by the first sensor can be used as a reference for the global motion vector of the image collected by the second sensor. After the motion vector between the first image and the reference image is determined, the motion vector between the second image and the reference image of the second image may be determined according to the motion vector between the first image and the reference image.
  • In practice, the field of view (FOV) and resolution of the first sensor and the second sensor may differ; for example, the field of view of an infrared sensor is usually smaller than that of a visible light sensor, and the resolution of an infrared image is lower than that of a visible light image. Therefore, in some embodiments, after the motion vector between the first image and the reference image of the first image is determined, a preset transformation matrix may be used to map that motion vector so as to obtain the motion vector between the second image and the reference image of the second image.
  • the first sensor and the second sensor need to be calibrated for the respective internal parameter matrix and distortion parameter, as well as the rotation matrix and translation vector between them.
  • The detailed calibration method may be, for example, Zhang Zhengyou's checkerboard calibration method or an improvement of it.
  • the mapping matrix from visible light image pixel points to infrared image pixel points can be calculated after the above parameters are obtained.
  • the transformation matrix can be obtained in advance according to the position parameters of the first sensor and the second sensor, and/or the resolution of the first sensor and the second sensor.
  • In some embodiments, a transformation matrix may be used to perform an affine transformation on the motion vector between the first image and the reference image of the first image, so as to obtain the motion vector between the second image and the reference image of the second image; a transformation matrix may also be used to perform a perspective transformation for the same purpose.
  • In some embodiments, a transformation matrix can be used to perform both an affine transformation and a perspective transformation on the motion vector between the first image and the reference image of the first image, so as to obtain the motion vector between the second image and the reference image of the second image. The specific transformation may be set according to factors such as the positions, resolutions, and fields of view of the two sensors, which this application does not limit.
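  • A minimal sketch of the matrix mapping, assuming a precomputed 3x3 matrix: the matrix below only rescales for an assumed resolution difference (visible 1920x1080, infrared 640x512) and is purely illustrative; a calibrated matrix would also encode the rotation and translation between the sensors, and for a pure displacement vector only the linear part of an affine matrix applies.

```python
import numpy as np

# Illustrative scaling-only transformation matrix (no rotation/translation).
T = np.array([[640 / 1920, 0.0, 0.0],
              [0.0, 512 / 1080, 0.0],
              [0.0, 0.0, 1.0]])

def map_motion(mv_visible, T):
    # Homogeneous mapping of a motion vector; for a perspective matrix the
    # result is divided by the last coordinate (affine matrices leave it 1).
    v = np.array([mv_visible[0], mv_visible[1], 1.0])
    out = T @ v
    return (out[0] / out[2], out[1] / out[2])

# Motion measured on the visible image, mapped to infrared coordinates.
mv_ir = map_motion((30.0, 27.0), T)
```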
  • the second image may be denoised according to the motion vector and the reference image of the second image.
  • the integrated filter coefficient of each pixel in the second image can be determined according to the motion vector between the second image and the reference image of the second image, and then the second image is denoised according to the integrated filter coefficient and the reference image of the second image deal with.
  • The comprehensive filter coefficient may be the weight given to the pixel value of the corresponding pixel on the second image or on its reference image when the pixel value of the denoised image is determined from the second image and its reference image. For example, suppose there is a pixel P0 on the second image. According to the motion vector between the second image and its reference image, the corresponding pixel P1 on the reference image can be determined, and the pixel value of the corresponding pixel of the denoised second image can be determined from the values of these two pixels. The weights of the pixel values of P0 and P1 in determining the denoised pixel value are called the comprehensive filter coefficients.
  • The comprehensive filter coefficient is related to both the global motion and the local motion of the image.
  • The global motion is the movement of the overall image caused by the change of sensor position; the local motion is the movement caused by the motion of the shooting object. Both motions affect the pixel matching between the final second image and its reference image. Since the motion vector between the second image and its reference image, determined from the motion vector between the first image and its reference image, characterizes the global motion of the image, in some embodiments the local motion of the image can also be considered when determining the filter coefficients.
  • In some embodiments, a first filter coefficient can be determined according to the degree of matching between each pixel of the second image and its corresponding pixel, and a second filter coefficient can be determined according to the confidence of the motion vector between the second image and the reference image of the second image; the second filter coefficient reflects the accuracy of the motion vector.
  • The confidence of the motion vector between the second image and the reference image of the second image may be determined according to the confidence of the motion vector between the first image and the reference image of the first image.
  • The comprehensive filter coefficient may then be determined from the first filter coefficient and the second filter coefficient. In this way, the global motion and the local motion of the image are both considered, so that the determined filter coefficient is more accurate.
  • In some embodiments, a characterizing parameter of the degree of matching may be determined from the pixel values of each pixel of the second image and its corresponding pixel.
  • The characterizing parameter may be the absolute value of the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel.
  • The characterizing parameter may also be the sum of absolute differences, SAD (Sum of Absolute Differences), between the pixel values of a small image area around a pixel on the second image and the pixel values of the corresponding area of the reference image.
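  • A small sketch of the SAD characterizing parameter for one pixel, assuming a 3x3 block around the pixel compared against the motion-compensated block in the reference image; the block radius and function name are illustrative choices, not specified by the application.

```python
import numpy as np

def sad(img, ref, p, q, dp, dq, radius=1):
    # Compare the block around (p, q) in the image to be denoised with the
    # block around (p+dp, q+dq) in the reference image.
    a = img[p - radius:p + radius + 1, q - radius:q + radius + 1]
    b = ref[p + dp - radius:p + dp + radius + 1,
            q + dq - radius:q + dq + radius + 1]
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

img = np.full((5, 5), 10.0)
ref = np.full((5, 5), 12.0)
# 3x3 block, per-pixel absolute difference 2, so SAD = 9 * 2 = 18.
value = sad(img, ref, 2, 2, 0, 0)
```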
  • The first filter coefficient can then be determined from the characterizing parameter according to a preset first threshold, a preset second threshold, and a preset maximum filter coefficient.
  • The preset first threshold and the preset second threshold are thresholds related to the image noise level, where the preset first threshold is smaller than the preset second threshold, and the maximum filter coefficient is a fixed coefficient between 0 and 1.
  • If the characterizing parameter is less than the preset first threshold, the first filter coefficient equals the preset maximum filter coefficient; if the characterizing parameter is greater than the preset second threshold, the first filter coefficient equals 0; if the characterizing parameter is between the two thresholds, the first filter coefficient equals the product of the maximum filter coefficient and a specified coefficient, where the specified coefficient is obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold.
  • For example, denote the characterizing parameter as H, the preset first threshold as lowthres, and the preset second threshold as highthres, where lowthres and highthres are thresholds related to the image noise level and highthres > lowthres, and let ratio be the maximum filter coefficient, 0 < ratio < 1. Then the first filter coefficient S1 can be calculated by formula (1):

    S1 = ratio, if H <= lowthres
    S1 = ratio * (highthres - H) / (highthres - lowthres), if lowthres < H < highthres    (1)
    S1 = 0, if H >= highthres
  • the second filter coefficient may be determined according to the confidence of the motion vector between the second image and the reference image of the second image, and then the integrated filter coefficient may be determined according to the first filter coefficient and the second filter coefficient.
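  • The filter coefficients described above can be sketched as follows. The piecewise form of S1 follows formula (1); taking the second coefficient directly as the motion-vector confidence and combining the two coefficients by multiplication are assumptions made for this sketch, not stated by the application.

```python
def first_filter_coefficient(H, lowthres, highthres, ratio):
    # H: characterizing parameter (e.g. SAD) of the pixel match; formula (1).
    if H <= lowthres:
        return ratio
    if H >= highthres:
        return 0.0
    return ratio * (highthres - H) / (highthres - lowthres)

def comprehensive_coefficient(H, confidence, lowthres, highthres, ratio):
    s1 = first_filter_coefficient(H, lowthres, highthres, ratio)
    s2 = confidence   # second coefficient from motion-vector confidence
    return s1 * s2    # assumed combination of local and global terms
```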
  • After the comprehensive filter coefficient is determined, the pixel value of each pixel of the denoised image can be determined according to the pixel value of each pixel in the second image, the pixel value of the corresponding pixel in the reference image, and the comprehensive filter coefficient.
  • Temporal filtering can use FIR filtering. Assume that the pixel value of the pixel with coordinates (p, q) in the second image is V(p, q), the coordinates of the corresponding reference pixel in the reference image of the second image are (p+dp, q+dq), and the pixel value of that reference pixel is W(p+dp, q+dq). Then the pixel value Vo(p, q) of the pixel with coordinates (p, q) in the denoised second image can be calculated by formula (2):

    Vo(p, q) = (1 - s(p, q)) * V(p, q) + s(p, q) * W(p+dp, q+dq)    (2)

  • where s(p, q) is the comprehensive filter coefficient and (dp, dq) is the motion vector of the pixel with coordinates (p, q) in the second image.
  • When there are multiple frames of reference images, formula (2) can be applied to each frame of reference image to obtain a denoised pixel value, and the average is then taken as the final denoised pixel value.
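  • Formula (2) with multi-frame averaging can be sketched as below. Motion compensation via `np.roll`, the constant inputs, and the per-frame coefficient map are illustrative assumptions for the example.

```python
import numpy as np

def denoise_pixelwise(V, refs, mvs, s):
    # V: second image; refs: reference frames; mvs: per-frame (dp, dq);
    # s: comprehensive filter coefficient map (same shape as V).
    outs = []
    for W, (dp, dq) in zip(refs, mvs):
        Wc = np.roll(W, shift=(-dp, -dq), axis=(0, 1))  # sample W(p+dp, q+dq)
        outs.append((1 - s) * V + s * Wc)               # formula (2)
    return np.mean(outs, axis=0)  # average over reference frames

V = np.full((4, 4), 2.0)
refs = [np.full((4, 4), 4.0), np.full((4, 4), 6.0)]
mvs = [(0, 0), (0, 0)]
s = np.full((4, 4), 0.5)
out = denoise_pixelwise(V, refs, mvs, s)
# Per frame: 0.5*2 + 0.5*4 = 3 and 0.5*2 + 0.5*6 = 4, averaged to 3.5.
```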
  • In the manner described above, the motion vectors determined for images with low signal-to-noise ratios are more accurate, so that denoising low signal-to-noise-ratio images through these motion vectors improves the denoising effect.
  • In general, the signal-to-noise ratio of an infrared image is low.
  • When temporal filtering is used to denoise an infrared image, the motion vector between the infrared image to be denoised and the reference infrared image must be determined, and the pixels of the infrared image and the reference infrared image are then matched according to the motion vector to determine the denoised pixel values.
  • Due to the low signal-to-noise ratio of the infrared image, the motion vector determined from the infrared image alone is inaccurate, resulting in a poor noise reduction effect.
  • Since the signal-to-noise ratio of the image collected by a visible light sensor is higher than that of an infrared sensor, in this embodiment an infrared sensor and a visible light sensor fixed to the same pan/tilt at fixed relative positions collect images separately. Because the visible light image has a high signal-to-noise ratio, it can be used to guide the determination of the motion vector for the infrared image, making the determined motion vector more accurate; the infrared image is then denoised.
  • the specific denoising process is as follows:
  • The reference infrared image and the reference visible light image are the images collected by the infrared sensor and the visible light sensor when the pan/tilt is at other positions.
  • the reference infrared image and the reference visible light image may be one frame or multiple frames of images.
  • A region of interest is selected from the middle of the visible light image and of the reference visible light image, and gray histogram statistics are computed in the row and column directions to obtain the histogram statistic values of the visible light image and the reference visible light image in each direction.
  • The gray histogram correlation method is then used to search for the global motion vector, with the row and column directions handled independently; correlating the gray histograms determines the motion vector between the visible light image and each reference visible light image.
  • The offset calculation of the histogram correlation method in the row and column directions can be reduced to finding the maximum cross-correlation between the visible light image and the reference visible light image.
  • A total of 2*dx+1 cross-correlation coefficients are computed over the displacement offsets of the two sets of calculation vectors; the displacement offset corresponding to the maximum cross-correlation coefficient is the offset in the row/column direction, and this offset gives the motion vector between the visible light image and the reference visible light image. The confidence of the motion vector is determined from the absolute value of the cross-correlation coefficient.
  • A transformation matrix can be determined in advance according to the position parameters, resolutions, and fields of view of the infrared sensor and the visible light sensor; the motion vector determined between the visible light image and the reference visible light image is then subjected to perspective and affine transformation through the transformation matrix.
  • In this way, the motion vector between the infrared image to be denoised and each frame of reference infrared image can be obtained, and the confidence of the motion vector between the infrared image to be denoised and each reference infrared image can be determined according to the confidence of the motion vector between the visible light image and the reference visible light image.
  • The comprehensive filter coefficient can be determined from both the global motion and the local motion of the image.
  • The global motion is the motion caused by the change of sensor position; the local motion is the motion caused by the movement of the shooting object. Since the motion vector between the infrared image to be denoised and each frame of reference infrared image, determined from the motion vector between the visible light image and the reference visible light image, reflects the global motion of the image but not the local motion of objects, the local motion also needs to be taken into account.
  • The corresponding pixel of each pixel of the infrared image to be denoised in each frame of reference infrared image can be determined according to the motion vector between them, and the absolute value of the pixel value difference between each pixel and the corresponding reference pixel is computed; denote it H.
  • Given the preset first threshold lowthres and the preset second threshold highthres, which are thresholds related to the image noise level with highthres > lowthres, and the maximum temporal filter coefficient ratio, 0 < ratio < 1, the first filter coefficient S1 can be calculated by formula (1).
  • After the filter coefficient of each frame of reference infrared image corresponding to each pixel of the infrared image to be denoised is determined, the pixel value of each pixel of the denoised infrared image can be determined according to the pixel value of each pixel of the infrared image to be denoised, the pixel value of the corresponding pixel in the reference image, and the filter coefficient.
  • Temporal filtering can use FIR filtering. Assuming that the pixel value of the pixel with coordinates (p, q) in the infrared image to be denoised is V(p, q) and the pixel value of the pixel with coordinates (p+dp, q+dq) in the reference infrared image is W(p+dp, q+dq), the pixel value of the corresponding pixel of the denoised infrared image can be calculated by the following formula:
  • Vo(p, q) = (1 - s(p, q)) * V(p, q) + s(p, q) * W(p+dp, q+dq)
  • where s(p, q) is the comprehensive filter coefficient and (dp, dq) is the determined motion vector.
  • the present application also provides an image processing device.
  • As shown in Fig. 2, the device 20 includes a processor 21, a memory 22, and a computer program stored in the memory. When the processor executes the computer program, the following steps are implemented:
  • the first group of images is collected by a first sensor
  • the second group of images is collected by a second sensor
  • the relative position of the first sensor and the second sensor is fixed, and the signal-to-noise ratio of the images collected by the first sensor is greater than the signal-to-noise ratio of the images collected by the second sensor;
  • when the processor is configured to determine the motion vector between the second image in the second group of images and the reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image, this specifically includes:
  • the motion vector between the first image and the reference image of the first image is mapped and transformed by a preset transformation matrix to obtain the motion vector between the second image and the reference image of the second image, wherein the transformation matrix is obtained based on the position parameters and resolutions of the first sensor and the second sensor.
  • when the processor is configured to map and transform the motion vector between the first image and the reference image of the first image by using a preset transformation matrix, this specifically includes performing an affine transformation and/or a perspective transformation on the motion vector through the preset transformation matrix.
  • when the processor is configured to determine the motion vector between the first image and the reference image of the first image, this specifically includes: determining multiple image regions from the middle positions of the first image and of the reference image of the first image, and determining the grayscale histograms of these regions;
  • correlation matching is performed on the grayscale histograms, and the motion vector between the first image and the reference image of the first image is determined.
  • when the processor is configured to denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this specifically includes:
  • when the processor is configured to determine the comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this includes:
  • the comprehensive filter coefficient is obtained according to the first filter coefficient and the second filter coefficient.
  • the comprehensive filter coefficient is equal to the product of the first filter coefficient and the second filter coefficient.
  • when the processor is configured to determine the first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel, this includes: determining a characterizing parameter of the matching degree according to the pixel values of each pixel of the second image and of the corresponding pixel;
  • the first filter coefficient is determined based on the characterizing parameter, a preset first threshold, a preset second threshold, and a preset maximum filter coefficient, where the preset first threshold is smaller than the preset second threshold.
  • the characterizing parameter includes: the absolute value of the pixel-value difference between each pixel of the second image and the corresponding pixel; and/or the sum of the absolute pixel-value differences between each image block of the second image and the corresponding image block in the reference image.
  • when the processor determines the first filter coefficient based on the characterizing parameter, the preset first threshold, the preset second threshold, and the preset maximum filter coefficient, this includes:
  • if the characterizing parameter is smaller than the preset first threshold, the first filter coefficient is equal to the preset maximum filter coefficient;
  • if the characterizing parameter is greater than the preset second threshold, the first filter coefficient is equal to 0;
  • if the characterizing parameter is between the preset first threshold and the preset second threshold, the first filter coefficient is equal to the product of the maximum filter coefficient and a specified coefficient, the specified coefficient being obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold.
  • when a preset condition is met, the first group of images are visible-light images
  • and the second group of images are one of infrared images, ultraviolet images, or TOF images;
  • when the preset condition is not met, the first group of images are infrared images
  • and the second group of images are visible-light images.
  • the preset condition is that the current moment is within a preset daytime period or that the visibility of the current environment is greater than a preset visibility threshold.
  • the first sensor and the second sensor are fixed to the same pan/tilt.
  • the present application also provides an image processing device.
  • the device includes a first sensor 31, a second sensor 32, a processor 33, a memory 34, and a computer program stored in the memory.
  • the first sensor and the second sensor are fixed on a pan/tilt; FIG. 4 is a schematic diagram of the first sensor and the second sensor fixed on the same pan/tilt in an embodiment of the application.
  • the first sensor and the second sensor rotate together with the pan/tilt, so their relative position is always fixed.
  • the first sensor 31, the second sensor 32, and the pan/tilt communicate with the processor 33 through a bus, and the processor 33 communicates with the memory 34 through the bus; the processor 33 reads the computer program from the memory 34,
  • then controls the first sensor 31 and the second sensor 32 to collect images, and controls the pan/tilt to rotate to a specified position.
  • the signal-to-noise ratio of the image collected by the first sensor is greater than the signal-to-noise ratio of the image collected by the second sensor
  • the first image sensor is used to collect the first group of images
  • the second image sensor is used to collect the second group of images
  • the processor 33 is used for:
  • the specific denoising process can refer to the embodiments of the above-mentioned image processing method, which will not be repeated here.
  • the image processing device may be a drone, a camera, a car, an airplane, or a boat.
  • for example, drones, cars, airplanes, or ships can be equipped with two sensors whose collected images differ in resolution, such as an infrared sensor and a visible-light sensor, and the image collected by the visible-light sensor guides the denoising of the image collected by the infrared sensor.
  • it can also be a camera with dual sensors, the image resolutions collected by the two sensors being different.
  • the connection relationship, positional relationship, and placement of the pan/tilt and the two sensors shown in Figure 4 are only an example; in other implementations, the connection, positional, and placement relationships of the pan/tilt and the two sensors can all be adjusted.
  • an embodiment of this specification also provides a computer storage medium in which a program is stored, and the program is executed by a processor to implement the image processing method in any of the foregoing embodiments.
  • the embodiments of this specification may adopt the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing program codes.
  • Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by computing devices.
  • since the device embodiments basically correspond to the method embodiments, for the relevant parts reference can be made to the description of the method embodiments.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units.
  • some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments; those of ordinary skill in the art can understand and implement them without creative work.


Abstract

An image processing method, device and storage medium. The image processing method includes: acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor and the second group of images by a second sensor, where the relative position of the first sensor and the second sensor is fixed and the signal-to-noise ratio of the images collected by the first sensor is greater than that of the images collected by the second sensor; determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image; and denoising the second image in the second group of images according to the motion vector between the second image and its reference image and according to the reference image of the second image. Using the high-SNR images to guide motion-vector determination for the low-SNR images makes the determined motion vectors more accurate and thus improves the denoising effect.

Description

Image processing method, device and storage medium. Technical Field
This application relates to the field of image processing technology, and in particular to an image processing method, device and storage medium.
Background Art
Due to limitations of image-sensor materials and manufacturing processes, as well as interference from external signals, collected images often contain noise; for image sensors with weaker resolution and lower signal-to-noise ratio, the noise is particularly obvious. For example, infrared sensors, because of their low resolution and signal-to-noise ratio, often produce infrared images containing much strong noise, so that objects with small temperature differences are submerged in the noise, which reduces the infrared detection capability. It is therefore necessary to improve image noise-reduction methods to achieve a better denoising effect.
Summary of the Invention
In view of this, the present application provides an image processing method and device.
According to a first aspect of the present application, an image processing method is provided, the method including:
acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
According to a second aspect of the present application, an image processing device is provided, the device including a processor, a memory, and a computer program stored in the memory; when the processor executes the computer program, the following steps are implemented:
acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
According to a third aspect of the present application, an image processing device is provided, the device including a pan/tilt, a first sensor, a second sensor, a processor, a memory, and a computer program stored in the memory; the first image sensor and the second image sensor are fixed on the pan/tilt, the signal-to-noise ratio of the images collected by the first sensor is greater than that of the images collected by the second sensor, the first image sensor is used to collect a first group of images, the second image sensor is used to collect a second group of images, and the processor is configured to:
determine a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
According to a fourth aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, any one of the image processing methods of the present application is implemented.
In the present application, a first group of images and a second group of images are respectively collected by two sensors whose relative position is fixed and whose collected images differ in signal-to-noise ratio, the first group having a higher signal-to-noise ratio than the second. The motion vector between a second image in the second group and its reference image is then determined according to the motion vector between a first image in the first group and its reference image, and the second image is denoised according to that motion vector and its reference image. By using the high-SNR images to guide motion-vector determination for the low-SNR images, the motion vectors of the low-SNR images are determined more accurately, which improves the denoising effect.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present invention.
FIG. 2 is a logical structural block diagram of an image processing device provided by an embodiment of the present invention.
FIG. 3 is a logical structural block diagram of an image processing device provided by an embodiment of the present invention.
FIG. 4 is a schematic diagram of a first sensor and a second sensor fixed on the same pan/tilt provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
Due to limitations of image-sensor materials and manufacturing processes, as well as interference from external signals, collected images often contain noise, which is particularly obvious for image sensors with weaker resolution and lower signal-to-noise ratio. For example, because the response characteristics of the photosensitive units on the infrared focal-plane array of an infrared sensor differ, the collected infrared images often contain various stripe noise, ghosting, dark spots and the like, so that objects with small temperature differences are submerged in the noise, which reduces the infrared detection capability.
The presence of noise seriously affects image quality, so images need to be denoised. Related image-denoising techniques mostly use temporal filtering, i.e. the pixel values of the denoised image are determined by combining the pixel values of multiple frames collected by the image sensor. In this approach, several frames collected before or after the current image to be denoised can usually be selected as reference images; motion estimation is then performed to determine the motion vector between the image to be denoised and each reference frame, so that the pixels of the image to be denoised can be matched with the pixels of each reference frame according to the motion vector, and the pixel values of the denoised image are determined from the pixel values of the image to be denoised and of the matched pixels in each reference frame.
However, for sensors such as infrared or ultraviolet sensors, the signal-to-noise ratio of the collected images is inherently low and the resolution is poor, so motion estimation performed on these images yields rather inaccurate motion vectors between images, and the final noise-reduction effect is unsatisfactory. It is therefore necessary to improve image noise-reduction methods to achieve a better denoising effect.
Based on this, the present application provides an image processing method; the specific flow of the method is shown in FIG. 1 and includes the following steps:
S102: acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
S104: determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
S106: denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
The image-denoising method of the present application can be used in an image-acquisition device containing at least two sensors whose collected images differ in signal-to-noise ratio, for example a device containing both an infrared sensor and a visible-light sensor. In some embodiments, the image processing method of the present application can also be used with equipment obtained by fixedly connecting two image-acquisition devices in a certain way, each device containing one sensor, with the collected image SNRs differing, for example equipment obtained by combining an infrared camera and a visible-light camera. Of course, the image processing method of the present application can also be used only on a terminal device or cloud server that has image processing functions, performing the processing after receiving the images collected by the image-acquisition device. It should be pointed out that the image processing method of the present application is not limited to two-sensor scenarios; scenarios with more sensors, for example three or more fixedly connected sensors each collecting images with a different SNR, are equally applicable, with processing steps similar to the two-sensor scenario.
When temporal filtering is used to denoise images, in order to determine the motion vector between the image to be denoised and the reference image more accurately, the present application uses images collected by a sensor with a high image SNR to guide motion-vector estimation for images collected by a sensor with a low image SNR, so as to obtain a more accurate motion vector. The first sensor and the second sensor in the present application can therefore be rigidly connected; for example, the first sensor and the second sensor are two sensors whose relative positions are fixed, on the same device or on different devices. Because the relative position of the two sensors is fixed, the overall motion of the images collected by the two sensors is consistent, so the motion vector of the images collected by one sensor can be used to determine the motion vector of the images collected by the other. For example, in some embodiments, the first sensor and the second sensor can be fixed on the same pan/tilt and rotate with the pan/tilt together, so their rotation direction and angle are always consistent.
In the present application, the images collected by the first sensor are collectively called the first group of images, and the images collected by the second sensor the second group of images, where the SNR of the images collected by the first sensor is higher than that of the images collected by the second sensor, i.e. the first group of images has a higher SNR than the second. For example, in some embodiments, the first sensor is a visible-light sensor and, correspondingly, the first group of images can be visible-light images, while the second sensor is an infrared or ultraviolet sensor and, correspondingly, the second group of images can be one of infrared images, ultraviolet images, or time-of-flight (TOF) images. In some embodiments, the first group of images can be infrared images and the second group visible-light images. In some embodiments, the types of the first and second sensors can be decided according to the actual situation: when a preset condition is met, the first sensor is a visible-light sensor and the second sensor is an infrared or ultraviolet sensor; when the preset condition is not met, the first sensor is an infrared or ultraviolet sensor and the second sensor is a visible-light sensor. In some embodiments, the preset condition can be that the current moment is within a preset daytime period or that the current environmental visibility is greater than a preset visibility threshold. For example, in daytime, when the light is bright, the SNR of the visible-light images collected by the visible-light sensor is clearly higher than that of infrared, ultraviolet or TOF images, so the visible-light images can be used to guide the determination of the motion vectors of the infrared, ultraviolet or TOF images. At night, with poor light, the SNR of the images collected by the visible-light sensor will be lower than that of the infrared images, in which case the infrared images can be used to guide the visible-light images in determining motion vectors.
After the first group of images and the second group of images are acquired, the motion vector between a second image in the second group of images and a reference image of the second image can be determined according to the motion vector between a first image in the first group of images and a reference image of the first image. The second image is the image to be denoised in the second group and may be one or more frames; the reference image of the second image may also be one or more frames of the second group, which may be several frames collected by the second sensor before the second image, several frames collected after it, or, of course, several frames before and several frames after the second image.
The first image and the second image can be images respectively collected by the first sensor and the second sensor at the same relative position, and likewise the reference image of the first image and the reference image of the second image are images respectively collected by the two sensors at the same relative position. Of course, the two sensors may capture simultaneously at the same relative position, or successively. For example, with the first and second sensors fixed on a pan/tilt, when the pan/tilt rotates to angle A, the image collected by the first sensor is the first image, and the image collected by the second sensor at this position is the second image; similarly, the reference images of the first and second images can be the images respectively collected by the two sensors when the pan/tilt rotates to angle B, angle C, or other angles.
Because the images collected by the first sensor have a higher SNR, the motion vector can be determined using the first image and its reference image, and the motion vector between the second image and the reference image of the second image determined according to the motion vector between the first image and its reference image. Once the motion vector between the second image and its reference image is determined, the second image can be denoised according to that motion vector and the reference image.
In some embodiments, the motion vector between the first image and the reference image of the first image can be determined by grayscale-histogram correlation matching. Multiple image regions can be determined from the middle positions of the first image and of the reference image of the first image respectively, the grayscale histograms of these regions determined, and correlation matching performed on the histograms to determine the motion vector between the first image and the reference image of the first image. Usually the image quality of the regions in the middle of an image is better, so the image regions can be taken from the middle position; of course, good-quality regions elsewhere in the image can also be used. After the regions are determined from the first image and its reference image, row-direction and column-direction histogram statistics can be computed for them to obtain the row and column histogram values of the first image and its reference image, correlation matching performed according to the histogram statistics, the cross-correlation coefficients of the small image blocks in the regions determined, and from the cross-correlation coefficients the motion vector between the first image and its reference image determined, together with the confidence level corresponding to that motion vector.
Of course, grayscale-histogram correlation matching is only one way to determine the motion vector between the first image and its reference image; in some embodiments, methods such as feature-point matching or optical flow can also be used to determine the motion vector between the first image and its reference image, chosen flexibly according to the actual application scenario, which the present application does not restrict.
Because the positions of the first sensor and the second sensor are relatively fixed, when the sensor position changes, the overall motion of the images collected by the two sensors is consistent, i.e. the global motion vectors agree, both reflecting the change in sensor position. The global motion vector of the images collected by the first sensor can therefore serve as a reference for the global motion vector of the images collected by the second sensor. After the motion vector between the first image and its reference image is determined, the motion vector between the second image and the reference image of the second image can be determined from it. Since the first and second sensors are at different positions, and in some embodiments also differ in field of view (FOV) and resolution, for example an infrared sensor's FOV is usually smaller than a visible-light sensor's and the resolution of infrared images is lower than that of visible-light images, in some embodiments, after the motion vector between the first image and its reference image is determined, a preset transformation matrix can be used to matrix-map this motion vector to obtain the motion vector between the second image and the reference image of the second image. Illustratively, the intrinsic parameter matrices and distortion parameters of the first and second sensors and the rotation matrix and translation vector between them need to be calibrated at the factory; the detailed calibration method can, for example, be based on Zhang Zhengyou's checkerboard calibration method or its improvements. Taking a visible-light first sensor and an infrared second sensor as an example, after obtaining the above parameters, the mapping matrix from visible-light image pixels to infrared image pixels can be computed. The transformation matrix can be obtained in advance from the position parameters of the first and second sensors and/or the resolutions of the first and second sensors.
In some embodiments, the transformation matrix can be used to perform an affine transformation on the motion vector between the first image and the reference image of the first image to obtain the motion vector between the second image and the reference image of the second image. In some embodiments, the transformation matrix can be used to perform a perspective transformation on the motion vector between the first image and its reference image to obtain the motion vector between the second image and its reference image. Of course, in some embodiments, the transformation matrix can be used to perform both an affine transformation and a perspective transformation on the motion vector between the first image and its reference image to obtain the motion vector between the second image and its reference image. The specific transformation can be set according to factors such as the positions, resolutions, and fields of view of the two sensors, which the present application does not restrict.
After the motion vector between the second image and the reference image of the second image is determined, the second image can be denoised according to that motion vector and the reference image of the second image. In some embodiments, the comprehensive filter coefficient of each pixel of the second image can be determined according to the motion vector between the second image and its reference image, and the second image then denoised according to the comprehensive filter coefficients and the reference image of the second image.
The comprehensive filter coefficient can be the weight of the pixel value of the corresponding pixel in the second image or the reference image when the pixel values of the denoised image are determined from the second image and its reference image. For example, suppose there is a pixel P0 in the second image; from the motion vector between the second image and its reference image, the corresponding pixel P1 of P0 in the reference image can be determined, and the pixel value of the denoised image at the corresponding pixel can be determined from the pixel values of these two pixels. The weights of the pixel values of P0 and P1 in determining the denoised pixel value are called the comprehensive filter coefficient.
The comprehensive filter coefficient is related both to the global motion of the image and to its local motion. Global motion is the overall image motion caused by the change in sensor position, while local motion is motion caused by the movement of photographed objects; both affect the final matching of the pixels of the second image and its reference image. Since the motion vector between the second image and its reference image determined from the motion vector between the first image and its reference image characterizes the global motion of the image, in some embodiments the local motion of the image can be taken into account when determining the comprehensive filter coefficient. First, the corresponding pixel of each pixel of the second image in the reference image can be determined according to the determined motion vector between the second image and its reference image; because that motion vector only considers global motion, the corresponding pixels determined from it are not necessarily accurate, so a first filter coefficient can first be determined according to the degree of matching between each pixel of the second image and the corresponding pixel, and then a second filter coefficient determined according to the confidence of the motion vector between the second image and its reference image, the second filter coefficient reflecting the accuracy of that motion vector. The confidence of the motion vector between the second image and its reference image can be determined from the confidence of the motion vector between the first image and its reference image. After the first and second filter coefficients are determined, the comprehensive filter coefficient can be determined from them. In this way, the global and local motion of the image are both considered, making the determined filter coefficients more accurate.
In some embodiments, when determining the first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel, a characterizing parameter of the matching degree can be determined from the pixel values of each pixel and the corresponding pixel. In some embodiments, the characterizing parameter can be the absolute value of the difference between the pixel value of each pixel of the second image and the pixel value of the corresponding pixel. In some embodiments, the characterizing parameter can also be the sum of the absolute values of the pixel-value differences between the pixels of a small image region around a pixel of the second image and the pixels of the corresponding region in the reference image, i.e. the SAD (Sum of Absolute Differences). The smaller the absolute pixel difference or the SAD, the better the pixel matches the corresponding pixel, so the first filter coefficient should be set larger, and otherwise smaller. In some embodiments, after the characterizing parameter of the matching degree between a pixel of the second image and the corresponding pixel in the reference image is determined, the first filter coefficient can be determined according to the characterizing parameter, a preset first threshold, a preset second threshold, and a preset maximum filter coefficient, where the preset first and second thresholds are thresholds related to the image noise level, the preset first threshold is smaller than the preset second threshold, and the maximum filter coefficient is a fixed coefficient between 0 and 1.
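The SAD matching parameter described above can be sketched as follows. This is a minimal illustration; the function name `sad` and the NumPy-based formulation are assumptions, not part of the patent:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences between a block of the image to be
    denoised and its motion-compensated block in the reference image.
    A smaller SAD means a better match, which calls for a larger first
    filter coefficient."""
    a = np.asarray(block_a, dtype=float)
    b = np.asarray(block_b, dtype=float)
    return float(np.abs(a - b).sum())
```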
In some embodiments, if the characterizing parameter is smaller than the preset first threshold, the first filter coefficient is equal to the preset maximum filter coefficient; if the characterizing parameter is greater than the preset second threshold, the first filter coefficient is equal to 0; if the characterizing parameter is between the preset first threshold and the preset second threshold, the first filter coefficient is equal to the product of the maximum filter coefficient and a specified coefficient, the specified coefficient being obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold. For example, let the characterizing parameter be H, the preset first threshold lowthres, and the preset second threshold highthres, where lowthres and highthres are thresholds related to the image noise level with highthres > lowthres, and let ratio be the maximum filter coefficient, 0 < ratio < 1. The first filter coefficient can then be calculated by formula (1).
Figure PCTCN2019130858-appb-000001
After the first filter coefficient is determined, the second filter coefficient can be determined according to the confidence of the motion vector between the second image and the reference image of the second image, and the comprehensive filter coefficient determined from the first and second filter coefficients. In some embodiments, the comprehensive filter coefficient can be the product of the first filter coefficient and the second filter coefficient: if the first filter coefficient is S1 and the second is S2, then the comprehensive filter coefficient S = S1*S2.
After the comprehensive filter coefficient of each reference frame for each pixel of the second image is determined, the pixel value of each pixel of the denoised second image can be determined according to the pixel values of the second image, the pixel values of the corresponding pixels in the reference image, and the comprehensive filter coefficients. Temporal filtering can use FIR filtering: suppose the pixel value of the pixel at coordinates (p, q) in the second image is V(p, q), the corresponding reference pixel of the pixel at coordinates (p, q) in the reference image of the second image has coordinates (p+dp, q+dq), and the pixel value of that reference pixel is W(p+dp, q+dq); then the pixel value V_o(p, q) of the pixel at coordinates (p, q) in the denoised second image can be calculated by formula (2),
V_o(p,q) = (1-s(p,q))V(p,q) + s(p,q)W(p+dp,q+dq)   Formula (2)
where s(p, q) is the comprehensive filter coefficient, and dp, dq are the motion-vector components of the pixel at coordinates (p, q) in the second image.
Of course, if there are multiple reference frames, the denoised pixel value can be computed with formula (2) for each reference frame, and the mean taken as the final denoised pixel value.
By using images with a high SNR to guide images with a low SNR in determining motion vectors, the motion vectors of the low-SNR images are determined more accurately, so that denoising the low-SNR images with these motion vectors improves the denoising effect.
To further explain the image processing method provided by the present application, a specific embodiment is explained below.
The SNR of infrared images is low. When an infrared image is denoised with temporal filtering, the motion vector between the current infrared image to be denoised and the reference infrared image must be determined, and the pixels of the image to be denoised matched with the pixels of the reference infrared image according to the motion vector to determine the pixel values of the denoised infrared image. Because the infrared image SNR is low, the motion vector determined from the infrared images is inaccurate, leading to a poor noise-reduction effect.
Usually in daytime, the SNR of the images collected by a visible-light sensor is higher than that of an infrared sensor. To improve the noise reduction of infrared images, in this embodiment an infrared sensor and a visible-light sensor fixed on the same pan/tilt with fixed relative positions respectively collect images; because the visible-light images have a higher SNR, the visible-light images can be used to guide the infrared images in determining motion vectors, making the determined motion vectors of the infrared images more accurate, and the infrared images are then denoised. The specific denoising process is as follows:
1. Determination of the motion vector of the visible-light image
Determine the visible-light image and reference visible-light images corresponding to the infrared image to be denoised and its reference infrared images, where the infrared image to be denoised and the visible-light image are the images respectively collected by the infrared and visible-light sensors when the pan/tilt is at a certain position, and the reference infrared image and reference visible-light image are the images respectively collected by the two sensors when the pan/tilt is at other positions. The reference infrared image and reference visible-light image may be one or more frames.
Select a region of interest (ROI) from the middle position of the visible-light image and of each reference visible-light image respectively, compute row-direction and column-direction grayscale-histogram statistics to obtain each image's histogram values in the row and column directions, and use the grayscale-histogram correlation method to search for the global motion vector, independently in the row and column directions, performing correlation matching on the grayscale histograms to determine the motion vector between the visible-light image and each reference visible-light image. The specific process is as follows:
The offset computation of the histogram correlation method in the row/column direction reduces to a maximum cross-correlation problem between the visible-light image and the reference visible-light image. Suppose the histogram statistic length is x and the maximum computable offset in either direction is dx; then the correlation computation length is y = x - 2*dx. When computing the maximum cross-correlation coefficient, a central region of length y is taken from the reference visible-light image's histogram to take part in the cross-correlation computation, and the visible-light image's histogram is slid from left to right over length x, taking windows of length y for the correlation computation, yielding k = 2*dx + 1 cross-correlation coefficients together with the displacement offsets of the two computation vectors. The displacement offset corresponding to the maximum cross-correlation coefficient is the displacement offset in the row/column direction, i.e. the motion vector between the visible-light image and the reference visible-light image, and the confidence level of this motion vector is determined according to the absolute magnitude of the cross-correlation coefficient.
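The sliding-window offset search described above can be sketched in Python for one direction (row or column). This is a hedged illustration, not the patent's code: the function name `best_offset` and the use of the Pearson coefficient via `np.corrcoef` as the cross-correlation measure are assumptions.

```python
import numpy as np

def best_offset(cur_hist, ref_hist, max_shift):
    """Find the shift of cur_hist relative to ref_hist that maximizes the
    cross-correlation, per the histogram correlation method.

    cur_hist, ref_hist: 1-D grayscale projections of equal length x.
    max_shift: dx, the largest offset searched in either direction.
    Returns (offset, peak_correlation); the peak correlation can serve
    as a confidence measure for the motion vector."""
    x = len(cur_hist)
    y = x - 2 * max_shift                         # correlation window length
    ref_win = ref_hist[max_shift:max_shift + y]   # centre of the reference
    best = (0, -np.inf)
    for k in range(2 * max_shift + 1):            # slide left to right: 2*dx+1 windows
        cur_win = cur_hist[k:k + y]
        corr = np.corrcoef(cur_win, ref_win)[0, 1]
        if corr > best[1]:
            best = (k - max_shift, corr)
    return best
```

Running the search independently on the row and column projections gives the two components of the global motion vector.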
2. Determination of the motion vector of the infrared image
Because the infrared sensor and the visible-light sensor differ in position, resolution, and field of view, the motion vectors of the images they collect are not exactly the same. A transformation matrix can therefore be determined in advance according to the position parameters, resolutions, fields of view, etc. of the infrared and visible-light sensors; applying perspective and affine transformations to the determined motion vector between the visible-light image and the reference visible-light image through this transformation matrix yields the motion vector between the infrared image to be denoised and each frame of reference infrared image, and the confidence of the motion vector between the infrared image to be denoised and each reference infrared frame can be determined according to the confidence of the motion vector between the visible-light image and the reference visible-light image.
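The mapping of a visible-light motion vector into the infrared image can be sketched as follows. The patent only states that a pre-calibrated transformation matrix is applied; the 3x3 homogeneous form of the matrix and the anchor-point differencing used here (transform the anchor and the displaced point, then subtract) are illustrative assumptions:

```python
import numpy as np

def map_motion_vector(mv, H, anchor=(0.0, 0.0)):
    """Map a motion vector estimated in the visible-light image into the
    infrared image using a pre-calibrated 3x3 transform H (affine or
    perspective, derived from the sensors' positions, resolutions, FOVs).

    A translation is mapped by transforming the anchor point and the
    displaced point and differencing the results; for a true perspective
    H the result depends on where the anchor lies in the image."""
    def apply(pt):
        v = H @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]                 # homogeneous -> Cartesian
    a = np.asarray(anchor, dtype=float)
    return apply(a + np.asarray(mv, dtype=float)) - apply(a)
```

For a purely affine H the anchor cancels out and only the linear 2x2 part scales and rotates the vector, which matches the lower resolution and narrower FOV of the infrared sensor.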
3. Determination of the comprehensive filter coefficient
The comprehensive filter coefficient can be decided comprehensively from the global and local motion of the image. Global motion is motion caused by the change in sensor position; local motion is motion caused by the movement of photographed objects. The motion vector between the infrared image to be denoised and each frame of reference infrared image determined from the motion vector between the visible-light image and the reference visible-light image considers the global motion of the image, not the local motion of objects. Therefore, the corresponding pixel of each pixel of the infrared image to be denoised in the reference infrared image can be determined according to the motion vector between the infrared image to be denoised and each reference infrared frame, and the absolute value of the pixel-value difference between each pixel and the reference pixel determined; denote it H. Preset a first threshold lowthres and a second threshold highthres, both thresholds related to the image noise level, with highthres > lowthres, as well as a maximum time-domain filter coefficient ratio, 0 < ratio < 1. Then the first filter coefficient S1 can be calculated by formula (1).
Figure PCTCN2019130858-appb-000002
After S1 is determined, the second filter coefficient S2 is determined according to the confidence of the motion vector between the infrared image to be denoised and each reference infrared frame, and the comprehensive filter coefficient S is determined from S1 and S2, where S = S1*S2.
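The piecewise computation of S1 and the product S = S1*S2 can be sketched as follows. Formula (1) itself is only available as an image in this publication, so the linear ramp between the two thresholds is an assumption consistent with the surrounding description (S1 = ratio below lowthres, 0 above highthres, and the maximum coefficient scaled by a coefficient derived from highthres, H, and lowthres in between):

```python
def first_filter_coeff(H, lowthres, highthres, ratio):
    """Piecewise first filter coefficient S1 from the matching parameter H
    (absolute pixel difference or SAD).

    S1 = ratio                 when H < lowthres   (good match)
    S1 = 0                     when H > highthres  (poor match)
    In between, an assumed linear ramp:
    S1 = ratio * (highthres - H) / (highthres - lowthres)
    """
    if H < lowthres:
        return ratio
    if H > highthres:
        return 0.0
    return ratio * (highthres - H) / (highthres - lowthres)
```

Multiplying S1 by the confidence-derived S2 gives the comprehensive coefficient S used in the FIR blend.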
4. Denoising of the infrared image
After the comprehensive filter coefficient of each frame of reference infrared image is determined for each pixel of the infrared image to be denoised, the pixel value of each pixel of the denoised infrared image can be determined according to the pixel value of each pixel of the image to be denoised, the pixel value of the corresponding pixel in the reference image, and the filter coefficient. Temporal filtering can use FIR filtering: suppose the pixel value of the pixel at coordinates (p, q) in the infrared image to be denoised is V(p, q) and the pixel value of the pixel at coordinates (p+dp, q+dq) in the reference infrared image is W(p+dp, q+dq); then the pixel value of the corresponding pixel of the denoised infrared image can be calculated by the following formula,
V_o(p,q) = (1-s(p,q))V(p,q) + s(p,q)W(p+dp,q+dq)
where s(p, q) is the comprehensive filter coefficient, and dp, dq are the components of the determined motion vector.
In addition, the present application also provides an image processing device. As shown in FIG. 2, the device 20 includes a processor 21, a memory 22, and a computer program stored in the memory; when the processor executes the computer program, the following steps are implemented:
acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
In some embodiments, when the processor is configured to determine the motion vector between the second image in the second group of images and the reference image of the second image according to the motion vector between the first image in the first group of images and the reference image of the first image, this specifically includes:
mapping and transforming the motion vector between the first image and the reference image of the first image through a preset transformation matrix to obtain the motion vector between the second image and the reference image of the second image, wherein the transformation matrix is obtained based on the position parameters and resolutions of the first sensor and the second sensor.
In some embodiments, when the processor is configured to map and transform the motion vector between the first image and the reference image of the first image through a preset transformation matrix, this specifically includes:
performing an affine transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix; and/or
performing a perspective transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix.
In some embodiments, when the processor is configured to determine the motion vector between the first image and the reference image of the first image, this specifically includes:
determining multiple image regions from the middle positions of the first image and of the reference image of the first image respectively;
determining the grayscale histograms of the multiple image regions;
performing correlation matching on the grayscale histograms to determine the motion vector between the first image and the reference image of the first image.
In some embodiments, when the processor is configured to denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this specifically includes:
determining the comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image;
denoising the second image according to the comprehensive filter coefficients and the reference image of the second image.
In some embodiments, when the processor is configured to determine the comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this includes:
determining the corresponding pixel of each pixel of the second image in the reference image of the second image according to the motion vector between the second image and the reference image of the second image;
determining a first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel;
determining a second filter coefficient according to the confidence of the motion vector between the second image and the reference image of the second image;
obtaining the comprehensive filter coefficient from the first filter coefficient and the second filter coefficient.
In some embodiments, the comprehensive filter coefficient is equal to the product of the first filter coefficient and the second filter coefficient.
In some embodiments, when the processor is configured to determine the first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel, this includes:
determining a characterizing parameter of the matching degree according to the pixel values of each pixel of the second image and of the corresponding pixel;
determining the first filter coefficient based on the characterizing parameter, a preset first threshold, a preset second threshold, and a preset maximum filter coefficient, where the preset first threshold is smaller than the preset second threshold.
In some embodiments, the characterizing parameter includes:
the absolute value of the pixel-value difference between each pixel of the second image and the corresponding pixel; and/or
the sum of the absolute values of the pixel-value differences between the pixels of each image block of the second image and the pixels of the corresponding image block in the reference image.
In some embodiments, when the processor determines the first filter coefficient based on the characterizing parameter, the preset first threshold, the preset second threshold, and the preset maximum filter coefficient, this includes:
if the characterizing parameter is smaller than the preset first threshold, the first filter coefficient is equal to the preset maximum filter coefficient;
if the characterizing parameter is greater than the preset second threshold, the first filter coefficient is equal to 0;
if the characterizing parameter is greater than the preset first threshold and smaller than the preset second threshold, the first filter coefficient is equal to the product of the maximum filter coefficient and a specified coefficient, the specified coefficient being obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold.
In some embodiments, when a preset condition is met, the first group of images are visible-light images and the second group of images are one of infrared images, ultraviolet images, or TOF images; when the preset condition is not met, the first group of images are infrared images and the second group of images are visible-light images.
In some embodiments, the preset condition is that the current moment is within a preset daytime period or that the current environmental visibility is greater than a preset visibility threshold.
In some embodiments, the first sensor and the second sensor are fixed on the same pan/tilt.
Further, the present application also provides an image processing device. As shown in FIG. 3, the device includes a first sensor 31, a second sensor 32, a processor 33, a memory 34, and a computer program stored in the memory. The first sensor and the second sensor are fixed on a pan/tilt; FIG. 4 is a schematic diagram of the first sensor and the second sensor fixed on the same pan/tilt in an embodiment of the application. The first sensor and the second sensor rotate together with the pan/tilt, their relative position always remaining fixed. The first sensor 31, the second sensor 32, and the pan/tilt communicate with the processor 33 through a bus, and the processor 33 communicates with the memory 34 through the bus; the processor 33 reads the computer program from the memory 34, then controls the first sensor 31 and the second sensor 32 to collect images, and controls the pan/tilt to rotate to a specified position. The signal-to-noise ratio of the images collected by the first sensor is greater than that of the images collected by the second sensor; the first image sensor is used to collect the first group of images and the second image sensor is used to collect the second group of images, and the processor 33 is configured to:
determine a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
For the specific denoising process, reference can be made to the embodiments of the image processing method above, which will not be repeated here.
In some embodiments, the image processing device may be a drone, a camera, a car, an airplane, or a boat. For example, a drone, car, airplane, or ship can be equipped with two sensors whose collected images differ in resolution, such as an infrared sensor and a visible-light sensor, with the image collected by the visible-light sensor guiding the denoising of the image collected by the infrared sensor. Of course, it can also be a camera with dual sensors, the image resolutions collected by the two sensors being different.
It should be noted that the connection, positional, and placement relationships of the pan/tilt and the two sensors shown in FIG. 4 are only an example; in other implementations, the connection, positional, and placement relationships of the pan/tilt and the two sensors can all be adjusted.
Correspondingly, an embodiment of this specification also provides a computer storage medium in which a program is stored; when the program is executed by a processor, the image processing method of any of the above embodiments is implemented.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, optical storage, etc.). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact-disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by computing devices.
As the device embodiments basically correspond to the method embodiments, for the relevant parts reference can be made to the description of the method embodiments. The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The method and device provided by the embodiments of the present invention have been introduced in detail above. Specific examples have been used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (29)

  1. An image processing method, characterized in that the method comprises:
    acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
    determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
    denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
  2. The image processing method according to claim 1, characterized in that determining the motion vector between the second image in the second group of images and the reference image of the second image according to the motion vector between the first image in the first group of images and the reference image of the first image comprises:
    mapping and transforming the motion vector between the first image and the reference image of the first image through a preset transformation matrix to obtain the motion vector between the second image and the reference image of the second image, wherein the transformation matrix is obtained based on the position parameters and resolutions of the first sensor and the second sensor.
  3. The image processing method according to claim 2, characterized in that mapping and transforming the motion vector between the first image and the reference image of the first image through a preset transformation matrix comprises:
    performing an affine transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix; and/or
    performing a perspective transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix.
  4. The image processing method according to claim 2 or 3, characterized in that determining the motion vector between the first image and the reference image of the first image comprises:
    determining multiple image regions from the middle positions of the first image and of the reference image of the first image respectively;
    determining the grayscale histograms of the multiple image regions;
    performing correlation matching on the grayscale histograms to determine the motion vector between the first image and the reference image of the first image.
  5. The image processing method according to any one of claims 1-4, characterized in that denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image comprises:
    determining a comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image;
    denoising the second image according to the comprehensive filter coefficients and the reference image of the second image.
  6. The image processing method according to claim 5, characterized in that determining the comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image comprises:
    determining the corresponding pixel of each pixel of the second image in the reference image of the second image according to the motion vector between the second image and the reference image of the second image;
    determining a first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel;
    determining a second filter coefficient according to the confidence of the motion vector between the second image and the reference image of the second image;
    obtaining the comprehensive filter coefficient from the first filter coefficient and the second filter coefficient.
  7. The image processing method according to claim 6, characterized in that the comprehensive filter coefficient is equal to the product of the first filter coefficient and the second filter coefficient.
  8. The image processing method according to claim 6 or 7, characterized in that determining the first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel comprises:
    determining a characterizing parameter of the matching degree according to the pixel values of each pixel of the second image and of the corresponding pixel;
    determining the first filter coefficient based on the characterizing parameter, a preset first threshold, a preset second threshold, and a preset maximum filter coefficient, wherein the preset first threshold is smaller than the preset second threshold.
  9. The image processing method according to claim 8, characterized in that the characterizing parameter comprises:
    the absolute value of the pixel-value difference between each pixel of the second image and the corresponding pixel; and/or
    the sum of the absolute values of the pixel-value differences between the pixels of each image block of the second image and the pixels of the corresponding image block in the reference image.
  10. The image processing method according to claim 8 or 9, characterized in that determining the first filter coefficient based on the characterizing parameter, the preset first threshold, the preset second threshold, and the preset maximum filter coefficient comprises:
    if the characterizing parameter is smaller than the preset first threshold, the first filter coefficient is equal to the preset maximum filter coefficient;
    if the characterizing parameter is greater than the preset second threshold, the first filter coefficient is equal to 0;
    if the characterizing parameter is greater than the preset first threshold and smaller than the preset second threshold, the first filter coefficient is equal to the product of the maximum filter coefficient and a specified coefficient, the specified coefficient being obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold.
  11. The image processing method according to any one of claims 1-10, characterized in that, when a preset condition is met, the first group of images are visible-light images and the second group of images are one of infrared images, ultraviolet images, or TOF images; when the preset condition is not met, the first group of images are infrared images and the second group of images are visible-light images.
  12. The image processing method according to any one of claims 1-11, characterized in that the preset condition is that the current moment is within a preset daytime period or that the current environmental visibility is greater than a preset visibility threshold.
  13. The image processing method according to any one of claims 1-12, characterized in that the first sensor and the second sensor are fixed on the same pan/tilt.
  14. An image processing device, characterized in that the device comprises a processor, a memory, and a computer program stored in the memory; when the processor executes the computer program, the following steps are implemented:
    acquiring a first group of images and a second group of images, the first group of images being collected by a first sensor, the second group of images being collected by a second sensor, the relative position of the first sensor and the second sensor being fixed, and the signal-to-noise ratio of the images collected by the first sensor being greater than the signal-to-noise ratio of the images collected by the second sensor;
    determining a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
    denoising the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
  15. The image processing device according to claim 14, characterized in that, when the processor is configured to determine the motion vector between the second image in the second group of images and the reference image of the second image according to the motion vector between the first image in the first group of images and the reference image of the first image, this specifically comprises:
    mapping and transforming the motion vector between the first image and the reference image of the first image through a preset transformation matrix to obtain the motion vector between the second image and the reference image of the second image, wherein the transformation matrix is obtained based on the position parameters and resolutions of the first sensor and the second sensor.
  16. The image processing device according to claim 15, characterized in that, when the processor is configured to map and transform the motion vector between the first image and the reference image of the first image through a preset transformation matrix, this specifically comprises:
    performing an affine transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix; and/or
    performing a perspective transformation on the motion vector between the first image and the reference image of the first image through the preset transformation matrix.
  17. The image processing device according to claim 15 or 16, characterized in that, when the processor is configured to determine the motion vector between the first image and the reference image of the first image, this specifically comprises:
    determining multiple image regions from the middle positions of the first image and of the reference image of the first image respectively;
    determining the grayscale histograms of the multiple image regions;
    performing correlation matching on the grayscale histograms to determine the motion vector between the first image and the reference image of the first image.
  18. The image processing device according to any one of claims 14-17, characterized in that, when the processor is configured to denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this specifically comprises:
    determining a comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image;
    denoising the second image according to the comprehensive filter coefficients and the reference image of the second image.
  19. The image processing device according to claim 18, characterized in that, when the processor is configured to determine the comprehensive filter coefficient of each pixel of the second image according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image, this comprises:
    determining the corresponding pixel of each pixel of the second image in the reference image of the second image according to the motion vector between the second image and the reference image of the second image;
    determining a first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel;
    determining a second filter coefficient according to the confidence of the motion vector between the second image and the reference image of the second image;
    obtaining the comprehensive filter coefficient from the first filter coefficient and the second filter coefficient.
  20. The image processing device according to claim 19, characterized in that the comprehensive filter coefficient is equal to the product of the first filter coefficient and the second filter coefficient.
  21. The image processing device according to claim 19 or 20, characterized in that, when the processor is configured to determine the first filter coefficient according to the degree of matching between each pixel of the second image and the corresponding pixel, this comprises:
    determining a characterizing parameter of the matching degree according to the pixel values of each pixel of the second image and of the corresponding pixel;
    determining the first filter coefficient based on the characterizing parameter, a preset first threshold, a preset second threshold, and a preset maximum filter coefficient, wherein the preset first threshold is smaller than the preset second threshold.
  22. The image processing device according to claim 21, characterized in that the characterizing parameter comprises:
    the absolute value of the pixel-value difference between each pixel of the second image and the corresponding pixel; and/or
    the sum of the absolute values of the pixel-value differences between the pixels of each image block of the second image and the pixels of the corresponding image block in the reference image.
  23. The image processing device according to claim 21 or 22, characterized in that, when the processor determines the first filter coefficient based on the characterizing parameter, the preset first threshold, the preset second threshold, and the preset maximum filter coefficient, this comprises:
    if the characterizing parameter is smaller than the preset first threshold, the first filter coefficient is equal to the preset maximum filter coefficient;
    if the characterizing parameter is greater than the preset second threshold, the first filter coefficient is equal to 0;
    if the characterizing parameter is greater than the preset first threshold and smaller than the preset second threshold, the first filter coefficient is equal to the product of the maximum filter coefficient and a specified coefficient, the specified coefficient being obtained based on the preset second threshold, the characterizing parameter, and the preset first threshold.
  24. The image processing device according to any one of claims 14-23, characterized in that, when a preset condition is met, the first group of images are visible-light images and the second group of images are one of infrared images, ultraviolet images, or TOF images; when the preset condition is not met, the first group of images are infrared images and the second group of images are visible-light images.
  25. The image processing device according to any one of claims 14-24, characterized in that the preset condition is that the current moment is within a preset daytime period or that the current environmental visibility is greater than a preset visibility threshold.
  26. The image processing device according to any one of claims 14-25, characterized in that the first sensor and the second sensor are fixed on the same pan/tilt.
  27. An image processing device, characterized in that the device comprises a pan/tilt, a first sensor, a second sensor, a processor, a memory, and a computer program stored in the memory; the first image sensor and the second image sensor are fixed on the pan/tilt, the signal-to-noise ratio of the images collected by the first sensor is greater than the signal-to-noise ratio of the images collected by the second sensor, the first image sensor is used to collect a first group of images, the second image sensor is used to collect a second group of images, and the processor is configured to:
    determine a motion vector between a second image in the second group of images and a reference image of the second image according to a motion vector between a first image in the first group of images and a reference image of the first image;
    denoise the second image in the second group of images according to the motion vector between the second image and the reference image of the second image and according to the reference image of the second image.
  28. The image processing device according to claim 27, characterized in that the image processing device comprises a drone, a camera, a car, an airplane, or a ship.
  29. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium; when the computer program is executed by a processor, the image processing method of any one of claims 1-13 is implemented.
PCT/CN2019/130858 2019-12-31 2019-12-31 Image processing method, device and storage medium WO2021134642A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980010510.8A CN111699511A (zh) 2019-12-31 2019-12-31 Image processing method, device and storage medium
PCT/CN2019/130858 WO2021134642A1 (zh) 2019-12-31 2019-12-31 Image processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130858 WO2021134642A1 (zh) 2019-12-31 2019-12-31 Image processing method, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021134642A1 true WO2021134642A1 (zh) 2021-07-08

Family

ID=72476441

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130858 WO2021134642A1 (zh) 2019-12-31 2019-12-31 图像处理方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN111699511A (zh)
WO (1) WO2021134642A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100051A (zh) * 2022-06-14 2022-09-23 Zhejiang Huagan Technology Co., Ltd. Image noise reduction method and device, and electronic device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565603B (zh) * 2020-11-30 2022-05-10 Vivo Mobile Communication Co., Ltd. Image processing method and device, and electronic device
CN113191965B (zh) * 2021-04-14 2022-08-09 Zhejiang Dahua Technology Co., Ltd. Image noise reduction method, device and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100157073A1 (en) * 2008-12-22 2010-06-24 Yuhi Kondo Image processing apparatus, image processing method, and program
CN102201113A (zh) * 2010-03-23 2011-09-28 Sony Corporation Image processing device, image processing method, and program
CN103606132A (zh) * 2013-10-31 2014-02-26 Xidian University Multi-frame digital image denoising method based on joint spatial and temporal filtering
CN107005623A (zh) * 2014-10-16 2017-08-01 Samsung Electronics Co., Ltd. Method and apparatus for image processing
CN108270945A (zh) * 2018-02-06 2018-07-10 Shanghai Tongtu Semiconductor Technology Co., Ltd. Motion-compensated denoising method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5588067A (en) * 1993-02-19 1996-12-24 Peterson; Fred M. Motion detection and image acquisition apparatus and method of detecting the motion of and acquiring an image of an object
JP2012049603A (ja) * 2010-08-24 2012-03-08 Olympus Corp Image processing device and image processing program

Also Published As

Publication number Publication date
CN111699511A (zh) 2020-09-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19958412

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19958412

Country of ref document: EP

Kind code of ref document: A1