WO2022179251A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022179251A1
WO2022179251A1 (PCT/CN2021/137515, CN2021137515W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
light images
pixel value
visible light
Prior art date
Application number
PCT/CN2021/137515
Other languages
English (en)
French (fr)
Inventor
田毅
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2022179251A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10048 Infrared image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • The present application relates to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
  • At present, there are two main classes of video noise reduction methods: spatial-domain noise reduction and temporal-domain noise reduction. The spatial-domain method processes only the current video frame, while the temporal-domain method needs to refer to a video frame other than the current video frame (referred to as a reference frame for short) to process the current video frame.
  • For the temporal-domain method, the reference frame needs to be registered with the current video frame before noise reduction processing, and current registration methods suffer from a high registration error rate.
  • the embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium.
  • An embodiment of the present application provides an image processing method, the method including: acquiring multiple visible light images and multiple invisible light images; determining image registration parameters based on the multiple invisible light images; and registering the multiple visible light images based on the image registration parameters.
  • An embodiment of the present application provides an image processing apparatus, and the apparatus includes:
  • an acquiring unit configured to acquire multiple visible light images and multiple invisible light images;
  • a parameter determination unit configured to determine image registration parameters based on the multiple invisible light images; and
  • an image registration unit configured to register the multiple visible light images based on the image registration parameters.
  • An embodiment of the present application further provides an electronic device, including a memory and a processor, where a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is caused to execute the image processing method described in the foregoing embodiments.
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image processing method described in the foregoing embodiments is implemented.
  • FIG. 1 is a flowchart of temporal noise reduction provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the correspondence between a visible light image sequence and an invisible light image sequence provided by an embodiment of the present application;
  • FIG. 4 is a flowchart of image fusion according to a motion mask image provided by an embodiment of the present application;
  • FIG. 5 is a first schematic diagram of a principle provided by an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of calculating a registration transformation matrix provided by an embodiment of the present application;
  • FIG. 7 is a second schematic diagram of a principle provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of determining registration transformation parameters by region provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural composition diagram of an image processing apparatus provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural composition diagram of an electronic device according to an embodiment of the present application.
  • The spatial-domain noise reduction method operates on a single frame of image only: neighborhood analysis and processing are performed within the single frame to filter out noise. As examples, spatial-domain noise reduction can be implemented with a bilateral filtering algorithm or a non-local means (Non-Local Means) algorithm.
  • The temporal-domain noise reduction method analyzes and processes the current video frame with reference to the information of a video frame other than the current one (the reference frame), for example taking the previous frame of the current frame as the reference frame. Since temporal noise flickers over time while the real information stays unchanged, adding a reference frame makes it easier to distinguish real image information from noise, so noise can be removed more accurately.
  • FIG. 1 is a typical flowchart of video temporal noise reduction. As shown in FIG. 1, image registration is performed on the reference frame so that the registered reference frame is aligned with the current frame; the registered reference frame and the current frame then undergo image fusion processing, which realizes the denoising process and produces the denoised output frame.
  • The denoised output frame can be understood as the denoised image of the current frame.
  • In the flow shown in FIG. 1, image registration is the key step. In consecutive video frames, the shooting position may move from frame to frame, and each frame may contain moving objects, so the image information may be offset between two adjacent frames. If the relative position of an object is inconsistent across two adjacent frames, the subsequent image fusion processing cannot identify the same point on the two frames.
  • The image registration algorithm is the key to handling this problem: it analyzes the typical features of two adjacent frames and aligns their image information according to feature matching.
  • In some schemes, the image registration algorithm adopts feature point matching, where feature points are points with relatively distinctive features in the image, such as corner points.
  • The disadvantage of this scheme is that in a dark scene, such as a pitch-black street at night, the picture information in the video is very dark and it is difficult to extract the real feature points; moreover, the noise in a night scene is very large and interferes with judging the real image information. This leads to registration errors, which in turn cause serious ghosting and similar problems in the video after temporal noise reduction.
  • In the technical solutions of the embodiments of the present application, a first image sensor and a second image sensor are used to simultaneously collect a visible light image sequence and an invisible light image sequence, and image registration parameters suitable for the visible light image sequence are determined from the invisible light image sequence. This improves the image registration accuracy of the visible light image sequence, which in turn helps improve its signal-to-noise ratio.
  • the image processing method provided in the embodiment of the present application is applied to an image processing apparatus, and the image processing apparatus may be set on an electronic device.
  • the electronic device is, for example, a mobile phone, a tablet computer, a wearable device, an interactive advertising machine, a game console, a desktop computer, an all-in-one computer, a vehicle-mounted terminal, and the like.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 2 , the image processing method includes the following steps:
  • Step 201: Acquire multiple visible light images and multiple invisible light images.
  • In the embodiments of the present application, the electronic device has a first image sensor and a second image sensor, where the first image sensor is used to collect visible light images and the second image sensor is used to collect invisible light images. In some optional implementations, the invisible light image is an infrared image.
  • In some optional embodiments, the first image sensor is a Complementary Metal-Oxide-Semiconductor (CMOS) sensor (Sensor), and the second image sensor is an infrared (Infrared, IR) sensor (Sensor).
  • The CMOS Sensor is used to collect (also called capture or shoot) visible light images. Specifically, the CMOS Sensor collects the visible light bands in the scene, namely the red (R), green (G), and blue (B) bands. After the visible light bands are processed by a processor (such as an image signal processor (Image Signal Processing, ISP)), a colored image is presented, called a visible light image. As an example, the data format of the visible light image can be the YUV format.
  • The IR Sensor is used to collect invisible light images. Specifically, the IR Sensor collects infrared light in the scene, whose frequency is lower than that of red light. Since any matter above absolute zero emits infrared radiation, the IR Sensor can collect the infrared rays produced by the objects in the environment even in an extremely dark environment. It follows that the infrared images collected by the IR Sensor can contain object information in darker environments.
  • In the embodiments of the present application, acquiring multiple visible light images and multiple invisible light images includes: acquiring multiple visible light images collected by the first image sensor and multiple invisible light images collected by the second image sensor, where the multiple visible light images and the multiple invisible light images correspond to each other according to collection time.
  • Here, the correspondence according to collection time means that a visible light image and an invisible light image collected at the same moment, or within the same time period, correspond to each other.
  • For example, referring to FIG. 3, the first image sensor and the second image sensor simultaneously acquire image sequences, yielding a visible light image sequence (i.e., the original image sequence) and an invisible light image sequence (e.g., an infrared image sequence). At time t1, the first image sensor collects visible light image 1 and the second image sensor collects invisible light image 1, so visible light image 1 corresponds to invisible light image 1. At time t2, the first image sensor collects visible light image 2 and the second image sensor collects invisible light image 2, so visible light image 2 corresponds to invisible light image 2. At time t3, the first image sensor collects visible light image 3 and the second image sensor collects invisible light image 3, so visible light image 3 corresponds to invisible light image 3. By analogy, the multiple visible light images collected by the first image sensor and the multiple invisible light images collected by the second image sensor correspond according to collection time.
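  • The correspondence by collection time can be illustrated with a small sketch. The following is a minimal Python illustration, assuming frames arrive as timestamped objects; the `Frame` type and `pair_by_capture_time` helper are hypothetical names, not part of the described method:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float  # collection time, e.g. t1, t2, t3
    data: object      # image pixels, e.g. a numpy array

def pair_by_capture_time(visible_seq, infrared_seq, tol=1e-3):
    """Pair each visible light frame with the invisible light (infrared)
    frame collected at the same moment, as in the FIG. 3 correspondence."""
    pairs = []
    for v in visible_seq:
        # Both sensors trigger simultaneously, so a near-equal timestamp exists.
        ir = min(infrared_seq, key=lambda f: abs(f.timestamp - v.timestamp))
        if abs(ir.timestamp - v.timestamp) <= tol:
            pairs.append((v, ir))
    return pairs
```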
  • Step 202: Determine image registration parameters based on the multiple invisible light images.
  • Here, the visible light images (i.e., the original images) collected by the first image sensor have a relatively poor signal-to-noise ratio in a dark scene (such as a night scene), and dark-region information in a visible light image has low visibility; if the multiple visible light images output by the first image sensor were registered directly (i.e., registered by relative position), the registration error rate could be high. The invisible light images collected by the second image sensor, in contrast, keep dark-region information highly visible whether the scene is bright or dark, and the second image sensor can still collect the real information of a scene that is hard to perceive visually. Since the first image sensor and the second image sensor collect images at the same time, the relative positional relationship of the multiple visible light images is similar to that of the multiple invisible light images.
  • In view of the above, the image registration parameters can be determined based on the multiple invisible light images, and the multiple visible light images can then be registered using those image registration parameters.
  • The technical solutions of the embodiments of the present application are described below taking "multiple" images to be "two" images. It should be noted that schemes with more than two images are equally applicable to the technical solutions of the embodiments of the present application.
  • In some optional implementations, the multiple visible light images include a first image and a second image, and the multiple invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image.
  • As an example, the first image sensor and the second image sensor respectively collect the first image and the third image at the same moment (e.g., a first moment), so the first image corresponds to the third image; the two sensors respectively collect the second image and the fourth image at the same moment (e.g., a second moment), so the second image corresponds to the fourth image.
  • In the above scheme, the collection time of the first image is after that of the second image, and the collection time of the third image is after that of the fourth image. As an example, the first image and the third image are the visible light image and the invisible light image currently collected by the first image sensor and the second image sensor respectively, while the second image and the fourth image are the visible light image and the invisible light image previously collected by the two sensors respectively.
  • As an example, the second image is the previous frame of the first image, so the first image may be referred to as the first current frame and the second image as the first reference frame; the fourth image is the previous frame of the third image, so the third image may be referred to as the second current frame and the fourth image as the second reference frame. The second current frame is, for example, the infrared current frame, and the second reference frame is, for example, the infrared reference frame.
  • In the embodiments of the present application, an image registration parameter refers to the registration parameter of one image relative to another image. For example, if the image registration parameter is that of image A relative to image B, then multiplying image A by the registration parameter yields the registered image A, and the registered image A is aligned with image B.
  • Since the multiple visible light images collected by the first image sensor and the multiple invisible light images collected by the second image sensor correspond according to collection time, the image registration parameters corresponding to the multiple invisible light images are consistent with those corresponding to the multiple visible light images. The image registration parameters can therefore be determined from the multiple invisible light images and then applied to the multiple visible light images, thereby registering the multiple visible light images. How to determine the image registration parameters from multiple invisible light images is described below.
  • In the embodiments of the present application, a first feature point set is extracted from the third image and a second feature point set is extracted from the fourth image; at least one pair of feature points having a matching relationship is determined from the first feature point set and the second feature point set; and the image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
  • For example, the third image is the infrared current frame and the fourth image is the infrared reference frame, where the infrared reference frame may be the previous frame of the infrared current frame. 1) Feature points are extracted from the infrared current frame and the infrared reference frame, yielding the first feature point set in the infrared current frame and the second feature point set in the infrared reference frame. The numbers of feature points in the two sets may be the same or different; the technical solutions of the embodiments of the present application place no restriction on this. In a specific implementation, a feature point extraction algorithm such as the Harris algorithm or the scale-invariant feature transform (SIFT) algorithm may be used. As an example, the principle of a feature point extraction algorithm is to judge, from the neighborhood information of each pixel in the image, whether the pixel's position is a point with distinctive features, such as a point on the corner or edge of an object, and thereby decide whether the pixel is a feature point. 2) The feature points extracted from the infrared current frame and the infrared reference frame are matched, and the image registration parameters are calculated from the matched feature points.
  • Here, the image registration parameters can be embodied as a matrix, called the registration transformation matrix. The registration transformation matrix is a 3×3 matrix, i.e., it includes 9 parameters. As an example, the registration transformation matrix is an affine transformation matrix whose last row may be (0, 0, 1). An example of the registration transformation matrix is given below:

    $$M = \begin{pmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{pmatrix}$$

  • where a, b, c, d, e, f are the parameters of the registration transformation matrix that need to be determined.
  • Suppose feature point A in the first feature point set matches feature point a in the second feature point set, the coordinates of feature point A are (x1, y1, 1), and the coordinates of feature point a are (x2, y2, 1). Then the coordinates of these two feature points satisfy the following formula:

    $$\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = M \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}$$

  • By fitting multiple pairs of matched feature points to the above formula, each parameter in the registration transformation matrix can be determined, thereby determining the registration transformation matrix.
  • In a specific implementation, an algorithm such as the random sample consensus (RANSAC) algorithm can be used to match the feature points and calculate the registration transformation matrix. The principle of the RANSAC algorithm is to randomly sample feature points in the two frames, match the coordinate vectors corresponding to the feature points, and fit the registration transformation matrix from the coordinate vectors of the matched feature points.
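  • As a concrete illustration of steps 1) and 2), the following minimal Python sketch estimates the registration transformation matrix from the infrared current frame and infrared reference frame. It assumes OpenCV; ORB features stand in for the unspecified corner-style feature extractor, and `cv2.estimateAffine2D` performs the RANSAC fitting described above:

```python
import cv2
import numpy as np

def estimate_registration_matrix(ir_current, ir_reference):
    """Estimate the 3x3 registration transformation matrix mapping
    reference-frame coordinates (x2, y2) to current-frame coordinates (x1, y1)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_cur, des_cur = orb.detectAndCompute(ir_current, None)    # first feature point set
    kp_ref, des_ref = orb.detectAndCompute(ir_reference, None)  # second feature point set

    # Pairs of feature points having a matching relationship.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_cur)

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches])  # (x2, y2)
    dst = np.float32([kp_cur[m.trainIdx].pt for m in matches])  # (x1, y1)

    # RANSAC-fitted affine part [a b c; d e f]; the last row is fixed at (0, 0, 1).
    affine, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return np.vstack([affine, [0.0, 0.0, 1.0]])
```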
  • In some optional implementations, the first image sensor and the second image sensor share the same lens assembly (referred to as a lens group for short). In this case, it can be understood that the first image sensor and the second image sensor are at the same position on the electronic device, the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor have no positional deviation, and the visible light images can be registered directly according to the image registration parameters determined from the invisible light images.
  • In some optional implementations, the first image sensor and the second image sensor use different lens assemblies. In this case, the positions of the two sensors on the electronic device differ slightly, so there is a positional deviation between the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor. To compensate for this deviation, the image registration parameters determined from the invisible light images need to be adjusted. Specifically, the image registration parameters are adjusted based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
  • Here, the calibration data can be the factory calibration data of the electronic device; it is related to the relative positional relationship between the first image sensor and the second image sensor. Adjusting the image registration parameters with this calibration data aligns the invisible light image positions to the visible light images, yielding image registration parameters that can be used for the visible light images.
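  • The exact form of the calibration adjustment is not specified. One plausible sketch, assuming the factory calibration is stored as a 3×3 homography C that maps infrared coordinates to visible light coordinates:

```python
import numpy as np

def adjust_for_calibration(M_ir, C):
    """Transfer an IR-domain registration matrix M_ir to the visible light
    domain. The composition C @ M_ir @ C^-1 is an assumption; the text only
    states that the parameters are adjusted based on calibration data."""
    return C @ M_ir @ np.linalg.inv(C)
```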
  • Step 203: Register the multiple visible light images based on the image registration parameters.
  • In some optional implementations, the image registration parameters are determined based on the third image and the fourth image among the multiple invisible light images, where the third image corresponds to the first image among the multiple visible light images and the fourth image corresponds to the second image among the multiple visible light images. The second image is transformed based on the image registration parameters to obtain a fifth image registered with the first image.
  • Here, the fifth image is the image obtained after the second image is transformed, and it is aligned with the first image.
  • For example, the first image is the current frame output by the CMOS Sensor, and the second image is the reference frame output by the CMOS Sensor, where the reference frame can be the previous frame of the current frame. The reference frame is transformed according to the registration transformation matrix to obtain a reference frame aligned with the current frame. Specifically, the coordinates of each pixel in the reference frame are multiplied by the registration transformation matrix to obtain new coordinates, and the transformation is completed once each pixel has been rearranged according to its new coordinates; at this point, the alignment of the current frame and the reference frame is complete.
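  • A minimal sketch of this transformation step, assuming OpenCV and the 3×3 matrix computed above:

```python
import cv2

def register_reference_frame(reference, M):
    """Rearrange every pixel of the reference frame according to its new
    coordinates M @ (x, y, 1)^T, yielding a frame aligned with the current frame."""
    h, w = reference.shape[:2]
    # warpPerspective applies the full 3x3 matrix; since the last row is
    # (0, 0, 1), this matches warpAffine with the top two rows M[:2].
    return cv2.warpPerspective(reference, M, (w, h))
```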
  • After the registration of the first image and the second image is completed by the above technical solutions, the first image and the registered second image (i.e., the fifth image) may be fused to complete temporal denoising. Specifically, image fusion processing is performed on the fifth image and the first image to obtain a sixth image, where the sixth image is the first image with noise removed.
  • In some optional implementations, the image fusion processing of the fifth image and the first image may be completed in the following manners:
  • Mode 1: average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
  • For example, suppose the image resolution is N×M, where N and M are positive integers. Then the pixel value of the pixel at coordinates (xi, yj, 1) in the fifth image and the pixel value of the pixel at coordinates (xi, yj, 1) in the first image are averaged to obtain the pixel value of the pixel at coordinates (xi, yj, 1) in the sixth image, where i is a positive integer greater than or equal to 1 and less than or equal to N, and j is a positive integer greater than or equal to 1 and less than or equal to M.
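  • Mode 1 is plain per-pixel averaging; a minimal NumPy sketch:

```python
import numpy as np

def fuse_average(fifth, first):
    """Mode 1: the sixth image is the per-pixel mean of the registered
    reference frame (fifth image) and the current frame (first image)."""
    mean = (fifth.astype(np.float32) + first.astype(np.float32)) / 2
    return mean.astype(first.dtype)
```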
  • Mode 2: generate a motion mask (Mask) image, where the motion mask image is used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
  • Here, for more accurate denoising, motion detection may be performed on the images: by comparing the fifth image with the first image, the motion area and the non-motion area in the image are determined. In a specific implementation, a mask image is generated to indicate which parts of the image are motion areas and which are non-motion areas.
  • In some optional implementations, the mask image may be generated as follows: compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image. The area formed by pixels of the motion mask image whose values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose values are less than the threshold is the non-motion area.
  • For example, suppose the image resolution is N×M, where N and M are positive integers, i is a positive integer from 1 to N, and j is a positive integer from 1 to M. If the coordinates (xi, yj, 1) lie in the motion area, the pixel value at (xi, yj, 1) in the sixth image equals the pixel value at (xi, yj, 1) in the first image; if the coordinates (xi, yj, 1) lie in the non-motion area, the pixel value at (xi, yj, 1) in the sixth image equals the average of the pixel value at (xi, yj, 1) in the fifth image and the pixel value at (xi, yj, 1) in the first image.
  • In some optional implementations, the mask image may instead be generated as follows: compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image, and compare the difference with a threshold. If the difference is greater than or equal to the threshold, set the corresponding pixel of the motion mask image to 1; if the difference is less than the threshold, set it to 0. The area formed by pixels with value 1 in the motion mask image is the motion area, and the area formed by pixels with value 0 is the non-motion area.
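  • A sketch of this second mask-generation variant, assuming single-channel (grayscale) inputs and OpenCV; the threshold value is an assumed tuning parameter:

```python
import cv2

def motion_mask(fifth, first, threshold=25):
    """Binary motion mask: 1 where |fifth - first| >= threshold (motion area),
    0 elsewhere (non-motion area)."""
    diff = cv2.absdiff(fifth, first)
    # THRESH_BINARY sets pixels strictly greater than the threshold argument,
    # so threshold - 1 realizes the ">= threshold" rule for integer images.
    _, mask = cv2.threshold(diff, threshold - 1, 1, cv2.THRESH_BINARY)
    return mask
```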
  • After the motion area and the non-motion area are determined by the above scheme, the image fusion of the fifth image and the first image proceeds as follows: the motion area uses only the pixel values of the first image, while the non-motion area averages the pixel values of the first image with the corresponding pixel values of the fifth image.
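  • Combining the mask with the two fusion rules gives Mode 2; a minimal sketch continuing the hypothetical helpers above:

```python
import numpy as np

def fuse_with_mask(fifth, first, mask):
    """Mode 2: motion areas keep the current frame's pixel values; non-motion
    areas take the average of the fifth image and the first image."""
    averaged = (fifth.astype(np.float32) + first.astype(np.float32)) / 2
    out = np.where(mask.astype(bool), first.astype(np.float32), averaged)
    return out.astype(first.dtype)
```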
  • For example, referring to FIG. 4, the second image is registered and transformed to obtain the fifth image, which is aligned with the first image; a motion mask image can be obtained from the difference between the fifth image and the first image, and the first image and the fifth image are then fused with reference to this motion mask image, following the fusion rule described above.
  • The technical solutions of the embodiments of the present application are further described below with reference to FIGS. 5 to 7. In the related embodiments, the description takes as an example the case where the first image sensor is a CMOS Sensor, with the first image and the second image it collects called the current frame and the reference frame respectively, and the second image sensor is an IR Sensor, with the third image and the fourth image it collects called the infrared current frame and the infrared reference frame respectively.
  • Referring to FIG. 5, the CMOS Sensor outputs the original image sequence (i.e., the original video) and the IR Sensor outputs the infrared image sequence. Feature point matching is performed on the infrared current frame and the infrared reference frame output by the IR Sensor, and the registration transformation matrix is calculated from the coordinate information of the matched feature points. The reference frame output by the CMOS Sensor is transformed by the registration transformation matrix to obtain the registered reference frame, which is aligned with the current frame output by the CMOS Sensor. The current frame output by the CMOS Sensor and the registered reference frame then undergo image fusion processing to obtain the denoised output frame, completing the denoising of the current frame. The current frame is continuously updated over time, so continuous denoised video frames can be output.
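  • Putting the pieces together, the per-frame loop of FIG. 5 might look like the following sketch; all helpers are the hypothetical ones sketched earlier, and Mode 2 fusion is shown:

```python
def temporal_denoise(visible_seq, infrared_seq):
    """Yield denoised output frames as the current frame advances over time."""
    prev_vis, prev_ir = None, None
    for cur_vis, cur_ir in zip(visible_seq, infrared_seq):
        if prev_vis is None:
            yield cur_vis  # the first frame has no reference frame yet
        else:
            M = estimate_registration_matrix(cur_ir, prev_ir)
            registered = register_reference_frame(prev_vis, M)  # fifth image
            mask = motion_mask(registered, cur_vis)
            yield fuse_with_mask(registered, cur_vis, mask)     # sixth image
        prev_vis, prev_ir = cur_vis, cur_ir
```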
  • For the calculation of the registration transformation matrix in FIG. 5, reference may be made to FIG. 6: feature points are extracted from the infrared current frame and the infrared reference frame respectively, the feature points of the two frames are matched, and the registration transformation matrix is calculated from the coordinate information of the matched feature points. Combining the flow of FIG. 6 with that of FIG. 5 yields the flow of FIG. 7, which is roughly divided into two parts: one part is infrared image processing, i.e., determining the registration transformation matrix from the infrared image sequence; the other part is original image processing, i.e., registering the original image sequence with the registration transformation matrix and performing image fusion processing, thereby completing the noise reduction of the video.
  • In the above technical solutions of the embodiments of the present application, the registration transformation parameters may be determined without distinguishing image areas, i.e., the entire image corresponds to one registration transformation parameter, in which case every pixel of the image corresponds to the same parameter. The solution is not limited to this: the registration transformation parameters can also be determined region by region. For example, referring to FIG. 8, the image is divided into 2 regions, region 1 corresponding to registration transformation parameter 1 and region 2 corresponding to registration transformation parameter 2; the method for determining each region's parameter can follow the foregoing scheme. Then each pixel in region 1 corresponds to registration transformation parameter 1, and each pixel in region 2 corresponds to registration transformation parameter 2.
  • After the registration transformation parameters of the different regions are determined, the second image collected by the first image sensor correspondingly needs region-by-region registration transformation: the coordinates of each pixel in the first region of the second image are transformed according to registration transformation parameter 1, and the coordinates of each pixel in the second region are transformed according to registration transformation parameter 2.
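  • A minimal sketch of region-wise registration for the two-region case of FIG. 8, assuming for illustration that the regions are the left and right halves of the frame (the actual partition is not specified):

```python
import numpy as np

def register_by_region(reference, M1, M2):
    """Warp region 1 with registration transformation parameter 1 and
    region 2 with parameter 2, then stitch the halves back together."""
    h, w = reference.shape[:2]
    left = register_reference_frame(reference, M1)[:, : w // 2]
    right = register_reference_frame(reference, M2)[:, w // 2 :]
    return np.hstack([left, right])
```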
  • As a variation of the above technical solutions, the registration transformation parameters may be determined with the above scheme for only part of the image, while another part of the image uses the visible light images. Still referring to FIG. 8, the image is divided into 2 regions: region 1 corresponds to registration transformation parameter 1, determined based on the matched feature points of region 1 in the multiple invisible light images; region 2 corresponds to registration transformation parameter 2, determined based on the matched feature points of region 2 in the multiple visible light images.
  • In one application scenario, region 1 may be a dark area of the image and region 2 a bright area. The division into dark and bright areas is based on the visible light image: in a specific implementation, the brightness value of each pixel of the current frame in the visible light image can be analyzed to divide the image into dark and bright areas. For the dark area, the invisible light images are used to assist in determining the corresponding registration transformation parameters; for the bright area, the corresponding registration transformation parameters are determined directly from the visible light images.
  • It should be noted that the way to determine the registration transformation parameters of any given region can follow the description of the foregoing related solutions: the feature points of two images are extracted within the region, the feature points are matched, and the registration transformation parameters corresponding to the region are calculated from the coordinate information of the matched feature points.
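  • The dark/bright partition is driven by the per-pixel brightness of the visible light current frame; a sketch, with the brightness threshold as an assumed tuning parameter:

```python
import cv2

def split_dark_bright(visible_current, luma_threshold=60):
    """Return a binary map: 1 for dark-area pixels (register with the help of
    the invisible light image), 0 for bright-area pixels (register directly
    from the visible light image)."""
    gray = cv2.cvtColor(visible_current, cv2.COLOR_BGR2GRAY)
    return (gray < luma_threshold).astype("uint8")
```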
  • As another variation of the above technical solutions, the image fusion stage may use more frames instead of just two. In this case, multiple frames of visible light images need to be registered. For example, if L frames of visible light images need to be registered, where L is an integer greater than 2, then the L frames of invisible light images corresponding to the L frames of visible light images must be analyzed to determine L-1 registration transformation parameters. Taking L = 3 as an example, the registration transformation parameters between invisible light image 1 and invisible light image 2, and between invisible light image 1 and invisible light image 3, can be determined. The parameters between invisible light image 1 and invisible light image 2 then realize the registration of visible light image 1 and visible light image 2, and the parameters between invisible light image 1 and invisible light image 3 realize the registration of visible light image 1 and visible light image 3. In this way the registration among visible light image 1, visible light image 2, and visible light image 3 is completed, and performing image fusion processing on the registered images can achieve a better denoising effect.
  • As an example, invisible light image 1 may be the current infrared frame, invisible light image 2 the frame one before the current infrared frame, and invisible light image 3 the frame two before the current infrared frame; correspondingly, visible light image 1 may be the current frame, visible light image 2 the frame one before the current frame, and visible light image 3 the frame two before the current frame.
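  • For the L = 3 case, two registration matrices are estimated from the infrared frames and applied to the two earlier visible frames before a three-frame fusion; a sketch in which simple three-way averaging stands in for the unspecified multi-frame fusion rule:

```python
import numpy as np

def denoise_three_frames(vis1, vis2, vis3, ir1, ir2, ir3):
    """vis1/ir1: current frames; vis2/ir2: one frame back; vis3/ir3: two back."""
    M12 = estimate_registration_matrix(ir1, ir2)  # maps ir2 coords to ir1 coords
    M13 = estimate_registration_matrix(ir1, ir3)  # maps ir3 coords to ir1 coords
    reg2 = register_reference_frame(vis2, M12)
    reg3 = register_reference_frame(vis3, M13)
    stack = np.stack([vis1, reg2, reg3]).astype(np.float32)
    return stack.mean(axis=0).astype(vis1.dtype)
```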
  • In the technical solutions of the embodiments of the present application, the registration transformation parameters are obtained by analyzing the invisible light images output by the second image sensor. Compared with calculating the registration transformation parameters directly from the original images output by the first image sensor, the invisible light images carry richer dark-region information and capture the real object features more accurately, so the calculated registration transformation parameters are more precise. Registering the visible light images with these parameters avoids ghosting or smearing in the subsequent multi-frame denoising.
  • For darker scenes, video capture itself is noisy and the dark-region information is unclear; noise must be removed and information restored through multi-frame denoising across adjacent frames, yet dark-region noise and low visibility hinder the effect of multi-frame denoising. The use of invisible light images makes up for exactly this deficiency, so that accurate registration transformation parameters can be calculated even in a very dark environment, enabling temporal noise reduction to proceed better.
  • FIG. 9 is a schematic structural composition diagram of an image processing apparatus provided by an embodiment of the present application. As shown in FIG. 9 , the image processing apparatus includes:
  • an acquiring unit 901 configured to acquire multiple visible light images and multiple invisible light images;
  • a parameter determination unit 902 configured to determine image registration parameters based on the multiple invisible light images;
  • an image registration unit 903 configured to register the multiple visible light images based on the image registration parameters.
  • In some optional implementations of the present application, the acquiring unit 901 is configured to acquire multiple visible light images collected by the first image sensor and multiple invisible light images collected by the second image sensor, where the multiple visible light images and the multiple invisible light images correspond to each other according to collection time.
  • In some optional implementations of the present application, the multiple visible light images include a first image and a second image, and the multiple invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image;
  • the parameter determination unit 902 is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship; and determine the image registration parameters based on the coordinate information of the at least one pair of feature points.
  • In some optional implementations of the present application, the parameter determination unit 902 is further configured to adjust the image registration parameters based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
  • the image registration unit 903 is configured to transform the second image based on the image registration parameters to obtain a fifth image registered with the first image.
  • In some optional implementations of the present application, the apparatus further includes:
  • an image fusion unit 904 configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, where the sixth image is the first image with noise removed.
  • In some optional implementations of the present application, the image fusion unit 904 is configured to average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
  • In some optional implementations of the present application, the image fusion unit 904 is configured to generate a motion mask image, where the motion mask image is used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
  • In some optional implementations of the present application, the image fusion unit 904 is configured to compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the area formed by pixels of the motion mask image whose values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose values are less than the threshold is the non-motion area.
  • the invisible light image is an infrared image.
  • each unit in the image processing apparatus shown in FIG. 9 can be understood with reference to the relevant description of the foregoing image processing method.
  • the functions of each unit in the image processing apparatus shown in FIG. 9 can be realized by a program running on the processor, or can be realized by a specific logic circuit.
  • If the above image processing apparatus of the embodiments of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a storage medium (e.g., a computer-readable storage medium). Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disk. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • Correspondingly, the embodiments of the present application further provide a computer program product in which computer-executable instructions are stored; when the computer-executable instructions are executed, the above methods of the embodiments of the present application can be implemented.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 10, the electronic device may include one or more processors 1002 (only one is shown in the figure; the processor 1002 may include, but is not limited to, a processing device such as a microcontroller unit (MCU, Micro Controller Unit) or a field-programmable gate array (FPGA, Field Programmable Gate Array)), a memory 1004 for storing data, and a transmission device 1006 for communication functions.
  • A person of ordinary skill in the art can understand that FIG. 10 is only a schematic diagram and does not limit the structure of the above electronic device; for example, the electronic device may also include more or fewer components than shown in FIG. 10, or have a different configuration from that shown in FIG. 10.
  • The memory 1004 can be used to store software programs and modules of application software, such as program instructions/modules corresponding to the methods in the embodiments of the present application; by running the software programs and modules stored in the memory 1004, the processor 1002 executes various functional applications and data processing, i.e., realizes the above methods.
  • Memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • memory 1004 may further include memory located remotely from processor 1002, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The transmission device 1006 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by the communication provider of the electronic device. In one example, the transmission device 1006 includes a network interface controller (NIC, Network Interface Controller), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 1006 may be a radio frequency (RF, Radio Frequency) module, which is used to communicate with the Internet wirelessly.
  • In the several embodiments provided in the present application, it should be understood that the disclosed method and smart device may be implemented in other manners. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division methods, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • In addition, the functional units in the embodiments of the present application may all be integrated into one second processing unit, or each unit may serve as a unit separately, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image processing method and apparatus, an electronic device, and a storage medium, where the image processing method includes: acquiring multiple visible light images and multiple invisible light images; determining image registration parameters based on the multiple invisible light images; and registering the multiple visible light images based on the image registration parameters.

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
The present application is filed based on, and claims priority to, the Chinese patent application with application number 202110221009.5 filed on February 26, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, there are two main classes of video noise reduction methods: spatial-domain noise reduction and temporal-domain noise reduction. The spatial-domain method processes only the current video frame, while the temporal-domain method processes the current video frame with reference to a video frame other than the current one (referred to as a reference frame for short). For temporal-domain noise reduction, the reference frame must be registered with the current video frame before noise reduction, and current registration methods suffer from a high registration error rate.
Summary
To solve the above technical problem, the embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium.
An embodiment of the present application provides an image processing method, the method including:
acquiring multiple visible light images and multiple invisible light images;
determining image registration parameters based on the multiple invisible light images; and
registering the multiple visible light images based on the image registration parameters.
An embodiment of the present application provides an image processing apparatus, the apparatus including:
an acquiring unit configured to acquire multiple visible light images and multiple invisible light images;
a parameter determination unit configured to determine image registration parameters based on the multiple invisible light images; and
an image registration unit configured to register the multiple visible light images based on the image registration parameters.
An embodiment of the present application further provides an electronic device, including a memory and a processor, where a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is caused to execute the image processing method described in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the image processing method described in the above embodiments is implemented.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of the present application. The exemplary embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation on the present application. In the drawings:
FIG. 1 is a flowchart of temporal noise reduction provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the correspondence between a visible light image sequence and an invisible light image sequence provided by an embodiment of the present application;
FIG. 4 is a flowchart of image fusion according to a motion mask image provided by an embodiment of the present application;
FIG. 5 is a first schematic diagram of a principle provided by an embodiment of the present application;
FIG. 6 is a schematic flowchart of calculating a registration transformation matrix provided by an embodiment of the present application;
FIG. 7 is a second schematic diagram of a principle provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of determining registration transformation parameters by region provided by an embodiment of the present application;
FIG. 9 is a schematic structural composition diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 10 is a schematic structural composition diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application.
Meanwhile, it should be understood that, for ease of description, the dimensions of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and in no way serves as any limitation on the present application or its application or use.
Techniques, methods, and devices known to a person of ordinary skill in the relevant fields may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further discussed in subsequent drawings.
To facilitate understanding of the technical solutions of the embodiments of the present application, the related art of the embodiments of the present application is described below. It should be noted that the following description of the related art is intended for understanding the technical solutions of the embodiments of the present application and does not limit them.
At present, there are two main classes of video noise reduction methods: spatial-domain noise reduction and temporal-domain noise reduction. The spatial-domain noise reduction method operates on a single frame of image only; specifically, neighborhood analysis and processing are performed within a single frame to filter out noise. As examples, spatial-domain noise reduction can be implemented with a bilateral filtering algorithm or a non-local means (Non-Local Means) algorithm. The temporal-domain noise reduction method analyzes and processes the current video frame with reference to the information of a video frame other than the current one (referred to as a reference frame for short), for example taking the previous frame of the current frame as the reference frame. Since temporal noise flickers over time while the real information stays unchanged, adding a reference frame makes it easier to distinguish real image information from noise, so noise can be removed more accurately.
FIG. 1 is a typical flowchart of video temporal noise reduction. As shown in FIG. 1, image registration is performed on the reference frame so that the registered reference frame is aligned with the current frame; the registered reference frame and the current frame undergo image fusion processing, which realizes the denoising process and produces the denoised output frame. It should be pointed out that the denoised output frame can be understood as the denoised image of the current frame.
In the flow shown in FIG. 1, image registration is the key step. In consecutive video frames, the shooting position may move from frame to frame, and each frame may contain moving objects, so the image information may be offset between two adjacent frames. If the relative position of an object is inconsistent across two adjacent frames, the subsequent image fusion processing cannot identify the same point on the two frames. The image registration algorithm is the key to handling this problem: it analyzes the typical features of two adjacent frames and aligns their image information according to feature matching.
In some schemes, the image registration algorithm adopts feature point matching, where feature points are points with relatively distinctive features in the image, such as corner points. The disadvantage of this scheme is that in a dark scene, such as a pitch-black street at night, the picture information in the video is very dark and it is difficult to extract the real feature points; moreover, the noise in a night scene is very large and interferes with judging the real image information, leading to registration errors, which in turn cause serious ghosting and similar problems in the video after temporal noise reduction.
To this end, the following technical solutions of the embodiments of the present application are proposed. In the technical solutions of the embodiments of the present application, a first image sensor and a second image sensor are used to simultaneously collect a visible light image sequence and an invisible light image sequence, and image registration parameters suitable for the visible light image sequence are determined from the invisible light image sequence, which can improve the image registration accuracy of the visible light image sequence and in turn help improve its signal-to-noise ratio.
The image processing method provided by the embodiments of the present application is applied to an image processing apparatus, which can be arranged on an electronic device. In some implementations, the electronic device is, for example, a mobile phone, a tablet computer, a wearable device, an interactive advertising machine, a game console, a desktop computer, an all-in-one computer, a vehicle-mounted terminal, or the like.
The image processing method and the image processing apparatus provided by the embodiments of the present application are described below.
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 2, the image processing method includes the following steps:
Step 201: Acquire multiple visible light images and multiple invisible light images.
In the embodiments of the present application, the electronic device has a first image sensor and a second image sensor, where the first image sensor is used to collect visible light images and the second image sensor is used to collect invisible light images. In some optional implementations, the invisible light image is an infrared image.
In some optional embodiments, the first image sensor is a Complementary Metal-Oxide-Semiconductor (CMOS) sensor (Sensor), and the second image sensor is an infrared (Infrared, IR) sensor (Sensor). The CMOS Sensor is used to collect (also called capture or shoot) visible light images; specifically, the CMOS Sensor collects the visible light bands in the scene, namely the red (R), green (G), and blue (B) bands. After the visible light bands are processed by a processor (such as an image signal processor (Image Signal Processing, ISP)), a colored image is presented, called a visible light image; as an example, the data format of the visible light image can be the YUV format. The IR Sensor is used to collect invisible light images; specifically, the IR Sensor collects infrared light in the scene, whose frequency is lower than that of red light. Since any matter above absolute zero can produce infrared radiation, the IR Sensor can collect the infrared rays produced by the objects in the environment even in an extremely dark environment. It follows that the infrared images collected by the IR Sensor can contain object information in darker environments.
In the embodiments of the present application, acquiring multiple visible light images and multiple invisible light images includes: acquiring multiple visible light images collected by the first image sensor and multiple invisible light images collected by the second image sensor, where the multiple visible light images and the multiple invisible light images correspond to each other according to collection time.
Here, the correspondence according to collection time means that a visible light image and an invisible light image collected at the same moment, or within the same time period, correspond to each other. For example, referring to FIG. 3, the first image sensor and the second image sensor simultaneously acquire image sequences, yielding a visible light image sequence (i.e., the original image sequence) and an invisible light image sequence (e.g., an infrared image sequence). At time t1, the first image sensor collects visible light image 1 and the second image sensor collects invisible light image 1, which correspond to each other. At time t2, the first image sensor collects visible light image 2 and the second image sensor collects invisible light image 2, which correspond to each other. At time t3, the first image sensor collects visible light image 3 and the second image sensor collects invisible light image 3, which correspond to each other. By analogy, the multiple visible light images collected by the first image sensor and the multiple invisible light images collected by the second image sensor correspond according to collection time.
Step 202: Determine image registration parameters based on the multiple invisible light images.
Here, the visible light images (i.e., the original images) collected by the first image sensor have a relatively poor signal-to-noise ratio in a dark scene (such as a night scene), and dark-region information in a visible light image has low visibility; if the multiple visible light images output by the first image sensor were registered directly (i.e., registered by relative position), the registration error rate could be high. The invisible light images collected by the second image sensor keep dark-region information highly visible whether the scene is bright or dark, and the second image sensor can still collect the real information of a scene that is hard to perceive visually. Since the first image sensor and the second image sensor collect images at the same time, the relative positional relationship of the multiple visible light images is similar to that of the multiple invisible light images.
In view of the above, the image registration parameters can be determined based on the multiple invisible light images, and the multiple visible light images can then be registered using those parameters. The technical solutions of the embodiments of the present application are described below taking "multiple" images to be "two" images; it should be noted that schemes with more than two images are equally applicable.
In some optional implementations, the multiple visible light images include a first image and a second image, and the multiple invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image. As an example, the first image sensor and the second image sensor respectively collect the first image and the third image at the same moment (e.g., a first moment), so the first image corresponds to the third image; the two sensors respectively collect the second image and the fourth image at the same moment (e.g., a second moment), so the second image corresponds to the fourth image.
In the above scheme, the collection time of the first image is after that of the second image, and the collection time of the third image is after that of the fourth image. As an example, the first image and the third image are the visible light image and the invisible light image currently collected by the first image sensor and the second image sensor respectively, and the second image and the fourth image are the visible light image and the invisible light image previously collected by the two sensors respectively. As an example, the second image is the previous frame of the first image, so the first image may be called the first current frame and the second image the first reference frame; the fourth image is the previous frame of the third image, so the third image may be called the second current frame and the fourth image the second reference frame. The second current frame is, for example, the infrared current frame, and the second reference frame is, for example, the infrared reference frame.
In the embodiments of the present application, an image registration parameter refers to the registration parameter of one image relative to another image. For example, if the image registration parameter is that of image A relative to image B, then multiplying image A by the registration parameter yields the registered image A, and the registered image A is aligned with image B. Since the multiple visible light images collected by the first image sensor and the multiple invisible light images collected by the second image sensor correspond according to collection time, the image registration parameters corresponding to the multiple invisible light images are consistent with those corresponding to the multiple visible light images; therefore, the image registration parameters can be determined from the multiple invisible light images and then applied to the multiple visible light images, thereby registering the multiple visible light images. How to determine the image registration parameters from the multiple invisible light images is described below.
In the embodiments of the present application, a first feature point set is extracted from the third image and a second feature point set is extracted from the fourth image; at least one pair of feature points having a matching relationship is determined from the first feature point set and the second feature point set; and the image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
For example: the third image is the infrared current frame and the fourth image is the infrared reference frame, where the infrared reference frame may be the previous frame of the infrared current frame. 1) Feature points are extracted from the infrared current frame and the infrared reference frame, yielding the first feature point set in the infrared current frame and the second feature point set in the infrared reference frame. Here, the numbers of feature points in the two sets may be the same or different; the technical solutions of the embodiments of the present application place no restriction on this. In a specific implementation, a feature point extraction algorithm such as the Harris algorithm or the scale-invariant feature transform (Scale-invariant feature transform, SIFT) algorithm may be used. As an example, the principle of a feature point extraction algorithm is to judge, from the neighborhood information of each pixel in the image, whether the pixel's position is a point with distinctive features, such as a point on the corner or edge of an object, and thereby decide whether the pixel is a feature point. 2) The feature points extracted from the infrared current frame and the infrared reference frame are matched, and the image registration parameters are calculated from the matched feature points. Here, the image registration parameters can be embodied as a matrix, called the registration transformation matrix. The registration transformation matrix is a 3×3 matrix, i.e., it includes 9 parameters. As an example, the registration transformation matrix is an affine transformation matrix whose last row may be (0, 0, 1). An example of the registration transformation matrix is given below:

$$M = \begin{pmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{pmatrix}$$

where a, b, c, d, e, f are the parameters of the registration transformation matrix that need to be determined.
Suppose feature point A in the first feature point set matches feature point a in the second feature point set, the coordinates of feature point A are (x1, y1, 1), and the coordinates of feature point a are (x2, y2, 1). Then the coordinates of these two feature points satisfy the following formula:

$$\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = M \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}$$

By fitting multiple pairs of matched feature points to the above formula, each parameter of the registration transformation matrix can be determined, and the matrix is thereby obtained. In a specific implementation, an algorithm such as the random sample consensus (Random Sample Consensus, RANSAC) algorithm can be used to match feature points and calculate the registration transformation matrix. The principle of the RANSAC algorithm is: randomly sample feature points in the two frames, match the coordinate vectors corresponding to the feature points, and fit the registration transformation matrix from the coordinate vectors of the matched feature points.
In some optional implementations, the first image sensor and the second image sensor share the same lens assembly (referred to as a lens group for short). In this case, it can be understood that the first image sensor and the second image sensor are at the same position on the electronic device, the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor have no positional deviation, and the visible light images can be registered directly according to the image registration parameters determined from the invisible light images.
In some optional implementations, the first image sensor and the second image sensor use different lens assemblies. In this case, it can be understood that the positions of the two sensors on the electronic device differ slightly, so there is a positional deviation between the visible light images collected by the first image sensor and the invisible light images collected by the second image sensor. To compensate for this deviation, the image registration parameters determined from the invisible light images need to be adjusted. Specifically, the image registration parameters are adjusted based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor. Here, the calibration data can be the factory calibration data of the electronic device; it is related to the relative positional relationship between the first image sensor and the second image sensor. Adjusting the image registration parameters with this calibration data aligns the invisible light image positions to the visible light images, yielding image registration parameters that can be used for the visible light images.
Step 203: Register the multiple visible light images based on the image registration parameters.
In some optional implementations, the image registration parameters are determined based on the third image and the fourth image among the multiple invisible light images, where the third image corresponds to the first image among the multiple visible light images and the fourth image corresponds to the second image. The second image is transformed based on the image registration parameters to obtain a fifth image registered with the first image. Here, the fifth image is the image obtained after the second image is transformed, and it is aligned with the first image.
For example: the first image is the current frame output by the CMOS Sensor, and the second image is the reference frame output by the CMOS Sensor, where the reference frame can be the previous frame of the current frame. The reference frame is transformed according to the registration transformation matrix to obtain a reference frame aligned with the current frame. Specifically, the coordinates of each pixel in the reference frame are multiplied by the registration transformation matrix to obtain new coordinates, and the transformation is completed once each pixel has been rearranged according to its new coordinates; at this point, the alignment of the current frame and the reference frame is complete.
After the registration of the first image and the second image is completed by the above technical solutions of the embodiments of the present application, the first image and the registered second image (i.e., the fifth image) can be fused to complete temporal denoising. Specifically, image fusion processing is performed on the fifth image and the first image to obtain a sixth image, where the sixth image is the first image with noise removed.
In some optional implementations, the image fusion processing of the fifth image and the first image can be completed in the following manners:
Mode 1: average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
For example: suppose the image resolution is N×M, where N and M are positive integers. Then the pixel value of the pixel at coordinates (xi, yj, 1) in the fifth image and the pixel value of the pixel at coordinates (xi, yj, 1) in the first image are averaged to obtain the pixel value of the pixel at coordinates (xi, yj, 1) in the sixth image, where i is a positive integer from 1 to N and j is a positive integer from 1 to M.
Mode 2: generate a motion mask (Mask) image, where the motion mask image is used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
Here, for more accurate denoising, motion detection can be performed on the images; specifically, by comparing the fifth image with the first image, the motion area and the non-motion area in the image are determined. In a specific implementation, a mask image is generated to indicate which parts of the image are motion areas and which are non-motion areas.
In some optional implementations, the mask image can be generated as follows: compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the area formed by pixels of the motion mask image whose values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose values are less than the threshold is the non-motion area.
For example: suppose the image resolution is N×M, where N and M are positive integers, i is a positive integer from 1 to N, and j is a positive integer from 1 to M. If the coordinates (xi, yj, 1) lie in the motion area, the pixel value at (xi, yj, 1) in the sixth image equals the pixel value at (xi, yj, 1) in the first image; if the coordinates (xi, yj, 1) lie in the non-motion area, the pixel value at (xi, yj, 1) in the sixth image equals the average of the pixel value at (xi, yj, 1) in the fifth image and the pixel value at (xi, yj, 1) in the first image.
In some optional implementations, the mask image can be generated as follows: compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image, and compare the difference with a threshold; if the difference is greater than or equal to the threshold, set the corresponding pixel of the motion mask image to 1, and if the difference is less than the threshold, set it to 0. The area formed by pixels with value 1 in the motion mask image is the motion area, and the area formed by pixels with value 0 is the non-motion area.
After the motion area and the non-motion area of the image are determined by the above scheme, the image fusion of the fifth image and the first image proceeds as follows: the motion area uses only the pixel values of the first image, while the non-motion area averages the pixel values of the first image with the corresponding pixel values of the fifth image.
For example: referring to FIG. 4, the second image is registered and transformed to obtain the fifth image, which is aligned with the first image; a motion mask image can be obtained from the difference between the fifth image and the first image, and the first image and the fifth image are fused with reference to the motion mask image, where the fusion proceeds as described above: the motion area uses only the pixel values of the first image, and the non-motion area averages the pixel values of the first image with the corresponding pixel values of the fifth image.
The technical solutions of the embodiments of the present application are described below with reference to FIGS. 5 to 7. It should be noted that in the embodiments related to FIGS. 5 to 7, the description takes as an example the case where the first image sensor is a CMOS Sensor, with the first image and the second image it collects called the current frame and the reference frame respectively, and the second image sensor is an IR Sensor, with the third image and the fourth image it collects called the infrared current frame and the infrared reference frame respectively.
Referring to FIG. 5, the CMOS Sensor outputs the original image sequence (i.e., the original video) and the IR Sensor outputs the infrared image sequence. Feature point matching is performed on the infrared current frame and the infrared reference frame output by the IR Sensor, and the registration transformation matrix is calculated from the coordinate information of the matched feature points. The reference frame output by the CMOS Sensor is transformed by the registration transformation matrix to obtain the registered reference frame, which is aligned with the current frame output by the CMOS Sensor. The current frame output by the CMOS Sensor and the registered reference frame undergo image fusion processing to obtain the denoised output frame, completing the denoising of the current frame. The current frame is continuously updated over time, so continuous denoised video frames can be output.
For the calculation of the registration transformation matrix in FIG. 5, reference may be made to FIG. 6: feature points are extracted from the infrared current frame and the infrared reference frame respectively, the feature points of the two frames are matched, and the registration transformation matrix is calculated from the coordinate information of the matched feature points. Combining the flow of FIG. 6 with that of FIG. 5 yields the flow of FIG. 7. As shown in FIG. 7, the flow is roughly divided into two parts: one part is infrared image processing, i.e., determining the registration transformation matrix from the infrared image sequence; the other part is original image processing, i.e., registering the original image sequence with the registration transformation matrix and performing image fusion processing, thereby completing the noise reduction of the video.
In the above technical solutions of the embodiments of the present application, the registration transformation parameters may be determined without distinguishing image areas, i.e., the entire image corresponds to one registration transformation parameter, in which case every pixel of the image corresponds to the same parameter. The solution is not limited to this: the registration transformation parameters can also be determined region by region. For example, referring to FIG. 8, the image is divided into 2 regions, region 1 corresponding to registration transformation parameter 1 and region 2 corresponding to registration transformation parameter 2; the method for determining each region's parameter can follow the foregoing scheme. Then each pixel in region 1 corresponds to registration transformation parameter 1 and each pixel in region 2 to registration transformation parameter 2. After the parameters of the different regions are determined, the second image collected by the first image sensor correspondingly needs region-by-region registration transformation: the coordinates of each pixel in the first region of the second image are transformed according to registration transformation parameter 1, and the coordinates of each pixel in the second region according to registration transformation parameter 2.
As a variation of the above technical solutions of the embodiments of the present application, the registration transformation parameters may be determined with the above scheme for only part of the image, while another part of the image is determined using the visible light images. Still referring to FIG. 8, the image is divided into 2 regions: region 1 corresponds to registration transformation parameter 1, determined based on the matched feature points of region 1 in the multiple invisible light images; region 2 corresponds to registration transformation parameter 2, determined based on the matched feature points of region 2 in the multiple visible light images. In one application scenario, region 1 may be a dark area of the image and region 2 a bright area. The division into dark and bright areas is based on the visible light image: in a specific implementation, the brightness value of each pixel of the current frame in the visible light image can be analyzed to divide the image into dark and bright areas. For the dark area, the invisible light images are used to assist in determining the corresponding registration transformation parameters; for the bright area, the corresponding registration transformation parameters are determined directly from the visible light images. It should be noted that the way to determine the registration transformation parameters of a region can follow the description of the foregoing related solutions; specifically, the feature points of two images are extracted within the region, the feature points are matched, and the registration transformation parameters corresponding to the region are calculated from the coordinate information of the matched feature points.
As another variation of the above technical solutions of the embodiments of the present application, the image fusion stage may use more frames instead of just two. In this case, multiple frames of visible light images need to be registered; for example, L frames of visible light images need to be registered, where L is an integer greater than 2. Then the L frames of invisible light images corresponding to the L frames of visible light images must be analyzed to determine L-1 registration transformation parameters. Taking L=3 as an example, the registration transformation parameters between invisible light image 1 and invisible light image 2, and between invisible light image 1 and invisible light image 3, can be determined. In this way, the parameters between invisible light image 1 and invisible light image 2 realize the registration of visible light image 1 and visible light image 2, and the parameters between invisible light image 1 and invisible light image 3 realize the registration of visible light image 1 and visible light image 3, completing the registration among visible light image 1, visible light image 2, and visible light image 3. Performing image fusion processing on the registered visible light image 1, visible light image 2, and visible light image 3 can achieve a better denoising effect. As an example, invisible light image 1 may be the current infrared frame, invisible light image 2 the frame one before it, and invisible light image 3 the frame two before it; visible light image 1 may be the current frame, visible light image 2 the frame one before it, and visible light image 3 the frame two before it.
In the technical solutions of the embodiments of the present application, the registration transformation parameters are obtained by analyzing the invisible light images output by the second image sensor. Compared with calculating the registration transformation parameters directly from the original images output by the first image sensor, the invisible light images carry richer dark-region information and capture the real object features more accurately, so the calculated registration transformation parameters are more precise; registering the visible light images with these parameters avoids ghosting or smearing in subsequent multi-frame denoising. For darker scenes, video capture itself is noisy and the dark-region information is unclear; noise must be removed and information restored through multi-frame denoising across adjacent frames, yet dark-region noise and low visibility hinder the effect of multi-frame denoising. The use of invisible light images makes up for exactly this deficiency, so that accurate registration transformation parameters can be calculated even in a very dark environment, enabling temporal noise reduction to proceed better.
FIG. 9 is a schematic structural composition diagram of an image processing apparatus provided by an embodiment of the present application. As shown in FIG. 9, the image processing apparatus includes:
an acquiring unit 901 configured to acquire multiple visible light images and multiple invisible light images;
a parameter determination unit 902 configured to determine image registration parameters based on the multiple invisible light images; and
an image registration unit 903 configured to register the multiple visible light images based on the image registration parameters.
In some optional implementations of the present application, the acquiring unit 901 is configured to acquire multiple visible light images collected by a first image sensor and multiple invisible light images collected by a second image sensor, where the multiple visible light images and the multiple invisible light images correspond to each other according to collection time.
In some optional implementations of the present application, the multiple visible light images include a first image and a second image, and the multiple invisible light images include a third image and a fourth image, where the first image corresponds to the third image and the second image corresponds to the fourth image;
the parameter determination unit 902 is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship; and determine the image registration parameters based on the coordinate information of the at least one pair of feature points.
In some optional implementations of the present application, the parameter determination unit 902 is further configured to adjust the image registration parameters based on calibration data, where the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
In some optional implementations of the present application, the image registration unit 903 is configured to transform the second image based on the image registration parameters to obtain a fifth image registered with the first image.
In some optional implementations of the present application, the apparatus further includes:
an image fusion unit 904 configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, where the sixth image is the first image with noise removed.
In some optional implementations of the present application, the image fusion unit 904 is configured to average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
In some optional implementations of the present application, the image fusion unit 904 is configured to generate a motion mask image, where the motion mask image is used to determine a motion area and a non-motion area; determine the pixel value of each pixel in the motion area of the first image as the pixel value of the corresponding pixel in the motion area of the sixth image; and average the pixel value of each pixel in the non-motion area of the fifth image with the pixel value of the corresponding pixel in the non-motion area of the first image to obtain the pixel value of each pixel in the non-motion area of the sixth image.
In some optional implementations of the present application, the image fusion unit 904 is configured to compute the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, where the area formed by pixels of the motion mask image whose values are greater than or equal to a threshold is the motion area, and the area formed by pixels whose values are less than the threshold is the non-motion area.
In some optional implementations of the present application, the invisible light image is an infrared image.
本领域技术人员应当理解,图9所示的图像处理装置中的各单元的实现功能可参照前述图像处理方法的相关描述而理解。图9所示的图像处理装置中的各单元的功能可通过运行于处理器上的程序而实现,也可通过具 体的逻辑电路而实现。
If the above image processing apparatus of the embodiments of the present application is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a storage medium (for example, a computer-readable storage medium). Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application further provides a computer program product storing computer-executable instructions which, when executed, can implement the above methods of the embodiments of the present application.
FIG. 10 is a schematic diagram of the structural composition of an electronic device according to an embodiment of the present application. As shown in FIG. 10, the electronic device may include one or more processors 1002 (only one is shown in the figure; the processor 1002 may include, but is not limited to, a processing device such as a microcontroller unit (MCU, Micro Controller Unit) or a programmable logic device such as a field-programmable gate array (FPGA, Field Programmable Gate Array)), a memory 1004 for storing data, and a transmission apparatus 1006 for communication functions. Those of ordinary skill in the art can understand that the structure shown in FIG. 10 is merely illustrative and does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components than shown in FIG. 10, or have a configuration different from that shown in FIG. 10.
The memory 1004 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 1002 executes various functional applications and data processing, i.e., implements the above methods, by running the software programs and modules stored in the memory 1004. The memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory remotely located relative to the processor 1002, and such remote memory may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission apparatus 1006 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by a communication provider of the electronic device. In one example, the transmission apparatus 1006 includes a network interface controller (NIC, Network Interface Controller), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission apparatus 1006 may be a radio frequency (RF, Radio Frequency) module, which is used to communicate with the Internet wirelessly.
The technical solutions described in the embodiments of the present application may be combined arbitrarily provided there is no conflict.
In the several embodiments provided in the present application, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may all be integrated into a single processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should all fall within the protection scope of the present application.

Claims (22)

  1. An image processing method, the method comprising:
    acquiring multiple visible light images and multiple invisible light images;
    determining an image registration parameter based on the multiple invisible light images; and
    registering the multiple visible light images based on the image registration parameter.
  2. The method according to claim 1, wherein the acquiring multiple visible light images and multiple invisible light images comprises:
    acquiring multiple visible light images captured by a first image sensor and multiple invisible light images captured by a second image sensor, wherein the multiple visible light images and the multiple invisible light images correspond to each other according to capture time.
  3. The method according to claim 2, wherein the multiple visible light images comprise a first image and a second image, and the multiple invisible light images comprise a third image and a fourth image, wherein the first image corresponds to the third image and the second image corresponds to the fourth image; and
    the determining an image registration parameter based on the multiple invisible light images comprises:
    extracting a first feature point set from the third image and a second feature point set from the fourth image;
    determining, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship; and
    determining the image registration parameter based on coordinate information of the at least one pair of feature points.
  4. The method according to claim 3, wherein the method further comprises:
    adjusting the image registration parameter based on calibration data, wherein the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
  5. The method according to claim 3, wherein the registering the multiple visible light images based on the image registration parameter comprises:
    transforming the second image based on the image registration parameter to obtain a fifth image registered with the first image.
  6. The method according to claim 5, wherein the method further comprises:
    performing image fusion processing on the fifth image and the first image to obtain a sixth image, the sixth image being the first image with noise removed.
  7. The method according to claim 6, wherein the performing image fusion processing on the fifth image and the first image comprises:
    averaging the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
  8. The method according to claim 6, wherein the performing image fusion processing on the fifth image and the first image comprises:
    generating a motion mask image, the motion mask image being used to determine a motion region and a non-motion region;
    determining the pixel values of the pixels in the motion region of the first image as the pixel values of the pixels in the motion region of the sixth image; and
    averaging the pixel values of the pixels in the non-motion region of the fifth image with the pixel values of the corresponding pixels in the non-motion region of the first image to obtain the pixel values of the pixels in the non-motion region of the sixth image.
  9. The method according to claim 8, wherein the generating a motion mask image comprises:
    taking the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image,
    wherein the region formed by the pixels of the motion mask image whose pixel values are greater than or equal to a threshold is the motion region, and the region formed by the pixels of the motion mask image whose pixel values are less than the threshold is the non-motion region.
  10. The method according to any one of claims 1 to 9, wherein the invisible light images are infrared images.
  11. An image processing apparatus, the apparatus comprising:
    an acquisition unit, configured to acquire multiple visible light images and multiple invisible light images;
    a parameter determination unit, configured to determine an image registration parameter based on the multiple invisible light images; and
    an image registration unit, configured to register the multiple visible light images based on the image registration parameter.
  12. The apparatus according to claim 11, wherein the acquisition unit is configured to acquire multiple visible light images captured by a first image sensor and multiple invisible light images captured by a second image sensor, wherein the multiple visible light images and the multiple invisible light images correspond to each other according to capture time.
  13. The apparatus according to claim 12, wherein the multiple visible light images comprise a first image and a second image, and the multiple invisible light images comprise a third image and a fourth image, wherein the first image corresponds to the third image and the second image corresponds to the fourth image; and
    the parameter determination unit is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine, from the first feature point set and the second feature point set, at least one pair of feature points having a matching relationship; and determine the image registration parameter based on coordinate information of the at least one pair of feature points.
  14. The apparatus according to claim 13, wherein the parameter determination unit is further configured to adjust the image registration parameter based on calibration data, wherein the calibration data is determined based on the relative positional relationship between the first image sensor and the second image sensor.
  15. The apparatus according to claim 13, wherein the image registration unit is configured to transform the second image based on the image registration parameter to obtain a fifth image registered with the first image.
  16. The apparatus according to claim 15, wherein the apparatus further comprises:
    an image fusion unit, configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, the sixth image being the first image with noise removed.
  17. The apparatus according to claim 16, wherein the image fusion unit is configured to average the pixel value of each pixel in the fifth image with the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the sixth image.
  18. The apparatus according to claim 16, wherein the image fusion unit is configured to generate a motion mask image, the motion mask image being used to determine a motion region and a non-motion region; determine the pixel values of the pixels in the motion region of the first image as the pixel values of the pixels in the motion region of the sixth image; and average the pixel values of the pixels in the non-motion region of the fifth image with the pixel values of the corresponding pixels in the non-motion region of the first image to obtain the pixel values of the pixels in the non-motion region of the sixth image.
  19. The apparatus according to claim 18, wherein the image fusion unit is configured to take the difference between the pixel value of each pixel in the fifth image and the pixel value of the corresponding pixel in the first image to obtain the pixel value of each pixel in the motion mask image, wherein the region formed by the pixels of the motion mask image whose pixel values are greater than or equal to a threshold is the motion region, and the region formed by the pixels of the motion mask image whose pixel values are less than the threshold is the non-motion region.
  20. The apparatus according to any one of claims 11 to 19, wherein the invisible light images are infrared images.
  21. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the method according to any one of claims 1 to 10.
  22. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 10.
PCT/CN2021/137515 2021-02-26 2021-12-13 Image processing method and apparatus, electronic device, and storage medium WO2022179251A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110221009.5 2021-02-26
CN202110221009.5A CN112950502B (zh) 2021-02-26 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022179251A1 true WO2022179251A1 (zh) 2022-09-01

Family

ID=76246713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137515 WO2022179251A1 (zh) 2021-12-13 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN112950502B (zh)
WO (1) WO2022179251A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024108298A1 (en) * 2022-11-22 2024-05-30 Airy3D Inc. Edge angle and baseline angle correction in depth imaging

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950502B (zh) * 2021-02-26 2024-02-13 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and storage medium
CN115361533B (zh) * 2022-08-19 2023-04-18 深圳市汇顶科技股份有限公司 Image data processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548489A (zh) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 Registration method for a depth image and a color image, and three-dimensional image acquisition apparatus
US20190188838A1 (en) * 2016-10-08 2019-06-20 Hangzhou Hikvision Digital Technology Co., Ltd. Method, Device and System for Image Fusion
CN110490811A (zh) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 Image noise reduction apparatus and image noise reduction method
CN111968057A (zh) * 2020-08-24 2020-11-20 浙江大华技术股份有限公司 Image noise reduction method and apparatus, storage medium, and electronic apparatus
CN112950502A (zh) * 2021-02-26 2021-06-11 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN112950502A (zh) 2021-06-11
CN112950502B (zh) 2024-02-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21927675; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21927675; Country of ref document: EP; Kind code of ref document: A1