CN113344820B - Image processing method and device, computer readable medium and electronic equipment - Google Patents

Image processing method and device, computer readable medium and electronic equipment

Info

Publication number
CN113344820B
CN113344820B
Authority
CN
China
Prior art keywords
image
noise
result
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110720734.7A
Other languages
Chinese (zh)
Other versions
CN113344820A (en)
Inventor
王舒瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110720734.7A priority Critical patent/CN113344820B/en
Publication of CN113344820A publication Critical patent/CN113344820A/en
Application granted granted Critical
Publication of CN113344820B publication Critical patent/CN113344820B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, a computer readable medium, and an electronic device. The method comprises the following steps: acquiring an image to be processed and a corresponding reference image; applying mean filtering to the image to be processed and the reference image respectively, and constructing a first noise feature map from the differences between their corresponding pixels after mean filtering; downsampling the current image and the reference original image in a target chrominance channel, applying mean filtering to the downsampled results, and constructing a third noise feature map from the mean-filtered results; determining a noise degree parameter corresponding to the current image by combining the first noise feature map with the luminance feature of the current image; and determining a target noise feature map based on the noise degree parameter together with the first and third noise feature maps, so that ghost noise can be removed from the current image according to the target noise feature map. The method can effectively remove ghosting.

Description

Image processing method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer readable medium, and an electronic device.
Background
During the shooting of an image or video, "ghosting" may occur in the captured image or video owing to ambient light, the shooting scene, or hardware limitations. In general, ghosting refers to halo or trailing artifacts that appear after strong light enters the lens.
Existing de-ghosting schemes include, for example, removing ghosts from high dynamic range images through multi-exposure fusion control; or separating foreground from background, dynamically breaking up the foreground regions of connected moving objects, and eliminating ghosts in moving-object detection through background updating based on spatial similarity. Neither approach is suitable for eliminating the ghosts produced during video denoising.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer readable medium, and an electronic device, which can effectively remove ghosts generated in a video denoising process.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed and a corresponding reference image, wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
applying mean filtering to the image to be processed and the reference image respectively, so as to construct a first noise feature map from the differences between their corresponding pixels after mean filtering;
downsampling the current image and the reference original image in a target chrominance channel, and applying mean filtering to the downsampled results, so as to construct a third noise feature map from the mean-filtered results;
determining a noise degree parameter corresponding to the current image by combining the first noise feature map with the luminance feature corresponding to the current image; and
determining a target noise feature map based on the noise degree parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including:
an image acquisition module, configured to acquire an image to be processed and a corresponding reference image, wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
a first noise feature map acquisition module, configured to apply mean filtering to the image to be processed and the reference image respectively, so as to construct a first noise feature map from the differences between their corresponding pixels after mean filtering;
a third noise feature map acquisition module, configured to downsample the current image and the reference original image in a target chrominance channel and apply mean filtering to the downsampled results, so as to construct a third noise feature map from the mean-filtered results;
a noise degree parameter acquisition module, configured to determine a noise degree parameter corresponding to the current image by combining the first noise feature map with the luminance feature corresponding to the current image; and
a denoising processing module, configured to determine a target noise feature map based on the noise degree parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
One or more processors;
and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above.
According to the image processing method provided by the embodiments of the present disclosure, pixel-point filtering and pixel-block filtering are performed on the grayscale images corresponding to the current image and the reference image, and a first noise feature map is constructed from the differences of the filtering results; meanwhile, the current image and the reference image are downsampled in a designated target chrominance channel, and a third noise feature map is constructed from the mean filtering results; a noise degree parameter is constructed from the first noise feature map and the luminance feature of the current image; the noise degree parameter guides the image fusion of the first noise feature map and the third noise feature map into a target noise feature map, thereby determining where ghosting occurs; and the target noise feature map limits the degree of fusion between the current image and the reference image, fundamentally resolving the problem of ghost noise in each image frame of the video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flowchart of an image processing method in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of a method of constructing a first noise signature in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram of a method of constructing a third noise signature in an exemplary embodiment of the present disclosure;
fig. 4 schematically illustrates a flowchart of another image processing method in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram of a method of constructing a second noise signature in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a method of constructing a target noise signature in an exemplary embodiment of the present disclosure;
Fig. 7 schematically illustrates a composition diagram of an image processing apparatus in an exemplary embodiment of the present disclosure;
fig. 8 schematically illustrates a structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the related art, existing image de-ghosting schemes are designed for the ghosting produced during high dynamic range video fusion. In one existing scheme, the motion regions of adjacent frames are detected through image registration, the ghost region is obtained using edge contour detection, and high dynamic range image ghosts are removed through multi-exposure fusion control. However, this scheme targets the ghosting produced during multi-exposure fusion of high dynamic range images: it was designed around the characteristics of multi-exposure images, with ghosts eliminated during the fusion process, and it is not suitable for eliminating the ghosts produced by the denoising fusion of video. In another existing scheme, foreground and background are separated, the foreground regions of connected moving objects are dynamically broken up, and ghosts in moving-object detection are eliminated through background updating based on spatial similarity. However, that scheme is designed mainly for moving-object detection; it is not suitable for the ghosting problem in video temporal denoising algorithms, since foreground-background separation and background updating cannot eliminate the ghosts produced during denoising.
In view of the foregoing drawbacks and deficiencies of the prior art, this exemplary embodiment provides an image processing method that addresses the ghosting encountered during video temporal denoising. Referring to fig. 1, the image processing method may include the following steps:
S11, acquiring an image to be processed and a corresponding reference image, wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
S12, applying mean filtering to the image to be processed and the reference image respectively, so as to construct a first noise feature map from the differences between their corresponding pixels after mean filtering;
S13, downsampling the current image and the reference original image in a target chrominance channel, and applying mean filtering to the downsampled results, so as to construct a third noise feature map from the mean-filtered results;
S14, determining a noise degree parameter corresponding to the current image by combining the first noise feature map with the luminance feature corresponding to the current image; and
S15, determining a target noise feature map based on the noise degree parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
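Steps S11 through S15 can be sketched end to end as follows. This is a minimal illustration, not the patent's implementation: the pixel-block branch and the chroma down/upsampling are omitted, the exact definitions of the noise degree parameter and the fusion rule are assumptions (the patent details the individual steps in later sections), and all function names are hypothetical.

```python
import numpy as np

def box_mean(img, k=3):
    # k x k mean (box) filter with edge replication at the borders
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    H, W = img.shape
    out = np.zeros((H, W))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + H, dx:dx + W]
    return out / (k * k)

def noise_map(curr, refs):
    # S12/S13 core: per-pixel maximum over the reference frames of the
    # mean-filtered inter-frame difference (absolute value assumed)
    f = box_mean(curr)
    return np.max([np.abs(box_mean(r) - f) for r in refs], axis=0)

def deghost_maps(curr_gray, ref_grays, curr_chroma, ref_chromas):
    # S12: first noise feature map from the grayscale images
    n1 = noise_map(curr_gray, ref_grays)
    # S13: third noise feature map from the target chrominance channel
    #      (the 4x4 down/upsampling step is omitted in this sketch)
    n3 = noise_map(curr_chroma, ref_chromas)
    # S14: noise degree parameter from n1 and the frame's luminance
    #      (this normalization is an assumption, not the patent's formula)
    k = float(np.mean(n1)) / (float(np.mean(curr_gray)) + 1e-6)
    k = min(max(k, 0.0), 1.0)
    # S15: target noise feature map guiding ghost removal
    #      (weighted fusion of n1 and n3 assumed)
    return k * n1 + (1.0 - k) * n3
```

The target map produced here would then limit how strongly the current frame is fused with its references during denoising.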
In the image processing method provided by this exemplary embodiment, a first noise feature map based on the grayscale features of the image is constructed by means of inter-frame differences; meanwhile, a third noise feature map is constructed from the chrominance information of the current image and the reference image; a noise degree parameter is constructed from the first noise feature map and the luminance feature of the current image; the noise degree parameter guides the fusion of the first and third noise feature maps into a target noise feature map, thereby determining where ghosting occurs; and the target noise feature map limits the degree of fusion between the current image and the reference image, fundamentally resolving the problem of ghost noise in each image frame of the video.
Hereinafter, each step of the image processing method in the present exemplary embodiment will be described in more detail with reference to the accompanying drawings and examples.
In this exemplary embodiment, the method may, for example, be applied to a server: the user uploads video data, or the continuous image frame data obtained by decomposing video data, to the server through a terminal device, and the server performs the computation upon receiving the video or image data. Alternatively, the method may be applied to an intelligent terminal device with computing capability comparable to the server, such as a mobile phone, tablet, or computer. After the user inputs video data or image data comprising successive frames, the computation can begin.
In step S11, an image to be processed and a corresponding reference image are acquired; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous with the current image.
In this exemplary embodiment, taking execution at the server as an example, when the user input is video data, the video may be split into a sequence of consecutive image frames, each of which is processed in turn as the current image. Meanwhile, the one or two frames preceding and consecutive with the current image may be used as the reference original images. When the current image is the first frame of the sequence, the current image itself may be configured as the reference original image. After the current frame image and its corresponding reference original images are selected, they are converted to grayscale; the grayscale image corresponding to the current image is configured as the image to be processed, and the grayscale images corresponding to the reference original images are configured as the reference images.
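This frame-preparation step can be sketched as follows. The BT.601 luma weights and the helper names are assumptions, since the patent specifies neither the grayscale conversion nor an API; the reference-selection rule follows the text (preceding frames, with the first frame referencing itself).

```python
import numpy as np

def to_gray(rgb):
    # BT.601 luma weights (assumed; the patent does not say
    # how the grayscale image is obtained)
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def pick_reference_frames(frames, i, n_ref=2):
    # the n_ref frames preceding frame i serve as reference originals;
    # the first frame of the sequence references itself
    refs = list(frames[max(0, i - n_ref):i])
    if not refs:
        refs = [frames[i]]
    while len(refs) < n_ref:  # pad near the start of the sequence
        refs = [refs[0]] + refs
    return refs
```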
In this exemplary embodiment, taking as an example reference original images that are the two consecutive frames preceding the current image, the reference images are the two grayscale images corresponding to those two reference original images, namely a first reference image and a second reference image.
In step S12, mean filtering is applied to the image to be processed and the reference images respectively, so that a first noise feature map is constructed from the differences between the corresponding pixels of the image to be processed and the reference images after mean filtering.
In this exemplary embodiment, specifically, referring to fig. 2, the step S12 may include:
Step S121: for the image to be processed, the first reference image, and the second reference image, compute a pixel-point mean filtering result and a pixel-block filtering result using a preset window size;
Step S122: compute the pixel-point mean filtering differences between the image to be processed and each of the first and second reference images, and fuse these differences to determine a first pixel-point filtering result;
Step S123: compute the pixel-block filtering differences between the image to be processed and each of the first and second reference images, and fuse these differences to determine a first pixel-block filtering result; and
Step S124: fuse the first pixel-point filtering result and the first pixel-block filtering result to determine the first noise feature map.
For example, the input data are the grayscale image corresponding to the current image (i.e., the image to be processed, curr) and the grayscale images corresponding to the two consecutive reference original frames preceding it (i.e., the first reference image ref0 and the second reference image ref1). For each of the three images, the mean filtering value between each pixel and its neighboring pixels is computed, yielding the pixel-point mean filtering results for curr, ref0, and ref1. Then the pixel-point mean filtering difference between the image to be processed and the first reference image is computed as Diff1 = ref0 − curr, and the difference between the image to be processed and the second reference image as Diff2 = ref1 − curr. The two differences are compared and fused; specifically, fusion may take the per-pixel maximum, Diff = max(Diff1, Diff2), to obtain the first pixel-point filtering result.
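The pixel-point branch just described — mean-filter each frame, difference against the current frame, and fuse with a per-pixel maximum — can be sketched as below. Taking the absolute value of the difference is an assumption; the text writes Diff1 = ref0 − curr without stating how negative values are handled.

```python
import numpy as np

def box_mean(img, k=3):
    # mean of each pixel and its neighbors within a k x k window,
    # using edge replication at the borders
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    H, W = img.shape
    out = np.zeros((H, W))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + H, dx:dx + W]
    return out / (k * k)

def first_pixel_point_result(curr, ref0, ref1):
    f = box_mean(curr)
    diff1 = np.abs(box_mean(ref0) - f)  # Diff1 = ref0 - curr
    diff2 = np.abs(box_mean(ref1) - f)  # Diff2 = ref1 - curr
    return np.maximum(diff1, diff2)     # Diff = max(Diff1, Diff2)
```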
Meanwhile, block-level filtering results are computed for the image to be processed, the first reference image, and the second reference image using a 3×3 window: the means of the current 3×3 block and of its four adjacent blocks (above, below, left, and right) are computed, and the median of these five means is taken, yielding the pixel-block filtering results for the three images. The pixel-block filtering differences between the image to be processed and each of the first and second reference images are then computed, giving two difference results, which are fused by taking the per-pixel maximum, in the same way as the pixel-point differences, to obtain the first pixel-block filtering result.
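The block-level branch — tile each image into 3×3 blocks, take the mean of each tile and its four edge-adjacent tiles, and keep the median of the five means — could look like this sketch. Non-overlapping tiles and edge replication at the borders are assumptions; the text does not say how the blocks are laid out or how borders are handled.

```python
import numpy as np

def tile_means(img, b=3):
    # mean of each non-overlapping b x b tile
    # (image sides assumed divisible by b)
    H, W = img.shape
    return img.reshape(H // b, b, W // b, b).mean(axis=(1, 3))

def block_filter(img, b=3):
    # median of a tile's mean and the means of its four
    # edge-adjacent tiles (above, below, left, right)
    m = tile_means(img, b)
    p = np.pad(m, 1, mode="edge")
    neighborhood = np.stack([
        p[1:-1, 1:-1],  # current tile
        p[:-2, 1:-1],   # tile above
        p[2:, 1:-1],    # tile below
        p[1:-1, :-2],   # tile to the left
        p[1:-1, 2:],    # tile to the right
    ])
    return np.median(neighborhood, axis=0)
```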
The first pixel-point filtering result and the first pixel-block filtering result are then compared and fused, again by taking the maximum: for each pixel, the larger of the two values is retained, generating the first noise feature map.
Using the grayscale images of the current image and of the two expanded reference original frames as input, the inter-frame differences capture ghost position information over a larger area, yielding a first noise feature map that describes the ghost position range and pixel-level ghost edge features. In other words, the first noise feature map describes ghost edges and positions from the perspective of the grayscale channel; it is a pixel-level ghost feature map.
In step S13, the current image and the reference original images are downsampled in the target chrominance channel, and mean filtering is applied to the downsampled results, so that a third noise feature map is constructed from the mean-filtered results.
In this example embodiment, referring to fig. 3, the step S13 may include:
Step S131: downsample the current image and the reference original images in a first chrominance channel and a second chrominance channel respectively, obtaining the corresponding downsampled images in each chrominance channel;
Step S132: apply mean filtering to each downsampled image of the first chrominance channel, then fuse and upsample the mean-filtered results to obtain the first-chrominance-channel noise feature result;
Step S133: apply mean filtering to each downsampled image of the second chrominance channel, then fuse and upsample the mean-filtered results to obtain the second-chrominance-channel noise feature result; and
Step S134: compare and fuse the first-chrominance-channel noise feature result and the second-chrominance-channel noise feature result to obtain the third noise feature map.
Specifically, the first and second chrominance channels may be the chrominance channels U and V. The U-channel and V-channel images of the current image, the first reference original image, and the second reference original image are obtained, and each is downsampled, yielding the U-channel and V-channel downsampled images of all three frames. The downsampling factor may be, for example, 4×4.
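Average pooling is one plausible reading of this 4×4 downsampling; the text gives the factor but not the method, so the pooling choice below is an assumption.

```python
import numpy as np

def downsample(channel, f=4):
    # reduce a chroma plane by a factor f per side via average
    # pooling (method assumed; sides assumed divisible by f)
    H, W = channel.shape
    return channel.reshape(H // f, f, W // f, f).mean(axis=(1, 3))
```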
Then, using the same computation as steps S121 to S124, the pixel-point mean filtering results are computed for the U-channel downsampled images of the current image, the first reference original image, and the second reference original image, and likewise for the V-channel downsampled images of the three frames under the V channel. Meanwhile, the pixel-block filtering results are computed for the U-channel downsampled images of the three frames under the U channel, and for the V-channel downsampled images of the three frames under the V channel.
For the V-channel data, the difference between the pixel-point mean filtering result of the current image's V-channel downsampled image and that of the first reference original image's V-channel downsampled image is computed as the first difference result, and the difference with respect to the second reference original image's V-channel downsampled image as the second difference result. The first and second difference results are compared and fused; specifically, fusion may take the maximum, that is, for each pixel the larger of the two values is retained. This yields the pixel-point mean filtering result of the V-channel downsampled image.
Meanwhile, the difference between the pixel-block filtering result of the current image's V-channel downsampled image and that of the first reference original image's is computed as the third difference result, and the difference with respect to the second reference original image's as the fourth difference result. The third and fourth difference results are fused by retaining the maximum, yielding the pixel-block filtering result of the V-channel downsampled image.
The pixel-point and pixel-block filtering results of the V-channel downsampled image are then compared and fused, retaining the maximum value at each pixel, to construct the V-channel downsampled feature map; this feature map is then upsampled, for example restored to the original size by interpolation, to construct the V-channel noise feature result.
Using the same calculation strategy as for the V channel, for the U-channel data, the difference between the pixel-point mean filtering result of the U-channel downsampled image of the current image and that of the first reference original image is calculated to obtain a fifth difference result; and the difference between the pixel-point mean filtering result of the U-channel downsampled image of the current image and that of the second reference original image is calculated to obtain a sixth difference result. The fifth difference result and the sixth difference result are compared and fused, adopting the approach of taking the maximum value, thereby obtaining the pixel-point mean filtering result of the U-channel downsampled image.
Meanwhile, the difference between the pixel-block mean filtering result of the U-channel downsampled image of the current image and that of the first reference original image is calculated to obtain a seventh difference result; and the difference between the pixel-block mean filtering result of the U-channel downsampled image of the current image and that of the second reference original image is calculated to obtain an eighth difference result. The seventh difference result and the eighth difference result are compared and fused by retaining the maximum value, yielding the pixel-block mean filtering result of the U-channel downsampled image.
The pixel-point mean filtering result and the pixel-block mean filtering result of the U-channel downsampled image are then compared and fused, retaining the maximum value at each pixel, to construct a U-channel downsampled feature map. The feature map is then upsampled, for example by interpolation, to restore the original size, thereby constructing the U-channel noise feature result.
The obtained U-channel noise feature result and V-channel noise feature result are then compared and fused by taking the maximum value, merging the two chrominance feature maps of the U channel and the V channel to construct a third noise feature map. Specifically, the formula may include:
mapUV[i,j]=max(mapU[i,j],mapV[i,j])
where mapU[i,j] is the U-channel noise feature result at the pixel with coordinates (i,j), and mapV[i,j] is the V-channel noise feature result at that pixel.
In the third noise feature map, the chrominance information of the UV channels is used to detect moving objects and calculate ghost positions; comparing and fusing the U channel and the V channel by taking the maximum value ensures that whichever of the two channels shows the larger motion is represented in the third noise feature map.
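The per-channel chroma pipeline described above (identical for U and V) and the final max fusion can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the downsampling factor, the "pixel-point" and "pixel-block" mean filtering window sizes, and the use of absolute differences are all assumptions, since those specifics are defined elsewhere in the specification.

```python
import numpy as np

def downsample(img, k):
    """Block-mean downsample by factor k (assumes dimensions divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def mean_filter(img, k):
    """Sliding-window mean filter with an odd k x k window (edge-replicated)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def channel_noise_feature(cur, ref1, ref2, ds=2, point_win=3, block_win=5):
    """Noise feature result for one chroma channel (U or V).
    point_win/block_win stand in for the pixel-point and pixel-block
    mean filtering windows (hypothetical sizes)."""
    cur_d, r1_d, r2_d = (downsample(x, ds) for x in (cur, ref1, ref2))
    # pixel-point mean filtering differences, fused by per-pixel maximum
    point = np.maximum(
        np.abs(mean_filter(cur_d, point_win) - mean_filter(r1_d, point_win)),
        np.abs(mean_filter(cur_d, point_win) - mean_filter(r2_d, point_win)))
    # pixel-block mean filtering differences, fused by per-pixel maximum
    block = np.maximum(
        np.abs(mean_filter(cur_d, block_win) - mean_filter(r1_d, block_win)),
        np.abs(mean_filter(cur_d, block_win) - mean_filter(r2_d, block_win)))
    fused = np.maximum(point, block)          # per-pixel max of point/block results
    return np.kron(fused, np.ones((ds, ds)))  # simple upsampling back to full size

# third noise feature map: mapUV[i,j] = max(mapU[i,j], mapV[i,j])
# map_uv = np.maximum(channel_noise_feature(u, u1, u2),
#                     channel_noise_feature(v, v1, v2))
```

Because every stage (inter-frame difference, point/block fusion, U/V fusion) keeps the per-pixel maximum, the channel with the larger motion always survives into the final map.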
In step S14, a noise level parameter corresponding to the current image is determined in combination with the first noise feature map and the brightness feature corresponding to the current image.
In this exemplary embodiment, for the current image, the luminance feature of each pixel may be extracted to obtain a luminance feature map, and the full-map average of the luminance feature map may be taken. For example, the luminance may be calculated from the RGB values of each pixel.
Meanwhile, for the first noise feature map, the full-map average may also be taken. The noise level parameter is then calculated from the full-map average of the luminance feature map and the full-map average of the first noise feature map, so that the degree of ghosting described from a global perspective can be obtained. Specifically, the formula may include:
ghostD=Adjust(AVE(Y))*AVE(map1)
where map1 represents the first noise feature map, Y represents the luminance feature map, AVE(·) denotes the full-map average, and Adjust(·) denotes a brightness-dependent adjustment function.
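A sketch of the ghostD computation follows. The patent only states that Adjust compensates for image brightness (darker frames need higher sensitivity, since even a small edge-noise mean can produce visible ghosting); the linear ramp used for `adjust` below is purely an illustrative assumption.

```python
import numpy as np

def adjust(mean_luma):
    """Hypothetical Adjust(): larger weight for darker frames.
    The exact mapping is not given in the specification; a linear
    ramp over the 8-bit luminance range is assumed here."""
    return np.interp(mean_luma, [0.0, 255.0], [2.0, 0.5])

def noise_level_parameter(luma_map, map1):
    """ghostD = Adjust(AVE(Y)) * AVE(map1), AVE being the full-map average."""
    return adjust(luma_map.mean()) * map1.mean()
```

With this shape of Adjust, the same average noise level yields a larger ghostD for a dark frame than for a bright one, matching the stated motivation.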
In step S15, a target noise feature map is determined based on the noise level parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
In this exemplary embodiment, with the noise level parameter, the first noise feature map and the third noise feature map obtained in the above steps, the first noise feature map and the third noise feature map are each multiplied by the noise level parameter, and the products are fused by taking the per-pixel maximum to obtain a ghost feature map. The ghost feature map is then dilated by taking local maxima, and smoothing is performed with a low-pass filter to obtain the target noise feature map. The target noise feature map guides the inter-frame fusion degree of TNR (temporal noise reduction) filtering, thereby reducing ghosting.
In some exemplary embodiments of the present disclosure, referring to fig. 4, the method described above may further include:
Step S21, obtaining an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous with the current image;
Step S22, respectively performing mean filtering processing on the image to be processed and the reference image, so as to construct a first noise feature map using the differences of corresponding pixel points of the image to be processed and the reference image after the mean filtering processing;
Step S23, respectively performing downsampling and mean filtering on the image to be processed and the reference image, and constructing a second noise feature map using the mean filtering processing result;
Step S24, downsampling the current image and the reference original image in a target chrominance channel, and performing mean filtering processing on the downsampling result, so as to construct a third noise feature map using the mean filtering processing result;
Step S25, determining a noise degree parameter corresponding to the current image in combination with the first noise feature map and the brightness feature corresponding to the current image;
Step S26, determining a target noise feature map based on the noise degree parameter in combination with the first noise feature map, the second noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
In this exemplary embodiment, the above method may further downsample the gray image and construct a corresponding noise feature map. Specifically, downsampling and mean filtering may be performed on the image to be processed and the reference image, and a second noise feature map may be constructed by using a result of the mean filtering, so as to determine a target noise feature map by combining the second noise feature map with the first noise feature map and the third noise feature map based on the noise degree parameter.
Specifically, referring to fig. 5, the step S23 may include:
Step S231, respectively downsampling the image to be processed and the reference image to obtain corresponding sampled images;
step S232, performing mean filtering processing on the obtained downsampled images respectively, so as to perform fusion and upsampling processing by using the mean filtering processing result, so as to obtain the second noise feature map.
Specifically, the image to be processed, the first reference image and the second reference image in gray-map format may each be downsampled to obtain corresponding downsampled images. For example, downsampling may be performed with a window size of 4×4.
The same calculation method as in steps S121 to S124 may be used: the pixel-point mean filtering calculation and the pixel-block mean filtering calculation are performed on the downsampled images of the image to be processed, the first reference image and the second reference image respectively, to obtain the pixel-point mean filtering results and the pixel-block mean filtering results of the three downsampled images. The difference between the pixel-point mean filtering result of the downsampled image of the image to be processed and that of the downsampled image of the first reference image is calculated to obtain a ninth difference result; meanwhile, the difference between the pixel-point mean filtering result of the downsampled image of the image to be processed and that of the downsampled image of the second reference image is calculated to obtain a tenth difference result. The ninth difference result and the tenth difference result are then compared and fused, retaining the maximum value, thereby obtaining a pixel-point mean filtering result.
Meanwhile, the difference between the pixel-block mean filtering result of the downsampled image of the image to be processed and that of the downsampled image of the first reference image is calculated to obtain an eleventh difference result; and the difference between the pixel-block mean filtering result of the downsampled image of the image to be processed and that of the downsampled image of the second reference image is calculated to obtain a twelfth difference result. The eleventh difference result and the twelfth difference result are then compared and fused, retaining the maximum value, thereby obtaining a pixel-block mean filtering result.
The pixel-point filtering result and the pixel-block filtering result of the downsampled image are then compared and fused by taking the maximum value, and the fused result is restored to the original size by nearest-neighbor upsampling to generate the second noise feature map. Downsampling the gray image yields a block-level ghost feature map that describes local block-level ghost information; upsampling then yields ghost position information that extends beyond the edges of the detected content.
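The block-level character of the second noise feature map can be illustrated compactly. The sketch below assumes a 4×4 block-mean downsample and absolute inter-frame differences, and elides the intermediate mean filtering of the downsampled images for brevity; the key property shown is that nearest-neighbor upsampling spreads a single-pixel motion over its whole 4×4 block, giving ghost positions wider than the detected edge.

```python
import numpy as np

def block_mean_downsample(img, k=4):
    """4x4 block-mean downsampling of a gray image (dims divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def second_noise_feature_map(cur, ref1, ref2, k=4):
    """Block-level ghost map: downsample, take inter-frame absolute
    differences against both references, keep the per-pixel maximum,
    then restore the size by nearest-neighbor upsampling."""
    c, r1, r2 = (block_mean_downsample(x, k) for x in (cur, ref1, ref2))
    fused = np.maximum(np.abs(c - r1), np.abs(c - r2))
    # nearest-neighbor upsampling back to the original size
    return np.repeat(np.repeat(fused, k, axis=0), k, axis=1)
```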
In this exemplary embodiment, in step S26, referring to fig. 6, specifically, it may include:
Step S261, performing product transformation fusion on the first noise feature map, the second noise feature map and the third noise feature map based on the noise degree parameter, so as to construct the preliminary noise map according to a screening result of a maximum value of a pixel point;
Step S262, sequentially performing expansion processing and smoothing processing on the preliminary noise map to obtain the target noise feature map.
Specifically, the first noise feature map, the second noise feature map, and the third noise feature map may be respectively subjected to product transform fusion according to the noise level parameter ghostD. The principle of multiplicative transformation fusion is to directly perform multiplication operation on corresponding pixel gray values on images with different spatial resolutions, so as to obtain new pixel gray values corresponding to the images. The calculation formula may include:
map1_new[i,j]=map1[i,j]*ghostD
map2_new[i,j]=map2[i,j]*ghostD
mapUV_new[i,j]=mapUV[i,j]*ghostD
where map1 represents the first noise feature map, map2 represents the second noise feature map, and mapUV represents the third noise feature map.
The three results are then compared and fused to obtain a preliminary noise map; the formula may include:
map[i,j]=max(map1_new[i,j],map2_new[i,j],mapUV_new[i,j])
Local-maximum dilation is then performed. Specifically, for each point of the preliminary noise map, the 3×3 block centered on that point is considered, and the maximum of its nine values is assigned to the point. Smoothing processing is then performed, thereby obtaining the target noise feature map.
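The whole fusion-dilation-smoothing step can be sketched as below. The 3×3 box filter used for smoothing is a stand-in assumption; the specification only requires a low-pass filter, not a particular kernel.

```python
import numpy as np

def target_noise_feature_map(map1, map2, map_uv, ghost_d):
    """Fuse the three noise maps scaled by ghostD with a per-pixel maximum,
    dilate with a 3x3 local maximum, then smooth with a 3x3 box filter
    (assumed low-pass kernel)."""
    fused = np.maximum.reduce([map1 * ghost_d, map2 * ghost_d, map_uv * ghost_d])
    h, w = fused.shape
    # 3x3 local-maximum dilation (edge-replicated padding)
    p = np.pad(fused, 1, mode="edge")
    dilated = np.maximum.reduce([p[dy:dy + h, dx:dx + w]
                                 for dy in range(3) for dx in range(3)])
    # 3x3 box-filter smoothing
    p = np.pad(dilated, 1, mode="edge")
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
```

Dilation widens each detected ghost region so that the subsequent fusion guidance covers a margin around the moving edge, and the smoothing removes hard transitions in the guidance weights.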
In this example embodiment, after the target noise feature map is acquired, the target noise feature map may be used to guide time domain noise reduction (TNR). Specifically, the current image and the reference original image may be first subjected to image fusion processing to obtain an initial fusion image; and guiding the fusion degree between the current image and the initial fusion image by using the target noise characteristic image and performing image fusion processing to remove the ghost noise of the current image.
For example, for the current image and the two frames of reference original images, image fusion processing may be performed with weights of 1:1:1 to obtain a preliminary fusion image. The target noise feature map map is then used to guide the degree of fusion between the preliminary fusion image and the current image. The formula may include:
Out=(255-map)*merge+map*ImgYCurr
where merge represents the preliminary fusion image and ImgYCurr represents the gray-scale image of the current image.
Where the value of map is larger, the probability that the current pixel produces ghosting is higher; the inter-frame fusion degree is accordingly reduced, the fusion proportion is lowered, and the output leans more toward the current frame itself, thereby reducing the degree of ghosting.
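The guided blend Out=(255-map)*merge+map*ImgYCurr can be sketched as below. Note one assumption: the weights (255-map) and map are normalized by 255 here so that the output stays in the original intensity range; the formula as stated gives the weights without the normalization factor.

```python
import numpy as np

def guided_temporal_fusion(cur, ref1, ref2, noise_map):
    """TNR output guided by the target noise feature map:
    Out = ((255 - map) * merge + map * ImgYCurr) / 255
    with merge the 1:1:1 preliminary fusion of the three frames.
    The /255 normalization is an assumption (map taken as 0..255)."""
    merge = (cur + ref1 + ref2) / 3.0  # 1:1:1 preliminary fusion image
    return ((255.0 - noise_map) * merge + noise_map * cur) / 255.0
```

At map=0 the output is the full temporal average (maximum denoising); at map=255 the current frame passes through untouched (no ghosting from fusion).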
In this example embodiment, after the current image is acquired, any one or more of corresponding image type information, scene type information, and resolution information may be further identified, so as to configure a downsampling parameter and/or a size of a pixel block according to the acquired information. For example, when the current image is a night scene, a daytime scene, a portrait or a static object, different sampling windows can be configured to adapt to different image contents due to different background contents and different color richness and brightness of the image, and the processing speed of the image can be improved.
Based on the above, in other exemplary embodiments of the present disclosure, the noise degree parameter may also be calculated using the first noise feature map and the second noise feature map; alternatively, the second noise feature map and the third noise feature map may be used to calculate the noise degree parameter.
When the noise feature maps are calculated, the calculations may be performed sequentially according to the above steps; alternatively, multiple processes may be created so that the noise feature maps are computed simultaneously, thereby improving calculation efficiency.
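Since the three noise feature maps are computed independently from the same input frames, dispatching them concurrently is straightforward. The sketch below uses a thread pool (NumPy releases the GIL for much of its array work); a process pool would be the literal reading of "multiple processes" and works the same way through `concurrent.futures`.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def compute_maps_in_parallel(cur, ref1, ref2, map_fns):
    """Run each noise-feature-map function concurrently on the same frames
    and return the results in submission order."""
    with ThreadPoolExecutor(max_workers=len(map_fns)) as pool:
        futures = [pool.submit(fn, cur, ref1, ref2) for fn in map_fns]
        return [f.result() for f in futures]
```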
The image processing method provided by the embodiments of the present disclosure can be applied to removing the ghosts generated by fusion in video denoising scenarios. Noise feature maps are constructed from inter-frame differences to describe the ghost map: the first noise feature map provides a pixel-level ghost map based on the gray channel; the second noise feature map provides a local block-level ghost map based on the gray channel; the third noise feature map provides a UV-channel ghost map based on chrominance information for detecting moving objects; and the noise degree parameter describes the global degree of ghosting in the gray channel. Moving objects in the consecutive images are located through gray-level and chrominance information, their positions are determined, and ghost information is produced; ghost positioning from gray-level difference information is performed at three scales (global, local and pixel level), so the ghost positions are calculated more comprehensively. When the noise degree parameter is calculated, the influence of image brightness on the sensitivity of the ghost map is taken into account: when the image brightness is very low, ghosting can arise even when the edge-noise mean is very small, so the fusion proportion is adaptively adjusted in combination with the image brightness mean. Each noise feature map is combined with the ghost degree parameter, and gray-level information is combined with chrominance information, to obtain a ghost map with accurate ghost position information; this position information is then referenced in the temporal denoising process to eliminate the ghosts produced by fusion.
This scheme computes inter-frame gray-level differences at three scales (global, local and pixel level) and simultaneously calculates inter-frame chrominance differences. By combining the three-scale gray-level information with the chrominance information, the positions of the ghosts produced by temporal fusion are accurately located; by limiting the fusion degree, the ghosting problem is addressed at its source.
It is noted that the above-described figures are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, referring to fig. 7, there is also provided an image processing apparatus 70 in the embodiment of the present example, including: an image acquisition module 701, a first noise feature map acquisition module 702, a third noise feature map acquisition module 703, a noise level parameter acquisition module 704, and a denoising processing module 705. Wherein,
The image acquisition module 701 may be configured to acquire an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous with the current image.
The first noise feature map obtaining module 702 may be configured to perform mean filtering processing on the to-be-processed image and the reference image, so as to construct a first noise feature map by using differences between corresponding pixels of the to-be-processed image and the reference image after the mean filtering processing.
The third noise feature map obtaining module 703 may be configured to downsample the current image and the reference original image in the target chromaticity channel, and perform an average filtering process on the downsampled result, so as to construct a third noise feature map using the average filtering process result.
The noise level parameter obtaining module 704 may be configured to determine a noise level parameter corresponding to the current image in combination with the first noise feature map and the brightness feature corresponding to the current image.
The denoising processing module 705 may be configured to determine a target noise feature map based on the noise level parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
In one example of the present disclosure, the apparatus 70 may further include: a second noise profile acquisition module (not shown).
The second noise feature map obtaining module may be configured to perform downsampling and mean filtering processing on the image to be processed and the reference image, and construct a second noise feature map by using a result of the mean filtering processing, so as to determine a target noise feature map by combining the second noise feature map, the first noise feature map, and the third noise feature map based on the noise degree parameter.
In one example of the present disclosure, the reference original image is a previous two-frame image that is continuous with the current image; the reference image is two frames of gray images corresponding to two frames of reference original images, and comprises a first reference image and a second reference image.
In one example of the disclosure, the first noise feature map obtaining module 702 may be configured to calculate a corresponding pixel point average filtering result and a pixel block average filtering result according to a preset window size for the image to be processed and the first reference image and the second reference image, respectively;
Respectively calculating pixel point average value filtering result differences between the image to be processed and the first reference image and the second reference image, and fusing the pixel point average value filtering result differences based on the pixel point average value filtering result differences to determine a first pixel point filtering result; and
Respectively calculating pixel block mean value filtering result differences between the image to be processed and the first reference image and the second reference image, and fusing the pixel block mean value filtering result differences based on the pixel block mean value filtering result differences to determine a first pixel block filtering result;
And fusing the first pixel point filtering result and the first pixel block filtering result to determine the first noise characteristic diagram.
In one example of the disclosure, the third noise feature map obtaining module 703 may be configured to perform downsampling processing on the current image and the reference raw image in a first chrominance channel and a second chrominance channel respectively to obtain corresponding downsampled images in each chrominance channel;
Respectively carrying out mean value filtering treatment on each downsampled image corresponding to the first chrominance channel, and carrying out fusion and upsampling treatment by utilizing the mean value filtering treatment result to obtain a noise characteristic result of the first chrominance channel; and
Respectively carrying out mean value filtering treatment on each downsampled image corresponding to the second chromaticity channel, and carrying out fusion and upsampling treatment by utilizing the mean value filtering treatment result to obtain a noise characteristic result of the second chromaticity channel;
and comparing and fusing the first chrominance channel noise characteristic result and the second chrominance channel noise characteristic result to obtain the third noise characteristic diagram.
In one example of the disclosure, the second noise feature map obtaining module may be configured to downsample the image to be processed and the reference image respectively to obtain corresponding downsampled images;
And respectively carrying out mean value filtering processing on the obtained downsampled images so as to carry out fusion and upsampling processing by utilizing the mean value filtering processing result, thereby obtaining the second noise characteristic diagram.
In one example of the present disclosure, the denoising processing module 705 may be further configured to perform product transform fusion on the first noise feature map, the second noise feature map, and the third noise feature map based on the noise level parameter, so as to construct the preliminary noise map according to a filtering result of a pixel maximum value;
and sequentially performing expansion processing and smoothing processing on the preliminary noise map to obtain the target noise characteristic map.
In one example of the disclosure, the denoising processing module 705 may be configured to perform image fusion processing on the current image and the reference original image, to obtain an initial fused image;
And guiding the fusion degree between the current image and the initial fusion image by using the target noise characteristic image and performing image fusion processing to remove the ghost noise of the current image.
In one example of the present disclosure, the apparatus 70 may further include a parameter configuration module. The parameter configuration module may be configured to acquire any one or more of image type information, scene type information and resolution information of the current image, so as to configure a downsampling parameter according to the acquired information.
The specific details of each module in the above image processing apparatus have been described in detail in the corresponding image processing method, so that the details are not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Fig. 8 shows a schematic diagram of an electronic device suitable for use in implementing embodiments of the invention.
It should be noted that the electronic device 500 shown in fig. 8 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 8, the electronic apparatus 500 includes a central processing unit (Central Processing Unit, CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 502 or a program loaded from a storage portion 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data required for system operation. The CPU 501, the ROM 502 and the RAM 503 are connected to each other by a bus 504. An Input/Output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN (Local Area Network) card, a modem or the like. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the Central Processing Unit (CPU) 501, the various functions defined in the system of the present application are performed.
Specifically, the electronic device may be an intelligent mobile terminal device such as a mobile phone, a tablet computer or a notebook computer. Or the electronic device may be an intelligent terminal device such as a desktop computer.
It should be noted that the computer readable medium shown in the embodiments of the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
It should be noted that, as another aspect, the present application also provides a computer readable medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by such an electronic device, cause the electronic device to implement the methods described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1.
Furthermore, the above-described drawings are only schematic illustrations of the processes included in the method according to the exemplary embodiments of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the drawings do not indicate or limit the temporal order of these processes. In addition, it will also be readily understood that these processes may be performed synchronously or asynchronously, for example, in a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image processing method, comprising:
acquiring an image to be processed and a corresponding reference image; wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
performing mean filtering processing on the image to be processed and the reference image respectively, so as to construct a first noise feature map using the differences between corresponding pixel points of the mean-filtered image to be processed and the mean-filtered reference image;
downsampling the current image and the reference original image in a target chrominance channel, and performing mean filtering processing on the downsampling result, so as to construct a third noise feature map using the mean filtering result;
determining a noise degree parameter corresponding to the current image by combining the first noise feature map with a luminance feature corresponding to the current image; and
determining a target noise feature map based on the noise degree parameter by combining the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
2. The image processing method according to claim 1, characterized in that the method further comprises:
performing downsampling and mean filtering processing on the image to be processed and the reference image respectively, and constructing a second noise feature map using the mean filtering result, so as to determine the target noise feature map based on the noise degree parameter by combining the second noise feature map with the first noise feature map and the third noise feature map.
3. The image processing method according to claim 1 or 2, wherein the reference original images are two preceding frames consecutive with the current image; the reference images are two grayscale images corresponding to the two reference original images, and comprise a first reference image and a second reference image.
4. The image processing method according to claim 3, wherein performing mean filtering processing on the image to be processed and the reference image respectively, so as to construct the first noise feature map using the differences between corresponding pixel points of the mean-filtered images, comprises:
calculating, for each of the image to be processed, the first reference image, and the second reference image, a corresponding pixel-point mean filtering result and pixel-block mean filtering result according to a preset window size;
calculating the pixel-point mean filtering result differences between the image to be processed and each of the first reference image and the second reference image, and fusing these differences to determine a first pixel-point filtering result;
calculating the pixel-block mean filtering result differences between the image to be processed and each of the first reference image and the second reference image, and fusing these differences to determine a first pixel-block filtering result; and
fusing the first pixel-point filtering result and the first pixel-block filtering result to determine the first noise feature map.
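The two-branch construction in claim 4 can be illustrated, outside the claims, with a small numpy sketch. This is not the patented implementation: the window sizes, the min-based fusion of the two reference differences, and the max-based fusion of the pixel-point and pixel-block branches are all assumptions chosen for illustration.

```python
import numpy as np

def mean_filter(img, k=3):
    """Box (mean) filter over a k x k window, edge-padded."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def first_noise_map(cur, ref1, ref2, point_k=3, block_k=7):
    """Sketch of claim 4: fuse pixel-point and pixel-block
    mean-filter differences against two reference frames."""
    # pixel-point branch: small-window mean filtering, then per-pixel differences
    fc, f1, f2 = (mean_filter(x, point_k) for x in (cur, ref1, ref2))
    point = np.minimum(np.abs(fc - f1), np.abs(fc - f2))  # fusion by min (assumption)
    # pixel-block branch: larger-window mean filtering approximates block statistics
    bc, b1, b2 = (mean_filter(x, block_k) for x in (cur, ref1, ref2))
    block = np.minimum(np.abs(bc - b1), np.abs(bc - b2))
    # fuse the two branches; max keeps the stronger noise response (assumption)
    return np.maximum(point, block)
```

With three identical frames the map is all zeros; content that moves between frames produces large values, marking the ghost-prone pixels.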
5. The image processing method according to claim 1, wherein downsampling the current image and the reference original image in the target chrominance channel, and performing mean filtering processing on the downsampling result so as to construct the third noise feature map using the mean filtering result, comprises:
performing downsampling processing on the current image and the reference original image in a first chrominance channel and a second chrominance channel respectively, to obtain corresponding downsampled images in each chrominance channel;
performing mean filtering processing on each downsampled image corresponding to the first chrominance channel, and performing fusion and upsampling processing on the mean filtering results, to obtain a first-chrominance-channel noise feature result;
performing mean filtering processing on each downsampled image corresponding to the second chrominance channel, and performing fusion and upsampling processing on the mean filtering results, to obtain a second-chrominance-channel noise feature result; and
comparing and fusing the first-chrominance-channel noise feature result and the second-chrominance-channel noise feature result to obtain the third noise feature map.
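The per-channel pipeline of claim 5 can be sketched as follows (illustrative only, not part of the claims). The strided downsampling, min-based fusion over references, nearest-neighbour upsampling, and max-based comparison of the two chrominance channels are all assumptions for the sketch.

```python
import numpy as np

def box_filter(img, k=3):
    """k x k mean filter, edge-padded."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(k) for dx in range(k)) / (k * k)

def channel_noise(cur, refs, factor=2, k=3):
    """One chrominance channel (claim 5 sketch): downsample, mean-filter,
    difference against each reference, fuse, then upsample back."""
    ds = lambda x: x[::factor, ::factor]      # naive strided downsampling (assumption)
    fc = box_filter(ds(cur), k)
    diffs = [np.abs(fc - box_filter(ds(r), k)) for r in refs]
    fused = np.minimum.reduce(diffs)          # fusion over references by min (assumption)
    # nearest-neighbour upsampling back to the original resolution
    return np.kron(fused, np.ones((factor, factor)))[:cur.shape[0], :cur.shape[1]]

def third_noise_map(cur_u, refs_u, cur_v, refs_v):
    """Compare-and-fuse the two chrominance-channel results by taking
    the per-pixel maximum (assumption)."""
    return np.maximum(channel_noise(cur_u, refs_u), channel_noise(cur_v, refs_v))
```

Working at reduced chroma resolution keeps the filtering cheap while still localising colour ghosting, which is why the result is upsampled back only at the end.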
6. The image processing method according to claim 2, wherein performing downsampling and mean filtering processing on the image to be processed and the reference image respectively, and constructing the second noise feature map using the mean filtering result, comprises:
downsampling the image to be processed and the reference image respectively to obtain corresponding downsampled images; and
performing mean filtering processing on each downsampled image, and performing fusion and upsampling processing on the mean filtering results, to obtain the second noise feature map.
7. The image processing method according to claim 2, wherein determining the target noise feature map based on the noise degree parameter by combining the second noise feature map with the first noise feature map and the third noise feature map comprises:
performing product transformation and fusion on the first noise feature map, the second noise feature map, and the third noise feature map based on the noise degree parameter, so as to construct a preliminary noise map by screening the maximum value at each pixel point; and
performing dilation processing and smoothing processing on the preliminary noise map in sequence to obtain the target noise feature map.
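One way to read the max-screening, dilation, and smoothing steps of claim 7 is sketched below (illustrative only, not part of the claims). Treating the "product transformation" as a simple scaling by the noise degree parameter, and the 3 x 3 window sizes, are assumptions.

```python
import numpy as np

def _window_op(img, k, op):
    """Apply the pairwise reduction `op` (e.g. np.maximum, np.add)
    over a k x k sliding window, edge-padded."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    views = [p[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)]
    out = views[0].copy()
    for v in views[1:]:
        out = op(out, v)
    return out

def target_noise_map(n1, n2, n3, noise_level, k=3):
    """Claim 7 sketch: scale the three noise feature maps by the noise degree
    parameter (the 'product transformation', an assumption), keep the per-pixel
    maximum as the preliminary noise map, then dilate and smooth it."""
    prelim = np.maximum.reduce([noise_level * n1, noise_level * n2, noise_level * n3])
    dilated = _window_op(prelim, k, np.maximum)          # morphological dilation
    smoothed = _window_op(dilated, k, np.add) / (k * k)  # mean smoothing
    return smoothed
```

The dilation widens each detection so the subsequent fusion mask fully covers moving edges, and the smoothing avoids hard transitions in the final blend.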
8. The image processing method according to claim 1, wherein removing ghost noise from the current image according to the target noise feature map comprises:
performing image fusion processing on the current image and the reference original image to obtain an initial fused image; and
using the target noise feature map to guide the degree of fusion between the current image and the initial fused image, and performing image fusion processing, so as to remove the ghost noise from the current image.
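The guided-fusion step of claim 8 can be sketched as a per-pixel blend (illustrative only, not part of the claims). The plain temporal averaging for the initial fusion and the max-based normalisation of the noise map are assumptions; the key idea is that where the map is high, the output leans on the current frame so moving content is not ghosted.

```python
import numpy as np

def remove_ghosts(cur, refs, target_map):
    """Claim 8 sketch: temporally fuse frames, then let the target noise map
    steer each pixel back toward the current frame where motion/noise is high."""
    # initial fusion: plain temporal average of current and reference frames (assumption)
    initial = np.mean([cur] + list(refs), axis=0)
    # normalise the noise map to [0, 1]; 1 = strong motion -> trust the current frame
    w = np.clip(target_map / (target_map.max() + 1e-8), 0.0, 1.0)
    # per-pixel guided blend between the current frame and the fused frame
    return w * cur + (1.0 - w) * initial
```

With an all-zero map the output is the plain average (maximum temporal denoising); where the map saturates, the output reproduces the current frame and the ghost disappears.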
9. The image processing method according to claim 1 or 2, characterized in that the method further comprises:
acquiring any one or more of image type information, scene type information, and resolution information of the current image, and configuring a downsampling parameter according to the acquired information.
10. An image processing apparatus, comprising:
an image acquisition module, configured to acquire an image to be processed and a corresponding reference image; wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
a first noise feature map acquisition module, configured to perform mean filtering processing on the image to be processed and the reference image respectively, so as to construct a first noise feature map using the differences between corresponding pixel points of the mean-filtered image to be processed and the mean-filtered reference image;
a third noise feature map acquisition module, configured to downsample the current image and the reference original image in a target chrominance channel, and perform mean filtering processing on the downsampling result, so as to construct a third noise feature map using the mean filtering result;
a noise degree parameter acquisition module, configured to determine a noise degree parameter corresponding to the current image by combining the first noise feature map with a luminance feature corresponding to the current image; and
a denoising processing module, configured to determine a target noise feature map based on the noise degree parameter by combining the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
11. A computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
12. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1 to 9.
CN202110720734.7A 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment Active CN113344820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110720734.7A CN113344820B (en) 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110720734.7A CN113344820B (en) 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113344820A CN113344820A (en) 2021-09-03
CN113344820B true CN113344820B (en) 2024-05-10

Family

ID=77479213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110720734.7A Active CN113344820B (en) 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113344820B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781375B (en) * 2021-09-10 2023-12-08 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN114697468B (en) * 2022-02-16 2024-04-16 瑞芯微电子股份有限公司 Image signal processing method and device and electronic equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1729936A (en) * 2004-08-06 2006-02-08 Toshiba Corp Method for helical windmill artifact reduction with noise restoration for helical multislice CT
CN102509269A (en) * 2011-11-10 2012-06-20 重庆工业职业技术学院 Image denoising method combined with curvelet and based on image sub-block similarity
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN111031256A (en) * 2019-11-15 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111127347A (en) * 2019-12-09 2020-05-08 Oppo广东移动通信有限公司 Noise reduction method, terminal and storage medium
CN111192204A (en) * 2019-11-22 2020-05-22 晏子俊 Image enhancement method, system and computer readable storage medium
CN111311498A (en) * 2018-12-11 2020-06-19 展讯通信(上海)有限公司 Image ghost eliminating method and device, storage medium and terminal
CN111383182A (en) * 2018-12-28 2020-07-07 展讯通信(上海)有限公司 Image denoising method and device and computer readable storage medium
CN112513936A (en) * 2019-11-29 2021-03-16 深圳市大疆创新科技有限公司 Image processing method, device and storage medium
CN112785534A (en) * 2020-09-30 2021-05-11 广东电网有限责任公司广州供电局 Ghost-removing multi-exposure image fusion method in dynamic scene
CN112991203A (en) * 2021-03-08 2021-06-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2021217643A1 (en) * 2020-04-30 2021-11-04 深圳市大疆创新科技有限公司 Method and device for infrared image processing, and movable platform

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI423666B (en) * 2008-12-31 2014-01-11 Altek Corp Image elimination method for image sequence
US11024006B2 (en) * 2019-04-22 2021-06-01 Apple Inc. Tagging clipped pixels for pyramid processing in image signal processor
GB201908517D0 (en) * 2019-06-13 2019-07-31 Spectral Edge Ltd 3D digital image noise reduction system and method
US11418766B2 (en) * 2019-12-17 2022-08-16 Samsung Electronics Co., Ltd. Apparatus and method for chroma processing for multi-frame fusion

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1729936A (en) * 2004-08-06 2006-02-08 Toshiba Corp Method for helical windmill artifact reduction with noise restoration for helical multislice CT
CN102509269A (en) * 2011-11-10 2012-06-20 重庆工业职业技术学院 Image denoising method combined with curvelet and based on image sub-block similarity
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN111311498A (en) * 2018-12-11 2020-06-19 展讯通信(上海)有限公司 Image ghost eliminating method and device, storage medium and terminal
CN111383182A (en) * 2018-12-28 2020-07-07 展讯通信(上海)有限公司 Image denoising method and device and computer readable storage medium
CN111031256A (en) * 2019-11-15 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111192204A (en) * 2019-11-22 2020-05-22 Yan Zijun Image enhancement method, system and computer readable storage medium
CN112513936A (en) * 2019-11-29 2021-03-16 深圳市大疆创新科技有限公司 Image processing method, device and storage medium
WO2021102913A1 (en) * 2019-11-29 2021-06-03 深圳市大疆创新科技有限公司 Image processing method and device, and storage medium
CN111127347A (en) * 2019-12-09 2020-05-08 Oppo广东移动通信有限公司 Noise reduction method, terminal and storage medium
WO2021217643A1 (en) * 2020-04-30 2021-11-04 深圳市大疆创新科技有限公司 Method and device for infrared image processing, and movable platform
CN112785534A (en) * 2020-09-30 2021-05-11 广东电网有限责任公司广州供电局 Ghost-removing multi-exposure image fusion method in dynamic scene
CN112991203A (en) * 2021-03-08 2021-06-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ademola E. Ilesanmi et al. Methods for image denoising using convolutional neural network: a review. Complex & Intelligent Systems. 2021, Vol. 7, pp. 2179-2198. *
Multi-exposure image fusion with ghost elimination based on guided filtering; An Shiquan et al.; Computer Engineering and Design; Nov. 30, 2020; Vol. 41, No. 11; pp. 3154-3160 *

Also Published As

Publication number Publication date
CN113344820A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN108694705B (en) Multi-frame image registration and fusion denoising method
KR101756173B1 (en) Image dehazing system by modifying the lower-bound of transmission rate and method therefor
EP3509034B1 (en) Image filtering based on image gradients
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
US9202263B2 (en) System and method for spatio video image enhancement
US8279345B2 (en) System and method for random noise estimation in a sequence of images
WO2018082185A1 (en) Image processing method and device
CN113344820B (en) Image processing method and device, computer readable medium and electronic equipment
US20190089869A1 (en) Single Image Haze Removal
CN108174057B (en) Method and device for rapidly reducing noise of picture by utilizing video image inter-frame difference
US20190068891A1 (en) Method and apparatus for rapid improvement of smog/low-light-level image using mapping table
WO2023273868A1 (en) Image denoising method and apparatus, terminal, and storage medium
US20160142593A1 (en) Method for tone-mapping a video sequence
CN116823628A (en) Image processing method and image processing device
CN108885790B (en) Processing images based on generated motion data
CN111311498B (en) Image ghost eliminating method and device, storage medium and terminal
CN111833262A (en) Image noise reduction method and device and electronic equipment
CN115358962B (en) End-to-end visual odometer method and device
KR101535630B1 (en) Apparatus for enhancing the brightness of night image using brightness conversion model
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN115239653A (en) Multi-split-screen-supporting black screen detection method and device, electronic equipment and readable storage medium
CN113256785B (en) Image processing method, apparatus, device and medium
Wang et al. An airlight estimation method for image dehazing based on gray projection
CN112017128A (en) Image self-adaptive defogging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant