CN113344820A - Image processing method and device, computer readable medium and electronic equipment - Google Patents


Publication number
CN113344820A
CN113344820A (application CN202110720734.7A)
Authority
CN
China
Prior art keywords
image
noise
noise characteristic
mean value
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110720734.7A
Other languages
Chinese (zh)
Inventor
王舒瑶 (Wang Shuyao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110720734.7A priority Critical patent/CN113344820A/en
Publication of CN113344820A publication Critical patent/CN113344820A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T5/90
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer-readable medium, and an electronic device. The method includes: acquiring an image to be processed and a corresponding reference image; performing mean filtering on the image to be processed and the reference image respectively, and constructing a first noise characteristic map from the differences between corresponding pixels of the two mean-filtered images; down-sampling the current image and the reference original image in a target chrominance channel, performing mean filtering on the down-sampled results, and constructing a third noise characteristic map from the mean-filtering results; determining a noise degree parameter for the current image by combining the first noise characteristic map with the brightness feature of the current image; and determining a target noise characteristic map by combining the first and third noise characteristic maps based on the noise degree parameter, so that ghost noise can be removed from the current image according to the target noise characteristic map. The method can effectively remove ghosting.

Description

Image processing method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device.
Background
During the capture of an image or video, "ghosting" may appear in the captured result due to ambient light, the captured scene, or the hardware device. In general, ghosting refers to the phenomenon in which a string of faint light spots appears in the image after strong light enters the lens.
Prior-art de-ghosting schemes include, for example, removing ghosts in high-dynamic-range images through multi-exposure fusion control; or using foreground/background separation to dynamically break up the connected foreground image of a moving object, together with a background-update strategy based on spatial similarity, to eliminate ghosting in moving-object detection. Neither scheme is suitable for eliminating the ghosting generated during video denoising.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device, which can effectively remove the ghosting generated during video denoising.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image;
respectively carrying out mean value filtering processing on the image to be processed and the reference image so as to construct a first noise characteristic diagram by utilizing the difference value of corresponding pixel points of the image to be processed and the reference image after the mean value filtering processing; and
down-sampling the current image and the reference original image in a target chrominance channel, and performing mean value filtering processing on down-sampling results to construct a third noise characteristic diagram by using the mean value filtering processing results;
determining a noise degree parameter corresponding to the current image by combining the first noise feature map and the brightness feature corresponding to the current image;
and determining a target noise characteristic map by combining the first noise characteristic map and the third noise characteristic map based on the noise degree parameter, so as to remove ghost noise from the current image according to the target noise characteristic map.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising:
the image acquisition module is used for acquiring an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image;
the first noise characteristic map acquisition module is used for respectively carrying out mean value filtering processing on the image to be processed and the reference image so as to construct a first noise characteristic map by utilizing the difference value of corresponding pixel points of the image to be processed and the reference image after the mean value filtering processing; and
the third noise characteristic diagram acquisition module is used for performing down-sampling on the current image and the reference original image in a target chrominance channel and performing mean value filtering processing on down-sampling results so as to construct a third noise characteristic diagram by using the mean value filtering processing results;
a noise degree parameter obtaining module, configured to determine a noise degree parameter corresponding to the current image by combining the first noise feature map and the brightness feature corresponding to the current image;
and the denoising processing module is used for determining a target noise characteristic map by combining the first noise characteristic map and the third noise characteristic map based on the noise degree parameter, so as to remove ghost noise from the current image according to the target noise characteristic map.
According to a third aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the image processing method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above.
The image processing method provided by the embodiment of the disclosure performs pixel-point filtering and pixel-block filtering on the grayscale images corresponding to the current image and the reference image, and constructs a first noise characteristic map from the differences of the filtering results; meanwhile, the current image and the reference original image are down-sampled in a specified target chrominance channel, and a third noise characteristic map is constructed from the mean-filtering result; a noise degree parameter is constructed from the first noise characteristic map and the brightness feature of the current image; the noise degree parameter then guides the fusion of the first and third noise characteristic maps into a target noise characteristic map, which determines where ghosting is generated; the target noise characteristic map is used to limit the degree of fusion between the current image and the reference image, fundamentally solving the problem of ghost noise in each image frame of the video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow diagram of an image processing method in an exemplary embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method of constructing a first noise signature graph in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of constructing a third noise signature graph in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of another image processing method in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of a method of constructing a second noise signature in an exemplary embodiment of the disclosure;
FIG. 6 is a schematic diagram schematically illustrating a method of constructing a target noise signature in an exemplary embodiment of the disclosure;
fig. 7 schematically illustrates a composition diagram of an image processing apparatus in an exemplary embodiment of the present disclosure;
fig. 8 schematically illustrates a structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, existing image de-ghosting schemes are designed for the ghosting problem that arises during high-dynamic-range image fusion. In one existing scheme, adjacent-frame motion regions are detected through image registration, a ghost region is obtained through edge-contour detection, and high-dynamic-range image ghosting is removed through multi-exposure fusion control. However, this scheme mainly targets the ghosting generated during multi-exposure fusion of high-dynamic-range images; it is designed around multi-exposure image characteristics and eliminates ghosting within the fusion process, so it is not suitable for eliminating the ghosting generated by video denoising and fusion. In another existing scheme, foreground/background separation is used to dynamically break up the connected foreground image of a moving object, and a background-update strategy based on spatial similarity is used to eliminate ghosting in moving-object detection. However, this scheme is designed mainly for moving-object detection and is not suitable for the ghosting problem encountered in video temporal-denoising algorithms: ghosts produced during denoising cannot be eliminated by background separation and background updating.
In view of the above shortcomings of the prior art, the present exemplary embodiment provides an image processing method, which can be applied to the ghosting problem encountered during video temporal denoising. Referring to fig. 1, the image processing method may include the following steps:
s11, acquiring the image to be processed and the corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image;
s12, performing mean filtering processing on the image to be processed and the reference image respectively, so as to construct a first noise characteristic map by using the difference values of corresponding pixel points of the image to be processed and the reference image after the mean filtering processing; and
s13, down-sampling the current image and the reference original image in a target chrominance channel, and performing mean value filtering processing on down-sampling results to construct a third noise characteristic map by using the mean value filtering processing results;
s14, determining a noise degree parameter corresponding to the current image by combining the first noise feature map and the brightness feature corresponding to the current image;
and S15, determining a target noise feature map by combining the first noise feature map and the third noise feature map based on the noise degree parameter, so as to remove ghost noise from the current image according to the target noise feature map.
In the image processing method provided by the present exemplary embodiment, a first noise feature map based on image gray scale features is constructed in an inter-frame difference manner; meanwhile, a third noise characteristic map is constructed for the current image and the reference image based on the chrominance information of the images; constructing a noise degree parameter through the first noise feature map and the brightness feature of the current image; therefore, the first noise characteristic diagram and the third noise characteristic diagram can be guided by the noise degree parameter to carry out image fusion to obtain a target noise characteristic diagram, and the position where the ghost is generated is determined; the target noise characteristic graph is used for limiting the fusion degree of the current image and the reference image, and the problem of ghost noise in each image frame in the video is solved fundamentally.
Hereinafter, each step of the image processing method in the present exemplary embodiment will be described in more detail with reference to the drawings and examples.
In this exemplary embodiment, for example, the above method may be applied to a server, and a user may upload video data, or consecutive image frame data of a video after data decomposition, to the server through a terminal device, so that the server performs calculation in response to receiving the video data or the image data. Alternatively, the method may also be applied to an intelligent terminal device having the same computing capability as the server, for example, an intelligent terminal such as a mobile phone, a tablet, or a computer. The calculation may be initiated by the user inputting video data or image data comprising a plurality of consecutive frames.
In step S11, acquiring an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image.
In this exemplary embodiment, taking execution at the server side as an example, when the user inputs video data, the video data may be split into a sequence of consecutive image frames, and each frame is processed in turn as the current image. Meanwhile, the previous one or two frames consecutive to the current image may be used as the reference original image. When the current image is the first image of the frame sequence, the current image itself may be configured as the reference original image. After the current frame and its reference original image are selected, grayscale processing is performed on both to obtain the corresponding grayscale images; the grayscale image of the current image is configured as the image to be processed, and the grayscale image of the reference original image is configured as the reference image.
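A minimal sketch of this frame-selection and grayscale step follows. The BT.601 luma weights and the helper names are our assumptions; the patent does not specify the exact grayscale conversion:

```python
import numpy as np

def to_grayscale(frame_rgb):
    # Convert an H x W x 3 RGB frame to grayscale using the common
    # ITU-R BT.601 luma weights (an assumption; the patent only says
    # "gray processing" without fixing the conversion).
    return frame_rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])

def pick_frames(frames, idx):
    # Current frame plus the one or two preceding frames as reference
    # originals; at the start of the sequence the current frame stands
    # in for the missing references, as the text describes.
    curr = frames[idx]
    ref0 = frames[idx - 1] if idx >= 1 else curr
    ref1 = frames[idx - 2] if idx >= 2 else curr
    return curr, ref0, ref1
```

For example, `pick_frames(frames, 0)` returns the first frame three times, matching the text's fallback for the first image of the sequence.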
In this exemplary embodiment, taking the reference original image as two consecutive frames of images that precede the current image as an example, the reference image is two frames of grayscale images corresponding to the two frames of reference original images, and includes a first reference image and a second reference image.
In step S12, mean filtering is performed on the to-be-processed image and the reference image, respectively, so as to construct a first noise feature map by using differences between corresponding pixels of the to-be-processed image and the reference image after the mean filtering.
In this exemplary embodiment, specifically, referring to fig. 2, the step S12 may include:
step S121, calculating corresponding pixel point mean value filtering results and pixel block mean value filtering results according to a preset window size for the image to be processed, the first reference image and the second reference image respectively;
step S122, calculating pixel point mean value filtering result difference values between the image to be processed and the first reference image and the second reference image respectively, and fusing based on the pixel point mean value filtering result difference values to determine a first pixel point filtering result; and
step S123, calculating pixel block mean value filtering result difference values between the image to be processed and the first reference image and the second reference image respectively, and fusing based on the pixel block mean value filtering result difference values to determine a first pixel block filtering result;
step S124, fusing the first pixel point filtering result and the first pixel block filtering result to determine the first noise feature map.
For example, the input data are the grayscale image corresponding to the current image (i.e., the image to be processed) and the grayscale images corresponding to the two consecutive reference original frames preceding it (i.e., the first reference image and the second reference image). For the grayscale image of the current frame (curr) and the grayscale images corresponding to the reference original frames (ref0, ref1), the mean-filter value between each pixel and its neighboring pixels is calculated for each image, yielding the pixel-point mean-filtering results for curr, ref0, and ref1. Then the difference of the pixel-point mean-filtering results between the image to be processed and the first reference image is calculated as Diff1 = ref0 - curr; similarly, the difference with respect to the second reference image is calculated as Diff2 = ref1 - curr. The two differences are compared and fused; specifically, the maximum value may be taken, Diff = max(Diff1, Diff2), to obtain the first pixel-point filtering result.
Meanwhile, the pixel-block-level mean-filtering results of the image to be processed, the first reference image, and the second reference image are calculated with a 3 x 3 window: the means of the current 3 x 3 block and its four adjacent 3 x 3 blocks are computed (i.e., five block mean-filter values), and the median of these five values is taken, yielding the pixel-block filtering results for the image to be processed, the first reference image, and the second reference image. The pixel-block mean-filtering differences between the image to be processed and each of the two reference images are then calculated, yielding two difference results. Following the same comparison-and-fusion method used for the pixel-point differences, the two pixel-block differences are fused by taking the maximum value, yielding the first pixel-block filtering result.
Then, the first pixel-point filtering result and the first pixel-block filtering result are compared and fused, specifically by the maximum-value method: for each pixel, the larger of the two values is retained, thereby generating the first noise characteristic map.
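The construction above can be sketched as follows. This is a simplified illustration: we take absolute differences (the text writes Diff = ref - curr without an absolute value) and substitute a wider mean filter for the block-level median-of-means, so it is not the exact claimed scheme:

```python
import numpy as np

def box_filter(img, win=3):
    # Mean filter over an edge-padded win x win window (pure NumPy).
    pad = win // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out / (win * win)

def first_noise_map(curr, ref0, ref1, win=3):
    # Pixel-point branch: mean-filter each grayscale image, difference
    # against the current frame, and fuse by per-pixel maximum.
    c, r0, r1 = (box_filter(x, win) for x in (curr, ref0, ref1))
    point = np.maximum(np.abs(r0 - c), np.abs(r1 - c))  # Diff = max(Diff1, Diff2)
    # Pixel-block branch, approximated here by a wider mean filter in
    # place of the median-of-five-block-means described in the text.
    cb, r0b, r1b = (box_filter(x, 3 * win) for x in (curr, ref0, ref1))
    block = np.maximum(np.abs(r0b - cb), np.abs(r1b - cb))
    # Maximum-value fusion of the two branches gives the first noise map.
    return np.maximum(point, block)
```

With identical input frames the map is zero everywhere, matching the intuition that no inter-frame difference means no ghost evidence.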
Taking the grayscale images of the current image and of the two preceding reference original frames as input, ghost position information over a larger area is obtained by inter-frame differencing, yielding a pixel-level first noise feature map that describes the ghost position range and ghost edge features. In other words, the first noise feature map describes ghost edges and position information from the viewpoint of the grayscale channel, serving as a pixel-level ghost feature map.
In step S13, the current image and the reference original image are downsampled in the target chrominance channel, and the downsampled result is subjected to a mean filtering process, so as to construct a third noise feature map using the mean filtering process result.
In this exemplary embodiment, referring to fig. 3, the step S13 may include:
step S131, respectively performing downsampling processing on the current image and the reference original image in a first chrominance channel and a second chrominance channel to obtain corresponding downsampled images in the chrominance channels;
step S132, performing mean filtering processing on each downsampled image corresponding to the first chrominance channel, and performing fusion and upsampling processing by using a mean filtering processing result to obtain a first chrominance channel noise characteristic result; and
step S133, performing mean filtering processing on each downsampled image corresponding to the second chrominance channel, and performing fusion and upsampling processing by using a mean filtering processing result to obtain a second chrominance channel noise characteristic result;
step S134, comparing and fusing the first chrominance channel noise feature result and the second chrominance channel noise feature result to obtain the third noise feature map.
Specifically, the first chrominance channel and the second chrominance channel may be a chrominance channel U and a chrominance channel V. The method comprises the steps of respectively obtaining corresponding U-channel images and V-channel images for a current image, a first reference original image and a second reference original image, and then respectively carrying out downsampling on the U-channel images and the V-channel images to obtain the U-channel downsampling images and the V-channel downsampling images of the current image, the U-channel downsampling images and the V-channel downsampling images of the first reference original image, and the U-channel downsampling images and the V-channel downsampling images of the second reference original image. For example, the downsampling multiple may be 4 × 4.
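Block-average pooling is one plausible reading of the 4 x 4 down-sampling; the patent names the factor but not the pooling method, so the averaging below is an assumption:

```python
import numpy as np

def downsample_mean(channel, factor=4):
    # Block-average down-sampling of one chrominance channel: each
    # factor x factor block of pixels is replaced by its mean. Edges
    # that do not fill a whole block are cropped for simplicity.
    h, w = channel.shape
    h -= h % factor
    w -= w % factor
    blocks = channel[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Applied to the U and V channels of the current image and of both reference original images, this yields the six down-sampled chrominance images the text enumerates.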
Then, the same calculation method as that in steps S121 to S124 may be used to calculate the average filtering result of the pixel points for the U channel, the U channel down-sampled image of the current image, the U channel down-sampled image of the first reference original image, and the U channel down-sampled image of the second reference original image, respectively; and respectively calculating the average filtering result of the pixel points in the V-channel down-sampling image of the current image under the V channel, the V-channel down-sampling image of the first reference original image and the V-channel down-sampling image of the second reference original image. Meanwhile, under the U channel, respectively calculating the average filtering result of the pixel block of the U channel downsampled image of the current image, the U channel downsampled image of the first reference original image and the U channel downsampled image of the second reference original image; and under the V channel, respectively calculating the average filtering result of the pixel blocks of the V-channel downsampled image of the current image, the V-channel downsampled image of the first reference original image and the V-channel downsampled image of the second reference original image.
For the V-channel data, the difference between the pixel-point mean-filtering result of the V-channel down-sampled image of the current image and that of the first reference original image is calculated, yielding a first difference result; the difference with respect to the V-channel down-sampled image of the second reference original image is calculated, yielding a second difference result. The first and second difference results are compared and fused; specifically, a maximum-value fusion mode may be adopted, i.e., for each pixel the larger of the two values is retained. This yields the pixel-point mean-filtering result of the V-channel down-sampled image.
Meanwhile, calculating the difference value between the pixel block mean value filtering result of the V-channel downsampling image of the current image and the pixel block mean value filtering result of the V-channel downsampling image of the first reference original image to obtain a third difference value result; and calculating the difference value between the pixel block mean value filtering result of the V-channel downsampling image of the current image and the pixel block mean value filtering result of the V-channel downsampling image of the second reference original image to obtain a fourth difference value result. And comparing and fusing the third difference result and the fourth difference result in a mode of keeping the maximum value to obtain a pixel block mean value filtering result of the V-channel downsampling image.
Comparing and fusing the pixel point mean filtering result of the V-channel downsampling image with the pixel block mean filtering result of the V-channel downsampling image, reserving the maximum numerical value corresponding to each pixel point by adopting a maximum value taking mode, and constructing a feature graph of the V-channel downsampling; and then, performing upsampling processing on the feature map, for example, recovering the feature map to the original size by using an interpolation upsampling mode, thereby constructing a V-channel noise feature result.
Based on the same calculation strategy as for the V channel, for the U-channel data the difference between the pixel-point mean-filtering result of the U-channel down-sampled image of the current image and that of the first reference original image is calculated, yielding a fifth difference result; the difference with respect to the U-channel down-sampled image of the second reference original image is calculated, yielding a sixth difference result. The fifth and sixth difference results are compared and fused by taking the maximum value, yielding the pixel-point mean-filtering result of the U-channel down-sampled image.
Meanwhile, calculating the difference value between the pixel block mean value filtering result of the U-channel downsampling image of the current image and the pixel block mean value filtering result of the U-channel downsampling image of the first reference original image to obtain a seventh difference value result; and calculating the difference value between the pixel block mean value filtering result of the U-channel downsampling image of the current image and the pixel block mean value filtering result of the U-channel downsampling image of the second reference original image to obtain an eighth difference value result. And comparing and fusing the seventh difference result and the eighth difference result in a mode of keeping the maximum value to obtain a pixel block mean value filtering result of the U-channel downsampling image.
Then, comparing and fusing the pixel point mean filtering result of the U-channel downsampled image with the pixel block mean filtering result of the U-channel downsampled image, keeping the maximum value corresponding to each pixel point, to construct a downsampled U-channel feature map; then performing upsampling processing on the feature map, for example restoring it to the original size by interpolation upsampling, thereby constructing the U-channel noise characteristic result.
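The per-channel procedure above (point-level and block-level mean filtering, differencing against both reference frames, and maximum-keeping fusion) can be sketched as follows. This is a minimal NumPy illustration: the absolute-difference step, the window sizes `k_point`/`k_block`, and the use of a larger box filter to stand in for the pixel-block mean filtering are assumptions, not the patent's exact definitions.

```python
import numpy as np

def mean_filter(img, k=3):
    """Box (mean) filter with an odd window size k, edge-padded."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def channel_noise_feature(cur, ref1, ref2, k_point=3, k_block=7):
    """Noise feature for one chroma channel: point-level and block-level
    mean-filter differences against the two reference frames, each fused by
    keeping the per-pixel maximum, then fused again across the two levels."""
    level_maps = []
    for k in (k_point, k_block):
        f_cur = mean_filter(cur, k)
        d1 = np.abs(f_cur - mean_filter(ref1, k))  # vs. first reference
        d2 = np.abs(f_cur - mean_filter(ref2, k))  # vs. second reference
        level_maps.append(np.maximum(d1, d2))      # keep larger inter-frame diff
    return np.maximum(level_maps[0], level_maps[1])
```

The third noise feature map then follows as `np.maximum(mapU, mapV)`, matching the max-fusion formula for MapUV.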
The obtained U-channel noise characteristic result and V-channel noise characteristic result are compared and fused by taking the maximum value, merging the two chrominance feature maps of the U channel and the V channel to construct a third noise feature map. Specifically, the formula may include:
MapUV[i,j]=max(mapU[i,j],mapV[i,j])
wherein mapU is a U-channel noise characteristic result corresponding to the pixel point with the coordinate (i, j), and mapV is a V-channel noise characteristic result corresponding to the pixel point with the coordinate (i, j).
In the third noise feature map, detection of moving objects is realized using the chrominance information of the UV channels, and the ghost positions are calculated; since the U channel and the V channel are compared and fused by taking the maximum value, the third noise feature map represents, at each pixel, the chrominance channel exhibiting the larger motion.
In step S14, a noise level parameter corresponding to the current image is determined by combining the first noise feature map and the luminance feature corresponding to the current image.
In this example embodiment, for the current image, the luminance features of each pixel point may be extracted, and the whole luminance feature map may be averaged. For example, the corresponding luminance may be calculated from the RGB value corresponding to each pixel point.
Meanwhile, for the first noise feature map, the whole map may be averaged. Then, a noise degree parameter is calculated according to the result of taking the average value of the whole graph of the brightness characteristic graph and the result of the average value of the whole graph of the first noise characteristic graph, so that the ghost degree described from the global angle can be obtained. Specifically, the formula may include:
ghostD=Adjust(AVE(Y))*AVE(map1)
here, map1 represents the first noise feature map, and Y represents the luminance feature map.
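As a sketch, the noise degree parameter can be computed like this. `Adjust()` is not defined in the text; the inverse-brightness scaling and its clamp range below are purely illustrative assumptions, reflecting the later remark that low-brightness images need a more sensitive ghost map:

```python
import numpy as np

def noise_degree(map1, luma, lo=0.5, hi=1.5):
    """ghostD = Adjust(AVE(Y)) * AVE(map1).

    Adjust() is assumed here to scale sensitivity inversely with the mean
    brightness (darker image -> larger weight), clamped to [lo, hi]."""
    ave_y = float(luma.mean())
    adjust = min(max(128.0 / max(ave_y, 1e-6), lo), hi)  # assumed Adjust()
    return adjust * float(map1.mean())
```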
In step S15, a target noise feature map is determined based on the noise degree parameter in combination with the first noise feature map and the third noise feature map, so as to remove the ghost noise from the current image according to the target noise feature map.
In this exemplary embodiment, the noise degree parameter, the first noise feature map, and the third noise feature map calculated in the above steps are fused: the first noise feature map and the third noise feature map are scaled by the noise degree parameter in a multiplicative fusion, and the per-pixel maximum is taken to obtain a ghost feature map. The ghost feature map is then expanded by taking local maxima and smoothed with a low-pass filter to obtain the target noise feature map. The target noise feature map guides the inter-frame fusion degree of TNR temporal filtering so as to reduce ghosting.
In some exemplary embodiments of the present disclosure, as shown with reference to fig. 4, the method described above may further include:
step S21, acquiring an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image;
step S22, mean value filtering processing is respectively carried out on the image to be processed and the reference image, so that a first noise characteristic diagram is constructed by utilizing the difference values of corresponding pixel points of the image to be processed and the reference image after the mean value filtering processing; and
step S23, respectively carrying out down-sampling and mean value filtering processing on the image to be processed and the reference image, and constructing a second noise characteristic diagram by using the mean value filtering processing result;
step S24, down-sampling the current image and the reference original image in a target chrominance channel, and performing mean value filtering processing on down-sampling results to construct a third noise characteristic map by using the mean value filtering processing results;
step S25, determining a noise degree parameter corresponding to the current image by combining the first noise feature map and the brightness feature corresponding to the current image;
step S26, determining a target noise feature map based on the noise degree parameter in combination with the first noise feature map, the second noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
In this exemplary embodiment, the above method may further perform down-sampling on the grayscale image and construct a corresponding noise feature map. Specifically, the to-be-processed image and the reference image may be respectively subjected to downsampling and mean filtering, and a second noise feature map is constructed by using a mean filtering result, so as to determine a target noise feature map based on the noise degree parameter in combination with the second noise feature map, the first noise feature map, and the third noise feature map.
Specifically, referring to fig. 5, the step S23 may include:
step S231, down-sampling the image to be processed and the reference image respectively to obtain corresponding sampled images;
step S232, performing mean filtering processing on the obtained down-sampling images respectively, so as to perform fusion and up-sampling processing by using the mean filtering processing result, so as to obtain the second noise characteristic map.
Specifically, the to-be-processed image, the first reference image, and the second reference image in the grayscale format may be respectively downsampled to obtain corresponding downsampled images. For example, downsampling is performed at a window size of 4 x 4.
The same calculation method as in steps S121 to S124 may be used to perform the pixel point mean filtering and the pixel block mean filtering on the downsampled images of the image to be processed, the first reference image, and the second reference image, obtaining the corresponding pixel point mean filtering results and pixel block mean filtering results. The difference between the pixel point mean filtering result of the downsampled image to be processed and that of the downsampled first reference image is calculated to obtain a ninth difference result; meanwhile, the difference between the pixel point mean filtering result of the downsampled image to be processed and that of the downsampled second reference image is calculated to obtain a tenth difference result. The ninth and tenth difference results are compared and fused, keeping the maximum value, to obtain a pixel point mean filtering result.
Meanwhile, calculating a difference value between a pixel block mean value filtering result of the downsampled image of the image to be processed and a pixel block mean value filtering result of the downsampled image of the first reference image to obtain an eleventh difference value result; and calculating the difference value between the pixel block mean value filtering result of the downsampled image of the image to be processed and the pixel block mean value filtering result of the downsampled image of the second reference image to obtain a twelfth difference value result. And comparing and fusing the eleventh difference result and the twelfth difference result, and keeping the maximum value, thereby obtaining a pixel block mean value filtering result.
The pixel point filtering result and the pixel block filtering result of the downsampled image are then compared and fused by taking the maximum value, and the fused result is restored to the original size by nearest-neighbour upsampling to generate the second noise feature map. By downsampling the grayscale image, a block-level ghost feature map describing local, block-level ghost information can be obtained; the subsequent upsampling then yields ghost position information that extends beyond the detected content edges.
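A condensed sketch of this block-level path, assuming the 4×4 downsampling window mentioned earlier and nearest-neighbour upsampling, and omitting the separate point/block mean-filter passes for brevity:

```python
import numpy as np

def downsample(img, w=4):
    """Average-pool with a w×w window (the 4×4 window from the text)."""
    h, wd = img.shape[0] // w * w, img.shape[1] // w * w
    return img[:h, :wd].reshape(h // w, w, wd // w, w).mean(axis=(1, 3))

def upsample_nn(img, w=4):
    """Nearest-neighbour upsampling, restoring the original size."""
    return np.repeat(np.repeat(img, w, axis=0), w, axis=1)

def second_noise_map(cur, ref1, ref2, w=4):
    """Block-level ghost map on the grayscale channel: downsample,
    difference against both references, keep the maximum, upsample back."""
    d_cur = downsample(cur, w)
    d = np.maximum(np.abs(d_cur - downsample(ref1, w)),
                   np.abs(d_cur - downsample(ref2, w)))
    return upsample_nn(d, w)
```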
In this exemplary embodiment, in step S26, as shown in fig. 6, the method may specifically include:
step S261, performing multiplicative transformation fusion on the first noise feature map, the second noise feature map and the third noise feature map based on the noise degree parameter, so as to construct the preliminary noise map according to the screening result of the maximum value of the pixel points;
and step S262, sequentially performing expansion processing and smoothing processing on the preliminary noise map to obtain the target noise characteristic map.
Specifically, the first noise feature map, the second noise feature map, and the third noise feature map may be subjected to multiplicative transformation fusion according to the noise degree parameter ghostD. The principle of multiplicative transformation fusion is to directly perform multiplicative operation on the corresponding pixel gray values of the images with different spatial resolutions so as to obtain the corresponding pixel gray values of the new images. The calculation formula may include:
Map1_new[i,j]=map1[i,j]*ghostD
Map2_new[i,j]=map2[i,j]*ghostD
MapUV_new[i,j]=mapUV[i,j]*ghostD
where map1 denotes the first noise profile, map2 denotes the second noise profile, and mapUV denotes the third noise profile.
And comparing and fusing the three results to obtain a preliminary noise map, wherein the formula can comprise:
map[i,j]=max(map1[i,j],map2[i,j],mapUV[i,j])
After that, local maximum expansion is performed. Specifically, for each pixel of the preliminary noise map, the maximum of the nine values in the 3×3 block centered on that pixel is taken and assigned to that pixel. Smoothing is then applied, yielding the target noise feature map.
In this example embodiment, after the target noise profile is obtained, the target noise profile may be used to guide time domain noise reduction (TNR). Specifically, the current image and the reference original image may be first subjected to image fusion processing to obtain an initial fusion image; and guiding the fusion degree between the current image and the initial fusion image by using the target noise characteristic diagram and carrying out image fusion processing to remove the ghost noise of the current image.
For example, the current image and the two frames of reference original images may be fused at a ratio of 1:1:1 to obtain a preliminary fusion image. The fusion degree between the preliminary fusion image and the current image is then guided by the target noise feature map. The formula may include:
Out=(255-map)*merge+map*ImgYCurr
wherein merge represents the preliminary fusion image and ImgYCurr represents the grayscale image of the current image.
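A sketch of this guided fusion; the division by 255, which normalises the 8-bit weights, is an assumption the quoted formula leaves implicit:

```python
import numpy as np

def guided_fusion(merge, img_y_curr, noise_map):
    """Out = ((255 - map) * merge + map * ImgYCurr) / 255.

    Where the target noise map is large (likely ghosting), the output leans
    toward the current frame; where it is small, the temporally fused result
    dominates.  The /255 normalisation is an assumption."""
    m = noise_map.astype(np.float32)
    return ((255.0 - m) * merge + m * img_y_curr) / 255.0
```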
When the value of the target noise feature map at a pixel is larger, the probability that the pixel produces a ghost is higher; the inter-frame fusion degree is therefore reduced, and the fusion leans more toward the current frame so as to reduce the ghosting degree.
In this example embodiment, after the current image is acquired, any one or more of corresponding image type information, scene type information, and resolution information may be further identified to configure a down-sampling parameter and/or a size of a pixel block according to the acquired information. For example, when the current image is a night scene, a daytime scene, a portrait or a static object, different sampling windows may be configured to adapt to different image contents and increase the processing speed of the image due to different background contents and different degrees of color richness and brightness of the image.
Based on the above, in other exemplary embodiments of the present disclosure, the first noise feature map and the second noise feature map may also be used to calculate the noise degree parameter; alternatively, the second noise profile and the third noise profile may be used to calculate the noise level parameter.
When calculating each noise characteristic map, the calculation can be performed in sequence according to the steps; alternatively, multiple processes may be created so that the noise signatures may be calculated simultaneously. Thereby improving computational efficiency.
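The suggestion above to compute the noise feature maps concurrently can be sketched with Python's thread pool. This is a hypothetical harness; the actual pipeline, worker model, and function names are not specified in the text:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_maps_parallel(jobs):
    """Run the noise-feature-map builders concurrently.

    `jobs` is a list of (callable, args) pairs, e.g. the builders for the
    first, second, and third noise feature maps; results are returned in
    submission order."""
    with ThreadPoolExecutor(max_workers=max(len(jobs), 1)) as pool:
        futures = [pool.submit(fn, *args) for fn, args in jobs]
        return [f.result() for f in futures]
```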
The image processing method provided by the embodiments of the present disclosure can be applied to the ghosting produced by fusion denoising in video denoising scenarios. Noise feature maps are constructed from inter-frame differences to describe the ghost map: the first noise feature map builds a pixel-level ghost map on the grayscale channel; the second noise feature map builds a local, block-level ghost map on the grayscale channel; the third noise feature map builds a UV-channel ghost map from chrominance information, realizing detection of moving objects; and the noise degree parameter describes the global ghosting degree on the grayscale channel. Moving objects in the consecutive images are located through grayscale and chrominance information, their positions are determined, and ghost information is given; ghost localization is performed with grayscale difference information across three dimensions (global, local, and pixel level), so that ghost positions are computed more comprehensively. Moreover, when the noise degree parameter is calculated, the influence of image brightness on the sensitivity of the ghost map is taken into account: when the image brightness is low, ghosting may arise even when the edge noise mean is small, so the fusion ratio needs to be adaptively adjusted in combination with the image brightness mean. Each noise feature map is combined with the ghosting degree parameter, yielding a ghost map that merges grayscale and chrominance information and provides accurate ghost position information; during temporal denoising, this ghost position information is consulted to eliminate the ghosting produced by fusion.
This scheme computes inter-frame grayscale differences at three levels (global, local, and pixel level) and simultaneously computes inter-frame chrominance differences; by combining the three-level grayscale information with the chrominance information, the position of ghosting produced by temporal fusion is accurately located, and the ghost problem is addressed at its root by limiting the fusion degree.
It is to be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, as shown in fig. 7, an embodiment of the present example also provides an image processing apparatus 70, including: an image acquisition module 701, a first noise characteristic map acquisition module 702, a third noise characteristic map acquisition module 703, a noise degree parameter acquisition module 704, and a denoising processing module 705. Wherein:
the image obtaining module 701 may be configured to obtain an image to be processed and a corresponding reference image; the image to be processed is a gray image corresponding to a current image, and the reference image is a gray image corresponding to a reference original image continuous to the current image.
The first noise feature map obtaining module 702 may be configured to perform mean filtering processing on the to-be-processed image and the reference image, respectively, so as to construct a first noise feature map by using differences between corresponding pixel points of the to-be-processed image and the reference image after the mean filtering processing.
The third noise feature map obtaining module 703 may be configured to perform downsampling on the current image and the reference original image in a target chrominance channel, and perform mean filtering processing on downsampled results, so as to construct a third noise feature map by using the mean filtering processing result.
The noise degree parameter obtaining module 704 may be configured to determine a noise degree parameter corresponding to the current image by combining the first noise feature map and the brightness feature corresponding to the current image.
The denoising processing module 705 may be configured to determine a target noise feature map based on the noise degree parameter in combination with the first noise feature map and the third noise feature map, so as to remove ghost noise from the current image according to the target noise feature map.
In an example of the present disclosure, the apparatus 70 may further include a second noise characteristic map acquisition module (not shown).
The second noise characteristic map obtaining module may be configured to perform downsampling and mean filtering processing on the to-be-processed image and the reference image, respectively, and construct a second noise characteristic map by using a mean filtering processing result, so as to determine a target noise characteristic map based on the noise degree parameter in combination with the second noise characteristic map, the first noise characteristic map, and the third noise characteristic map.
In one example of the present disclosure, the reference original image is a previous two-frame image consecutive to the current image; the reference image is a two-frame gray image corresponding to the two-frame reference original image and comprises a first reference image and a second reference image.
In an example of the present disclosure, the first noise characteristic map obtaining module 702 may be configured to respectively calculate, for the image to be processed, the first reference image, and the second reference image, a corresponding pixel mean filtering result and a pixel mean filtering result according to a preset window size;
calculating pixel point mean value filtering result difference values between the image to be processed and the first reference image and the second reference image respectively, and fusing based on the pixel point mean value filtering result difference values to determine a first pixel point filtering result; and
calculating pixel block mean value filtering result difference values between the image to be processed and the first reference image and the second reference image respectively, and fusing based on the pixel block mean value filtering result difference values to determine a first pixel block filtering result;
and fusing the first pixel point filtering result and the first pixel block filtering result to determine the first noise characteristic diagram.
In an example of the present disclosure, the third noise feature map obtaining module 703 may be configured to perform downsampling processing on the current image and the reference original image in a first chrominance channel and a second chrominance channel, respectively, to obtain corresponding downsampled images in the respective chrominance channels;
respectively carrying out mean value filtering processing on each downsampled image corresponding to the first chrominance channel, and carrying out fusion and upsampling processing by using a mean value filtering processing result to obtain a first chrominance channel noise characteristic result; and
performing mean filtering processing on each downsampled image corresponding to the second chrominance channel, and performing fusion and upsampling processing by using a mean filtering processing result to obtain a second chrominance channel noise characteristic result;
and comparing and fusing the first chrominance channel noise characteristic result and the second chrominance channel noise characteristic result to obtain the third noise characteristic diagram.
In an example of the present disclosure, the second noise characteristic map obtaining module may be configured to perform downsampling on the to-be-processed image and the reference image respectively to obtain corresponding sampled images;
and respectively carrying out mean value filtering processing on the obtained down-sampling images so as to carry out fusion and up-sampling processing by using the mean value filtering processing result to obtain the second noise characteristic diagram.
In an example of the present disclosure, the denoising processing module 705 may be further configured to perform multiplicative transformation fusion on the first noise feature map, the second noise feature map, and the third noise feature map based on the noise degree parameter, so as to construct the preliminary noise map according to a screening result of a maximum value of a pixel point;
and sequentially performing expansion processing and smoothing processing on the preliminary noise map to obtain the target noise characteristic map.
In an example of the present disclosure, the denoising processing module 705 may be configured to perform image fusion processing on the current image and the reference original image to obtain an initial fusion image;
and guiding the fusion degree between the current image and the initial fusion image by using the target noise characteristic diagram and carrying out image fusion processing to remove the ghost noise of the current image.
In an example of the present disclosure, the apparatus 70 may further include a parameter configuration module. The parameter configuration module may be configured to acquire any one or more of image type information, scene type information, and resolution information of the current image, so as to configure downsampling parameters according to the acquired information.
The details of each module in the image processing apparatus are already described in detail in the corresponding image processing method, and therefore, the details are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
FIG. 8 shows a schematic diagram of an electronic device suitable for implementing an embodiment of the invention.
It should be noted that the electronic device 500 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of the embodiments of the present disclosure.
As shown in fig. 8, the electronic apparatus 500 includes a Central Processing Unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for system operation are also stored. The CPU 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An Input/Output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) screen, a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out therefrom is installed into the storage portion 508 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. When executed by the Central Processing Unit (CPU) 501, the computer program performs the various functions defined in the system of the present application.
Specifically, the electronic device may be an intelligent mobile terminal device such as a mobile phone, a tablet computer, or a notebook computer. Alternatively, the electronic device may be an intelligent terminal device such as a desktop computer.
It should be noted that the computer readable medium shown in the embodiment of the present invention may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
It should be noted that, as another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (12)

1. An image processing method, comprising:
acquiring an image to be processed and a corresponding reference image, wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
performing mean filtering on the image to be processed and the reference image respectively, and constructing a first noise feature map from the differences between corresponding pixels of the mean-filtered image to be processed and the mean-filtered reference image;
down-sampling the current image and the reference original image in a target chrominance channel, performing mean filtering on the down-sampled results, and constructing a third noise feature map from the mean filtering results;
determining a noise degree parameter corresponding to the current image by combining the first noise feature map with a luminance feature corresponding to the current image; and
determining a target noise feature map from the first noise feature map and the third noise feature map based on the noise degree parameter, so as to remove ghost noise from the current image according to the target noise feature map.
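The first-map construction in claim 1 (mean-filter both grayscale frames, then take per-pixel differences) can be sketched as follows. This is a minimal illustrative reading, not the patent's implementation: the 5x5 window size and the use of `scipy.ndimage.uniform_filter` are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def first_noise_map(gray_cur, gray_ref, window=5):
    """Mean-filter the current and reference grayscale frames, then take
    the per-pixel absolute difference as a rough ghost/noise indicator.
    The window size is an assumed default, not specified by the claim."""
    f_cur = uniform_filter(gray_cur.astype(np.float32), size=window)
    f_ref = uniform_filter(gray_ref.astype(np.float32), size=window)
    return np.abs(f_cur - f_ref)
```

Regions where the two filtered frames disagree (moving objects, ghosting) yield high values; static regions yield values near zero.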
2. The image processing method according to claim 1, further comprising:
down-sampling and mean-filtering the image to be processed and the reference image respectively, and constructing a second noise feature map from the mean filtering results, so that the target noise feature map is determined from the second noise feature map, the first noise feature map, and the third noise feature map based on the noise degree parameter.
3. The image processing method according to claim 1 or 2, wherein the reference original image comprises the two previous frames consecutive with the current image, and the reference image comprises the two corresponding grayscale images, namely a first reference image and a second reference image.
4. The image processing method according to claim 3, wherein performing mean filtering on the image to be processed and the reference image respectively to construct the first noise feature map from the differences between corresponding pixels comprises:
computing, for each of the image to be processed, the first reference image, and the second reference image, a per-pixel mean filtering result and a pixel-block mean filtering result over a preset window size;
computing the differences between the per-pixel mean filtering result of the image to be processed and those of the first and second reference images, and fusing these differences to determine a first pixel filtering result;
computing the differences between the pixel-block mean filtering result of the image to be processed and those of the first and second reference images, and fusing these differences to determine a first pixel-block filtering result; and
fusing the first pixel filtering result and the first pixel-block filtering result to determine the first noise feature map.
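Claim 4's two filtering granularities can be sketched as below, reading the point-level result as a sliding-window mean and the block-level result as a non-overlapping block mean. The window and block sizes, the max-based fusion, and the assumption that the image size is divisible by the block size are all illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_mean(img, block=4):
    # Non-overlapping block mean, replicated back to full resolution
    # (one illustrative reading of "pixel-block mean filtering";
    # the image size is assumed divisible by the block size).
    h, w = img.shape
    means = img.astype(np.float32).reshape(
        h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(means, np.ones((block, block), dtype=np.float32))

def first_map_two_refs(cur, ref1, ref2, window=5, block=4):
    # Per-pixel (sliding-window) mean filtering of all three frames.
    p_cur, p_r1, p_r2 = (uniform_filter(x.astype(np.float32), size=window)
                         for x in (cur, ref1, ref2))
    # Block-level mean filtering of all three frames.
    b_cur, b_r1, b_r2 = (block_mean(x, block) for x in (cur, ref1, ref2))
    # Fuse the two reference differences at each granularity (assumed: max).
    point_result = np.maximum(np.abs(p_cur - p_r1), np.abs(p_cur - p_r2))
    block_result = np.maximum(np.abs(b_cur - b_r1), np.abs(b_cur - b_r2))
    # Fuse the point- and block-level results into the first noise feature map.
    return np.maximum(point_result, block_result)
```

The block-level path is cheaper and less sensitive to pixel-level sensor noise, while the point-level path localizes ghost edges more precisely; fusing both trades off the two.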
5. The method according to claim 1, wherein down-sampling the current image and the reference original image in the target chrominance channel and performing mean filtering on the down-sampled results to construct the third noise feature map comprises:
down-sampling the current image and the reference original image in a first chrominance channel and a second chrominance channel respectively, to obtain corresponding down-sampled images in each chrominance channel;
performing mean filtering on each down-sampled image corresponding to the first chrominance channel, and fusing and up-sampling the mean filtering results to obtain a first chrominance channel noise feature result;
performing mean filtering on each down-sampled image corresponding to the second chrominance channel, and fusing and up-sampling the mean filtering results to obtain a second chrominance channel noise feature result; and
comparing and fusing the first chrominance channel noise feature result and the second chrominance channel noise feature result to obtain the third noise feature map.
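A minimal sketch of claim 5's chrominance path follows. Stride-based down-sampling, uniform-filter smoothing, nearest-neighbour up-sampling via `np.kron`, and per-pixel max as the "compare and fuse" step are assumed choices; the patent does not fix any of them.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def chroma_noise_map(cur_uv, ref_uv, factor=2, window=3):
    """cur_uv, ref_uv: (H, W, 2) arrays holding the two chrominance
    planes (e.g. U/V). H and W are assumed divisible by `factor`."""
    per_channel = []
    for c in range(2):
        # Down-sample each chrominance plane by simple striding.
        cur_ds = cur_uv[::factor, ::factor, c].astype(np.float32)
        ref_ds = ref_uv[::factor, ::factor, c].astype(np.float32)
        # Mean-filter at low resolution, difference, then up-sample.
        diff = np.abs(uniform_filter(cur_ds, size=window)
                      - uniform_filter(ref_ds, size=window))
        per_channel.append(np.kron(diff, np.ones((factor, factor), np.float32)))
    # "Compare and fuse" the two channel results: per-pixel maximum.
    return np.maximum(per_channel[0], per_channel[1])
```

Working at reduced chroma resolution keeps this path cheap while still catching color ghosting that the grayscale path may miss.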
6. The image processing method according to claim 2, wherein down-sampling and mean-filtering the image to be processed and the reference image respectively to construct the second noise feature map comprises:
down-sampling the image to be processed and the reference image respectively to obtain corresponding down-sampled images; and
performing mean filtering on each down-sampled image, and fusing and up-sampling the mean filtering results to obtain the second noise feature map.
7. The image processing method according to claim 2, wherein determining the target noise feature map from the second noise feature map, the first noise feature map, and the third noise feature map based on the noise degree parameter comprises:
multiplicatively transforming and fusing the first, second, and third noise feature maps based on the noise degree parameter, so as to construct a preliminary noise map by screening for the per-pixel maximum values; and
performing dilation and then smoothing on the preliminary noise map, in that order, to obtain the target noise feature map.
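The refinement step of claim 7 can be sketched as follows: scale the maps by the noise degree parameter, keep the per-pixel maximum as the preliminary map, then dilate and mean-smooth. The 3x3 kernel sizes and the scalar form of the noise parameter are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import grey_dilation, uniform_filter

def target_noise_map(noise_maps, noise_level=1.0):
    """noise_maps: list of 2-D noise feature maps of equal shape.
    noise_level: assumed scalar stand-in for the noise degree parameter."""
    # Multiplicative weighting, then per-pixel maximum across the maps.
    prelim = np.maximum.reduce(
        [noise_level * m.astype(np.float32) for m in noise_maps])
    # Dilation grows detected ghost regions so their borders are covered.
    dilated = grey_dilation(prelim, size=(3, 3))
    # Mean smoothing avoids hard transitions in the later guided fusion.
    return uniform_filter(dilated, size=3)
```

Dilation before smoothing ensures the mask fully covers ghost boundaries; smoothing then prevents visible seams when the mask drives blending.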
8. The image processing method according to claim 1, wherein removing ghost noise from the current image according to the target noise feature map comprises:
performing image fusion on the current image and the reference original image to obtain an initial fused image; and
using the target noise feature map to guide the degree of fusion between the current image and the initial fused image in a further image fusion, thereby removing the ghost noise from the current image.
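The guided fusion of claim 8 amounts to per-pixel alpha blending driven by the noise map, roughly as below. The max-based normalization is an assumed scheme; the patent only requires that the map guide the degree of fusion.

```python
import numpy as np

def remove_ghost(current, fused, noise_map):
    """Use the target noise feature map as a per-pixel blending weight:
    where the map is high (likely ghosting), fall back to the current
    frame; where it is low, keep the temporally fused result."""
    w = np.clip(noise_map / (noise_map.max() + 1e-6), 0.0, 1.0)
    return (w * current.astype(np.float32)
            + (1.0 - w) * fused.astype(np.float32))
```

This preserves the denoising benefit of temporal fusion in static regions while reverting to the single current frame wherever motion would otherwise leave ghost trails.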
9. The image processing method according to claim 1 or 2, further comprising:
acquiring any one or more of image type information, scene type information, and resolution information of the current image, and configuring down-sampling parameters according to the acquired information.
10. An image processing apparatus, comprising:
an image acquisition module, configured to acquire an image to be processed and a corresponding reference image, wherein the image to be processed is a grayscale image corresponding to a current image, and the reference image is a grayscale image corresponding to a reference original image consecutive with the current image;
a first noise feature map acquisition module, configured to perform mean filtering on the image to be processed and the reference image respectively, and construct a first noise feature map from the differences between corresponding pixels of the mean-filtered image to be processed and the mean-filtered reference image;
a third noise feature map acquisition module, configured to down-sample the current image and the reference original image in a target chrominance channel, perform mean filtering on the down-sampled results, and construct a third noise feature map from the mean filtering results;
a noise degree parameter acquisition module, configured to determine a noise degree parameter corresponding to the current image by combining the first noise feature map with a luminance feature corresponding to the current image; and
a denoising module, configured to determine a target noise feature map from the first noise feature map and the third noise feature map based on the noise degree parameter, so as to remove ghost noise from the current image according to the target noise feature map.
11. A computer-readable medium, on which a computer program is stored which, when executed by a processor, implements the image processing method of any one of claims 1 to 9.
12. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image processing method according to any one of claims 1 to 9.
CN202110720734.7A 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment Pending CN113344820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110720734.7A CN113344820A (en) 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN113344820A true CN113344820A (en) 2021-09-03

Family

ID=77479213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110720734.7A Pending CN113344820A (en) 2021-06-28 2021-06-28 Image processing method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113344820A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781375A (en) * 2021-09-10 2021-12-10 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN113781375B (en) * 2021-09-10 2023-12-08 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN114697468A (en) * 2022-02-16 2022-07-01 瑞芯微电子股份有限公司 Image signal processing method and device and electronic equipment
CN114697468B (en) * 2022-02-16 2024-04-16 瑞芯微电子股份有限公司 Image signal processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN110675404B (en) Image processing method, image processing apparatus, storage medium, and terminal device
EP3509034B1 (en) Image filtering based on image gradients
US8774555B2 (en) Image defogging method and system
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
CN111353948B (en) Image noise reduction method, device and equipment
JP6998388B2 (en) Methods and equipment for processing image property maps
EP3281400A1 (en) Automated generation of panning shots
CN108174057B (en) Method and device for rapidly reducing noise of picture by utilizing video image inter-frame difference
CN113344820A (en) Image processing method and device, computer readable medium and electronic equipment
CN110889809B9 (en) Image processing method and device, electronic equipment and storage medium
CN112801907B (en) Depth image processing method, device, equipment and storage medium
JP2016529747A (en) How to tone map a video sequence
CN111563517B (en) Image processing method, device, electronic equipment and storage medium
CN116823628A (en) Image processing method and image processing device
DE112021006769T5 (en) CIRCUIT FOR COMBINED DOWNCLOCKING AND CORRECTION OF IMAGE DATA
CN114514746B (en) System and method for motion adaptive filtering as pre-processing for video encoding
CN105160627B (en) Super-resolution image acquisition method and system based on classification self-learning
CN113256785B (en) Image processing method, apparatus, device and medium
CN111028184B (en) Image enhancement method and system
CN111311498B (en) Image ghost eliminating method and device, storage medium and terminal
Wang et al. An airlight estimation method for image dehazing based on gray projection
CN114679519A (en) Video processing method and device, electronic equipment and storage medium
KR102340942B1 (en) Method for Image Processing and Display Device using the same
CN113469889A (en) Image noise reduction method and device
Bätz et al. Multi-image super-resolution for fisheye video sequences using subpixel motion estimation based on calibrated re-projection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination