CN115496670A - Image processing method, image processing device, computer equipment and storage medium - Google Patents

Image processing method, image processing device, computer equipment and storage medium

Info

Publication number
CN115496670A
CN115496670A
Authority
CN
China
Prior art keywords
image
layer
processed
displacement
adjacent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110681297.2A
Other languages
Chinese (zh)
Inventor
谢朝毅
苏坦
Current Assignee
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd
Priority to CN202110681297.2A
Priority to PCT/CN2022/097077 (WO2022262599A1)
Publication of CN115496670A
Legal status: Pending

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20024 Filtering details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, an image processing device, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be processed and an adjacent image corresponding to the image to be processed; respectively constructing image pyramids of an image to be processed and an adjacent image, wherein the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid; calculating image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement; aligning the adjacent images with the images to be processed according to the displacement of each image layer and the measurement index value corresponding to the displacement of the image layer; and fusing the adjacent images and the images to be processed to obtain the processed images. By adopting the method of the embodiment of the application, the image noise can be effectively reduced, and the image quality can be improved.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the rise of short-video software, people increasingly use video to shoot and record daily life, and expectations for video quality keep rising. At night in particular, low visibility and complex lighting conditions easily degrade video image quality. Video shooting technologies aimed at night scenes have therefore emerged to improve the quality of night-scene video images.
At present, video shooting technologies for night scenes mostly reduce image noise by brightening the video and filtering noise, thereby improving image quality. However, noise reduction performed this way can introduce temporal artifacts, for example smearing from vehicle lights and jitter on the ground, resulting in low image quality.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, an apparatus, a computer device, and a storage medium capable of improving image quality.
A method of image processing, the method comprising:
acquiring an image to be processed and an adjacent image corresponding to the image to be processed;
respectively constructing image pyramids of the image to be processed and the adjacent image, wherein the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid;
calculating image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement;
aligning the adjacent images with the images to be processed according to each image layer displacement and the measurement index value corresponding to the image layer displacement;
and fusing the adjacent images and the images to be processed to obtain processed images.
In one embodiment, after the image to be processed and the adjacent image corresponding to the image to be processed are acquired, and before the image pyramids of the image to be processed and the adjacent image are respectively constructed, the method further includes:
and performing image quality optimization processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the image quality optimization processing.
In one embodiment, after the image to be processed and the adjacent image corresponding to the image to be processed are acquired, and before the image pyramids of the image to be processed and the adjacent image are respectively constructed, the method further includes:
performing primary filtering processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the primary filtering processing;
determining motion areas in the image to be processed and the adjacent image after the primary filtering processing according to the pixel values of the image to be processed and the adjacent image after the primary filtering processing;
calculating the boundary value of the image to be processed and the adjacent image after the primary filtering processing;
determining non-boundary motion areas in the image to be processed and the adjacent image after the primary filtering processing based on the boundary value;
and carrying out secondary filtering processing on the non-boundary motion area to obtain the image to be processed and the adjacent image after noise processing.
In one embodiment, the calculating the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid comprises:
calculating initial image block displacement of a first image layer, wherein the initial image block displacement is displacement between a first image block in the first image layer and a second image block in a corresponding second image layer, the first image layer is an image layer in the first image pyramid, the second image layer is an image layer in the second image pyramid, the first image block is an image block with a preset size in the first image layer, and the second image block is an image block with the preset size in the second image layer;
calculating an initial measurement index value corresponding to the initial image block displacement;
and determining the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid according to the initial image block displacement and the initial measurement index value.
In one embodiment, the calculating the initial image block displacement of the first image layer includes:
performing interpolation calculation on image layer displacement of a previous image layer of the first image layer to obtain an interpolation calculation result, wherein the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has a resolution lower than that of the first image layer;
and determining a first image block displacement of the first image layer according to the interpolation calculation result, wherein the initial image block displacement comprises the first image block displacement.
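The interpolation step above, propagating a coarser layer's displacement to the next finer layer, can be sketched as follows. This is an illustration only: the nearest-neighbour upsampling and the fixed factor of 2 (finer layer has twice the resolution) are assumptions, not the patent's exact formula.

```python
import numpy as np

def upscale_displacement(disp_coarse):
    """Propagate a per-block displacement field from a coarser pyramid
    layer to the next finer layer: the field is upsampled 2x by
    nearest-neighbour repetition and each displacement is doubled,
    since the finer layer has twice the resolution."""
    return np.repeat(np.repeat(disp_coarse * 2, 2, axis=0), 2, axis=1)

# A 1x1 grid of blocks with displacement (dy, dx) = (1, 2) on the coarse
# layer becomes a 2x2 grid with displacement (2, 4) on the finer layer.
coarse = np.array([[[1, 2]]])
fine = upscale_displacement(coarse)
```

The result of this propagation serves as the first image block displacement, to be refined on the finer layer.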
In one embodiment, the calculating the initial image block displacement of the first image layer includes:
performing statistical calculation on image layer displacement of a previous image layer of the first image layer to obtain a statistical calculation result, wherein the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has a resolution lower than that of the first image layer;
and determining second image block displacement of the first image layer according to the statistical calculation result, wherein the initial image block displacement comprises the second image block displacement.
In one embodiment, the calculating the initial image block displacement of the first image layer includes:
determining image blocks with determined image block displacement in the first image layer;
and determining a third image block displacement of the first image layer according to the image block with the determined image block displacement, wherein the initial image block displacement comprises the third image block displacement.
In one embodiment, the calculating an initial measurement index value corresponding to the initial image block displacement includes:
determining, in the second image layer, the second image block whose displacement relative to the first image block equals the initial image block displacement of the first image layer;
and calculating an initial measurement index value corresponding to the displacement of the initial image block according to the pixel value of the first image block in the first image layer and the pixel value of the second image block.
In one embodiment, the aligning the adjacent images with the image to be processed according to each image layer displacement and the measurement index value corresponding to the image layer displacement includes:
determining the image block displacement of a first image block in the first image layer from the initial image block displacement based on the initial measurement index value;
determining image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement based on image block displacement of a first image block in the first image layer;
and aligning the adjacent images with the image to be processed according to the image layer displacement and the measurement index value corresponding to the image layer displacement.
In one embodiment, after determining the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid based on the image block displacement of the first image block in the first image layer, the method further includes:
determining a preset neighborhood image block adjacent to the first image block in the first image layer;
and updating the image block displacement of the first image block in the first image layer according to the image block displacement of each preset neighborhood image block and the corresponding measurement index value, and determining the updated image layer displacement.
In one embodiment, the fusing the adjacent image and the image to be processed to obtain a processed image includes:
determining the initial weight of the adjacent images according to the updated measurement index value corresponding to the image layer displacement;
fusing the preset weight of the image to be processed with the initial weight of the adjacent image to obtain the fusion weight of the adjacent image and the image to be processed;
and fusing the adjacent images and the image to be processed based on the fusion weight to obtain a processed image.
In one embodiment, after obtaining the processed image, the method further includes:
and correcting the processed image according to a preset pixel value fluctuation range.
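The correction by a preset pixel value fluctuation range can be sketched as a per-pixel clamp against the original image. The delta of 10 below is an assumed placeholder for the preset range; the patent does not specify the value.

```python
import numpy as np

def clamp_to_fluctuation_range(processed, original, delta=10):
    """Correct the processed image by limiting each pixel to within
    +/- delta of the corresponding original pixel (delta is an
    assumed preset fluctuation range)."""
    p = processed.astype(np.float32)
    o = original.astype(np.float32)
    return np.clip(p, o - delta, o + delta)

# Pixels that drifted too far from the original are pulled back.
original = np.full((2, 2), 100, dtype=np.uint8)
processed = np.array([[130, 95], [100, 60]], dtype=np.uint8)
corrected = clamp_to_fluctuation_range(processed, original)
```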
An image processing apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed and an adjacent image corresponding to the image to be processed;
the image pyramid construction module is used for respectively constructing the image pyramids of the image to be processed and the adjacent image, wherein the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid;
the calculation module is used for calculating image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement;
the image alignment module is used for aligning the adjacent images with the images to be processed according to the displacement of each image layer and the corresponding measurement index value of the displacement of the image layer;
and the image fusion module is used for fusing the adjacent images and the image to be processed to obtain a processed image.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the image processing method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method as described above.
According to the image processing method, the image processing apparatus, the computer device and the storage medium, the image to be processed and the adjacent image corresponding to the image to be processed are obtained; image pyramids of the image to be processed and of the adjacent image are constructed respectively, where the image pyramid corresponding to the image to be processed is a first image pyramid and the image pyramid corresponding to the adjacent image is a second image pyramid; the image layer displacement between each pair of corresponding layers of the first and second image pyramids, together with the measurement index value corresponding to the image layer displacement, is calculated; the adjacent images are aligned with the image to be processed according to each image layer displacement and its corresponding measurement index value; and the adjacent images are fused with the image to be processed to obtain the processed image. With the method of this embodiment, calculating the image layer displacements between corresponding pyramid layers after constructing the image pyramids improves the accuracy of the displacement calculation; aligning the adjacent images with the image to be processed according to each image layer displacement improves the degree of alignment; and fusing the aligned adjacent images with the image to be processed effectively reduces image noise and improves image quality.
Drawings
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a flow diagram illustrating an image processing method in one embodiment;
FIG. 3 is a diagram illustrating an image processing method in one embodiment;
FIG. 4 is a diagram illustrating an image to be processed in one embodiment;
FIG. 5 is a diagram illustrating an image quality optimization process in one embodiment;
FIG. 6 is a diagram illustrating the calculation of image layer displacement in one embodiment;
FIG. 7 is a block diagram showing the configuration of an image processing apparatus in one embodiment;
FIG. 8 is a diagram illustrating the internal structure of a computer device in one embodiment;
FIG. 9 is a diagram illustrating the internal structure of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, the image processing method provided by the present application can be applied to an application environment as shown in fig. 1. The application environment involves both the terminal 102 and the server 104. The terminal 102 may communicate with the server 104 via a network or other communication means. Specifically, the server 104 acquires the image to be processed and the adjacent image corresponding to the image to be processed through the terminal 102, and respectively constructs an image pyramid of the image to be processed and an image pyramid of the adjacent image, where the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid; calculates image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement; aligns the adjacent images with the images to be processed according to the displacement of each image layer and the measurement index value corresponding to the displacement of the image layer; and fuses the adjacent images and the images to be processed to obtain the processed images.
In one embodiment, the application environment of the image processing method provided by the present application may only relate to the terminal 102. Specifically, the terminal 102 may directly obtain the image to be processed and the adjacent image corresponding to the image to be processed, perform image processing on the image to be processed and the adjacent image in the terminal 102, and finally obtain the processed image.
In one embodiment, the application environment of the image processing method provided by the present application may only involve the server 104. Specifically, the server 104 may directly obtain the image to be processed and the adjacent image corresponding to the image to be processed, perform image processing on the image to be processed and the adjacent image in the server 104, and finally obtain the processed image.
The terminal 102 may be, but is not limited to, various cameras, personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, as shown in fig. 2, an image processing method is provided. The method is described here, by way of example, as applied to the terminal 102 and/or the server 104 in fig. 1, and includes the following steps:
step S202, acquiring the image to be processed and the adjacent image corresponding to the image to be processed.
In one embodiment, the image is an image captured by a shooting device. The shooting device may be a camera, a video camera, or the like. The image that needs to be processed is called the image to be processed, and an image adjacent to it is called an adjacent image; there may be a single adjacent image or multiple adjacent images.
In one embodiment, when the shooting device is a camera, the camera may directly shoot an image, the shot image that needs to be processed is referred to as a to-be-processed image, one or more previous images of the to-be-processed image that are shot are referred to as adjacent images corresponding to the to-be-processed image, or one or more next images of the to-be-processed image that are shot are referred to as adjacent images corresponding to the to-be-processed image.
In one embodiment, when the shooting device is a video camera, the video camera can shoot a video, and a video frame image needs to be extracted from the video. The extracted video frame image which needs to be processed is called a to-be-processed image, the previous frame or multi-frame image of the extracted to-be-processed image is called an adjacent image corresponding to the to-be-processed image, or the next frame or multi-frame image of the extracted to-be-processed image is called an adjacent image corresponding to the to-be-processed image. The video can be historical video or real-time video, and the image can be historical video frame image or real-time video frame image.
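Selecting the image to be processed and its adjacent frames can be sketched as follows. This is an illustration only: the function name, the window size `k`, and working on an already-decoded frame list (rather than reading the video, e.g. with OpenCV) are assumptions.

```python
def frame_window(frames, idx, k=1):
    """Given a list of decoded video frames, return the frame to be
    processed (at index idx) and up to k adjacent frames on each side,
    clipped at the start and end of the sequence."""
    lo, hi = max(0, idx - k), min(len(frames), idx + k + 1)
    neighbors = [frames[i] for i in range(lo, hi) if i != idx]
    return frames[idx], neighbors

# With five frames and k=1, frame 2 has neighbours 1 and 3;
# the first frame only has a following neighbour.
target, neighbors = frame_window(["f0", "f1", "f2", "f3", "f4"], 2)
```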
And step S204, respectively constructing image pyramids of the image to be processed and the adjacent image, wherein the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid.
In one embodiment, the image to be processed and the adjacent image may each be down-sampled or up-sampled to obtain an image pyramid, such as a Gaussian pyramid or a Laplacian pyramid. The image pyramid corresponding to the image to be processed is the first image pyramid, and the image pyramid corresponding to the adjacent image is the second image pyramid. Down-sampling refers to reducing the resolution of an image. After down-sampling or up-sampling, the image pyramid corresponding to the image is obtained. An image pyramid is a multi-resolution representation of an image: a set of progressively lower-resolution images, all derived from the same original image, arranged in a pyramid shape. The higher the level of the image pyramid, the smaller the image and the lower its resolution.
In one embodiment, there are various down-sampling methods, and any one of them may be selected. Specifically, the image may be down-sampled using the OpenCV software library, or down-sampled directly, or down-sampled with a modified weighted down-sampling method. OpenCV is a computer vision and machine learning software library that runs on operating systems such as Linux, Windows, Android and macOS and implements common algorithms in computer vision, image processing, and related areas.
In one embodiment, the down-sampling of the image to be processed and the neighboring images is performed in the same manner. Specifically, the image to be processed is down-sampled to obtain a first image pyramid, and the first image pyramid includes each first image layer. And performing down-sampling on the adjacent images to obtain a second image pyramid, wherein the second image pyramid comprises each second image layer. Because the down-sampling mode is the same, and the image to be processed and the adjacent image are the images obtained by the same shooting device, after the down-sampling of the same mode is carried out on the image to be processed and the adjacent image, the obtained first image layers respectively correspond to the second image layers.
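A minimal sketch of this step follows, applying the same down-sampling routine to both images so that the layers of the two pyramids correspond. The 2x box average here is a stand-in for whichever down-sampling method is actually chosen (OpenCV's `cv2.pyrDown` would be the library route).

```python
import numpy as np

def build_pyramid(img, levels):
    """Build an image pyramid by repeated 2x down-sampling with a box
    average (a stand-in for Gaussian blur plus decimation). Level 0 is
    the full-resolution image; each level halves the resolution."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        p = p[:h, :w]
        pyr.append((p[0::2, 0::2] + p[1::2, 0::2] +
                    p[0::2, 1::2] + p[1::2, 1::2]) / 4.0)
    return pyr

# The same routine is applied to both images so the layers correspond.
to_process = np.zeros((64, 64), dtype=np.uint8)
neighbor = np.zeros((64, 64), dtype=np.uint8)
first_pyramid = build_pyramid(to_process, 4)   # for the image to be processed
second_pyramid = build_pyramid(neighbor, 4)    # for the adjacent image
```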
Step S206, calculating image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid, and a measurement index value corresponding to the image layer displacement.
In one embodiment, since there is a shift between the image to be processed and the adjacent image, it is necessary to calculate a displacement between the image to be processed and the adjacent image, and align the image to be processed with the adjacent image according to the displacement. Specifically, after the down-sampling, the offset between the image to be processed and the adjacent image may be decomposed into image layer displacements between each layer corresponding to the first image pyramid and the second image pyramid, and the displacement between the image to be processed and the adjacent image may be determined through the image layer displacements.
In one embodiment, when the adjacent image of the image to be processed is a single image, only one displacement needs to be calculated to align the adjacent image with the image to be processed. When there are multiple adjacent images, the displacement between each adjacent image and the image to be processed must be calculated separately. For example, if the adjacent images are the previous image and the next image, aligning the three images requires calculating the displacement between the previous image and the image to be processed in order to align those two, and the displacement between the image to be processed and the next image in order to align those two; that is, two displacements need to be calculated.
In one embodiment, when the image layer displacement between each pair of corresponding layers of the first and second image pyramids is calculated, the measurement index value corresponding to that displacement is calculated as well. Each image layer displacement has a corresponding measurement index value that reflects how well the two layers match under that displacement: the smaller the measurement index value, the better the corresponding image layer displacement. When different calculation methods yield different candidate displacements for a layer, the best image layer displacement for that layer can be selected according to the measurement index values.
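One common metric for scoring a candidate displacement is the sum of absolute differences (SAD) between the reference block and the displaced block; the document does not name its metric, so SAD here is an assumption, as are the function names. A sketch of choosing the best candidate:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences; smaller means a better match."""
    return float(np.abs(a.astype(np.float32) - b.astype(np.float32)).sum())

def best_displacement(layer_a, layer_b, y, x, size, candidates):
    """Among candidate (dy, dx) displacements, pick the one whose block
    in layer_b best matches the block at (y, x) in layer_a."""
    ref = layer_a[y:y + size, x:x + size]
    scores = {}
    for dy, dx in candidates:
        yy, xx = y + dy, x + dx
        # Skip candidates whose block falls outside the layer.
        if 0 <= yy <= layer_b.shape[0] - size and 0 <= xx <= layer_b.shape[1] - size:
            scores[(dy, dx)] = sad(ref, layer_b[yy:yy + size, xx:xx + size])
    return min(scores, key=scores.get)

# A block shifted by (1, 2) between two layers is recovered exactly.
layer_a = np.zeros((16, 16), dtype=np.float32)
layer_a[4:8, 4:8] = 1.0
layer_b = np.roll(layer_a, (1, 2), axis=(0, 1))
best = best_displacement(layer_a, layer_b, 4, 4, 4, [(0, 0), (1, 2), (2, 1)])
```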
And step S208, aligning the adjacent images with the images to be processed according to each image layer displacement and the measurement index value corresponding to the image layer displacement.
In one embodiment, after obtaining each image layer displacement and the metric value corresponding to the image layer displacement, the adjacent images may be aligned with the image to be processed according to each image layer displacement and the metric value corresponding to the image layer displacement. Specifically, adjacent images and the image to be processed can be aligned layer by layer to improve the alignment degree of the images, so that the aligned adjacent images and the image to be processed are basically consistent.
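Applying a settled displacement can be sketched as an integer translation of the adjacent image. This is a simplification of the layer-by-layer, per-block alignment described above; the edge-repeat border handling is an assumption.

```python
import numpy as np

def shift_image(img, dy, dx):
    """Translate img by integer (dy, dx); rows/columns exposed by the
    shift are filled by repeating the nearest valid edge."""
    out = np.roll(img, (dy, dx), axis=(0, 1))
    # np.roll wraps around, so overwrite the wrapped rows/columns.
    if dy > 0:
        out[:dy, :] = out[dy, :]
    elif dy < 0:
        out[dy:, :] = out[dy - 1, :]
    if dx > 0:
        out[:, :dx] = out[:, dx:dx + 1]
    elif dx < 0:
        out[:, dx:] = out[:, dx - 1:dx]
    return out

# Shifting one row down moves each row to the next index;
# the exposed top row repeats the first valid row.
a = np.arange(16).reshape(4, 4)
aligned = shift_image(a, 1, 0)
```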
And step S210, fusing the adjacent images and the images to be processed to obtain the processed images.
In one embodiment, the image fusion refers to fusing several processed images into one image, so that the accuracy of the image can be improved, and the resolution of the original image can be improved. And after the adjacent images are aligned with the images to be processed, fusing the adjacent images with the images to be processed. In particular, the fusion may be based on the weights of the image to be processed and the adjacent images. After the fusion, a fused image is obtained, which is called a processed image.
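The weight-based fusion described above can be sketched as a per-pixel weighted average. The scalar weights below are placeholders; in the method the adjacent images' weights derive from the measurement index values.

```python
import numpy as np

def fuse(target, aligned_neighbors, target_weight, neighbor_weights):
    """Per-pixel weighted average of the image to be processed and its
    aligned adjacent images; weights are normalised by their sum."""
    acc = target.astype(np.float32) * target_weight
    total = np.full(target.shape, float(target_weight), dtype=np.float32)
    for img, w in zip(aligned_neighbors, neighbor_weights):
        acc += img.astype(np.float32) * w
        total += w
    return acc / total

# Equal weights average the two images.
target = np.full((2, 2), 10, dtype=np.uint8)
neighbor = np.full((2, 2), 20, dtype=np.uint8)
fused = fuse(target, [neighbor], target_weight=1.0, neighbor_weights=[1.0])
```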
In the image processing method, the image to be processed and the adjacent image corresponding to the image to be processed are obtained; image pyramids of the image to be processed and of the adjacent image are constructed respectively, where the image pyramid corresponding to the image to be processed is a first image pyramid and the image pyramid corresponding to the adjacent image is a second image pyramid; the image layer displacement between each pair of corresponding layers of the first and second image pyramids and the measurement index value corresponding to the image layer displacement are calculated; the adjacent images are aligned with the image to be processed according to each image layer displacement and its corresponding measurement index value; and the adjacent images are fused with the image to be processed to obtain the processed image. With the method of this embodiment, calculating the image layer displacements between corresponding pyramid layers after constructing the image pyramids improves the accuracy of the displacement calculation; aligning the adjacent images with the image to be processed according to each image layer displacement improves the degree of alignment; and fusing the aligned adjacent images with the image to be processed effectively reduces image noise and improves image quality.
In one embodiment, after the step S202 acquires the image to be processed and the adjacent image corresponding to the image to be processed, before the step S204 constructs the image pyramid of the image to be processed and the adjacent image, the method further includes:
step S302, performing image quality optimization processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the image quality optimization processing.
In one embodiment, image quality optimization processing may be performed on the image to be processed and the adjacent image using image contrast processing, image sharpening, and the like; image contrast processing is taken as the specific example here. Image contrast refers to the magnitude of the gray-level differences in an image. When an image is shot at night, low visibility makes the image dark overall, so the contrast of the image can be reduced to bring out detail in its dark areas. Furthermore, because lighting conditions at night are complex and light sources such as street lamps and neon signs are common, regions brighter than their surroundings inevitably appear in the image; these are called highlight areas. Reducing the contrast to improve dark-area detail causes the highlight areas to lose detail, so the contrast of the image then needs to be raised again to restore them.
In one embodiment, the image quality optimization processing is performed in the same way on the image to be processed and on the adjacent image. The following embodiments take the processing of the image to be processed as an example. The image to be processed may be contrast-processed by way of curve mapping. A look-up table (LUT) can be used to improve the image quality: first improving dark-area details and reducing the contrast of the image, then restoring the highlight regions and increasing the contrast. Specifically, the look-up table operation may be performed with image processing software, for example the exposure adjustment and tone mapping functions in Photoshop (PS). The highlight regions of the image may also be restored by dark-channel dehazing, thereby increasing the contrast of the image.
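As a rough illustration, curve mapping via a 256-entry look-up table can be sketched as follows (an illustrative Python/NumPy sketch; the curve shape, function names and values are hypothetical and not taken from the patent):

```python
import numpy as np

def apply_curve_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map every 8-bit pixel through a 256-entry lookup table (curve mapping)."""
    assert lut.shape == (256,)
    return lut[image]

# Example curve: a gamma-style mapping that lifts dark regions, the kind of
# dark-detail-improving curve the text describes for night images.
lift_dark = (255.0 * (np.arange(256) / 255.0) ** 0.5).astype(np.uint8)

img = np.array([[0, 16, 64], [128, 192, 255]], dtype=np.uint8)
out = apply_curve_lut(img, lift_dark)
```

A second LUT with the opposite shape (a contrast-increasing curve) would then be applied to restore the highlight regions, as the text describes.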
In one embodiment, the contrast-processed image to be processed may be weighted with the original image to be processed to obtain an image to be processed with more highlight-region detail. Specifically, the original image to be processed and the contrast-processed image to be processed may be blended according to a weight map corresponding to the image to be processed. Because the brightness changes gradually from the highlight regions to the ordinary regions of the image, the pixel values of the original image to be processed can be multiplied by a preset multiple and then blurred to obtain the weight map corresponding to the image to be processed. The weight map may be denoted W, and the preset multiple may be 1.5, 2, 3, or the like. The blurring may be a mean blur, for example with a radius of 21. The weight map corresponding to the image to be processed can also be obtained with image processing software, for example the highlight-extraction function in Photoshop (PS).
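The weight-map construction described above (multiply by a preset multiple, then mean-blur with radius 21) might look like the following sketch; `highlight_weight_map`, the hand-rolled separable box blur, and the normalisation to [0, 1] are illustrative assumptions:

```python
import numpy as np

def mean_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Separable box (mean) blur with edge replication."""
    k = 2 * radius + 1
    pad = np.pad(img.astype(np.float64), radius, mode='edge')
    kernel = np.ones(k) / k
    # blur along rows, then along columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, rows)

def highlight_weight_map(gray: np.ndarray, factor: float = 2.0, radius: int = 21) -> np.ndarray:
    """Weight map W: scale the pixel values by a preset multiple, clip to [0, 255],
    then mean-blur so the weight falls off smoothly around highlights."""
    boosted = np.clip(gray.astype(np.float64) * factor, 0, 255)
    return mean_blur(boosted, radius) / 255.0  # normalised to [0, 1] (assumption)

gray = np.full((64, 64), 100.0)
W = highlight_weight_map(gray)
```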
In one embodiment, the preset formula may be used to restore highlight details of the weight map corresponding to the image to be processed, so as to obtain the image to be processed after the image quality optimization processing. Specifically, the preset formula is as follows:
(The formula appears as an image in the original document and is not reproduced here; it combines the original channel values r, g, b with the contrast-processed channel values r_new, g_new, b_new according to the weight W(y, x).)
where W(y, x) is the value of pixel (y, x) in the weight map W corresponding to the image to be processed; r, g and b are the pixel values of the red, green and blue channels of the original image to be processed; and r_new, g_new and b_new are the pixel values of the red, green and blue channels of the contrast-processed image to be processed.
In one embodiment, after the step S202 of obtaining the image to be processed and the adjacent image corresponding to the image to be processed, before the step S204 of respectively constructing the image pyramids of the image to be processed and the adjacent image, the method further includes:
and S402, performing primary filtering processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the primary filtering processing.
Impulse noise is a common noise in images and appears as randomly occurring white or black dots: black pixels in highlight regions, white pixels in dark regions, or both. Impulse noise may be caused by strong interference with the image signal, by the analog-to-digital converter, or by bit transmission errors. It is generally removed by filtering, such as median filtering or other filters that remove isolated points. When the image capturing apparatus converts raw data into an image, impulse noise is amplified to span several pixels, so filtering must be applied several times. However, repeated filtering blurs the image, so the image needs targeted filtering, for example filtering the flat areas of the image or filtering the motion areas of the image, in order to effectively reduce image noise.
In one embodiment, the image to be processed and the neighboring image are filtered in the same manner. The global filtering process is performed on the entire image, and is referred to as a primary filtering process. Specifically, the image to be processed and the adjacent image are subjected to primary filtering processing, so that the image to be processed and the adjacent image after the primary filtering processing are obtained. The median filtering process, or the improved median filtering process, etc. may be used.
And step S404, determining the motion areas in the image to be processed and the adjacent image after the primary filtering processing according to the pixel values of the image to be processed and the adjacent image after the primary filtering processing.
In one embodiment, since noise of a non-motion region of an image can be removed through a subsequent alignment and fusion process of the image, a filtering process may be performed on a motion region of the image, and the motion region of the image needs to be determined. And determining the motion areas in the image to be processed and the adjacent image after the primary filtering processing according to the pixel values of the image to be processed and the adjacent image after the primary filtering processing.
Specifically, when the difference between the pixel values of the current region of the image to be processed and the corresponding region of the adjacent image is greater than a preset motion threshold, the current region is determined as a motion region, and the motion region threshold is set to a first preset value. And when the difference between the pixel values of the current area of the image to be processed and the corresponding area of the adjacent image is smaller than a preset motion threshold value, determining the current area as a non-motion area, and setting the non-motion area threshold value as a second preset value. The value range of the preset motion threshold is (0, 255), which may be specifically set to 35. The motion region threshold, i.e., the first preset value, may be set to 255, and the non-motion region threshold, i.e., the second preset value, may be set to 0.
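A per-pixel version of this thresholding (the patent works per region; per pixel is a simplification, and the names below are illustrative) could look like:

```python
import numpy as np

MOTION_THRESHOLD = 35   # preset motion threshold in (0, 255), e.g. 35
MOTION_VALUE = 255      # first preset value (motion region)
STATIC_VALUE = 0        # second preset value (non-motion region)

def motion_mask(ref: np.ndarray, neighbor: np.ndarray) -> np.ndarray:
    """Mark locations whose absolute pixel difference exceeds the motion threshold."""
    diff = np.abs(ref.astype(np.int16) - neighbor.astype(np.int16))
    return np.where(diff > MOTION_THRESHOLD, MOTION_VALUE, STATIC_VALUE).astype(np.uint8)

ref = np.zeros((4, 4), dtype=np.uint8)
nb = ref.copy()
nb[0, 0] = 100          # a moving pixel in the adjacent image
m = motion_mask(ref, nb)
```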
Step S406, calculating a boundary value between the image to be processed and the adjacent image after the primary filtering process.
In one embodiment, since the filtering window may extend beyond the image region when filtering is performed at a boundary, filtering should not be performed at boundaries, and it is therefore necessary to calculate the boundary values of the image to be processed and the adjacent image after the primary filtering processing. The boundary values can be calculated with a Sobel operator or another edge detection algorithm.
Specifically, when the boundary value calculated from the current region of the image to be processed is greater than the preset boundary threshold value, the current region is determined as the boundary region, and the boundary region threshold value is set to the first preset value. And when the boundary value calculated in the current area of the image to be processed is smaller than the preset boundary threshold value, determining the current area as a non-boundary area, and setting the threshold value of the non-boundary area as a second preset value. The value range of the preset boundary threshold is (0, 255), and may be specifically set to 35. The boundary region threshold, i.e., the first preset value, may be set to 255, and the non-boundary region threshold, i.e., the second preset value, may be set to 0.
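A sketch of the Sobel-based boundary thresholding, with the 3x3 Sobel kernels written out by hand (the threshold value follows the text; the function names are illustrative):

```python
import numpy as np

BOUNDARY_THRESHOLD = 35  # preset boundary threshold in (0, 255), e.g. 35

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude from the two 3x3 Sobel kernels, edge-replicated."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def boundary_mask(img: np.ndarray) -> np.ndarray:
    """255 (first preset value) at boundaries, 0 (second preset value) elsewhere."""
    return np.where(sobel_magnitude(img) > BOUNDARY_THRESHOLD, 255, 0).astype(np.uint8)

step = np.zeros((5, 6))
step[:, 3:] = 100.0      # a vertical edge between columns 2 and 3
bm = boundary_mask(step)
```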
In step S408, based on the boundary value, non-boundary motion regions in the image to be processed and the adjacent image after the primary filtering process are determined.
Specifically, according to the non-boundary region threshold and the motion region threshold obtained by the above calculation, the non-boundary motion region in the image to be processed and the adjacent image after the primary filtering processing is determined.
And step S410, carrying out secondary filtering processing on the non-boundary motion area to obtain the image to be processed and the adjacent image after noise processing.
After the non-boundary motion area of the image to be processed and the adjacent image is determined, filtering processing is carried out on the non-boundary motion area, and the filtering processing is called secondary filtering processing. The secondary filtering process may be performed in the same manner as the primary filtering process. Specifically, the to-be-processed image and the adjacent image after the noise processing may be obtained by means of median filtering processing or improved median filtering processing, and the obtained to-be-processed image and adjacent image may be applied to subsequent calculation.
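Steps S408 to S410 — combining the motion and boundary masks and filtering only the non-boundary motion region — can be sketched as follows, using a plain median filter as the secondary filtering (a simplified illustration under those assumptions, not the patent's exact procedure):

```python
import numpy as np

def non_boundary_motion(motion: np.ndarray, boundary: np.ndarray) -> np.ndarray:
    """Non-boundary motion region: marked as motion (255) but not as boundary (255)."""
    return (motion == 255) & (boundary == 0)

def median_filter_region(img: np.ndarray, region: np.ndarray, radius: int = 1) -> np.ndarray:
    """Apply a median filter only where `region` is True (the secondary filtering)."""
    padded = np.pad(img.astype(np.float64), radius, mode='edge')
    out = img.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(region)):
        window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
        out[y, x] = np.median(window)
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0                                  # an impulse-noise pixel
motion = np.full((5, 5), 255, dtype=np.uint8)        # everything marked as motion
boundary = np.zeros((5, 5), dtype=np.uint8)          # no boundaries
region = non_boundary_motion(motion, boundary)
cleaned = median_filter_region(noisy, region)
untouched = median_filter_region(noisy, np.zeros((5, 5), dtype=bool))
```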
In one embodiment, the step S206 of calculating the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid includes:
step S502, calculating an initial image block displacement of the first image layer, where the initial image block displacement is a displacement between a first image block in the first image layer and a second image block in a corresponding second image layer, the first image layer is an image layer in the first image pyramid, the second image layer is an image layer in the second image pyramid, the first image block is an image block of a preset size in the first image layer, and the second image block is an image block of a preset size in the second image layer.
In one embodiment, the image layer shift may be determined by calculating the image block shift of each image block of each image layer corresponding to the image pyramid, so as to improve the accuracy of the calculation. And obtaining each first image layer after down-sampling the image to be processed. Similarly, after the down-sampling of the adjacent images is performed in the same manner, each second image layer is obtained. The image blocks with the preset size in the first image layer are called first image blocks, and each first image block with the preset size is obtained. Similarly, the image blocks with the preset size in the second image layer are called second image blocks, and each second image block with the preset size is obtained. The first image layer corresponds to the second image layer, so that the obtained first image block with the preset size corresponds to the second image block. And the displacement between the first image block and the corresponding second image block is called initial image block displacement. When the alignment displacement is calculated, the calculation is performed according to each image block of each layer of image layer. Before calculating the initial image block displacement of the first image layer, the displacement of the minimum resolution image layer needs to be initialized, so that the image block displacement is 0.
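A minimal sketch of pyramid construction and block splitting, assuming 2x average-pooling downsampling and non-overlapping blocks (both are illustrative choices; the patent does not fix them). The minimum-resolution layer's block displacements would then be initialized to 0, as the text notes:

```python
import numpy as np

def downsample2(img: np.ndarray) -> np.ndarray:
    """Halve resolution by 2x2 average pooling (one simple downsampling choice)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(img: np.ndarray, levels: int) -> list:
    """Level 0 is the full-resolution layer; each next level halves the resolution."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(downsample2(pyr[-1]))
    return pyr

def split_blocks(layer: np.ndarray, block: int) -> np.ndarray:
    """Tile a layer into non-overlapping block x block image blocks."""
    h, w = layer.shape[0] // block, layer.shape[1] // block
    return layer[:h * block, :w * block].reshape(h, block, w, block).swapaxes(1, 2)

img8 = np.arange(64.0).reshape(8, 8)
pyr = build_pyramid(img8, 3)
blocks = split_blocks(pyr[0], 4)   # shape (rows, cols, 4, 4)
```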
Step S504, an initial metric value corresponding to the initial image block displacement is calculated.
In one embodiment, multiple calculation modes may be adopted to calculate multiple initial image block displacements for the first image layer, after which an optimal image block displacement is determined according to the corresponding metric values. Specifically, an initial metric value corresponding to each initial image block displacement is calculated, so that each initial image block displacement corresponds to an initial metric value. The initial metric value measures how good the calculated initial image block displacement is, so that an optimal image block displacement can be determined from the multiple initial image block displacements.
Step S506, according to the initial image block displacements and the initial metric values, the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid is determined.
In one embodiment, the smaller the initial metric value, the better the corresponding image block displacement. Specifically, the initial image block displacement corresponding to the smallest initial metric value may be determined as the optimal image block displacement of the first image layer. Then, the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid is determined from the image block displacements of the first image layer.
In one embodiment, the step S502 of calculating the initial image block displacement of the first image layer includes:
step S602, performing interpolation calculation on image layer displacement of a previous image layer of the first image layer to obtain an interpolation calculation result, where the previous image layer is an image layer in the first image pyramid, adjacent to the first image layer, and having a resolution lower than that of the first image layer.
In one embodiment, after down-sampling the image to be processed, a first image pyramid is obtained. In the first image pyramid, an image layer adjacent to the first image layer and having a resolution lower than that of the first image layer, i.e., a higher-level image layer, is used as a previous image layer. The image layer displacement of the previous image layer is obtained based on the image layer displacement of the image layer with the minimum resolution.
In one embodiment, the image layer displacement of the image layer preceding the first image layer is interpolated to obtain the interpolation result. The interpolation may use Lagrange interpolation, Newton interpolation, piecewise interpolation, bilinear interpolation, or the like.
Step S604, determining a first image block displacement of the first image layer according to the interpolation calculation result, where the initial image block displacement includes the first image block displacement.
In one embodiment, since the resolutions of the first image layers in the first image pyramid are proportional, the interpolation calculation result of the image layer displacement of the previous image layer needs to be amplified by the corresponding resolution, and then can be used as the image block displacement of the first image block of the first image layer, and the image block displacement determined by using the steps in this embodiment is referred to as the first image block displacement. Wherein, the resolution magnification factor is the resolution ratio of the first image layer and the previous image layer. For example, the resolution of the first image layer is 2 times of the resolution of the previous image layer, that is, the corresponding resolution magnification is 2, and the interpolation result needs to be multiplied by 2 to be used as the image block displacement of the first image block of the first image layer.
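The first displacement candidate — interpolating the coarser layer's per-block displacement field and multiplying by the resolution ratio — might be sketched as follows, using nearest-neighbour repetition in place of a fancier interpolation (function name and field layout are illustrative):

```python
import numpy as np

def upscale_displacement(coarse: np.ndarray, scale: int = 2) -> np.ndarray:
    """First candidate: interpolate the coarser layer's displacement field
    (nearest-neighbour here for brevity) and multiply by the resolution ratio.
    `coarse` has shape (rows, cols, 2): one (dy, dx) vector per block."""
    up = np.repeat(np.repeat(coarse, scale, axis=0), scale, axis=1)
    return up * scale   # e.g. resolution ratio 2 => multiply the vectors by 2

coarse = np.array([[[1.0, 2.0]]])   # a single coarse block displaced by (1, 2)
up = upscale_displacement(coarse)
```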
In one embodiment, the step S502 of calculating the initial image block displacement of the first image layer includes:
step S702 is to perform statistical calculation on the image layer displacement of the previous image layer of the first image layer to obtain a statistical calculation result, where the previous image layer is a first image layer in the first image pyramid, adjacent to the first image layer, and having a resolution lower than that of the first image layer.
In one embodiment, at least one of a mode, a mean, or a median of displacements of each image block in image layer displacements of the previous image layer may be counted as a result of the statistical calculation. The previous image layer is an image layer which is adjacent to the first image layer and has lower resolution than the first image layer in each first image layer.
Step S704, determining a second image block displacement of the first image layer according to the statistical calculation result, where the initial image block displacement includes the second image block displacement.
In one embodiment, the statistical calculation result of the image layer displacement of the previous image layer needs to be amplified by the corresponding resolution, and then may be used as the image block displacement of the first image block of the first image layer, and the image block displacement determined by using the step in this embodiment is referred to as the second image block displacement.
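The second (statistical) candidate could be sketched like this; returning both the mean and the median, and the function name, are illustrative choices:

```python
import numpy as np

def statistical_candidates(coarse: np.ndarray, scale: int = 2) -> dict:
    """Second candidate: a statistic (mean / median; a mode would also fit the text)
    over the coarser layer's block displacements, scaled by the resolution ratio."""
    flat = coarse.reshape(-1, 2)
    return {
        "mean": flat.mean(axis=0) * scale,
        "median": np.median(flat, axis=0) * scale,
    }

coarse2 = np.array([[[1.0, 0.0], [3.0, 0.0]]])   # two coarse blocks
cands = statistical_candidates(coarse2)
```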
In one embodiment, the step S502 of calculating the initial image block displacement of the first image layer includes:
step S802, determine the image block in the first image layer for which the image block displacement is determined.
In one embodiment, the initial image block displacement of the first image layer may be further determined based on the image blocks in the first image layer whose image block displacement has already been determined, where those image blocks must be adjacent to the first image block.
In one embodiment, when the image block displacement of each first image block of the first image layer is calculated, the first image blocks may be traversed in a preset order, for example calculating the first row of the first image layer from left to right, the second row from right to left, and so on in this alternating order. The image blocks whose displacement has already been determined may have been determined according to steps S602 to S604, steps S702 to S704, or the steps of this embodiment. Specifically, the image blocks adjacent to the first image block in the first image layer whose displacement has already been determined number a preset number, which may be, for example, 2, 3 or 4.
Step S804, determining a third image block displacement of the first image layer according to the image block with the determined image block displacement, where the initial image block displacement includes the third image block displacement.
In one embodiment, the image block displacement of each first image block in the first image layer may be determined according to the image block for which the image block displacement is determined, and the image block displacement determined by the step of this embodiment is referred to as a third image block displacement.
In one embodiment, there may be only one first image block in the first image layer, or the image block whose displacement needs to be determined may be the first image block calculated in the first image layer; in either case, the first image block has no adjacent image block whose displacement has already been determined. In this case, the image block displacement of the first image block can only be determined according to steps S602 to S604 or steps S702 to S704.
In one embodiment, the step S504 of calculating the initial metric value corresponding to the initial image block displacement includes:
step S902, determining a second image block in the second image layer with the same displacement as the initial image block in the first image layer.
In one embodiment, the second image layer refers to a second image layer corresponding to the first image layer. The method may further include determining a second image block corresponding to the displacement in the second image layer according to the initial image block displacement of the first image block in the first image layer. In particular, a second image layer corresponding to the first image layer may be determined, in which a second image block having the same initial image block displacement as the first image block of the first image layer is determined.
Step S904, calculating an initial metric value corresponding to the displacement of the initial image block according to the pixel value of the first image block in the first image layer and the pixel value of the second image block.
In one of the embodiments, a sum of absolute differences of pixel values between two image blocks may be used as an initial metric, or a correlation between two image blocks may be used as an initial metric. Specifically, when the sum of absolute differences of pixel values between two image blocks is used as the initial metric, the initial metric corresponding to the initial image block displacement may be calculated according to the pixel value of the first image block in the first image layer and the pixel value of the second image block with the same initial image block displacement.
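The sum-of-absolute-differences metric between a first image block and the correspondingly displaced second image block can be sketched as follows (lower is better; the boundary handling and function name are assumptions):

```python
import numpy as np

def sad_metric(ref_layer, nbr_layer, y, x, block, disp):
    """Sum of absolute differences between the first image block at (y, x) and the
    second image block displaced by disp = (dy, dx); smaller means better alignment."""
    dy, dx = int(disp[0]), int(disp[1])
    if y + dy < 0 or x + dx < 0:
        return float("inf")          # displaced block would leave the layer
    a = ref_layer[y:y + block, x:x + block].astype(np.float64)
    b = nbr_layer[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.float64)
    if a.shape != b.shape:
        return float("inf")          # displaced block extends past the far edge
    return float(np.abs(a - b).sum())

ref = np.zeros((8, 8))
ref[2:6, 2:6] = 5.0                  # a feature in the image to be processed
nbr = np.zeros((8, 8))
nbr[2:6, 3:7] = 5.0                  # the same feature shifted right by 1 pixel
```

A displacement of (0, 1) should score 0 here, while (0, 0) scores worse.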
In one embodiment, the step S208 aligns the neighboring images with the to-be-processed image according to the image layer displacement and the metric value corresponding to the image layer displacement, and includes:
step S1002, determining an image block displacement of a first image block in the first image layer from the initial image block displacements based on the initial metric value.
In one embodiment, according to the initial metric value, the optimal image block displacement of the first image block in the first image layer may be determined from the first image block displacement, the second image block displacement, and the third image block displacement corresponding to the initial metric value with the smallest value.
Step S1004, determining an image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a metric value corresponding to the image layer displacement based on the image block displacement of the first image block in the first image layer.
In one embodiment, the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid is determined by the image block displacement of the first image block in the first image layer. After the image layer displacement corresponding to each layer is determined, the metric value corresponding to the image layer displacement is determined.
Step S1006, according to each image layer displacement and the metric value corresponding to the image layer displacement, the adjacent image is aligned with the image to be processed.
In one embodiment, after determining each image layer displacement and the metric value corresponding to the image layer displacement, the adjacent images may be aligned with the image to be processed according to the determined image layer displacement.
In one embodiment, after the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid is determined based on the image block displacement of the first image block in the first image layer in step S1004, the method may further include updating the image layer displacement according to the image block displacements around the first image block:
step S1102 is to determine a preset neighborhood image block adjacent to the first image block in the first image layer.
In one embodiment, the image block displacement of each first image block of the first image layer may be determined according to only one of steps S602 to S604, steps S702 to S704, or steps S802 to S804, or according to all three at the same time. A preset neighborhood image block adjacent to the first image block in the first image layer is then determined. Specifically, a preset neighborhood image block is an image block around the first image block whose image block displacement has already been determined. There may be multiple preset neighborhood image blocks: for example, 4 (the blocks above, below, to the left and to the right of the first image block), 8 (one surrounding ring of blocks), or 24 (two surrounding rings of blocks). The already-determined image block displacements may have been determined according to one of steps S602 to S604, steps S702 to S704, or steps S802 to S804.
Step S1104, updating the image block displacement of the first image block in the first image layer according to the image block displacement of each preset neighborhood image block and the corresponding metric value, and determining the updated image layer displacement.
In one embodiment, the image block displacement of the first image block in the first image layer may be updated according to the determined image block displacement and the corresponding metric value of each preset neighborhood image block. Specifically, when a preset neighborhood image block of the first image block yields a smaller (better) metric value, the image block displacement of the first image block is updated to the image block displacement corresponding to that better metric value; otherwise, the displacement is not updated.
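The neighbourhood update rule — adopt a neighbour's displacement only when it scores a strictly better (smaller) metric for the current block — might look like this; the `evaluate` callback, the offsets list, and the array layout are illustrative assumptions:

```python
import numpy as np

def refine_with_neighbors(disp, metric, y, x, neighbor_offsets, evaluate):
    """If a neighbouring block's already-computed displacement scores a smaller
    (better) metric for block (y, x), adopt it; otherwise keep the current one.
    `disp` is (rows, cols, 2); `metric` is (rows, cols); `evaluate(y, x, d)` returns
    the metric of candidate displacement d for block (y, x)."""
    best_d, best_m = disp[y, x].copy(), metric[y, x]
    rows, cols = metric.shape
    for dy, dx in neighbor_offsets:            # e.g. the 4-neighbourhood
        ny, nx = y + dy, x + dx
        if 0 <= ny < rows and 0 <= nx < cols:
            m = evaluate(y, x, disp[ny, nx])
            if m < best_m:
                best_d, best_m = disp[ny, nx].copy(), m
    return best_d, best_m

disp = np.zeros((2, 2, 2))
disp[0, 1] = [1.0, 0.0]                        # a neighbour with a better displacement
metric = np.array([[10.0, 1.0], [5.0, 5.0]])

# hypothetical metric callback: pretend displacement (1, 0) matches perfectly
evaluate = lambda y, x, d: 1.0 if d[0] == 1.0 else 10.0

best_d, best_m = refine_with_neighbors(disp, metric, 0, 0, [(0, 1), (1, 0)], evaluate)
```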
In one embodiment, the step S210 fuses the adjacent images and the image to be processed to obtain a processed image, including:
step S1202, according to the updated measurement index value corresponding to the image layer displacement, determining the initial weight of the adjacent image.
In one embodiment, after the adjacent image is aligned with the image to be processed, the adjacent image and the image to be processed need to be fused into a noise-reduced image, and the fusion can be performed based on per-image weights. Since each initial image block displacement corresponds to an initial metric value, once the image block displacement of the first image block in the first image layer is determined there is only one corresponding initial metric value, which is simply called the metric value.
In one embodiment, the initial weight of the second image block of the adjacent image corresponding to the image to be processed, that is, the initial weight of the adjacent image, is determined from the relationship between a preset threshold and the metric value of the first image block of the first image layer. Specifically, the initial weight may be determined as follows: when the metric value of the second image block is greater than the threshold, the initial weight of the second image block is 0, indicating that the image to be processed and the adjacent image are poorly aligned there; when the metric value of the second image block is smaller than the threshold, the initial weight of the second image block = 1 − metric value / threshold.
In one embodiment, in order to improve the quality of the image fusion by allowing higher initial weights, the initial weight may instead be determined as follows: when the metric value of the second image block is greater than or equal to a preset larger threshold, the initial weight of the second image block is set to 0; when the metric value of the second image block is smaller than a preset smaller threshold, the initial weight of the second image block is set to 1; and when the metric value of the second image block lies between the smaller and larger thresholds, the initial weight of the second image block decreases linearly.
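This piecewise-linear weighting can be sketched as follows (the two threshold values are parameters, not fixed by the text; lower metric means better alignment):

```python
def initial_weight(metric: float, low: float, high: float) -> float:
    """Piecewise-linear weight from an alignment metric:
    weight 1 below `low`, weight 0 at or above `high`, linear in between."""
    if metric >= high:
        return 0.0
    if metric < low:
        return 1.0
    return float((high - metric) / (high - low))
```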
Step S1204, fusing the preset weight of the image to be processed and the initial weight of the adjacent image to obtain a fusion weight of the adjacent image and the image to be processed.
In one embodiment, the preset weight of the image to be processed and the initial weight of the corresponding adjacent image are subjected to weighted summation to obtain the fusion weight of the adjacent image and the image to be processed. Wherein, the preset weight of the image to be processed can be set to 1.
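The fusion by weighted summation, with the image to be processed given a preset weight of 1, could be sketched as follows (scalar per-image weights here; per-block or per-pixel weight maps would work the same way):

```python
import numpy as np

def fuse(ref: np.ndarray, neighbors: list, weights: list, ref_weight: float = 1.0) -> np.ndarray:
    """Weighted average of the image to be processed (preset weight, e.g. 1)
    and the aligned adjacent images."""
    acc = ref.astype(np.float64) * ref_weight
    total = ref_weight
    for img, w in zip(neighbors, weights):
        acc += img.astype(np.float64) * w
        total += w
    return acc / total
```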
And step S1206, fusing the adjacent images and the images to be processed based on the fusion weight, and acquiring the processed images.
In one embodiment, after the neighboring image and the to-be-processed image are fused based on the fusion weight and the processed image is obtained, in order to improve the correlation between the processed image and the original image, the pixel value of the processed image should be within the value range of the pixel value of the original image.
In one embodiment, after acquiring the processed image, the method further includes: correcting the processed image according to a preset pixel value fluctuation range, where the fluctuation range of the pixel values of the processed image is set in advance, for example to ±10. For example, if a pixel of the original image has the value 50 and, after the adjacent image and the image to be processed are fused according to the fusion weight, the corresponding pixel of the processed image has the value 62, it is corrected to 60; if its value is 56, no correction is needed.
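The ±10 correction amounts to clamping each fused pixel to a band around the corresponding original pixel; a minimal sketch (function name is illustrative):

```python
import numpy as np

def clamp_to_range(fused: np.ndarray, original: np.ndarray, fluctuation: float = 10.0) -> np.ndarray:
    """Correct the fused image so each pixel stays within +/- `fluctuation`
    of the corresponding original pixel (e.g. 62 -> 60 when the original is 50)."""
    lo = original.astype(np.float64) - fluctuation
    hi = original.astype(np.float64) + fluctuation
    return np.clip(fused.astype(np.float64), lo, hi)
```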
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and a specific embodiment. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
In one specific embodiment, a schematic diagram of the image processing method is shown in fig. 3, where after the image is preprocessed, a preprocessed image is obtained, and then a frame of the processed image is obtained by aligning and fusing multiple frames of images. A schematic diagram of the image to be processed is shown in fig. 4, in which impulse noise is evident in the image. The image processing method comprises the following specific steps:
1. image pre-processing
And determining the image to be processed and the adjacent image corresponding to the image to be processed. Before performing multi-frame denoising processing on an image, image preprocessing can be further included, wherein the image preprocessing mainly includes: at least one of the image quality optimization processing and the noise processing, taking a preprocessing process of an image to be processed as an example, specifically includes the following steps:
image quality optimization pretreatment:
the contrast of the image is reduced by using the curve mapping method shown in fig. 5 (a), the details of the dark area of the image are improved, and the contrast of the image is improved by using the curve mapping method shown in fig. 5 (b), so that the highlight area of the image is restored.
The pixel values of the original image to be processed are multiplied by 2, and mean blur processing with a radius of 21 is performed to obtain the weight map W corresponding to the image to be processed. The highlight details are then restored using the weight map to obtain the image to be processed after image quality optimization processing, with the following formula:
(The formula appears as an image in the original document and is not reproduced here; it combines the original channel values r, g, b with the contrast-processed channel values r_new, g_new, b_new according to the weight W(y, x).)
where W(y, x) is the value of pixel (y, x) in the weight map W corresponding to the image to be processed; r, g and b are the pixel values of the red, green and blue channels of the original image to be processed; and r_new, g_new and b_new are the pixel values of the red, green and blue channels of the contrast-processed image to be processed.
Noise preprocessing:
Perform global filtering on the image to be processed to obtain the globally filtered image to be processed.
Determine the motion area in the globally filtered image to be processed, setting the motion area to 255 and the non-motion area to 0.
Calculate the boundary values of the globally filtered image to be processed using a Sobel operator or another edge detection algorithm, setting the boundary area to 255 and the non-boundary area to 0.
Determine the non-boundary motion area in the globally filtered image to be processed and filter it, obtaining the image to be processed after noise processing.
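The mask arithmetic of the noise-preprocessing steps can be sketched as follows. This is only an illustration: the function names, the Sobel threshold of 50, and treating the motion mask as given are all assumptions not stated in the original.

```python
import numpy as np

def sobel_boundary_mask(gray, thresh=50.0):
    """Binary boundary mask: 255 where the Sobel gradient magnitude exceeds
    the threshold, 0 elsewhere (the threshold value is an assumption)."""
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    # 3x3 Sobel kernels written out as shifted-slice sums.
    gx = ((g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:])
          - (g[:-2, :-2] + 2 * g[1:-1, :-2] + g[2:, :-2]))
    gy = ((g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:])
          - (g[:-2, :-2] + 2 * g[:-2, 1:-1] + g[:-2, 2:]))
    mag = np.hypot(gx, gy)
    return np.where(mag > thresh, 255, 0).astype(np.uint8)

def non_boundary_motion_mask(motion_mask, boundary_mask):
    """Select pixels flagged as motion (255) but not as boundary (255); these
    are the regions that receive the second round of filtering."""
    keep = (motion_mask == 255) & (boundary_mask == 0)
    return np.where(keep, 255, 0).astype(np.uint8)
```

Filtering only the non-boundary motion regions avoids blurring edges that the boundary mask has protected.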
2. Multi-frame image noise reduction processing
The multi-frame image noise reduction mainly comprises aligning the image to be processed with the adjacent image and fusing the aligned images, specifically as follows:
Image alignment process:
Downsample the image to be processed and the adjacent image respectively, obtaining a first image pyramid corresponding to the image to be processed, which comprises first image layers, and a second image pyramid corresponding to the adjacent image, which comprises second image layers. Segment the first image layers and the second image layers according to a preset size to obtain first image blocks and second image blocks respectively.
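A minimal sketch of the pyramid construction and block segmentation, assuming 2x2-average downsampling and non-overlapping tiles (the patent specifies neither the downsampling filter nor the tiling scheme, and the names are my own):

```python
import numpy as np

def build_pyramid(img, levels):
    """Downsample by 2 at each level via 2x2 averaging (finest level first)."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        p = prev[:h, :w]  # drop an odd trailing row/column if present
        pyr.append((p[0::2, 0::2] + p[1::2, 0::2]
                    + p[0::2, 1::2] + p[1::2, 1::2]) / 4.0)
    return pyr

def split_into_blocks(layer, block):
    """Tile a layer into non-overlapping block x block patches, keyed by
    (block_row, block_col); a partial edge strip is simply dropped here."""
    h, w = layer.shape
    return {(by, bx): layer[by * block:(by + 1) * block,
                            bx * block:(bx + 1) * block]
            for by in range(h // block) for bx in range(w // block)}
```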
Determine the image layer displacement between each pair of corresponding layers of the first and second image pyramids according to the initial image block displacement between each first image block and the corresponding second image block and the initial metric value corresponding to that displacement.
The initial image block displacement can be calculated in any of the following ways:
Mode (1): perform interpolation on the image layer displacement of the previous image layer of the first image layer to obtain an interpolation result, where the previous image layer is the image layer in the first image pyramid that is adjacent to the first image layer and has a lower resolution than the first image layer; determine the first image block displacement of the first image block in the first image layer according to the interpolation result. The initial image block displacement includes the first image block displacement.
Mode (2): perform statistical calculation on the image layer displacement of the previous image layer of the first image layer to obtain a statistical result, and determine the second image block displacement of the first image block in the first image layer according to the statistical result. The initial image block displacement includes the second image block displacement.
Mode (3): determine the image blocks in the first image layer that are adjacent to the first image block and whose image block displacements have already been determined. As shown in fig. 6 (a), the arrows indicate the calculation order, ● indicates the first image block in the first image layer, and x indicates the 3 adjacent image blocks whose displacements have been determined. Determine the third image block displacement of the first image block in the first image layer according to these image blocks. The initial image block displacement includes the third image block displacement.
In the second image layer corresponding to the first image layer, determine the second image block located at each candidate initial image block displacement of the first image block, calculate the initial metric value corresponding to that displacement from the pixel values of the first image block and the second image block, take the initial image block displacement with the smallest initial metric value as the image block displacement of the first image block, and determine the image layer displacement between each pair of corresponding layers of the first and second image pyramids based on the image block displacements.
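The selection of the image block displacement with the smallest metric value can be sketched as follows, using the sum of absolute differences as an assumed metric (the patent does not name the metric, and the function names are my own). The `candidates` list would hold the first, second and third image block displacements produced by modes (1) to (3):

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: one plausible choice of metric value."""
    return float(np.abs(block_a - block_b).sum())

def best_displacement(ref_layer, adj_layer, y, x, block, candidates):
    """Among the candidate displacements, keep the one whose matched block in
    the adjacent layer gives the smallest metric value."""
    ref = ref_layer[y:y + block, x:x + block]
    h, w = adj_layer.shape
    best, best_cost = None, float("inf")
    for dy, dx in candidates:
        yy, xx = y + dy, x + dx
        # Skip candidates whose matched block would fall outside the layer.
        if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
            cost = sad(ref, adj_layer[yy:yy + block, xx:xx + block])
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost
```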
After the image block displacement of the first image block is determined, determine the preset neighborhood image blocks adjacent to the first image block in the first image layer. As shown in fig. 6 (b), ● indicates the first image block of the first image layer and x indicates the 8 preset neighborhood image blocks adjacent to it. Update the image block displacement of the first image block according to the image block displacement of each preset neighborhood image block and its corresponding metric value, and determine the updated image layer displacement on that basis.
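The neighborhood update step can be sketched with a hypothetical helper: `disps` is a 2-D grid of per-block displacements and `cost_fn(by, bx, d)` evaluates the metric value of candidate displacement `d` at block `(by, bx)` (both names are my own, not the patent's):

```python
def refine_block_displacement(disps, cost_fn, by, bx):
    """Try the 8 neighbours' displacements for block (by, bx) and keep
    whichever candidate (including the current one) scores the lowest
    metric value, suppressing isolated mismatched blocks."""
    h, w = len(disps), len(disps[0])
    best = disps[by][bx]
    best_cost = cost_fn(by, bx, best)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = by + dy, bx + dx
            if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                cand = disps[ny][nx]
                c = cost_fn(by, bx, cand)
                if c < best_cost:
                    best, best_cost = cand, c
    return best
```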
Image fusion process:
Determine the initial weight of each second image block of the adjacent image according to the metric value corresponding to the updated image layer displacement: when the metric value is greater than the threshold of the second image block, the initial weight of the second image block is 0; when the metric value is less than the threshold, the initial weight of the second image block = 1 - metric value / threshold.
The preset weight of the first image block of the image to be processed is 1. Fuse the preset weight of the first image block of the image to be processed with the initial weight of the second image block of the adjacent image to obtain the fusion weight of the adjacent image and the image to be processed.
Fuse the adjacent image and the image to be processed based on the fusion weight to obtain the processed image, and correct the processed image according to a preset pixel value fluctuation range. For example, when the fluctuation range is set to ±10 and the pixel value of the original image is 50: if the pixel value of the processed image is 62, it must be corrected to 60; if it is 56, no correction is needed.
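A sketch of the fusion-weight rule and the pixel-value correction, assuming the weight is 0 when the metric value exceeds the threshold and falls linearly from 1 otherwise, and assuming a simple weighted average for the fusion (names and the averaging form are my assumptions):

```python
import numpy as np

def block_weight(metric_value, threshold):
    """Weight for an adjacent-frame block: 0 when the metric value exceeds
    the threshold, otherwise 1 - metric_value / threshold."""
    if metric_value > threshold:
        return 0.0
    return 1.0 - metric_value / threshold

def fuse_and_clamp(ref, adj, w_adj, fluctuation=10):
    """Weighted average with the reference weight fixed at 1, then clamp each
    output pixel to within +/- fluctuation of the original reference pixel."""
    fused = (ref + w_adj * adj) / (1.0 + w_adj)
    return np.clip(fused, ref - fluctuation, ref + fluctuation)
```

With an original pixel value of 50 and a fused value of 62, the clamp yields 60, matching the example in the text.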
Thus, a frame of image after noise reduction processing is obtained.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an image processing apparatus including: an image acquisition module 710, an image pyramid construction module 720, a calculation module 730, an image alignment module 740, and an image fusion module 750, wherein:
the image obtaining module 710 is configured to obtain an image to be processed and an adjacent image corresponding to the image to be processed.
An image pyramid constructing module 720, configured to construct image pyramids of the to-be-processed image and the adjacent image, respectively, where an image pyramid corresponding to the to-be-processed image is a first image pyramid, and an image pyramid corresponding to the adjacent image is a second image pyramid.
The calculating module 730 is configured to calculate an image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid, and a metric value corresponding to the image layer displacement.
An image alignment module 740, configured to align the adjacent image with the image to be processed according to each image layer displacement and the metric value corresponding to the image layer displacement.
An image fusion module 750, configured to fuse the adjacent image and the image to be processed to obtain a processed image.
In one embodiment, the image processing apparatus further includes:
An image quality optimization processing module, configured to perform image quality optimization processing on the image to be processed and the adjacent image, after they are obtained and before their image pyramids are constructed, to obtain the image to be processed and the adjacent image after image quality optimization processing.
A noise reduction processing module, configured to perform noise reduction processing on the image to be processed and the adjacent image, after they are obtained and before their image pyramids are constructed, to obtain the image to be processed and the adjacent image after noise processing.
In one embodiment, the noise reduction processing module comprises the following units:
A primary filtering processing unit, configured to perform primary filtering on the image to be processed and the adjacent image to obtain the primarily filtered image to be processed and adjacent image.
A motion area determining unit, configured to determine the motion areas in the primarily filtered image to be processed and adjacent image according to their pixel values.
A boundary value calculating unit, configured to calculate the boundary values of the primarily filtered image to be processed and adjacent image.
A non-boundary motion region determining unit, configured to determine the non-boundary motion regions in the primarily filtered image to be processed and adjacent image based on the boundary values.
A secondary filtering processing unit, configured to perform secondary filtering on the non-boundary motion regions to obtain the image to be processed and the adjacent image after noise processing.
In one embodiment, the calculation module 730 includes the following units:
an initial image block displacement calculation unit, configured to calculate an initial image block displacement of a first image layer, where the initial image block displacement is a displacement between a first image block in the first image layer and a second image block in a corresponding second image layer, the first image layer is an image layer in the first image pyramid, the second image layer is an image layer in the second image pyramid, the first image block is an image block of a preset size in the first image layer, and the second image block is an image block of the preset size in the second image layer.
An initial metric value calculation unit, configured to calculate the initial metric value corresponding to the initial image block displacement.
An image layer displacement determining unit, configured to determine the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid according to the initial image block displacement and the initial metric value.
In one embodiment, the initial image block displacement calculating unit includes a first image block displacement calculating unit, and the first image block displacement calculating unit includes the following units:
An interpolation calculation unit, configured to perform interpolation calculation on the image layer displacement of the previous image layer of the first image layer to obtain an interpolation calculation result, where the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has a lower resolution than the first image layer.
A first image block displacement determining unit, configured to determine the first image block displacement of the first image layer according to the interpolation calculation result, where the initial image block displacement comprises the first image block displacement.
In one embodiment, the initial image block displacement calculating unit includes a second image block displacement calculating unit, and the second image block displacement calculating unit includes the following units:
A statistical calculation unit, configured to perform statistical calculation on the image layer displacement of the previous image layer of the first image layer to obtain a statistical calculation result, where the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has a lower resolution than the first image layer.
A second image block displacement determining unit, configured to determine the second image block displacement of the first image layer according to the statistical calculation result, where the initial image block displacement comprises the second image block displacement.
In one embodiment, the initial image block displacement calculating unit includes a third image block displacement calculating unit, and the third image block displacement calculating unit includes the following units:
An image block determining unit, configured to determine the image blocks in the first image layer whose image block displacements have been determined.
A third image block displacement determining unit, configured to determine the third image block displacement of the first image layer according to the image blocks whose displacements have been determined, where the initial image block displacement comprises the third image block displacement.
In one embodiment, the initial metric value calculation unit includes the following units:
A second image block determining unit, configured to determine, in the second image layer, the second image block located at the same displacement as the initial image block displacement of the first image layer.
An initial metric value determining unit, configured to calculate the initial metric value corresponding to the initial image block displacement according to the pixel value of the first image block in the first image layer and the pixel value of the second image block.
In one embodiment, the image alignment module 740 includes the following units:
An image block displacement determining unit, configured to determine the image block displacement of the first image block in the first image layer from the initial image block displacements based on the initial metric values.
An image layer displacement determining unit, configured to determine the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid, and the metric value corresponding to the image layer displacement, based on the image block displacement of the first image block in the first image layer.
An image alignment unit, configured to align the adjacent image with the image to be processed according to each image layer displacement and the metric value corresponding to the image layer displacement.
In one embodiment, the calculation module 730 further includes an image block displacement updating unit, which includes the following units:
A neighborhood image block determining unit, configured to determine, after the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid is determined based on the image block displacement of the first image block in the first image layer, the preset neighborhood image blocks adjacent to the first image block in the first image layer.
An image block displacement updating unit, configured to update the image block displacement of the first image block in the first image layer according to the image block displacement of each preset neighborhood image block and the corresponding metric value, and to determine the updated image layer displacement.
In one embodiment, the image fusion module 750 includes the following units:
An initial weight determining unit, configured to determine the initial weight of the adjacent image according to the metric value corresponding to the updated image layer displacement.
A fusion weight calculation unit, configured to fuse the preset weight of the image to be processed with the initial weight of the adjacent image to obtain the fusion weight of the adjacent image and the image to be processed.
An image fusion unit, configured to fuse the adjacent image and the image to be processed based on the fusion weight to obtain the processed image.
In one embodiment, the image processing apparatus further includes:
An image correction unit, configured to correct the processed image according to a preset pixel value fluctuation range after the processed image is obtained.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the image processing apparatus described above may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing image processing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structures shown in fig. 8 and fig. 9 are merely block diagrams of partial structures related to the solution of the present application and do not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the image processing method described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image processing method described above.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A method of image processing, the method comprising:
acquiring an image to be processed and an adjacent image corresponding to the image to be processed;
respectively constructing image pyramids of the image to be processed and the adjacent image, wherein the image pyramid corresponding to the image to be processed is a first image pyramid, and the image pyramid corresponding to the adjacent image is a second image pyramid;
calculating image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement;
aligning the adjacent image with the image to be processed according to each image layer displacement and the measurement index value corresponding to the image layer displacement;
and fusing the adjacent image and the image to be processed to obtain a processed image.
2. The image processing method according to claim 1, wherein after the obtaining of the image to be processed and the adjacent image corresponding to the image to be processed, before the respectively constructing of the image pyramids of the image to be processed and the adjacent image, at least one of the following is further included:
the first item:
performing image quality optimization processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the image quality optimization processing;
the second item:
performing primary filtering processing on the image to be processed and the adjacent image to obtain the image to be processed and the adjacent image after the primary filtering processing;
determining motion areas in the image to be processed and the adjacent image after the primary filtering processing according to the pixel values of the image to be processed and the adjacent image after the primary filtering processing;
calculating the boundary value of the image to be processed and the adjacent image after the primary filtering processing;
determining non-boundary motion areas in the image to be processed and the adjacent image after the primary filtering processing based on the boundary value;
and carrying out secondary filtering processing on the non-boundary motion area to obtain the image to be processed and the adjacent image after noise processing.
3. The method of claim 1, wherein the computing the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid comprises:
calculating initial image block displacement of a first image layer, wherein the initial image block displacement is displacement between a first image block in the first image layer and a second image block in a corresponding second image layer, the first image layer is an image layer in the first image pyramid, the second image layer is an image layer in the second image pyramid, the first image block is an image block with a preset size in the first image layer, and the second image block is an image block with the preset size in the second image layer;
calculating an initial measurement index value corresponding to the initial image block displacement;
and determining the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid according to the initial image block displacement and the initial measurement index value.
4. The method according to claim 3, wherein said calculating an initial image block displacement for the first image layer comprises at least one of:
the first item:
performing interpolation calculation on image layer displacement of a previous image layer of the first image layer to obtain an interpolation calculation result, wherein the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has a resolution lower than that of the first image layer;
determining a first image block displacement of the first image layer according to the interpolation calculation result, wherein the initial image block displacement comprises the first image block displacement;
the second item:
performing statistical calculation on image layer displacement of a previous image layer of the first image layer to obtain a statistical calculation result, wherein the previous image layer is an image layer which is adjacent to the first image layer in the first image pyramid and has lower resolution than the first image layer;
determining a second image block displacement of the first image layer according to the statistical calculation result, wherein the initial image block displacement comprises the second image block displacement;
the third item:
determining image blocks with determined image block displacement in the first image layer;
and determining a third image block displacement of the first image layer according to the image block with the determined image block displacement, wherein the initial image block displacement comprises the third image block displacement.
5. The image processing method according to claim 4, wherein said calculating an initial metric value corresponding to the initial image block displacement comprises:
determining a second image block with the same displacement as the initial image block of the first image layer in the second image layer;
and calculating an initial measurement index value corresponding to the displacement of the initial image block according to the pixel value of the first image block in the first image layer and the pixel value of the second image block.
6. The method according to claim 5, wherein said aligning the neighboring image with the image to be processed according to each of the image layer displacements and a metric value corresponding to the image layer displacement comprises:
determining the image block displacement of a first image block in the first image layer from the initial image block displacement based on the initial measurement index value;
determining image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid and a measurement index value corresponding to the image layer displacement based on image block displacement of a first image block in the first image layer;
and aligning the adjacent images with the image to be processed according to the image layer displacement and the measurement index value corresponding to the image layer displacement.
7. The method of claim 5, further comprising, after determining the image layer displacement between each layer corresponding to the first image pyramid and the second image pyramid based on the image block displacement of the first image block in the first image layer:
determining a preset neighborhood image block adjacent to the first image block in the first image layer;
and updating the image block displacement of the first image block in the first image layer according to the image block displacement of each preset neighborhood image block and the corresponding measurement index value, and determining the updated image layer displacement.
8. The image processing method according to claim 7, wherein the fusing the adjacent image and the image to be processed to obtain a processed image comprises:
determining the initial weight of the adjacent images according to the updated measurement index value corresponding to the image layer displacement;
fusing the preset weight of the image to be processed with the initial weight of the adjacent image to obtain the fusion weight of the adjacent image and the image to be processed;
and fusing the adjacent images and the image to be processed based on the fusion weight to obtain a processed image.
9. The image processing method according to claim 8, further comprising, after said acquiring the processed image:
and correcting the processed image according to a preset pixel value fluctuation range.
10. An image processing apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire an image to be processed and an adjacent image corresponding to the image to be processed;
an image pyramid construction module, configured to construct image pyramids for the image to be processed and the adjacent image respectively, wherein the image pyramid corresponding to the image to be processed is a first image pyramid and the image pyramid corresponding to the adjacent image is a second image pyramid;
a calculation module, configured to calculate the image layer displacement between each pair of corresponding layers of the first image pyramid and the second image pyramid, and the measurement index value corresponding to each image layer displacement;
an image alignment module, configured to align the adjacent image with the image to be processed according to each image layer displacement and its corresponding measurement index value; and
an image fusion module, configured to fuse the adjacent image and the image to be processed to obtain a processed image.
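The pyramid construction and per-layer displacement modules of claim 10 could be sketched as follows. This is a simplified illustration, not the patented method: downsampling here is plain 2x2 averaging (a Gaussian blur is more common in practice), the displacement search is a brute-force integer search, and the "measurement index value" is assumed to be mean squared error.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Build an image pyramid by halving each layer with 2x2 averaging."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        # Average each 2x2 tile to produce the next, coarser layer.
        ds = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyr.append(ds)
    return pyr

def layer_displacement(layer_a, layer_b, search=2):
    """Exhaustive integer search for the whole-layer shift of layer_b that
    minimizes MSE against layer_a (assumed measurement index value)."""
    best = (0, 0)
    best_err = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(layer_b, dy, axis=0), dx, axis=1)
            err = np.mean((layer_a - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```

In a full coarse-to-fine pipeline, the displacement found on a coarse layer would be doubled and used to seed the search on the next finer layer.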
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the image processing method of any one of claims 1 to 9.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method of any one of claims 1 to 9.
CN202110681297.2A 2021-06-18 2021-06-18 Image processing method, image processing device, computer equipment and storage medium Pending CN115496670A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110681297.2A CN115496670A (en) 2021-06-18 2021-06-18 Image processing method, image processing device, computer equipment and storage medium
PCT/CN2022/097077 WO2022262599A1 (en) 2021-06-18 2022-06-06 Image processing method and apparatus, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110681297.2A CN115496670A (en) 2021-06-18 2021-06-18 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115496670A true CN115496670A (en) 2022-12-20

Family

ID=84464029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110681297.2A Pending CN115496670A (en) 2021-06-18 2021-06-18 Image processing method, image processing device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115496670A (en)
WO (1) WO2022262599A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8478076B2 (en) * 2010-07-05 2013-07-02 Apple Inc. Alignment of digital images and local motion detection for high dynamic range (HDR) imaging
CN103841296B (en) * 2013-12-24 2017-01-18 哈尔滨工业大学 Real-time electronic image stabilizing method with wide-range rotation and horizontal movement estimating function
WO2017206656A1 (en) * 2016-05-31 2017-12-07 努比亚技术有限公司 Image processing method, terminal, and computer storage medium
CN108573269B (en) * 2017-10-24 2021-02-05 北京金山云网络技术有限公司 Image feature point matching method, matching device, electronic device and storage medium
CN111652818B (en) * 2020-05-29 2023-09-29 浙江大华技术股份有限公司 Pyramid-based image filtering method, pyramid-based image filtering device and storage medium
CN112308775A (en) * 2020-09-23 2021-02-02 中国石油大学(华东) Underwater image splicing method and device

Also Published As

Publication number Publication date
WO2022262599A1 (en) 2022-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination