CN108898567B - Image noise reduction method, device and system - Google Patents

Image noise reduction method, device and system

Info

Publication number
CN108898567B
Authority
CN
China
Prior art keywords
image
frame image
target
blocks
target frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811104277.3A
Other languages
Chinese (zh)
Other versions
CN108898567A (en)
Inventor
白雪
李凯
王珏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201811104277.3A priority Critical patent/CN108898567B/en
Publication of CN108898567A publication Critical patent/CN108898567A/en
Application granted granted Critical
Publication of CN108898567B publication Critical patent/CN108898567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image denoising method, device and system, relating to the technical field of image processing, wherein the method comprises the following steps: acquiring a plurality of frame images acquired by image acquisition equipment aiming at the same scene to be shot; selecting a reference frame image and a target frame image from a plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one; dividing a reference frame image into a plurality of reference image blocks according to a preset image division mode; determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image; and carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image. The method can effectively shorten the time of multi-frame noise reduction and improve the multi-frame noise reduction efficiency.

Description

Image noise reduction method, device and system
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image denoising method, apparatus, and system.
Background
In the process of acquiring an image, noise is easily introduced due to hardware limitations of the image acquisition device or the shooting scene; for example, images acquired in scenes such as weak light or dim light by mobile phones whose photosensitive sensors and lenses are small often carry considerable noise. Therefore, in order to obtain a high-quality image, it is necessary to perform noise reduction processing on the image to remove useless noise information while preserving the original information as completely as possible.
Multi-frame noise reduction is a traditional image noise reduction method. Most existing multi-frame noise reduction methods perform a weighted average over the corresponding pixel points of different frame images to estimate the color value of each pixel, thereby obtaining a cleaner image. However, when image acquisition devices such as mobile phones and digital cameras shoot continuously, jitter and similar phenomena may occur, so the continuously captured multi-frame images must first be aligned; this alignment operates on every pixel point, which makes the noise reduction time-consuming and unsuitable for devices such as mobile phones that need to run quickly.
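For concreteness, a minimal sketch of this traditional weighted-average scheme (illustrative Python/NumPy, not taken from the patent; it assumes the frames are already perfectly aligned, which is exactly the assumption that breaks under camera jitter):

```python
import numpy as np

def naive_multiframe_denoise(frames, weights=None):
    """Estimate each pixel's color by a weighted average of the co-located
    pixels of all frames (frames: list of HxW or HxWxC arrays)."""
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    if weights is None:
        weights = np.ones(len(frames), dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32)
    w /= w.sum()
    # Weighted mean over the frame axis; zero-mean noise averages out.
    return np.tensordot(w, stack, axes=(0, 0))
```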
Disclosure of Invention
In view of this, the present invention provides an image denoising method, device and system, which can effectively shorten the time of multi-frame denoising and improve the efficiency of multi-frame denoising.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image denoising method, where the method includes: acquiring a plurality of frame images acquired by image acquisition equipment aiming at the same scene to be shot; selecting a reference frame image and a target frame image from a plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one; dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing mode; determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image; and carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image.
Further, the step of dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing manner includes: dividing the reference frame image by using a uniform grid or a non-uniform grid; and determining each grid area obtained by dividing the reference frame image as a reference image block.
Further, the step of determining the position of each reference image block in the target frame image includes: determining a first position of a central point of each reference image block in the reference frame image, and determining an optical flow vector of the central point of each reference image block on the target frame image; calculating, according to the first position of the central point of each reference image block and the optical flow vector, a second position to which the central point of each reference image block is mapped on the target frame image; and determining the position of each reference image block in the target frame image based on the second position and the size of each reference image block.
Further, the step of determining an optical flow vector of a center point of each of the reference image blocks on the target frame image includes: carrying out feature point detection and matching processing on the reference frame image and the target frame image to obtain matching feature points between the reference frame image and the target frame image; calculating an optical flow vector of each matched feature point on the target frame image by taking the reference frame image as a reference; dividing the target frame image into a plurality of target meshes according to the preset image division mode, and calculating an optical flow vector of a vertex of each target mesh according to the optical flow vector of each matched feature point on the target frame image; and calculating an optical flow vector of the central point of each reference image block on the target frame image according to the optical flow vector of the vertex of each target grid.
Further, the step of calculating the optical flow vector of each matched feature point on the target frame image with reference to the reference frame image includes: determining the optical flow vector of each matched feature point mapped from the reference frame image to the target frame image according to the position coordinates of each matched feature point on the reference frame image and the position coordinates of each matched feature point on the target frame image.
Further, the step of calculating an optical flow vector of a vertex of each of the target meshes from the optical flow vector of each of the matching feature points on the target frame image includes: interpolating an optical flow vector of each matched feature point on the target frame image to a lattice point of each target grid; determining a median of the optical-flow vectors collected for the vertices of each of the target meshes as the optical-flow vectors for the vertices of each of the target meshes.
Further, the step of calculating an optical flow vector of a center point of each reference image block on the target frame image according to the optical flow vector of a vertex of each target mesh includes: determining a target grid corresponding to the central point of each reference image block; determining an optical flow vector of the central point of each reference image block on the target frame image according to optical flow vectors of three vertexes of a target grid corresponding to the central point of each reference image block on the basis of a barycentric coordinate transformation method; and the central point of the reference image block is positioned in a triangular area formed by the three vertexes of the target grid.
Further, the step of performing image fusion processing on the aligned reference frame image and the target frame image includes: determining corresponding image blocks in the aligned reference frame image and the target frame image as aligned image blocks; performing ghost detection on each aligned image block in the reference frame image and the target frame image, determining the aligned image block with the ghost as a ghost block, and determining the image block without the ghost as a non-ghost block; performing noise reduction processing on the non-ghost blocks by adopting a time domain wiener filtering algorithm to obtain noise-reduced non-ghost blocks; denoising the ghost blocks by adopting a non-local mean algorithm to obtain denoised ghost blocks; and carrying out multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks.
Further, the step of performing ghost detection on each aligned image block in the reference frame image and the target frame image includes: calculating the distance between the aligned image blocks of each target frame image and the reference frame image according to a preset block distance formula; the block distance formula is:

E_t = Σ_{j=1}^{N} |I_t^j - I_0^j|

wherein I_t^j is the pixel value of the jth pixel point of the aligned image block in the tth target frame image, 1 ≤ j ≤ N, N is the total number of pixel points contained in the aligned image block, 1 ≤ t ≤ H, and H is the number of the target frame images; I_0^j is the pixel value of the jth pixel point of the aligned image block in the reference frame image; calculating the sum of the distances of the aligned image blocks of all the target frame images and the reference frame image to obtain a sum value; judging whether the sum value is higher than a preset threshold value; if yes, determining that the aligned image block has a ghost; and if not, determining that the aligned image block has no ghost.
Further, the step of performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks includes: generating a mask map M, a first image G containing only the ghost blocks, and a second image L containing only the non-ghost blocks, based on the positions of the noise-reduced non-ghost blocks and the noise-reduced ghost blocks in the reference frame image; the pixel values of the ghost blocks in the mask map are all a preset first pixel value, and the pixel values of the non-ghost blocks in the mask map are all a preset second pixel value; decomposing the first image G into a multi-scale Laplacian pyramid image sequence {G_i}, decomposing the second image L into a multi-scale Laplacian pyramid image sequence {L_i}, and decomposing the mask map M into a multi-scale Gaussian pyramid sequence {M_i}, wherein 1 ≤ i ≤ n and n is the number of pyramid layers; performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks according to a multi-scale fusion formula, wherein the multi-scale fusion formula is:

F_i = M_i · L_i + (1 - M_i) · G_i

with M_i normalized to [0, 1]; and reconstructing the fused image sequence {F_i} to obtain the noise-reduced image.
In a second aspect, an embodiment of the present invention further provides an image noise reduction apparatus, where the apparatus includes: the image acquisition module is used for acquiring a plurality of frame images acquired by the image acquisition equipment aiming at the same scene to be shot; the image selecting module is used for selecting a reference frame image and a target frame image from the plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one; the image dividing module is used for dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing mode; the image alignment module is used for determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image; and the image fusion module is used for carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image.
In a third aspect, an embodiment of the present invention provides an image noise reduction system, where the system includes: the system comprises an image acquisition device, a processor and a storage device; the image acquisition equipment is used for acquiring a plurality of frame images; the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any one of the aspects as provided in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above-mentioned first aspect.
The embodiment of the invention provides an image noise reduction method, device and system, when acquiring a plurality of frame images acquired by image acquisition equipment aiming at the same scene to be shot, firstly selecting a reference frame image and a target frame image from the plurality of frame images; dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing mode, and determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image; and then carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image. Compared with the prior art that the alignment processing is required to be carried out on the basis of each pixel point, the image denoising method provided by the embodiment divides an image into image blocks, and only the alignment processing is carried out on the basis of the image blocks, so that the multiframe denoising time is greatly shortened, the multiframe denoising efficiency is improved, and the method is more suitable for equipment such as a mobile phone and the like which need to run quickly.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments of the invention as set forth above.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image denoising method according to an embodiment of the present invention;
FIG. 3 is a block alignment diagram provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the calculation of optical flow vectors according to an embodiment of the present invention;
FIG. 5a is a mask diagram provided by an embodiment of the present invention;
FIG. 5b is a schematic diagram of a first image provided by an embodiment of the invention;
FIG. 5c is a schematic diagram of a second image provided by an embodiment of the invention;
fig. 6 shows a block diagram of an image noise reduction apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It can be understood that images acquired in low-light, dim-light and similar scenes by image acquisition devices whose photosensitive sensors and lenses are small tend to carry much noise. The reason is as follows: when an image acquisition device such as a digital camera or a mobile phone transmits light and picture information to the photosensitive sensor through the lens, heat is generated; the longer the shutter time, the longer the light exposure received by the photosensitive sensor and the longer the sensor works. The photosensitive sensor generates heat during prolonged operation, and this heat spreads to each crystal unit of the sensor. After imaging is completed, the heat can corrupt some pixels of the image, thus creating noise.
In view of the long time consumed by the multi-frame noise reduction methods in the prior art, and to improve on this problem, the image noise reduction method, apparatus and system provided in the embodiments of the present invention may be applied in any situation where noise reduction processing needs to be performed on an image. They may be applied directly, during photographing, by the processor of any electronic device having a photographing function, such as a camera device, a mobile phone or a camera; or they may be applied by an intelligent terminal such as a computer when performing noise reduction on multi-frame images received from an image acquisition device. The embodiments of the present invention are described in detail below.
The first embodiment is as follows:
first, an exemplary electronic device 100 for implementing an image noise reduction method, apparatus and system according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA) or a Programmable Logic Array (PLA); the processor 102 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, or a combination of several of these, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
Exemplarily, an exemplary electronic device for implementing the image noise reduction method, apparatus and system according to the embodiments of the present invention may be implemented as a smart terminal such as a camera, a smart phone, a tablet computer, and the like.
Example two:
referring to fig. 2, a flow chart of an image denoising method is shown, which includes the following steps:
step S202, acquiring a plurality of frame images acquired by the image acquisition equipment aiming at the same scene to be shot. The multiple frame images may be obtained by the image capturing device in a short time (such as 2s) and may form a group of image frame sequences according to the capturing time.
Step S204, selecting a reference frame image and a target frame image from the plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one. For example, if the image frame sequence has (H + 1) frames, 1 frame is selected as the reference frame image, and the remaining H frames are all target frame images. The reference frame image may be randomly selected; it may be the first frame in the image frame sequence, or any one of the frames, which is not limited herein. Of course, it is also possible to extract several frame images from the image frame sequence and then select the reference frame image and the target frame images from the extracted frame images. When extracting the frame images, the frame images can be extracted at equal intervals, or only clear frame images can be extracted and blurred frame images eliminated.
Step S206, dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing manner.
In a specific implementation, the reference frame image may be divided by using a uniform grid or a non-uniform grid, and then each grid region obtained by dividing the reference frame image is determined as one reference image block. Taking the uniform grid division manner as an example, the reference frame image may be uniformly divided into a plurality of image blocks of the same size. Preferably, the reference frame image is uniformly divided into a plurality of square image blocks; for example, the size of each image block may be unified to 64 × 64, 32 × 32, or 16 × 16, and the size of the image blocks may be flexibly set according to requirements, which is not limited herein.

Step S208, determining a position of each reference image block in the target frame image, so as to align the reference frame image and the target frame image.
It can be understood that, during image capture, situations such as shaking of the image acquisition device may occur, so that the continuously captured multi-frame images are not completely aligned and image frames may shift. In this embodiment, with the reference frame image as the reference, the position of each reference image block in the target frame image is determined; that is, the displacement in the target frame image of the image block corresponding to each reference image block is characterized, and the reference frame image and the target frame image can be aligned based on the displacement of each image block. Compared with the prior-art approach of aligning every pixel point of the multi-frame images, this block alignment approach greatly shortens the processing time and improves the processing efficiency. For example, assuming that the size of an image block is 64 × 64, the speed can be increased by a factor of up to 64 × 64 = 4096 compared with the pixel-by-pixel alignment method.
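For illustration only, the division of step S206 and the per-block lookup of step S208 can be sketched as follows (Python/NumPy; the uniform 64 × 64 grid and all names are assumptions, not the patent's code):

```python
import numpy as np

def block_grid(image, block=64):
    """Step S206: top-left corners of a uniform grid of reference image blocks."""
    h, w = image.shape[:2]
    return [(y, x) for y in range(0, h - block + 1, block)
                   for x in range(0, w - block + 1, block)]

def locate_block(target, corner, disp, block=64):
    """Step S208: crop one block's counterpart from a target frame, shifted
    by the block's displacement disp = (dy, dx); no resampling is needed."""
    h, w = target.shape[:2]
    y = int(np.clip(corner[0] + disp[0], 0, h - block))
    x = int(np.clip(corner[1] + disp[1], 0, w - block))
    return target[y:y + block, x:x + block]
```

Only one displacement per block is required, which is where the speed-up over per-pixel alignment comes from.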
There may be a plurality of target frame images; therefore, for each target frame image, the position of each reference image block in that target frame image is determined in the above manner, so that the reference frame image is aligned with all the target frame images. Operations on the target frame image mentioned in this embodiment refer to processing each target frame image, which is not repeated below.
And step S210, carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image.
The image fusion processing mode can directly adopt a related fusion processing mode, or firstly carry out ghost detection on the aligned reference frame image and the target frame image, further carry out processing such as ghost removal and the like based on a ghost detection result, and then realize image fusion; the method of detecting the ghost image during the image fusion processing can further improve the quality of the noise reduction image.
According to the image denoising method provided by the embodiment, the image is divided into the image blocks, and only the alignment processing is performed based on the image blocks, so that the multi-frame denoising time is greatly shortened, the multi-frame denoising efficiency is improved, and the method is more suitable for devices such as mobile phones and the like which need to run quickly.
When determining the position of each reference image block in the target frame image in the step S208, the following steps 1 to 3 may be referred to:
step 1, determining a first position of a central point of each reference image block in a reference frame image, and determining an optical flow vector of the central point of each reference image block on a target frame image. The optical flow vector characterizes the motion condition of the central point of each reference image block from the reference frame image to the target frame image. In general, an optical flow vector (referred to simply as an optical flow) is the "instantaneous velocity" of the motion of a pixel of a spatially moving object on an observation imaging plane. Optical flow studies have primarily utilized temporal variations and correlations of pixel intensity data in image sequences to determine the "motion" of respective pixel locations. The optical flow is introduced, so that the motion field which cannot be directly obtained originally can be approximately obtained from the image sequence, namely, the optical flow vector can represent the motion information of the same pixel point in the two images, and the change of the images can be represented.
Step 2, calculating a second position to which the central point of each reference image block is mapped on the target frame image, according to the first position of the central point of each reference image block and the optical flow vector. For example, suppose the central point of a certain reference image block is O, the optical flow vector of the central point O on the target frame image obtained by the above calculation is d_O, and the first position of the central point O on the reference frame image is Z; then the central point O is mapped to the second position Z' = Z + d_O on the target frame image, where Z and Z' are point coordinates in a preset coordinate system.
Step 3, determining the position of each reference image block in the target frame image based on each second position and the size of each reference image block.
For example, if the block center of a reference image block in the reference frame image is a person's left eye, the position corresponding to the left eye is found in the target frame image, and a square with the size of the image block (also called the grid size) is drawn with the left eye as the center; that is, the position in the target frame image of the reference image block centered on the left eye is determined. FIG. 3 is a block alignment diagram showing a reference image block P_0 in the reference frame image and its corresponding position in each of the H target frame images, i.e. P_1 to P_H. It should be understood that, for simplicity of illustration in FIG. 3, each target frame image is not fully drawn; only the image block found in each target frame image corresponding to the reference image block P_0 is illustrated. P_0 to P_H can be considered as aligned image blocks containing the same image information.
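A sketch of collecting such an aligned stack P_0 to P_H (reusing the hypothetical locate_block helper from the sketch above; corner and disps are assumed inputs):

```python
def aligned_stack(ref, targets, corner, disps, block=64):
    """Collect P_0..P_H: the reference image block plus the block found at
    its displaced position in each of the H target frames."""
    y, x = corner
    blocks = [ref[y:y + block, x:x + block]]      # P_0 from the reference
    for disp, frame in zip(disps, targets):       # P_1..P_H
        blocks.append(locate_block(frame, corner, disp, block))
    return blocks
```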
For ease of understanding, a specific implementation is given below mainly for step 1 above, and is specifically set forth below:
according to the preset coordinate system, the first position of the central point of each reference image block in the reference frame image can be directly determined, and when the optical flow vector of the central point of each reference image block on the target frame image is determined, in one embodiment, the following sub-steps can be referred to for implementation:
(1) Performing feature point detection and matching processing on the reference frame image and the target frame image to obtain matched feature points between the reference frame image and the target frame image.
For example, feature point detection algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test) and ORB (Oriented FAST and Rotated BRIEF) may be used to perform feature point detection and matching processing on the reference frame image and the target frame image to obtain matched feature points. In one embodiment, the number of the matched feature points is, for example, 200 to 500, but may be more or fewer. Compared with analyzing all pixel points one by one, this sparse sampling based on matched feature points between the reference frame image and the target frame image can effectively improve the analysis efficiency.
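As one possible realization (a sketch using standard OpenCV calls; the parameter values are illustrative, not the patent's):

```python
import cv2
import numpy as np

def match_features(ref_gray, tgt_gray, max_feats=500):
    """Detect and match feature points between reference and target frames."""
    orb = cv2.ORB_create(nfeatures=max_feats)
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(tgt_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    p = np.float32([kp1[m.queryIdx].pt for m in matches])  # reference coords
    q = np.float32([kp2[m.trainIdx].pt for m in matches])  # target coords
    return p, q  # the optical flow vector of each match is q - p
```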
(2) Calculating an optical flow vector of each matched feature point on the target frame image with the reference frame image as the reference.
Specifically, the optical flow vector of each matched feature point mapped from the reference frame image to the target frame image is determined according to the position coordinates of the matched feature point on the reference frame image and on the target frame image. Taking one of the matched feature points i as an example, the optical flow vector of the matched feature point i from the reference frame image to the target frame image is d_i = q_i - p_i; that is, the matched feature point i is moved (mapped) from the reference frame image to the target frame image, where the coordinate of the matched feature point i in the reference frame image is p_i and its coordinate in the target frame image is q_i.
(3) Dividing the target frame image into a plurality of target meshes according to the preset image division mode, and calculating the optical flow vector of the vertex of each target mesh from the optical flow vector of each matched feature point on the target frame image. It will be appreciated that the target frame image should be divided in the same manner as the reference frame image.
When specifically calculating the optical flow vector of the vertex of each target mesh, the optical flow vector of each matched feature point on the target frame image may be interpolated onto the lattice points of each target mesh, and the median of the optical flow vectors collected at the vertex of each target mesh is then determined as the optical flow vector of that vertex. Interpolating the optical flow vector of each matched feature point on the target frame image to the lattice points of each target mesh may also be referred to as diffusing the optical flow vector of each matched feature point on the target frame image to the lattice points of each target mesh, and can be implemented with a related vector diffusion (interpolation) algorithm, which is not described here again. It can be understood that a plurality of optical flow vectors are collected at the vertex of each target mesh, and each optical flow vector has an abscissa x and an ordinate y in the preset coordinate system. For each vertex, the median x' is calculated from the x values of all the optical flow vectors collected at the vertex, and the median y' is calculated from the y values of all the optical flow vectors collected at the vertex; the vector whose abscissa and ordinate are x' and y' is the median of the optical flow vectors collected at that vertex. In this embodiment, the median of the optical flow vectors collected at the vertex of each target mesh is determined as the optical flow vector of that vertex; taking the median in this way yields one stable optical flow vector per vertex, which is helpful for the subsequent calculation.
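One plausible reading of this collect-then-median step, as a sketch (the patent does not fix the diffusion radius; here each feature's flow is simply attributed to the four vertices of its grid cell):

```python
import numpy as np
from collections import defaultdict

def vertex_flows(points, flows, grid=64):
    """Diffuse matched-feature flows to grid vertices; keep, per vertex, the
    component-wise median (x', y') as that vertex's optical flow vector."""
    collected = defaultdict(list)
    for (x, y), d in zip(points, flows):
        cx, cy = int(x // grid), int(y // grid)
        for v in ((cx, cy), (cx + 1, cy), (cx, cy + 1), (cx + 1, cy + 1)):
            collected[v].append(d)
    return {v: np.median(np.asarray(ds), axis=0)
            for v, ds in collected.items()}
```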
(4) Calculating the optical flow vector of the central point of each reference image block on the target frame image according to the optical flow vectors of the vertices of each target mesh.
For example, the target mesh corresponding to the central point of each reference image block may first be determined; then, based on a barycentric coordinate transformation method, the optical flow vector of the central point of each reference image block on the target frame image is determined from the optical flow vectors of three vertices of the target mesh corresponding to that central point, where the central point of the reference image block is located in the triangular area formed by the three vertices of the target mesh. Referring to the schematic diagram of optical flow vector calculation shown in FIG. 4, for any point X, the triangle ABC enclosed by three vertices of the mesh in which X is located can be found, and the optical flow vectors of these vertices are denoted d_A, d_B and d_C. The optical flow vector obtained by Barycentric coordinate transformation is d_X = a_1·d_A + a_2·d_B + a_3·d_C, where (a_1, a_2, a_3) are the Barycentric coordinates of X in the triangle. Based on this principle, when the optical flow vectors of the vertices of each target mesh are known, the optical flow vector d_O of the central point O of each reference image block on the target frame image can be calculated.
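The barycentric step admits a direct sketch (standard barycentric coordinates; nothing beyond the formula above is assumed):

```python
import numpy as np

def barycentric_flow(X, A, B, C, dA, dB, dC):
    """d_X = a1*d_A + a2*d_B + a3*d_C, where (a1, a2, a3) are the
    barycentric coordinates of point X inside triangle ABC."""
    T = np.array([[A[0] - C[0], B[0] - C[0]],
                  [A[1] - C[1], B[1] - C[1]]], dtype=np.float64)
    rhs = np.asarray(X, dtype=np.float64) - np.asarray(C, dtype=np.float64)
    a1, a2 = np.linalg.solve(T, rhs)
    a3 = 1.0 - a1 - a2
    return a1 * np.asarray(dA) + a2 * np.asarray(dB) + a3 * np.asarray(dC)
```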
This image-block-based alignment greatly shortens the calculation time. For example, conventional image alignment methods need to compute an optical flow vector for every pixel and resample the image, which is time-consuming when processing multiple frames. In the method provided by this embodiment, only a small number of optical flow vectors at the block centers need to be calculated, and the sub-images (i.e., image blocks) are selected directly from each frame; no additional memory needs to be allocated and no image resampling is required, so the image alignment efficiency is greatly improved and the noise reduction of multi-frame images is further sped up.
Since the image acquisition device usually captures the plurality of frame images in a fast continuous-shooting manner, the optical flow field can be assumed to be locally smooth: within the small area covered by each image block, the optical flow vectors of the four vertices can be approximated as identical. Most corresponding image blocks across the multi-frame images therefore undergo only translation, and the above calculation method also has high accuracy.
Further, considering that ghost images may occur in a plurality of frame images when the device continuously takes pictures in a short time, in order to obtain an image with better quality, when the aligned reference frame image and the target frame image are subjected to image fusion processing in step S210, the following steps may be referred to:
step 1, determining corresponding image blocks in the aligned reference frame image and the target frame image as aligned image blocks.
Step 2, performing ghost detection on each aligned image block in the reference frame image and the target frame image, determining the aligned image blocks with a ghost as ghost blocks and the aligned image blocks without a ghost as non-ghost blocks. Image ghosting may be caused by a moving object in the scene to be shot.
When the method is specifically implemented, the distance between the aligned image blocks of each target frame image and the reference frame image can be calculated according to a preset block distance formula; this embodiment provides a block distance formula, which is specifically as follows:
E_t = Σ_{j=1}^{N} |I_t^j - I_0^j|

where I_t^j is the pixel value of the jth pixel point of the aligned image block in the tth target frame image, 1 ≤ j ≤ N, N is the total number of pixel points contained in the aligned image block, 1 ≤ t ≤ H, and H is the number of the target frame images; I_0^j is the pixel value of the jth pixel point of the aligned image block in the reference frame image. For example, if the size of an aligned image block is 100 × 100, then N = 10000; if there are 6 target frame images in total, then t takes values from 1 to 6. In this embodiment, a small number of pixel points may be sampled when calculating the block distance, so as to improve the performance of the algorithm.
Then the sum of the distances between the aligned image blocks of all the target frame images and the reference frame image is calculated to obtain a sum value, and whether the sum value is higher than a preset threshold value is judged; if yes, it is determined that the aligned image block has a ghost; if not, it is determined that the aligned image block has no ghost. That is, the distance sum is E = Σ_t E_t; if E exceeds a certain threshold, the aligned image blocks (e.g., P_0 to P_H in FIG. 3) are considered to contain a ghost. This can also be understood as: when the image acquisition device continuously captures a plurality of image frames, a certain same area (an aligned image block) in the image frames has a ghost.
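In code, the ghost test reduces to a threshold on the summed block distances (a sketch; the mean-absolute-difference form of E_t is an assumption, since the source shows the distance formula only as an image):

```python
import numpy as np

def is_ghost(ref_block, target_blocks, threshold):
    """Sum the per-frame block distances E_t over all H target frames and
    flag the aligned image blocks as ghosted when E = sum of E_t exceeds
    the preset threshold."""
    r = ref_block.astype(np.float32)
    E = sum(np.abs(b.astype(np.float32) - r).mean() for b in target_blocks)
    return E > threshold
```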
Step 3, performing noise reduction processing on the non-ghost blocks by using a time-domain wiener filtering algorithm to obtain noise-reduced non-ghost blocks. The time-domain wiener filtering can take the following form:

x' = μ + σ_t² / (σ_t² + σ) · (x - μ)

where x is the gray value of a pixel in the reference image block, μ and σ_t are respectively the mean and standard deviation of the pixel values at the same position in all the aligned image blocks, and σ is the estimated noise variance, which can be set to a constant based on the overall noise strength of the image. The above formula applies when the captured image frames are gray images; if the captured image frames are RGB color images, the formula can be applied independently to each channel of the color image, which is not repeated here.
Step 4, performing noise reduction on the ghost blocks by using a Non-Local Means algorithm to obtain noise-reduced ghost blocks. Specifically, each pixel in a ghost block may be denoised using the non-local mean algorithm. For ghost blocks, the non-local mean algorithm finds, in all the frame images, as much repeated image content as possible, such as several sub-image areas within a patch of blue sky or several sub-image areas along a telegraph pole, to reduce noise. It should be understood that, unlike the image blocks proposed in this embodiment, the sub-image areas selected by the non-local mean algorithm are generally small, and the number of pixels contained in a sub-image area is smaller than the number of pixels in an image block.
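In practice, a library non-local means routine could stand in for this step, e.g. OpenCV's (a sketch; ghost_block and the parameter values are illustrative):

```python
import cv2
import numpy as np

ghost_block = np.zeros((64, 64), np.uint8)  # stand-in for a real ghost block

# Non-local means on a single-channel ghost block; h is the filter strength,
# 7 and 21 are the patch and search-window sizes.
clean = cv2.fastNlMeansDenoising(ghost_block, None, h=10,
                                 templateWindowSize=7, searchWindowSize=21)
# cv2.fastNlMeansDenoisingColored is the counterpart for RGB blocks.
```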
Step 5, performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks.
The embodiment provides a multi-scale fusion processing mode, which can refer to the following steps:
(1) Generating a mask map M, a first image G containing only the ghost blocks, and a second image L containing only the non-ghost blocks, based on the positions of the noise-reduced non-ghost blocks and the noise-reduced ghost blocks in the reference frame image; the pixel values of the ghost blocks in the mask map are all a preset first pixel value, and the pixel values of the non-ghost blocks in the mask map are all a preset second pixel value. For ease of understanding, reference may be made to the mask map shown in fig. 5a, the first image shown in fig. 5b, and the second image shown in fig. 5c. As shown in fig. 5a, the preset first pixel value of all the ghost blocks in the mask map is 0 and the preset second pixel value of all the non-ghost blocks is 255, so the mask map is a black-and-white block diagram. Correspondingly, the first image contains only the ghost blocks and the second image contains only the non-ghost blocks; the positions shown as ghost blocks in the first image are empty at the corresponding positions of the second image, and the positions shown as non-ghost blocks in the second image are empty at the corresponding positions of the first image. The first image and the second image can be combined into a complete image.
(2) Decomposing the first image G into a multi-scale Laplacian pyramid image sequence {G_i}, decomposing the second image L into a multi-scale Laplacian pyramid image sequence {L_i}, and decomposing the mask map M into a multi-scale Gaussian pyramid sequence {M_i}, wherein 1 ≤ i ≤ n and n is the number of pyramid layers.
(3) Performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks according to a multi-scale fusion formula, where the multi-scale fusion formula is:

F_i = M_i · L_i + (1 - M_i) · G_i

with M_i normalized to [0, 1].

(4) Reconstructing the fused image sequence {F_i} to obtain the noise-reduced image.
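Putting steps (1) to (4) together, a single-channel sketch of the pyramid fusion (OpenCV pyramids; the blend F_i = M_i · L_i + (1 - M_i) · G_i follows the reconstruction above and is an assumption, as the source shows the formula only as an image):

```python
import cv2
import numpy as np

def pyramids(img, levels):
    """Gaussian pyramid plus the matching Laplacian pyramid of an image."""
    gp = [img.astype(np.float32)]
    for _ in range(levels - 1):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels - 1)] + [gp[-1]]
    return gp, lp

def multiscale_fuse(G, L, M, levels=4):
    """Blend ghost image G and non-ghost image L under mask M
    (255 = non-ghost), then collapse the fused pyramid {F_i}."""
    _, lap_G = pyramids(G, levels)
    _, lap_L = pyramids(L, levels)
    gauss_M, _ = pyramids(M / 255.0, levels)   # Gaussian pyramid of the mask
    F = [m * l + (1.0 - m) * g
         for m, g, l in zip(gauss_M, lap_G, lap_L)]
    out = F[-1]
    for i in range(levels - 2, -1, -1):        # coarse-to-fine reconstruction
        out = cv2.pyrUp(out, dstsize=F[i].shape[1::-1]) + F[i]
    return out
```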
By adopting the image fusion mode of the image pyramid, micro detail contrast and texture change characteristics in the image can be embodied in a multi-level mode, ghost can be removed in the fusion process, and richer and clearer image information can be embodied.
In summary, the image denoising method provided by this embodiment only needs to perform alignment processing based on image blocks, so the multi-frame noise reduction time is greatly shortened, the multi-frame noise reduction efficiency is improved, and the method is better suited to devices such as mobile phones that need to run quickly. Moreover, when fusing multi-frame images, the method can perform ghost detection on each image block in advance and apply different noise reduction schemes to image blocks with ghosts (ghost blocks) and image blocks without ghosts (non-ghost blocks), so that ghosts are removed more thoroughly; by further adopting a multi-scale fusion mode, a noise-reduced image of better quality can be obtained.
Example three:
for the image noise reduction method provided in the second embodiment, an embodiment of the present invention provides an image noise reduction apparatus, referring to a structural block diagram of the image noise reduction apparatus shown in fig. 6, including the following modules:
the image acquisition module 602 is configured to acquire a plurality of frame images acquired by an image acquisition device for a same scene to be photographed;
an image selecting module 604, configured to select a reference frame image and a target frame image from multiple frame images; the number of the reference frame images is one, and the number of the target frame images is at least one;
an image dividing module 606, configured to divide the reference frame image into a plurality of reference image blocks according to a preset image dividing manner;
an image alignment module 608, configured to determine a position of each reference image block in the target frame image, so as to align the reference frame image and the target frame image;
and the image fusion module 610 is configured to perform image fusion processing on the aligned reference frame image and the target frame image to obtain a noise-reduced image.
The image denoising device provided by the embodiment divides an image into image blocks, and only needs to perform alignment processing based on the image blocks, so that the multiframe denoising time is greatly shortened, the multiframe denoising efficiency is improved, and the device is more suitable for equipment such as a mobile phone and the like which needs to run quickly.
In one embodiment, the image dividing module 606 is configured to: dividing the reference frame image by using a uniform grid or a non-uniform grid; and determining each grid area obtained by dividing the reference frame image as a reference image block.
In one embodiment, the image alignment module 608 is configured to: determining a first position of a central point of each reference image block in a reference frame image, and determining an optical flow vector of the central point of each reference image block on a target frame image; calculating to obtain a second position of the central point of each reference image block mapped to the target frame image according to the first position of the central point of each reference image block and the optical flow vector; and determining the position of each reference image block in the target frame image based on each second position and the size of each reference image block.
In an embodiment, the image alignment module 608 is further configured to: carrying out feature point detection and matching processing on the reference frame image and the target frame image to obtain matching feature points between the reference frame image and the target frame image; calculating an optical flow vector of each matched feature point on the target frame image by taking the reference frame image as a reference; dividing the target frame image into a plurality of target meshes according to a preset image division mode, and calculating an optical flow vector of a vertex of each target mesh according to the optical flow vector of each matched feature point on the target frame image; and calculating the optical flow vector of the central point of each reference image block on the target frame image according to the optical flow vector of the vertex of each target grid.
In an embodiment, the image alignment module 608 is further configured to: and determining an optical flow vector of each matched feature point mapped from the reference frame image to the target frame image according to the position coordinate of each matched feature point on the reference frame image and the position coordinate of each matched feature point on the target frame image.
In an embodiment, the image alignment module 608 is further configured to: interpolating the optical flow vector of each matched feature point on the target frame image to the grid point of each target grid;
the median of the optical-flow vectors collected for the vertices of each target mesh is determined as the optical-flow vector for the vertices of each target mesh.
In an embodiment, the image alignment module 608 is further configured to: determining a target grid corresponding to the central point of each reference image block; based on a barycentric coordinate transformation method, determining an optical flow vector of the central point of each reference image block on a target frame image according to optical flow vectors of three vertexes of a target grid corresponding to the central point of each reference image block; and the central point of the reference image block is positioned in a triangular area formed by three vertexes of the target grid.
In one embodiment, the image fusion module 610 is configured to: determining corresponding image blocks in the aligned reference frame image and the target frame image as aligned image blocks; carrying out ghost detection on each alignment image block in the reference frame image and the target frame image, determining the alignment image block with the ghost as a ghost block, and determining the image block without the ghost as a non-ghost block; carrying out noise reduction processing on the non-ghost blocks by adopting a time domain wiener filtering algorithm to obtain noise-reduced non-ghost blocks; denoising the ghost blocks by adopting a non-local mean algorithm to obtain denoised ghost blocks; and carrying out multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks to obtain a noise-reduced image.
In an embodiment, the image fusion module 610 is further configured to: calculate the distance between the aligned image blocks of each target frame image and the reference frame image according to a preset block distance formula; the block distance formula is:

E_t = Σ_{j=1}^{N} |I_t^j - I_0^j|

where I_t^j is the pixel value of the jth pixel point of the aligned image block in the tth target frame image, 1 ≤ j ≤ N, N is the total number of pixel points contained in the aligned image block, 1 ≤ t ≤ H, and H is the number of the target frame images; I_0^j is the pixel value of the jth pixel point of the aligned image block in the reference frame image;

calculate the sum of the distances between the aligned image blocks of all the target frame images and the reference frame image to obtain a sum value; judge whether the sum value is higher than a preset threshold value; if yes, determine that the aligned image block has a ghost; and if not, determine that the aligned image block has no ghost.
In an embodiment, the image fusion module 610 is further configured to: generate a mask map M, a first image G containing only the ghost blocks, and a second image L containing only the non-ghost blocks, based on the positions of the noise-reduced non-ghost blocks and the noise-reduced ghost blocks in the reference frame image, where the pixel values of the ghost blocks in the mask map are all a preset first pixel value and the pixel values of the non-ghost blocks in the mask map are all a preset second pixel value; decompose the first image G into a multi-scale Laplacian pyramid image sequence {G_i}, decompose the second image L into a multi-scale Laplacian pyramid image sequence {L_i}, and decompose the mask map M into a multi-scale Gaussian pyramid sequence {M_i}, where 1 ≤ i ≤ n and n is the number of pyramid layers; perform multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks according to a multi-scale fusion formula, where the multi-scale fusion formula is:

F_i = M_i · L_i + (1 - M_i) · G_i

and reconstruct the image sequence {F_i} obtained after the multi-scale fusion processing to obtain the noise-reduced image.
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
Example four:
the present embodiment provides an image noise reduction system, including: the system comprises an image acquisition device, a processor and a storage device; the image acquisition equipment is used for acquiring a plurality of frame images; the storage device stores a computer program which, when executed by the processor, performs any one of the image noise reduction methods provided in embodiment two.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not described herein again.
Further, the present embodiment provides a computer-readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to perform the steps of the method provided in the second embodiment.
The computer program product of the image noise reduction method, apparatus and system provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementations, reference may be made to the method embodiments, which are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention and shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. A method for image noise reduction, the method comprising:
acquiring a plurality of frame images acquired by image acquisition equipment aiming at the same scene to be shot;
selecting a reference frame image and a target frame image from the plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one; the reference frame image is randomly selected or is the first frame of the plurality of frame images;
dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing mode;
determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image;
performing image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image;
the step of dividing the reference frame image into a plurality of reference image blocks according to a preset image division mode includes:
dividing the reference frame image by using a uniform grid or a non-uniform grid;
and determining each grid area obtained by dividing the reference frame image as a reference image block.
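By way of non-limiting illustration, a minimal Python sketch of the uniform-grid division of claim 1 follows; the block size and function name are assumptions of the sketch rather than features of the claim.

```python
# Illustrative only: divide a reference frame into uniform grid blocks.
import numpy as np

def divide_into_blocks(frame: np.ndarray, block_h: int = 64, block_w: int = 64):
    """Yield (top, left, block) for each cell of a uniform grid."""
    h, w = frame.shape[:2]
    for top in range(0, h, block_h):
        for left in range(0, w, block_w):
            yield top, left, frame[top:top + block_h, left:left + block_w]
```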
2. The method according to claim 1, wherein the step of determining the position of each of the reference image blocks in the target frame image comprises:
determining a first position of a central point of each reference image block in the reference frame image, and determining an optical flow vector of the central point of each reference image block on the target frame image;
calculating a second position to which the central point of each reference image block is mapped in the target frame image, according to the first position of the central point of each reference image block and the optical flow vector;
and determining the position of each reference image block in the target frame image based on the second position and the size of each reference image block.
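For illustration only, the following sketch maps a reference block into the target frame in the manner of claim 2: the second position is the first position displaced by the optical flow vector, and the block's placement in the target frame follows from the mapped center and the block size. All names here are illustrative assumptions.

```python
# Illustrative only: second position = first position + optical flow vector.
def block_position_in_target(center_ref, flow_at_center, block_size):
    cx, cy = center_ref                    # first position (reference frame)
    dx, dy = flow_at_center                # optical flow vector at the center
    cx_t, cy_t = cx + dx, cy + dy          # second position (target frame)
    bw, bh = block_size
    # top-left corner of the block in the target frame
    return int(round(cx_t - bw / 2)), int(round(cy_t - bh / 2))
```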
3. The method according to claim 2, wherein said step of determining an optical flow vector of a center point of each of said reference image blocks on said target frame image comprises:
carrying out feature point detection and matching processing on the reference frame image and the target frame image to obtain matching feature points between the reference frame image and the target frame image;
calculating an optical flow vector of each matched feature point on the target frame image by taking the reference frame image as a reference;
dividing the target frame image into a plurality of target grids according to the preset image division mode, and calculating an optical flow vector of a vertex of each target grid according to the optical flow vector of each matched feature point on the target frame image;
and calculating an optical flow vector of the central point of each reference image block on the target frame image according to the optical flow vectors of the vertices of each target grid.
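For illustration only, the feature detection and matching step of claim 3 could be realized as sketched below; the claim does not mandate a particular detector, so the use of ORB features and a brute-force Hamming matcher from OpenCV is an assumption of the sketch.

```python
# Illustrative only: detect and match feature points between two frames.
import cv2
import numpy as np

def matched_points(ref_gray, tgt_gray):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    kp_t, des_t = orb.detectAndCompute(tgt_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_r, des_t)
    pts_ref = np.float32([kp_r[m.queryIdx].pt for m in matches])
    pts_tgt = np.float32([kp_t[m.trainIdx].pt for m in matches])
    return pts_ref, pts_tgt    # matched coordinates in both frames
```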
4. The method according to claim 3, wherein the step of calculating an optical flow vector of each of the matched feature points on the target frame image with reference to the reference frame image comprises:
and determining an optical flow vector of each matched feature point mapped from the reference frame image to the target frame image according to the position coordinate of each matched feature point on the reference frame image and the position coordinate of each matched feature point on the target frame image.
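For illustration only, once the position coordinates of a matched feature point in both frames are known, the optical flow vector of claim 4 reduces to a coordinate difference; the array layout below is an assumption of the sketch.

```python
# Illustrative only: flow vector = target coordinates - reference coordinates.
import numpy as np

def flow_vectors(pts_ref: np.ndarray, pts_tgt: np.ndarray) -> np.ndarray:
    return pts_tgt - pts_ref    # one (dx, dy) per matched feature point
```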
5. The method according to claim 3, wherein the step of calculating optical flow vectors of the vertices of each of the target grids from the optical flow vectors of each of the matched feature points on the target frame image comprises:
interpolating the optical flow vector of each matched feature point on the target frame image to the vertices of each target grid;
and determining the median of the optical flow vectors collected at the vertices of each target grid as the optical flow vector of the vertices of that target grid.
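For illustration only, the following sketch collects, at one grid vertex, the flow vectors of matched feature points falling within an assumed neighborhood radius and keeps their per-component median, as in claim 5; the median makes the vertex flow robust to outlier matches.

```python
# Illustrative only: median of the flow vectors gathered at a grid vertex.
import numpy as np

def vertex_flow(vertex_xy, pts_ref, flows, radius=32.0):
    d = np.linalg.norm(pts_ref - np.asarray(vertex_xy, dtype=np.float32), axis=1)
    near = flows[d <= radius]                    # flows gathered at the vertex
    if len(near) == 0:
        return np.zeros(2, dtype=np.float32)     # no nearby evidence
    return np.median(near, axis=0)               # robust per-component median
```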
6. The method according to claim 3, wherein the step of calculating an optical flow vector of the central point of each of the reference image blocks on the target frame image from the optical flow vectors of the vertices of each of the target grids comprises:
determining a target grid corresponding to the central point of each reference image block;
determining an optical flow vector of the central point of each reference image block on the target frame image according to the optical flow vectors of the three vertices of the target grid corresponding to that central point, on the basis of a barycentric coordinate transformation method; wherein the central point of the reference image block is positioned in a triangular area formed by the three vertices of the target grid.
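For illustration only, the barycentric interpolation of claim 6 may be sketched as follows; it is the textbook barycentric-coordinate computation, with all names illustrative.

```python
# Illustrative only: interpolate the flow at a block center from the flows at
# the three vertices of the enclosing triangle via barycentric coordinates.
import numpy as np

def barycentric_flow(p, a, b, c, fa, fb, fc):
    a, b, c, p = (np.asarray(x, dtype=np.float64) for x in (a, b, c, p))
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w                     # barycentric weights (u, v, w)
    return u * np.asarray(fa) + v * np.asarray(fb) + w * np.asarray(fc)
```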
7. The method according to claim 1, wherein the step of performing image fusion processing on the aligned reference frame image and the target frame image to obtain a noise-reduced image comprises:
determining corresponding image blocks in the aligned reference frame image and the target frame image as aligned image blocks;
performing ghost detection on each aligned image block in the reference frame image and the target frame image, determining an aligned image block with a ghost as a ghost block, and determining an aligned image block without a ghost as a non-ghost block;
performing noise reduction processing on the non-ghost blocks by adopting a time domain wiener filtering algorithm to obtain noise-reduced non-ghost blocks;
denoising the ghost blocks by adopting a non-local mean algorithm to obtain denoised ghost blocks;
and carrying out multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks to obtain a noise-reduced image.
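For illustration only, the routing of claim 7 could be sketched as below; the non-local means call uses OpenCV's implementation, the temporal Wiener filtering is rendered as a textbook pixelwise Wiener shrinkage across the aligned stack, and the noise variance is an assumed parameter.

```python
# Illustrative only: ghost blocks -> spatial NLM; non-ghost blocks -> temporal
# Wiener shrinkage over the aligned stack (single-channel uint8 assumed).
import cv2
import numpy as np

def denoise_block(ref_block, aligned_blocks, ghost, noise_var=25.0):
    if ghost:
        # spatial non-local means on the reference block alone
        return cv2.fastNlMeansDenoising(ref_block, None, 10.0)
    stack = np.stack([ref_block] + list(aligned_blocks)).astype(np.float32)
    mu = stack.mean(axis=0)             # temporal mean per pixel
    var = stack.var(axis=0)             # temporal variance per pixel
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-6)
    out = mu + gain * (ref_block.astype(np.float32) - mu)   # Wiener shrinkage
    return np.clip(out, 0, 255).astype(np.uint8)
```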
8. The method of claim 7, wherein the step of detecting ghosting of each of the aligned image blocks in the reference frame image and the target frame image comprises:
calculating the distance between the aligned image blocks of each target frame image and the reference frame image according to a preset block distance formula; taking the block distance as the sum of absolute pixel differences over the block, the formula is of the form:

D_t = Σ_{j=1}^{N} | x_t^j − x_r^j |

wherein x_t^j is the pixel value of the jth pixel point of the aligned image block in the tth target frame image, 1 ≤ j ≤ N, N is the total number of pixel points contained in the aligned image block, 1 ≤ t ≤ H, and H is the number of the target frame images; and x_r^j is the pixel value of the jth pixel point of the aligned image block in the reference frame image;
calculating the sum of the distances between the aligned image blocks of all the target frame images and the reference frame image to obtain a sum value;
judging whether the sum is higher than a preset threshold value;
if yes, determining that the aligned image block has a ghost; and if not, determining that the aligned image block has no ghost.
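For illustration only, a sketch of the ghost test of claim 8 follows, taking the per-frame block distance to be the mean absolute pixel difference, one concrete form consistent with the definitions above; the threshold is an assumed tuning parameter.

```python
# Illustrative only: sum the block distances over all target frames and
# compare the sum with a preset threshold.
import numpy as np

def is_ghost(ref_block, target_blocks, threshold=12.0):
    ref = ref_block.astype(np.float32)
    total = sum(np.abs(t.astype(np.float32) - ref).mean() for t in target_blocks)
    return total > threshold    # above the threshold => ghosting present
```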
9. The method of claim 7, wherein the step of performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks to obtain a noise-reduced image comprises:
generating a mask map M, a first image G containing only the ghost blocks, and a second image L containing only the non-ghost blocks, based on the positions of the noise-reduced non-ghost blocks and the noise-reduced ghost blocks in the reference frame image; the pixel values of the ghost blocks in the mask map are all a preset first pixel value, and the pixel values of the non-ghost blocks in the mask map are all a preset second pixel value;
decomposing the first image G into a multi-scale Laplacian pyramid image sequence {G_i}, decomposing the second image L into a multi-scale Laplacian pyramid image sequence {L_i}, and decomposing the mask map M into a multi-scale Gaussian pyramid sequence {M_i}, wherein 1 ≤ i ≤ n and n is the number of pyramid layers;
performing multi-scale fusion processing on the noise-reduced non-ghost blocks and the noise-reduced ghost blocks according to a multi-scale fusion formula, which, with the mask normalized so that ghost regions take the value 1 and non-ghost regions the value 0, is of the form:

F_i = M_i · G_i + (1 − M_i) · L_i, 1 ≤ i ≤ n;

and reconstructing the fused image sequence {F_i} obtained after the multi-scale fusion processing to obtain the noise-reduced image.
10. An image noise reduction apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a plurality of frame images acquired by the image acquisition equipment aiming at the same scene to be shot;
the image selecting module is used for selecting a reference frame image and a target frame image from the plurality of frame images; the number of the reference frame images is one, and the number of the target frame images is at least one; the reference frame image is randomly selected or is the first frame of the plurality of frame images;
the image dividing module is used for dividing the reference frame image into a plurality of reference image blocks according to a preset image dividing mode;
the image alignment module is used for determining the position of each reference image block in the target frame image so as to align the reference frame image and the target frame image;
the image fusion module is used for carrying out image fusion processing on the aligned reference frame image and the target frame image to obtain a noise reduction image;
the image partitioning module is configured to: dividing the reference frame image by using a uniform grid or a non-uniform grid; and determining each grid area obtained by dividing the reference frame image as a reference image block.
11. An image noise reduction system, the system comprising: the system comprises an image acquisition device, a processor and a storage device;
the image acquisition equipment is used for acquiring a plurality of frame images;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of the preceding claims 1 to 9.
CN201811104277.3A 2018-09-20 2018-09-20 Image noise reduction method, device and system Active CN108898567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811104277.3A CN108898567B (en) 2018-09-20 2018-09-20 Image noise reduction method, device and system

Publications (2)

Publication Number Publication Date
CN108898567A (en) 2018-11-27
CN108898567B (en) 2021-05-28

Family

ID=64360137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811104277.3A Active CN108898567B (en) 2018-09-20 2018-09-20 Image noise reduction method, device and system

Country Status (1)

Country Link
CN (1) CN108898567B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353948B (en) * 2018-12-24 2023-06-27 Tcl科技集团股份有限公司 Image noise reduction method, device and equipment
CN109859126B (en) 2019-01-17 2021-02-02 浙江大华技术股份有限公司 Video noise reduction method and device, electronic equipment and storage medium
CN109919878B (en) * 2019-03-14 2023-04-18 上海市第一人民医院 Noise reduction method and system based on optical coherence tomography
CN111754411B (en) * 2019-03-27 2024-01-05 Tcl科技集团股份有限公司 Image noise reduction method, image noise reduction device and terminal equipment
CN110084764A (en) * 2019-04-29 2019-08-02 努比亚技术有限公司 Image noise reduction processing method, mobile terminal, device and computer storage medium
CN110310251B (en) * 2019-07-03 2021-10-29 北京字节跳动网络技术有限公司 Image processing method and device
CN110827336A (en) * 2019-11-01 2020-02-21 厦门美图之家科技有限公司 Image alignment method, device, equipment and storage medium
CN111046950B (en) * 2019-12-11 2023-09-22 北京迈格威科技有限公司 Image processing method and device, storage medium and electronic device
CN111091513B (en) * 2019-12-18 2023-07-25 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN113313788A (en) * 2020-02-26 2021-08-27 北京小米移动软件有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111401411B (en) * 2020-02-28 2023-09-29 北京小米松果电子有限公司 Method and device for acquiring sample image set
CN111275653B (en) * 2020-02-28 2023-09-26 北京小米松果电子有限公司 Image denoising method and device
CN111627041B (en) * 2020-04-15 2023-10-10 北京迈格威科技有限公司 Multi-frame data processing method and device and electronic equipment
CN111741187B (en) * 2020-06-08 2022-08-26 北京小米松果电子有限公司 Image processing method, device and storage medium
CN113808227B (en) * 2020-06-12 2023-08-25 杭州普健医疗科技有限公司 Medical image alignment method, medium and electronic equipment
CN111833269B (en) * 2020-07-13 2024-02-02 字节跳动有限公司 Video noise reduction method, device, electronic equipment and computer readable medium
CN112132769A (en) * 2020-08-04 2020-12-25 绍兴埃瓦科技有限公司 Image fusion method and device and computer equipment
CN112037147B (en) * 2020-09-02 2024-05-14 上海联影医疗科技股份有限公司 Medical image noise reduction method and device
CN112288642A (en) * 2020-09-21 2021-01-29 北京迈格威科技有限公司 Ghost detection method, image fusion method and corresponding device
CN112488027B (en) * 2020-12-10 2024-04-30 Oppo(重庆)智能科技有限公司 Noise reduction method, electronic equipment and computer storage medium
CN115499559A (en) * 2021-06-18 2022-12-20 哲库科技(上海)有限公司 Image processing apparatus and method, processing chip, and electronic device
CN113344820B (en) * 2021-06-28 2024-05-10 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113344821B (en) * 2021-06-29 2022-10-21 展讯通信(上海)有限公司 Image noise reduction method, device, terminal and storage medium
CN113469908B (en) * 2021-06-29 2022-11-18 展讯通信(上海)有限公司 Image noise reduction method, device, terminal and storage medium
CN113344822B (en) * 2021-06-29 2022-11-18 展讯通信(上海)有限公司 Image denoising method, device, terminal and storage medium
CN113628134A (en) * 2021-07-28 2021-11-09 商汤集团有限公司 Image noise reduction method and device, electronic equipment and storage medium
CN113674232A (en) * 2021-08-12 2021-11-19 Oppo广东移动通信有限公司 Image noise estimation method and device, electronic equipment and storage medium
CN114862735A (en) * 2022-05-23 2022-08-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116740182B (en) * 2023-08-11 2023-11-21 摩尔线程智能科技(北京)有限责任公司 Ghost area determining method and device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136142A (en) * 2011-03-16 2011-07-27 内蒙古科技大学 Nonrigid medical image registration method based on self-adapting triangular meshes
CN105611181A (en) * 2016-03-30 2016-05-25 努比亚技术有限公司 Multi-frame photographed image synthesizer and method
EP3110149A1 (en) * 2015-06-24 2016-12-28 Politechnika Poznanska A system and a method for depth-image-based rendering
CN106504250A (en) * 2016-10-28 2017-03-15 锐捷网络股份有限公司 Image block identification matching process and far-end server
CN107169939A (en) * 2017-05-31 2017-09-15 广东欧珀移动通信有限公司 Image processing method and related product

Also Published As

Publication number Publication date
CN108898567A (en) 2018-11-27

Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
KR102574141B1 (en) Image display method and device
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
CN108694705B (en) Multi-frame image registration and fusion denoising method
US10708525B2 (en) Systems and methods for processing low light images
US9041834B2 (en) Systems and methods for reducing noise in video streams
CN113992861B (en) Image processing method and image processing device
WO2017016050A1 (en) Image preview method, apparatus and terminal
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
AU2017254859A1 (en) Method, system and apparatus for stabilising frames of a captured video sequence
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
CN112367459B (en) Image processing method, electronic device, and non-volatile computer-readable storage medium
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP4044579A1 (en) Main body detection method and apparatus, and electronic device and computer readable storage medium
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN111183630B (en) Photo processing method and processing device of intelligent terminal
Zhang et al. Deep motion blur removal using noisy/blurry image pairs
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Guthier et al. Parallel implementation of a real-time high dynamic range video system
CN109727193B (en) Image blurring method and device and electronic equipment
JP7025237B2 (en) Image processing equipment and its control method and program
CN108431867B (en) Data processing method and terminal
CN111835968B (en) Image definition restoration method and device and image shooting method and device
CN115086558B (en) Focusing method, image pickup apparatus, terminal apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant