CN110650295B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN110650295B
Authority
CN
China
Prior art keywords
image
image block
pixel
reconstructed
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911169435.8A
Other languages
Chinese (zh)
Other versions
CN110650295A (en)
Inventor
冯召东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN201911169435.8A priority Critical patent/CN110650295B/en
Publication of CN110650295A publication Critical patent/CN110650295A/en
Application granted granted Critical
Publication of CN110650295B publication Critical patent/CN110650295B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus. The method comprises: for multiple frames that have been globally aligned to a preset reference image, obtaining, for each frame, a set of image block displacements of its image blocks relative to the reference image blocks of the preset reference image; upsampling the image block displacement set of each frame to obtain an upsampled displacement set; aligning each image block using the displacements in the upsampled set to obtain aligned frames; fusing the aligned frames to obtain a fused image; and performing super-resolution reconstruction on the fused image to obtain a reconstructed image. Multiple frames can thus be aligned, fused, and reconstructed into a high-resolution, high-signal-to-noise-ratio image, improving the environmental adaptability and flexibility of image processing.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
Due to volume and cost limitations, the shooting capability of mobile terminals (e.g., cell phone cameras) is often lower than that of professional cameras, particularly in resolution and signal-to-noise ratio. Thanks to the rapid development of computational imaging, improving imaging quality by means of algorithms has become possible and has gradually matured into a discipline in its own right. However, in the related art, the image processing models used to improve the resolution and signal-to-noise ratio of images captured by a mobile terminal are complex, and the requirements on the captured images are high (for example, some related technologies require random jitter between the captured images).
Disclosure of Invention
In view of this, the present disclosure proposes an image processing method, the method comprising:
for multiple frames that have been globally aligned to a preset reference image, obtaining a set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image, wherein the image blocks correspond one-to-one to the reference image blocks; upsampling the image block displacement set of each frame to obtain an upsampled displacement set; aligning each image block using the displacements in the upsampled set to obtain aligned frames;
fusing the aligned frames to obtain a fused image;
and performing super-resolution reconstruction using the fused image to obtain a reconstructed image.
In a possible embodiment, obtaining the set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image includes:
for each image block of each frame, obtaining the displacement of that image block relative to its corresponding reference image block.
In one possible embodiment, the method further comprises:
for each image block of each frame, judging whether the image block is misaligned with the corresponding reference image block of the preset reference image;
and, in the case of misalignment, updating the displacement of the image block with the displacements of a plurality of neighborhood image blocks of the image block.
In a possible implementation, judging whether an image block of the frame is misaligned with the corresponding reference image block of the preset reference image includes:
judging that the image block is misaligned with its corresponding reference image block when the difference between the displacement of the image block and the mean displacement of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency region; or
judging that the image block is misaligned with its corresponding reference image block when the difference between the displacement of the image block and the mean displacement of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency region.
In a possible embodiment, after obtaining the fused image, the method further comprises:
performing guided filtering on the fused image with the preset reference image as the guide, to obtain a guided-filtered fused image;
wherein performing super-resolution reconstruction using the fused image comprises:
performing super-resolution reconstruction using the guided-filtered fused image.
In a possible embodiment, performing super-resolution reconstruction using the fused image includes:
determining the neighborhood pixels of a pixel to be reconstructed;
and reconstructing the pixel to be reconstructed using its neighborhood pixels.
In a possible implementation, reconstructing the pixel to be reconstructed using its neighborhood pixels includes:
reconstructing the pixel to be reconstructed by the following formula:

I(i, j) = Σ_{n=1}^{Num} w_n · I_n(i, j),

where I(i, j) denotes the pixel to be reconstructed, i, j denote the coordinates of the pixel to be reconstructed, I_n(i, j) denotes the neighborhood pixels of the pixel to be reconstructed, w_n denotes the reconstruction weight, n denotes the index, and Num denotes the number of neighborhood pixels of the pixel to be reconstructed,
and where w_n = e^T (H^T G H)^{-1} H^T G, in which e^T denotes a row vector whose elements are all 1, G denotes a preset Gaussian filter kernel, and H denotes the second-order Taylor expansion of the pixel value of the pixel to be reconstructed after linear fitting with respect to its neighboring pixels.
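As a concrete illustration of this weight formula, the sketch below builds H as a second-order Taylor (polynomial) basis over the neighborhood offsets, G as a diagonal Gaussian filter kernel, and evaluates w = e^T (H^T G H)^{-1} H^T G. The function names, the offset parameterization, and the Gaussian width are illustrative assumptions; the patent does not supply code, and standard kernel regression would use a selector vector e = [1, 0, ..., 0] rather than the all-ones e^T the text describes.

```python
import numpy as np

def reconstruction_weights(offsets, sigma=1.0):
    """Evaluate w = e^T (H^T G H)^{-1} H^T G for one neighborhood.

    offsets : (Num, 2) array of (dx, dy) positions of the neighborhood
    pixels relative to the pixel being reconstructed (an assumed
    parameterization). H stacks a second-order Taylor basis, G holds
    Gaussian weights of the offsets, and e is the all-ones row vector
    described in the text.
    """
    dx, dy = offsets[:, 0], offsets[:, 1]
    # second-order Taylor basis: [1, dx, dy, dx^2, dx*dy, dy^2]
    H = np.stack([np.ones_like(dx), dx, dy, dx**2, dx * dy, dy**2], axis=1)
    # Gaussian filter kernel over the offsets, as a diagonal matrix
    G = np.diag(np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2)))
    e = np.ones(H.shape[1])
    return e @ np.linalg.inv(H.T @ G @ H) @ H.T @ G

def reconstruct_pixel(neighborhood_values, offsets, sigma=1.0):
    """I(i, j) = sum_n w_n * I_n(i, j) over the Num neighborhood pixels."""
    w = reconstruction_weights(offsets, sigma)
    return float(w @ neighborhood_values)
```

Because the local polynomial fit reproduces constant data exactly, the weights sum to 1, so a flat neighborhood is reconstructed without bias.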
According to another aspect of the present disclosure, there is provided an image processing apparatus, the apparatus including:
an alignment module configured to, for multiple frames that have been globally aligned to a preset reference image, obtain a set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image, wherein the image blocks correspond one-to-one to the reference image blocks; upsample the image block displacement set of each frame to obtain an upsampled displacement set; and align each image block using the displacements in the upsampled set to obtain aligned frames;
a fusion module connected to the alignment module and configured to fuse the aligned frames to obtain a fused image;
and a reconstruction module connected to the fusion module and configured to perform super-resolution reconstruction using the fused image to obtain a reconstructed image.
In a possible embodiment, obtaining the set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image includes:
for each image block of each frame, obtaining the displacement of that image block relative to its corresponding reference image block.
In a possible embodiment, the apparatus further comprises:
a correction module coupled to the alignment module and configured to:
for each image block of each frame, judge whether the image block is misaligned with the corresponding reference image block of the preset reference image;
and, in the case of misalignment, update the displacement of the image block with the displacements of a plurality of neighborhood image blocks of the image block.
In a possible implementation, judging whether an image block of the frame is misaligned with the corresponding reference image block of the preset reference image includes:
judging that the image block is misaligned with its corresponding reference image block when the difference between the displacement of the image block and the mean displacement of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency region; or
judging that the image block is misaligned with its corresponding reference image block when the difference between the displacement of the image block and the mean displacement of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency region.
In a possible embodiment, the apparatus further comprises:
a filtering module connected to the fusion module and configured to perform guided filtering on the fused image with the preset reference image as the guide, to obtain a guided-filtered fused image;
wherein performing super-resolution reconstruction using the fused image comprises:
performing super-resolution reconstruction using the guided-filtered fused image.
In a possible embodiment, performing super-resolution reconstruction using the fused image includes:
determining the neighborhood pixels of a pixel to be reconstructed;
and reconstructing the pixel to be reconstructed using its neighborhood pixels.
In a possible implementation, reconstructing the pixel to be reconstructed using its neighborhood pixels includes:
reconstructing the pixel to be reconstructed by the following formula:

I(i, j) = Σ_{n=1}^{Num} w_n · I_n(i, j),

where I(i, j) denotes the pixel to be reconstructed, i, j denote the coordinates of the pixel to be reconstructed, I_n(i, j) denotes the neighborhood pixels of the pixel to be reconstructed, w_n denotes the reconstruction weight, n denotes the index, and Num denotes the number of neighborhood pixels of the pixel to be reconstructed,
and where w_n = e^T (H^T G H)^{-1} H^T G, in which e^T denotes a row vector whose elements are all 1, G denotes a preset Gaussian filter kernel, and H denotes the second-order Taylor expansion of the pixel value of the pixel to be reconstructed after linear fitting with respect to its neighboring pixels.
Through the above method and apparatus, embodiments of the present disclosure can, for multiple frames globally aligned to a preset reference image, obtain a set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image, the image blocks corresponding one-to-one to the reference image blocks; upsample the image block displacement set of each frame to obtain an upsampled displacement set; align each image block using the displacements in the upsampled set to obtain aligned frames; fuse the aligned frames into a fused image; and perform super-resolution reconstruction on the fused image to obtain a reconstructed image. Because alignment derives a per-block displacement set for each frame and upsamples that set, alignment accuracy can be improved while computation is saved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an image block of a fused image.
Fig. 4 shows a schematic diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 5 illustrates an effect diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 6 illustrates an effect diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Referring to fig. 1, fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
The method can be applied to a terminal or a server. A terminal, also called User Equipment (UE), Mobile Station (MS), or Mobile Terminal (MT), is a device that provides voice and/or data connectivity to a user, for example a handheld device or vehicle-mounted device with a wireless connection function. Some current examples of terminals are: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Mobile Internet Device (MID), a wearable device, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a wireless terminal in vehicle networking, and the like. The present disclosure is not limited to a particular type of terminal or server.
As shown in fig. 1, the method includes:
step S11, for multiple frames that have been globally aligned to a preset reference image, obtaining a set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image, the image blocks corresponding one-to-one to the reference image blocks; upsampling the image block displacement set of each frame to obtain an upsampled displacement set; and aligning each image block using the displacements in the upsampled set to obtain aligned frames;
step S12, fusing the aligned frames to obtain a fused image;
and step S13, performing super-resolution reconstruction using the fused image to obtain a reconstructed image.
By this method, embodiments of the present disclosure can, for multiple frames globally aligned to a preset reference image, obtain a set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image, the image blocks corresponding one-to-one to the reference image blocks; upsample the image block displacement set of each frame; align each image block using the displacements in the upsampled set to obtain aligned frames; fuse the aligned frames into a fused image; and perform super-resolution reconstruction on the fused image to obtain a reconstructed image. Because alignment derives a per-block displacement set for each frame and upsamples that set, alignment accuracy can be improved while computation is saved.
In one possible embodiment, the multiple frames may be frames captured in a continuous burst of the target subject.
In one possible implementation, step S11 may include:
globally aligning each frame based on the preset reference image to obtain globally aligned frames;
and, for each image block of each globally aligned frame, performing image block alignment (local alignment) based on the reference image blocks of the preset reference image, the image blocks corresponding one-to-one to the reference image blocks.
For example, any given frame may be offset in position relative to the preset reference image (for example, offset in the X and/or Y direction); through global alignment, that frame and the preset reference image can be brought into an aligned (overlapping) state as a whole.
Image block alignment (also called local alignment) refers to dividing each frame and the preset reference image into a plurality of image blocks, taking each image block of the preset reference image as a reference image block, and aligning each image block of each frame at the block level against its reference image block. When multiple frames are captured in a continuous burst of a target subject, corresponding image blocks across frames may be offset in position (for example, in the X and/or Y direction); through image block alignment, an image block of any frame and the corresponding image block of the preset reference image can be brought into an aligned (overlapping) state.
Embodiments of the present disclosure globally align each frame, then align the image blocks of each globally aligned frame, obtaining aligned frames. Aligning in this order, global alignment first and image block alignment second, yields a reliable alignment result, and the aligned frames are more robust.
In a possible implementation manner, the preset reference image may be any one of the multiple frames of images.
Preferably, the preset reference image may be an intermediate frame image of the multi-frame images.
Embodiments of the present disclosure can use the intermediate frame of the multiple frames as the preset reference image for both global alignment and image block alignment, reducing model complexity, saving computing resources, and improving the speed and efficiency of image processing.
In one example, when performing global alignment, the alignment accuracy may be at the pixel level to reduce the complexity of the algorithm when performing alignment.
In a possible implementation, globally aligning each frame based on the preset reference image may include:
for each frame, obtaining the image displacement of that frame relative to the preset reference image;
and globally aligning the frame with the preset reference image using that image displacement.
Here, the image displacement is the position offset of the whole frame relative to the preset reference image; the image block displacement discussed below is the position offset of an image block relative to its reference image block. The position offset between images or image blocks may be expressed in various ways as needed and may be determined according to the related art.
In one example, the image displacement may have components in multiple directions, for example the perpendicular X and Y directions. During global alignment, alignment in the X direction is performed according to the X-direction displacement, and alignment in the Y direction according to the Y-direction displacement.
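The global-alignment step above can be sketched in a few lines: the frame is shifted back by its measured whole-image displacement, and since pixel-level accuracy suffices for the global stage, the displacements are rounded to integers. The function name, the sign convention, and the wrap-around edge handling are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def globally_align(frame, dx, dy):
    """Undo a frame's whole-image displacement (dx, dy) relative to the
    preset reference image.

    Convention assumed here: the frame's content sits dx pixels right
    and dy pixels down of where the reference has it, so we shift it
    back by (-dx, -dy). np.roll wraps at the edges; a real pipeline
    would pad or crop the borders instead.
    """
    return np.roll(frame, shift=(-int(round(dy)), -int(round(dx))), axis=(0, 1))
```
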
Of course, it should be understood that the above description of the overall alignment is exemplary and should not be taken as limiting the present disclosure.
In a possible embodiment, "obtaining the set of image block displacements of the image blocks of each frame based on the reference image blocks of the preset reference image" may include:
for each image block of each frame, obtaining the displacement of that image block relative to its corresponding reference image block. These image block displacements form the image block displacement set of the frame.
After obtaining the displacement of an image block relative to its corresponding reference image block, the method may further include:
aligning the image block of the frame with the corresponding reference image block using the image block displacement.
In this way, the image blocks of each frame can be aligned against the reference image blocks of the preset reference image, improving robustness and producing a reliable alignment result.
In a possible implementation, embodiments of the present disclosure may divide each frame into a plurality of image blocks and divide the preset reference image in the same manner into a plurality of reference image blocks. The preset reference image and the other frames may have the same image size, so that when the other frames are globally aligned with the preset reference image, the reference image blocks of the preset reference image correspond one-to-one to the image blocks of the other frames.
In one example, the image blocks may be square, with a side length of 300 to 500 pixels.
Preferably, the side length of each image block may be 400 pixels, i.e., each image block is 400 × 400.
Selecting 400 × 400 image blocks for image block alignment helps ensure both the robustness and the precision of the image.
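The block division described above can be sketched as follows; the helper name and the assumption that the image dimensions divide evenly by the block size are illustrative (a real implementation would pad the borders).

```python
import numpy as np

def split_into_blocks(img, block=400):
    """Divide an image into square tiles of side `block` (400 x 400 by
    default, matching the preferred size in the text), row-major.
    Assumes the image dimensions are multiples of the block size.
    """
    h, w = img.shape[:2]
    return [img[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)]
```

Applying the same division to the preset reference image yields the one-to-one block correspondence the text relies on.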
In a possible embodiment, "obtaining, for each image block of each frame, the displacement of the image block relative to its corresponding reference image block" may include:
performing a Fourier transform of the reference image block f(x, y) and of the image block g(x, y) of the current frame:

F(k_x, k_y) = FT{f(x, y)},
G(k_x, k_y) = FT{g(x, y)},

where x, y denote the coordinates of the image block in the spatial domain, k_x, k_y denote the coordinates of the image block in the frequency domain, F(k_x, k_y) denotes the Fourier transform of the reference image block f(x, y), and G(k_x, k_y) denotes the Fourier transform of the image block g(x, y) of the current frame;
in the case where the image block g(x, y) of the current frame has a large overlapping area with the reference image block f(x, y) and differs from it only by a translation, with displacement x_0, y_0 in the X and Y directions, the image block g(x, y) of the current frame can be written as:

g(x, y) = f(x - x_0, y - y_0),

and the translation property of the Fourier transform yields:

G(k_x, k_y) = F(k_x, k_y) · e^{-j2π(k_x x_0 / M + k_y y_0 / N)},

where M and N are the dimensions of the image block (in one example, M and N may be equal, e.g., both 400 pixels);
the phase term R(k_x, k_y) introduced by the spatial translation, i.e., the normalized cross-power spectrum, can then be extracted as:

R(k_x, k_y) = F*(k_x, k_y) G(k_x, k_y) / |F*(k_x, k_y) G(k_x, k_y)| = e^{-j2π(k_x x_0 / M + k_y y_0 / N)},

where F* denotes the complex conjugate of F;
performing an inverse Fourier transform on R(k_x, k_y) yields a pulse function δ(x - x_0, y - y_0), which takes its maximum at (x_0, y_0);
locating the position of the maximum of the pulse function δ(x - x_0, y - y_0) yields the image block displacement.
Of course, the above description of obtaining the displacement of an image block relative to its reference image block is exemplary and should not be construed as limiting the present disclosure; those skilled in the art may determine the image block displacement by other methods, and the present disclosure is not limited thereto.
By the above method, the displacement of each image block of each frame relative to its reference image block can be obtained and used for image block alignment.
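The derivation above maps directly onto a few FFT calls. The sketch below estimates an integer-pixel block displacement by locating the peak of the inverse transform of the normalized cross-power spectrum; the function name and the wrap-handling details are illustrative assumptions (the patent does not give an implementation, and production phase correlation would typically add windowing and sub-pixel peak refinement).

```python
import numpy as np

def block_displacement(ref_block, cur_block):
    """Estimate (x0, y0) such that cur_block is ref_block translated by
    (x0, y0), via the normalized cross-power spectrum."""
    F = np.fft.fft2(ref_block)
    G = np.fft.fft2(cur_block)
    # R = conj(F) G / |conj(F) G|: only the phase of the shift survives
    cross = np.conj(F) * G
    R = cross / (np.abs(cross) + 1e-12)
    # the inverse transform is a pulse peaking at (y0, x0)
    pulse = np.fft.ifft2(R).real
    y0, x0 = np.unravel_index(np.argmax(pulse), pulse.shape)
    # fold shifts larger than half a block into negative displacements
    M, N = pulse.shape
    if y0 > M // 2:
        y0 -= M
    if x0 > N // 2:
        x0 -= N
    return int(x0), int(y0)
```
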
For step S11, the embodiment of the present disclosure may process the image block displacements of all image blocks of each frame image to obtain a more accurate image block displacement. This process will be described below.
The image block displacement set of each frame image may include an image block displacement of each image block relative to a corresponding reference image block. For example, assuming that the size of the image is 4000 × 4000 pixels, and assuming that the frame image and the predetermined reference image block are divided into 100 image blocks, the image block displacement set includes image block displacements of 100 image blocks.
Upsampling may refer to any technique that may render an image to a higher resolution, for example, upsampling may include resampling and interpolation, and by upsampling the set of image block displacements for each frame of image, the set of image block displacements for each frame of image with a higher resolution may be obtained.
In one possible embodiment, in order to make the image block displacement set excessively smooth, the image block displacement set of the frame image may be upsampled by using a bilinear interpolation method.
Taking the above example as a bearing, 4 times upsampling may be performed on the image block displacement set by using a bilinear interpolation method, and the upsampled image block displacement set should be 1600 times.
Of course, the above description is exemplary, and the embodiments of the present disclosure do not limit the upsampling parameters (e.g., the upsampling multiple), nor limit the upsampling method (e.g., bilinear interpolation), and a person skilled in the art may set the upsampling parameters as needed to select the upsampling method by referring to the related art.
By the method, the image block displacement set of the image blocks of each frame of image can be obtained, the image block displacement set is subjected to up-sampling, and each image block of the frame of image is subjected to image block alignment according to the up-sampled image block displacement set.
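As a minimal sketch of the bilinear upsampling step (not part of the patent text; the function name and the use of scipy are assumptions), the per-block displacement set can be treated as a small two-channel image and resized:

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_displacements(disp, factor=4):
    """Bilinearly upsample a per-block displacement field.

    disp: array of shape (H, W, 2) holding one (dx, dy) per image block.
    Returns an array of shape (H*factor, W*factor, 2).
    """
    # order=1 selects bilinear interpolation; the last axis (dx/dy) is not scaled.
    return zoom(disp, (factor, factor, 1), order=1)

# A 10 x 10 grid of block displacements (100 blocks, as in the 4000 x 4000 example)
disp = np.random.rand(10, 10, 2)
up = upsample_displacements(disp, factor=4)
# after 4x upsampling the 10 x 10 grid becomes 40 x 40, i.e. 1600 displacements
```

Any other interpolating resampler could be substituted for `zoom`, matching the text's note that the upsampling method is not limited.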
In one possible embodiment, the method may further include:
judging whether the image blocks of each frame of image are aligned with the reference image blocks of the preset reference image in error or not according to each image block of each frame of image;
in the case of error alignment, updating the displacement of the image block with displacements of a plurality of neighborhood image blocks of the image block.
Of course, the embodiment of the present disclosure may perform error alignment judgment on the image block displacement set obtained after the upsampling in step S11, so as to judge whether the image block has error alignment.
Through the above method, the embodiment of the present disclosure may verify whether the image block of the frame image is in error alignment with the reference image block, and may update the displacement of the image block by using the displacements of a plurality of neighboring image blocks of the image block when the image block is in error alignment with the reference image block. The embodiment of the disclosure can correct the displacement of the image block which is aligned incorrectly, so that when the image blocks are aligned, the image block can be displaced accurately, and a better alignment result can be obtained.
The error alignment may mean that, after the overall alignment and the image block alignment are performed, each image block of each frame image and each reference image block of the preset reference image are aligned in position, but the information of an aligned image block and that of its reference image block are different or dissimilar.
In a possible implementation, the updating the displacement of the image block with the displacements of the plurality of neighborhood image blocks of the image block may include:
updating the displacement of the image block by using the average value of the displacements of a plurality of neighborhood image blocks of the image block.
Through the method, the displacement of the image block is updated by using the displacement average value of the plurality of neighborhood image blocks, so that the difference between the image block and other neighborhood image blocks can be reduced, and the displacement value of the image block is corrected.
In one example, the plurality of neighborhood image blocks are, for example, 3 × 3 neighborhood image blocks.
In a possible embodiment, the image block on which the error-alignment determination is performed may be an upsampled image block; for example, the original image block is 400 × 400 and the upsampled image block may be 35 × 35, and the like. Of course, the image block on which the error-alignment determination is performed may also be the original image block, and the disclosure is not limited thereto.
In a possible implementation manner, the determining whether the image block of the frame image is in error alignment with the reference image block of the preset reference image may include:
judging that the image block is in error alignment with the corresponding reference image block when the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency area (high-frequency texture area); or
And under the conditions that the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency area (flat area), judging that the image block is in error alignment with the corresponding reference image block.
When the reference image block is a high-frequency area, this indicates that the pixel gray values of the reference image block change rapidly; when the reference image block is a low-frequency area, this indicates that the pixel gray values of the reference image block change slowly. Of course, the above description of the high-frequency area and the low-frequency area is exemplary; in the field of image processing, the high-frequency area and the low-frequency area may be interpreted as other areas, the present disclosure is not limited thereto, and those skilled in the art may refer to the description of the high-frequency area and the low-frequency area in the related art.
The determination mode of the image block displacement, the judgment mode of whether the image blocks are similar or not, and the judgment mode of whether the image blocks are in a high-frequency area or a low-frequency area can be realized based on the related technology.
In a possible implementation manner, the determining whether the image block of the frame image is in error alignment with the reference image block of the preset reference image may include:
the intra-frame difference diff_intra and the inter-frame difference diff_inter are obtained by the following formulas. The intra-frame difference diff_intra can be used to judge whether the reference image block is a high-frequency area, and the inter-frame difference diff_inter can be used to judge whether the image block is similar to the corresponding reference image block (a larger inter-frame difference diff_inter indicates that the two are less similar):

diff_intra = Σ_{i=−D}^{D} Σ_{j=−D}^{D} | I_ref(m+i, n+j) − I_ref(m, n) |

diff_inter = Σ_{i=−D}^{D} Σ_{j=−D}^{D} | I_ref(m+i, n+j) − I_cur(m+i, n+j) |

where D is the neighborhood radius (e.g., D is 1 when the neighborhood is 3 × 3 image blocks), (m, n) is the index of the current image block, I_ref represents a reference image block, and I_cur represents an image block of the current frame image.
Through the above method, after the embodiment of the disclosure obtains the intra-frame difference diff_intra and the inter-frame difference diff_inter, the intra-frame difference diff_intra and the inter-frame difference diff_inter may be utilized to judge whether the current image block and the reference image block are in error alignment.
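A hypothetical sketch of the two measures, assuming SAD-style sums consistent with the definitions above (the exact formulas in the original are given as images, so the aggregation used here, over per-block scalar intensities, is an assumption):

```python
import numpy as np

def block_differences(ref_blocks, cur_blocks, m, n, D=1):
    """Assumed realisation of diff_intra and diff_inter.

    ref_blocks / cur_blocks: (H, W) arrays with one scalar (e.g. mean
    intensity) per image block; (m, n) indexes the current block and D
    is the neighbourhood radius (D = 1 for a 3x3 block neighbourhood).
    """
    diff_intra = 0.0
    diff_inter = 0.0
    for i in range(-D, D + 1):
        for j in range(-D, D + 1):
            # variation inside the reference neighbourhood -> frequency measure
            diff_intra += abs(ref_blocks[m + i, n + j] - ref_blocks[m, n])
            # reference-vs-current-frame difference -> similarity measure
            diff_inter += abs(ref_blocks[m + i, n + j] - cur_blocks[m + i, n + j])
    return diff_intra, diff_inter

# flat reference block neighbourhood, all-zero current frame
di, de = block_differences(np.ones((3, 3)), np.zeros((3, 3)), 1, 1, D=1)
```

A flat reference neighbourhood yields diff_intra = 0 (low-frequency), while any content change between frames inflates diff_inter.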
In a possible implementation manner, the determining whether the image block of the frame image is in error alignment with the reference image block of the preset reference image may include:
the difference between the image block displacement (including the x direction and the y direction) of the current image block and the image block displacement mean value of the image block in the neighborhood is obtained through the following formula:
diff_x = x'(m, n) − (1/(2D+1)²) Σ_{i=−D}^{D} Σ_{j=−D}^{D} x'(m+i, n+j)

diff_y = y'(m, n) − (1/(2D+1)²) Σ_{i=−D}^{D} Σ_{j=−D}^{D} y'(m+i, n+j)

wherein m and n represent the index of an image block, x'(m, n) denotes the x-direction original value of the current image block, y'(m, n) denotes the y-direction original value of the current image block, diff_x represents the difference between the x-direction displacement in the image block displacement of the current image block and the image block displacement mean of its neighborhood image blocks, and diff_y represents the difference between the y-direction displacement in the image block displacement of the current image block and the image block displacement mean of its neighborhood image blocks.
In this example, the neighborhood of the image block is 3 × 3, although it should be understood that the neighborhood of the image block may be different in other embodiments.
Wherein, the determining that the image block is aligned with the corresponding reference image block in error when the difference between the image block displacement of the image block and the image block displacement mean of the neighboring image block reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency region (high-frequency texture region) may include:
when the intra-frame difference is greater than a preset parameter th0 (a preset value), i.e. diff_intra > th0, it may be determined that the reference image block is a high-frequency texture region; in this case, it is further determined whether the current image block is similar to the reference image block;
the larger of the absolute values of diff_x and diff_y is compared with the preset parameter diff_th1; when the larger value is greater than the preset parameter diff_th1, i.e. max(abs(diff_x), abs(diff_y)) > diff_th1, it is determined that the difference between the image block displacement of the current image block and the image block displacement mean of the plurality of neighborhood image blocks reaches the preset value; meanwhile, when the inter-frame difference is greater than the preset parameter th1, i.e. diff_inter > th1, it is determined that the current image block is dissimilar from the reference image block.
When the conditions (diff_intra > th0) & (max(abs(diff_x), abs(diff_y)) > diff_th1) & (diff_inter > th1) are satisfied simultaneously, it may be determined that the current image block is in error alignment with the reference image block.
The determining that the image block is aligned with the corresponding reference image block in error when the difference between the image block displacement of the image block and the image block displacement mean value of the neighborhood image block reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency region (flat region) includes:
when the intra-frame difference is smaller than the preset parameter th0 (a preset value), i.e. diff_intra < th0, it may be determined that the reference image block is a flat area (low-frequency area); in this case, it is further determined whether the current image block is similar to the reference image block;
the larger of the absolute values of diff_x and diff_y is compared with the preset parameter diff_th2; when the larger value is greater than the preset parameter diff_th2, i.e. max(abs(diff_x), abs(diff_y)) > diff_th2, it is determined that the difference between the image block displacement of the image block and the image block displacement mean of the plurality of neighborhood image blocks reaches the preset value; meanwhile, when the inter-frame difference is greater than the preset parameter th2, i.e. diff_inter > th2, it is determined that the current image block is dissimilar from the reference image block.
When the conditions (diff_intra < th0) & (max(abs(diff_x), abs(diff_y)) > diff_th2) & (diff_inter > th2) are satisfied simultaneously, it may be determined that the current image block is in error alignment with the reference image block.
Among them, th0, th1, th2, diff_th1 and diff_th2 are preset parameters. The preset parameter th1 may be less than the preset parameter th2, and the preset parameter diff_th2 may be less than the preset parameter diff_th1; the preset parameter th0 may be independent in size of the other preset parameters. These preset parameters may be set according to actual scenes and actual needs, and the present disclosure does not limit their specific sizes.
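Putting the two cases together, a hedged sketch of the error-alignment test (the function name and the concrete threshold values used below are illustrative assumptions, not values from the patent):

```python
def is_misaligned(diff_intra, diff_inter, diff_x, diff_y,
                  th0, th1, th2, diff_th1, diff_th2):
    """Error-alignment test combining the high-frequency and flat-region cases.

    Assumes th1 < th2 and diff_th2 < diff_th1, as stated in the text;
    all thresholds are scene-dependent tuning parameters.
    """
    disp_jump = max(abs(diff_x), abs(diff_y))
    if diff_intra > th0:
        # reference block is a high-frequency texture region
        return disp_jump > diff_th1 and diff_inter > th1
    else:
        # reference block is a flat (low-frequency) region
        return disp_jump > diff_th2 and diff_inter > th2

# illustrative thresholds only
TH0, TH1, TH2, DTH1, DTH2 = 10, 1, 5, 2.0, 0.5
```

The flat-region branch uses the stricter displacement threshold diff_th2 but the looser similarity threshold th2, matching the ordering th1 < th2 and diff_th2 < diff_th1 given above.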
When the current image block and the reference image block are judged to be in error alignment through the two manners, the "updating the displacement of the image block by using the average value of the displacements of the plurality of neighborhood image blocks of the image block" may include:
the displacement of the current image block is obtained by the following formula:
x(m, n) = (1/(2D+1)²) Σ_{i=−D}^{D} Σ_{j=−D}^{D} x'(m+i, n+j)

y(m, n) = (1/(2D+1)²) Σ_{i=−D}^{D} Σ_{j=−D}^{D} y'(m+i, n+j)
where x (m, n) represents the x-direction displacement update value of the current image block, and y (m, n) represents the y-direction displacement update value of the current image block.
According to the above method, the embodiment of the present disclosure may update the displacement of the current image block by using the average displacement of the neighborhood image blocks of the current image block.
Of course, it should be understood that the above description is exemplary and should not be construed as limiting the present disclosure, for example, the present disclosure does not limit the size of the neighborhood, which may be 3 x 3 or 4 x 4.
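A small illustrative sketch of the neighborhood-mean correction (assuming, as one possible reading, that the 3 × 3 mean includes the center block; the use of scipy is an implementation choice, not the patent's prescription):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_displacements(disp_x, disp_y, misaligned):
    """Replace the displacement of mis-aligned blocks by their 3x3 neighbourhood mean.

    disp_x, disp_y: (H, W) per-block displacements; misaligned: (H, W) bool mask.
    """
    mean_x = uniform_filter(disp_x, size=3, mode='nearest')  # 3x3 average, centre included
    mean_y = uniform_filter(disp_y, size=3, mode='nearest')
    out_x = np.where(misaligned, mean_x, disp_x)
    out_y = np.where(misaligned, mean_y, disp_y)
    return out_x, out_y

# one outlier block surrounded by zero displacements
dx = np.zeros((5, 5))
dx[2, 2] = 9.0
mis = np.zeros((5, 5), bool)
mis[2, 2] = True
out_x, out_y = correct_displacements(dx, np.zeros((5, 5)), mis)
```

Only the flagged block is touched; well-aligned blocks keep their original displacement.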
It should be noted that, the present disclosure does not limit the specific method for performing image fusion in step S12, and a person skilled in the art may perform image fusion on the aligned multi-frame images by using an appropriate method according to actual needs to obtain a fused image (point cloud data).
Referring to fig. 2, fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the disclosure.
In a possible implementation, as shown in fig. 2, after obtaining the fused image according to step S12, the method may further include:
step S14, performing guiding filtering on the fused image according to a preset reference image to obtain a guiding filtered fused image;
step S13 of performing super-resolution reconstruction using the fused image may include:
and S131, performing super-resolution reconstruction by using the fusion image after the guide filtering.
Through the method, the embodiment of the disclosure can perform guided filtering on the fused image, and remove the ghost image from the fused image by using the preset reference image as the guide template, so that the complexity of image processing can be reduced.
It should be noted that the present disclosure does not limit the specific implementation manner of the guided filtering, and those skilled in the art can refer to the related art implementation.
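Since the specification leaves the guided-filter implementation to the related art, the following is a minimal grey-scale guided filter in the style of He et al., offered only as one possible realization (function name, radius and eps values are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Minimal grey-scale guided filter: the preset reference image is the
    guide, the fused image is the input to be filtered."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mean_g, mean_s = mean(guide), mean(src)
    cov_gs = mean(guide * src) - mean_g * mean_s   # local guide/source covariance
    var_g = mean(guide * guide) - mean_g * mean_g  # local guide variance
    a = cov_gs / (var_g + eps)                     # local linear coefficients
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)

guide = np.random.default_rng(0).random((20, 20))
out = guided_filter(guide, np.full((20, 20), 2.0), radius=2)
```

Structure present in the guide (the ghost-free reference) is transferred to the output, while content uncorrelated with the guide, such as ghosting in the fused image, is smoothed away.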
In a possible embodiment, the step S13 of performing super-resolution reconstruction using the fused image may include:
determining neighborhood pixels of a pixel to be reconstructed;
and reconstructing the pixel to be reconstructed by utilizing the neighborhood pixel of the pixel to be reconstructed.
Through the method, the reconstructed image with higher resolution and higher signal-to-noise ratio can be obtained, and the method is simple and reliable.
In a possible implementation manner, the reconstructing the pixel to be reconstructed by using the neighborhood pixels of the pixel to be reconstructed may include:
reconstructing the pixel to be reconstructed by using the following formula:
I(i, j) = Σ_{n=1}^{Num} w_n · I_n(i, j)
wherein I(i, j) represents the pixel to be reconstructed, i, j represent the coordinates of the pixel to be reconstructed, I_n(i, j) represents the n-th neighborhood pixel of the pixel to be reconstructed, w_n denotes a reconstruction weight, n denotes an index (the index of a neighborhood pixel), and Num denotes the number of neighborhood pixels of the pixel to be reconstructed,
wherein the reconstruction weights (w_1, …, w_Num) = e^T (H^T G H)^{-1} H^T G, e^T represents a row vector with each element being 1, G represents a preset Gaussian filter kernel (the size of the preset Gaussian filter kernel may be determined according to the distance between the pixel to be reconstructed and its neighborhood pixels, or in other ways), and H represents the second-order Taylor expansion obtained by fitting the pixel value of the pixel to be reconstructed with respect to its neighborhood pixels.
The actual size of the parameters such as the reconstruction weight, the preset Gaussian filter kernel and the like is not limited in the disclosure, and can be determined by the person skilled in the art as required.
In one example, the expression for H may be:
H = [ 1   a_1−a   b_1−b   (a_1−a)²   (a_1−a)(b_1−b)   (b_1−b)²
      ⋮
      1   a_Num−a   b_Num−b   (a_Num−a)²   (a_Num−a)(b_Num−b)   (b_Num−b)² ]
wherein a_1~a_Num represent the abscissas of the neighborhood pixels (i.e., neighborhood points) of the pixel (a, b) to be reconstructed (i.e., the point to be interpolated), b_1~b_Num represent the ordinates of the neighborhood pixels of the pixel (a, b) to be reconstructed, a represents the abscissa of the pixel (a, b) to be reconstructed, and b represents the ordinate of the pixel (a, b) to be reconstructed.
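A numerical sketch of the weight computation w = e^T (H^T G H)^{-1} H^T G, assuming a diagonal Gaussian kernel G built from neighbor distances and the six-term second-order Taylor basis for H (both details the patent leaves open):

```python
import numpy as np

def reconstruction_weights(coords, a, b, sigma=1.0):
    """Weights for one pixel to be reconstructed at (a, b).

    coords: (Num, 2) array of neighbourhood pixel coordinates (a_n, b_n).
    H holds the second-order Taylor basis of each neighbour about (a, b);
    G weights each neighbour by a Gaussian of its distance (an assumption).
    """
    dx = coords[:, 0] - a
    dy = coords[:, 1] - b
    H = np.stack([np.ones_like(dx), dx, dy, dx**2, dx * dy, dy**2], axis=1)
    G = np.diag(np.exp(-(dx**2 + dy**2) / (2 * sigma**2)))
    e = np.ones(H.shape[1])  # row vector of ones, per the text
    return e @ np.linalg.inv(H.T @ G @ H) @ H.T @ G

# 8-neighbourhood of a pixel to be reconstructed at (0, 0)
coords = np.array([(i, j) for i in (-1, 0, 1)
                   for j in (-1, 0, 1) if (i, j) != (0, 0)], float)
w = reconstruction_weights(coords, 0.0, 0.0)
I = w @ np.full(8, 5.0)  # weighted sum over a constant neighbourhood
```

Because the constant function is in the fitted basis, the weights sum to 1 and a constant neighbourhood is reproduced exactly, a basic sanity property of such interpolation weights.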
In a possible implementation, the determining a neighborhood pixel of the pixel to be reconstructed may include:
and determining the neighborhood pixels of the pixel to be reconstructed by utilizing a deep learning algorithm such as KNN (k-nearest neighbors).
According to the method and the device, the neighborhood pixels of the pixels to be reconstructed are determined by utilizing the KNN and other deep learning algorithms, so that the processing process can be simplified, and the operation resources can be saved.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating image blocks of a fused image.
As shown in fig. 3, for each image block, the neighborhood of each pixel other than the boundary pixels has the same relative positional relationship with that pixel, i.e., the variation of the pixels in the image block has repeatability and periodicity.
As shown in fig. 3, points A and B have the same shape, and fig. 3 includes a plurality of identical or similar points A and B; it can be seen that the variation of the pixels in the image block has periodicity.
Therefore, the embodiment of the disclosure can optimize deep learning algorithms such as KNN: as long as the neighborhood pixels of the pixels to be reconstructed within one period are found, the neighborhood pixels of the other pixels to be reconstructed can be obtained by using this repeatability and periodicity.
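One way to realize the neighbor search is sketched below (the use of a k-d tree and the idea of reusing one period's neighbour offsets are illustrative assumptions, not the patent's prescribed method):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_offsets_one_period(samples, grid_points, k=4):
    """Find the k nearest sample points for each pixel to be reconstructed.

    samples: (N, 2) scattered sample coordinates from the fused point cloud.
    grid_points: (M, 2) pixels to be reconstructed within one period.
    Returns the neighbour offsets relative to each query point; when the
    sampling pattern is periodic, these offsets can be reused for the
    corresponding pixels of every other period instead of re-querying.
    """
    tree = cKDTree(samples)
    _, idx = tree.query(grid_points, k=k)
    return samples[idx] - grid_points[:, None, :]

# regular unit-grid samples; query the centre of one cell
samples = np.array([(i, j) for i in range(4) for j in range(4)], float)
offsets = knn_offsets_one_period(samples, np.array([[0.5, 0.5]]), k=4)
```

For the cell centre, the four nearest samples are the surrounding grid corners, so every offset component is ±0.5; by periodicity, the same offsets apply to every other cell centre.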
According to the above method, super-resolution reconstruction of the fusion image can be realized, an image with higher resolution is obtained, the details of the image are better recovered, and the sub-pixel information obtained by performing pixel reconstruction with deep learning algorithms such as KNN has the characteristic of high reliability.
It should be noted that the above description of determining the neighborhood pixels of the reconstructed pixel using a deep learning algorithm such as KNN is exemplary and should not be construed as limiting the present disclosure.
In addition, the embodiment of the present disclosure does not limit how to determine the neighborhood pixels of the reconstructed pixels through the KNN and other deep learning algorithms, and those skilled in the art can refer to the introduction of the related art to the KNN and other deep learning algorithms to implement the embodiment of the present disclosure.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating an image processing method according to an embodiment of the disclosure.
As shown in fig. 4, according to the image processing method provided by the embodiment of the present disclosure, the embodiment of the present disclosure may perform global alignment (i.e., overall alignment) and local alignment (i.e., image block alignment) on multiple frames of images continuously captured of a target object, perform image fusion on the aligned multi-frame images to obtain point cloud data of a fusion image, perform guided filtering on the fusion image to achieve the purpose of removing ghosts, and then perform super-resolution reconstruction to obtain a reconstructed image with higher resolution and higher definition.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an effect of an image processing method according to an embodiment of the present disclosure.
The left part of fig. 5 shows a schematic diagram (single frame image) of one frame image in a captured multi-frame image, and the right part of fig. 5 shows a schematic diagram (PSR result) of a reconstructed image obtained by using the embodiment of the present disclosure, and as can be seen from fig. 5, the resolution of the reconstructed image obtained by processing the multi-frame image according to the embodiment of the present disclosure is high.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an effect of an image processing method according to an embodiment of the present disclosure.
The left part of fig. 6 shows a schematic diagram (a single frame image) of one frame image in a captured multi-frame image, and the right part of fig. 6 shows a schematic diagram (PSR result) of a reconstructed image obtained by using the embodiment of the present disclosure; it can be seen from fig. 6 that, after the multi-frame image is processed according to the embodiment of the present disclosure, the denoising result is obvious and the image is clearer.
The embodiment of the disclosure aligns and fuses multi-frame images, and then obtains a fused image comprising a series of randomly distributed sampling point cloud data, and removes ghosts by performing guided filtering on the point cloud data (fused image), so that the complexity of an image processing method can be reduced, and the ghost removing effect is good.
Referring to fig. 7, fig. 7 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
The device can be used in an electronic device and a server, and as shown in fig. 7, the device comprises:
an alignment module 10, configured to obtain, for a multi-frame image that is subjected to overall alignment based on a preset reference image, an image block displacement set of a plurality of image blocks of each frame image based on each reference image block of the preset reference image, where each image block corresponds to each reference image block one to one; the method comprises the steps of up-sampling an image block displacement set of each frame of image to obtain an up-sampled image block displacement set; carrying out image block alignment on each image block by using each image block displacement in the up-sampled image block displacement set to obtain a plurality of aligned images;
the fusion module 20 is connected to the alignment module 10 and configured to fuse the aligned multi-frame images to obtain a fused image;
and the reconstruction module 30 is connected to the fusion module 20 and is used for performing super-resolution reconstruction by using the fusion image to obtain a reconstructed image.
By the above apparatus, the embodiment of the present disclosure may obtain, for a multi-frame image that is subjected to integral alignment based on a preset reference image, an image block displacement set of a plurality of image blocks of each frame image based on each reference image block of the preset reference image, where each image block corresponds to each reference image block one to one; the method comprises the steps of up-sampling an image block displacement set of each frame of image to obtain an up-sampled image block displacement set; and carrying out image block alignment on each image block by using each image block displacement in the up-sampled image block displacement set to obtain aligned multi-frame images, fusing the aligned multi-frame images to obtain a fused image, and carrying out super-resolution reconstruction by using the fused image to obtain a reconstructed image. When the images are aligned, the image block displacement set of the image blocks of each frame of image is obtained, and the image block displacement set is up-sampled, so that the alignment precision can be improved, and the operation cost can be saved.
In a possible embodiment, the deriving a set of image block displacements for a plurality of image blocks of each frame image based on each reference image block of a preset reference image includes:
and aiming at each image block of each frame image, obtaining the image block displacement of the image block of the frame image relative to the corresponding reference image block.
In a possible embodiment, the apparatus may further include:
a correction module (not shown) connected to the alignment module for:
judging whether the image blocks of each frame of image are aligned with the reference image blocks of the preset reference image in error or not according to each image block of each frame of image;
in the case of error alignment, updating the displacement of the image block with displacements of a plurality of neighborhood image blocks of the image block.
In a possible implementation manner, the determining whether the image block of the frame image is in error alignment with the reference image block of the preset reference image includes:
judging that the image block is aligned with the corresponding reference image block in error under the conditions that the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency area; or
And under the conditions that the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency area, judging that the image block is in error alignment with the corresponding reference image block.
In a possible embodiment, after obtaining the fused image, the apparatus further comprises:
a filtering module (not shown) connected to the fusion module, configured to perform guided filtering on the fusion image according to a preset reference image, so as to obtain a guided-filtered fusion image;
wherein the super-resolution reconstruction by using the fusion image comprises:
and performing super-resolution reconstruction by using the fusion image after the guiding filtering.
In a possible embodiment, the performing super-resolution reconstruction by using the fusion image includes:
determining neighborhood pixels of a pixel to be reconstructed;
and reconstructing the pixel to be reconstructed by utilizing the neighborhood pixel of the pixel to be reconstructed.
In a possible implementation manner, the reconstructing the pixel to be reconstructed by using the neighborhood pixels of the pixel to be reconstructed includes:
reconstructing the pixel to be reconstructed by using the following formula:
I(i, j) = Σ_{n=1}^{Num} w_n · I_n(i, j)
wherein I(i, j) represents the pixel to be reconstructed, i, j represent the coordinates of the pixel to be reconstructed, I_n(i, j) represents the n-th neighborhood pixel of the pixel to be reconstructed, w_n represents the reconstruction weight, n represents the index, Num represents the number of neighborhood pixels of the pixel to be reconstructed,
wherein the reconstruction weights (w_1, …, w_Num) = e^T (H^T G H)^{-1} H^T G, e^T represents a row vector with each element being 1, G represents a preset Gaussian filter kernel, and H represents the second-order Taylor expansion obtained by fitting the pixel value of the pixel to be reconstructed with respect to its neighborhood pixels.
According to the image processing device provided by the embodiment of the disclosure, the multi-frame images continuously shot by the target shooting object can be subjected to global alignment (namely, integral alignment) and local alignment (namely, image block alignment), image fusion is performed according to the aligned multi-frame images to obtain point cloud data of the fused image, the fused image is subjected to guide filtering to achieve the purpose of removing ghost, super-resolution reconstruction is performed, and a reconstructed image with higher resolution and higher definition can be obtained.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image processing method, characterized in that the method comprises:
aiming at a multi-frame image which is subjected to integral alignment based on a preset reference image, obtaining an image block displacement set of a plurality of image blocks of each frame image based on each reference image block of the preset reference image, wherein each image block corresponds to each reference image block one by one; the method comprises the steps of up-sampling an image block displacement set of each frame of image to obtain an up-sampled image block displacement set; carrying out image block alignment on each image block by using each image block displacement in the up-sampled image block displacement set to obtain a plurality of aligned images;
fusing the aligned multi-frame images to obtain a fused image;
performing super-resolution reconstruction by using the fusion image to obtain a reconstructed image;
the super-resolution reconstruction by using the fusion image comprises the following steps:
determining neighborhood pixels of a pixel to be reconstructed;
reconstructing the pixel to be reconstructed by using the neighborhood pixel of the pixel to be reconstructed;
the reconstructing the pixel to be reconstructed by using the neighborhood pixel of the pixel to be reconstructed includes:
reconstructing the pixel to be reconstructed by using the following formula:
I(i, j) = Σ_{n=1}^{Num} w_n · I_n(i, j)
wherein I(i, j) represents the pixel to be reconstructed, i, j represent the coordinates of the pixel to be reconstructed, I_n(i, j) represents the n-th neighborhood pixel of the pixel to be reconstructed, w_n represents the reconstruction weight, n represents the index, Num represents the number of neighborhood pixels of the pixel to be reconstructed,
wherein the reconstruction weights (w_1, …, w_Num) = e^T (H^T G H)^{-1} H^T G, e^T represents a row vector with each element being 1, G represents a preset Gaussian filter kernel, and H represents the second-order Taylor expansion obtained by fitting the pixel value of the pixel to be reconstructed with respect to its neighborhood pixels.
2. The method according to claim 1, wherein obtaining a displaced set of image blocks of a plurality of image blocks of each frame image on the basis of each reference image block of a predetermined reference image comprises:
and aiming at each image block of each frame image, obtaining the image block displacement of the image block of the frame image relative to the corresponding reference image block.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
judging whether the image blocks of each frame of image are aligned with the reference image blocks of the preset reference image in error or not according to each image block of each frame of image;
in the case of error alignment, updating the displacement of the image block with displacements of a plurality of neighborhood image blocks of the image block.
4. The method of claim 3, wherein the determining whether the image block of the frame image is in error alignment with the reference image block of the default reference image comprises:
judging that the image block is aligned with the corresponding reference image block in error under the conditions that the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency area; or
And under the conditions that the difference between the image block displacement of the image block and the image block displacement mean value of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency area, judging that the image block is in error alignment with the corresponding reference image block.
5. The method of claim 1, wherein after obtaining the fused image, the method further comprises:
performing guided filtering on the fused image according to the preset reference image to obtain a guided-filtered fused image;
wherein performing super-resolution reconstruction by using the fused image comprises:
performing super-resolution reconstruction by using the guided-filtered fused image.
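Claim 5's guided filtering, with the reference image as the guide and the fused image as the input, can be sketched with the standard single-channel guided filter. The patent does not give the filter internals, so the window radius, regularization `eps`, and box-filter implementation below are assumptions:

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window, edge-padded, via integral image."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/col for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Single-channel guided filter: locally fits src as a*guide + b."""
    m_g, m_s = box(guide, r), box(src, r)
    var = box(guide * guide, r) - m_g * m_g
    a = (box(guide * src, r) - m_g * m_s) / (var + eps)
    b = m_s - a * m_g
    return box(a, r) * guide + box(b, r)
```

In flat regions the local variance of the guide is near zero, so `a` goes to 0 and the filter reduces to local averaging; near edges of the guide, `a` approaches 1 and the edges are preserved in the output.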
6. An image processing apparatus, characterized in that the apparatus comprises:
an alignment module, configured to: for a plurality of frame images that have been globally aligned on the basis of a preset reference image, obtain an image block displacement set for a plurality of image blocks of each frame image on the basis of each reference image block of the preset reference image, wherein the image blocks correspond one-to-one to the reference image blocks; up-sample the image block displacement set of each frame image to obtain an up-sampled image block displacement set; and perform image block alignment on each image block by using each image block displacement in the up-sampled image block displacement set to obtain a plurality of aligned frame images;
a fusion module, connected to the alignment module and configured to fuse the aligned plurality of frame images to obtain a fused image; and
a reconstruction module, connected to the fusion module and configured to perform super-resolution reconstruction by using the fused image to obtain a reconstructed image;
wherein performing super-resolution reconstruction by using the fused image comprises:
determining neighborhood pixels of a pixel to be reconstructed; and
reconstructing the pixel to be reconstructed by using the neighborhood pixels of the pixel to be reconstructed,
wherein the reconstructing the pixel to be reconstructed by using the neighborhood pixels of the pixel to be reconstructed comprises:
reconstructing the pixel to be reconstructed by using the following formula:
I(i, j) = Σ_{n=1..Num} w_n · I_n(i, j)
wherein I (I, j) represents the pixel to be reconstructed, I, j represent the coordinates of the pixel to be reconstructed, In(i, j) neighborhood pixels, w, representing the pixel to be reconstructednRepresenting the reconstruction weight, n represents the index, Num represents the number of neighborhood pixels of the pixel to be reconstructed,
wherein w_n = e^T (H^T G H)^{-1} H^T G, e^T represents a row vector in which each element is 1, G represents a preset Gaussian filter kernel, and H represents the second-order Taylor expansion obtained by linearly fitting the pixel value of the pixel to be reconstructed with respect to its neighborhood pixels.
7. The apparatus of claim 6, wherein obtaining an image block displacement set for the plurality of image blocks of each frame image on the basis of each reference image block of the preset reference image comprises:
for each image block of each frame image, obtaining the image block displacement of the image block of the frame image relative to its corresponding reference image block.
8. The apparatus of claim 6 or 7, further comprising:
a correction module, connected to the alignment module and configured to:
for each image block of each frame image, judge whether the image block of the frame image is erroneously aligned with the corresponding reference image block of the preset reference image; and
in the case of erroneous alignment, update the image block displacement of the image block by using the image block displacements of a plurality of neighborhood image blocks of the image block.
9. The apparatus of claim 8, wherein the judging whether the image block of the frame image is erroneously aligned with the reference image block of the preset reference image comprises:
judging that the image block is erroneously aligned with the corresponding reference image block when the difference between the image block displacement of the image block and the mean of the image block displacements of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a high-frequency region; or
judging that the image block is erroneously aligned with the corresponding reference image block when the difference between the image block displacement of the image block and the mean of the image block displacements of the plurality of neighborhood image blocks reaches a preset value, the image block is not similar to the corresponding reference image block, and the reference image block is a low-frequency region.
10. The apparatus of claim 9, further comprising:
a filtering module, connected to the fusion module and configured to perform guided filtering on the fused image according to the preset reference image to obtain a guided-filtered fused image;
wherein performing super-resolution reconstruction by using the fused image comprises:
performing super-resolution reconstruction by using the guided-filtered fused image.
CN201911169435.8A 2019-11-26 2019-11-26 Image processing method and device Active CN110650295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911169435.8A CN110650295B (en) 2019-11-26 2019-11-26 Image processing method and device


Publications (2)

Publication Number Publication Date
CN110650295A CN110650295A (en) 2020-01-03
CN110650295B true CN110650295B (en) 2020-03-06

Family

ID=68995983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911169435.8A Active CN110650295B (en) 2019-11-26 2019-11-26 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110650295B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256501B (en) * 2020-02-10 2022-08-23 武汉Tcl集团工业研究院有限公司 Image processing method, storage medium and terminal equipment
CN111447359B (en) * 2020-03-19 2021-07-02 展讯通信(上海)有限公司 Digital zoom method, system, electronic device, medium, and digital imaging device
CN113518243A (en) * 2020-04-10 2021-10-19 Tcl科技集团股份有限公司 Image processing method and device
CN113554659B (en) * 2020-04-23 2023-06-02 杭州海康威视数字技术股份有限公司 Image processing method, device, electronic equipment, storage medium and display system
CN111784578A (en) * 2020-06-28 2020-10-16 Oppo广东移动通信有限公司 Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium
CN117616455A (en) * 2022-06-20 2024-02-27 北京小米移动软件有限公司 Multi-frame image alignment method, multi-frame image alignment device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477684A (en) * 2008-12-11 2009-07-08 西安交通大学 Process for reconstructing human face image super-resolution by position image block
CN104834931A (en) * 2015-03-13 2015-08-12 江南大学 Improved SIFT algorithm based on wavelet transformation
CN106204440A (en) * 2016-06-29 2016-12-07 北京互信互通信息技术有限公司 A kind of multiframe super resolution image reconstruction method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI419059B (en) * 2010-06-14 2013-12-11 Ind Tech Res Inst Method and system for example-based face hallucination


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Image Super-Resolution Technology Based on Spatial-Domain Regularization Methods; Huang Shuying; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-01-15; Chapter 2 *
Research on Super-Resolution Reconstruction Technology for Image Sequences; Xu Zhigang; China Doctoral Dissertations Full-text Database; 2013-05-15; Chapter 2 *

Also Published As

Publication number Publication date
CN110650295A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN110650295B (en) Image processing method and device
CN110622497B (en) Device with cameras having different focal lengths and method of implementing a camera
CN107959805B (en) Light field video imaging system and method for processing video frequency based on Hybrid camera array
JP4653235B2 (en) Composition of panoramic images using frame selection
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
JP6147172B2 (en) Imaging apparatus, image processing apparatus, image processing method, and program
KR102481882B1 (en) Method and apparaturs for processing image
US9959600B2 (en) Motion image compensation method and device, display device
CN103839227B (en) Fisheye image correcting method and device
CN112368710B (en) Method for combining contents from multiple frames and electronic device thereof
CN113301274B (en) Ship real-time video panoramic stitching method and system
WO2016164166A1 (en) Automated generation of panning shots
JP2011139367A (en) Apparatus and method for processing image
CN107809610B (en) Camera parameter set calculation device, camera parameter set calculation method, and recording medium
WO2006079963A2 (en) Device for registering images
EP2761875A1 (en) Methods and apparatus for conditional display of a stereoscopic image pair
JP2011060282A (en) Method and system for motion detection using nonlinear smoothing of motion field
JP2014229971A (en) Rolling shutter distortion correction and video image stabilization processing method
EP2446612A1 (en) Real time video stabilization
US8737758B2 (en) Apparatus and method of reducing noise
CN111292278A (en) Image fusion method and device, storage medium and terminal
CN114429191B (en) Electronic anti-shake method, system and storage medium based on deep learning
JP2010506482A (en) Method and filter for parallax recovery of video stream
JP6838918B2 (en) Image data processing device and method
CN112203023B (en) Billion pixel video generation method and device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant