CN114820299A - Non-uniform motion blur super-resolution image restoration method and device - Google Patents

Non-uniform motion blur super-resolution image restoration method and device

Info

Publication number
CN114820299A
Authority
CN
China
Prior art keywords
image
resolution
blurred
images
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210280228.5A
Other languages
Chinese (zh)
Inventor
李子涵
崔光茫
赵巨峰
陈颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202210280228.5A
Publication of CN114820299A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a non-uniform motion blur super-resolution image restoration method comprising the following steps: S1, constructing a data set; S2, inputting the preprocessed data set into a generator to obtain a preliminary restored image; S3, distinguishing the preliminary restored image from the real image through a discriminator to obtain a discrimination result, the discriminator being a Markov discrimination network; S4, optimizing the generative adversarial network, which comprises the generator and the discriminator, with a loss function to obtain the best-performing network and the optimal restored image; and S5, outputting the restoration result. The method addresses the problems of an irreversible restoration process and an unclear restoration result that arise when image information is lost because the image information comes from a single source.

Description

Non-uniform motion blur super-resolution image restoration method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for restoring a non-uniform motion blur super-resolution image.
Background
With today's highly developed camera technology, images have become a major information carrier alongside speech, text, and the like, and an image's resolution is critical to how much information can be acquired from it. When a target object is imaged while in motion, its movement relative to the lens during the camera's exposure time easily produces non-uniform motion blur in the captured scene image, lowering the image's resolution and greatly inconveniencing subsequent image processing. Super-resolution technology restores a low-resolution image to a high-resolution one, and the pleasing high-resolution images it generates can markedly improve the performance of other machine-vision tasks. In recent years, super-resolution technology has accordingly received wide attention.
The deblurring problem based on a single image is highly underdetermined, particularly for a non-uniform motion blurred image: during blurring, the sharp image is effectively convolved with different convolution kernels at different positions, which makes the problem far more difficult than deblurring uniform motion blur. Moreover, a non-uniform motion blurred image loses information at its high-frequency zeros during imaging, and the degree of loss differs across the image, so deblurring it requires segmenting different regions and deblurring each against its own convolution kernel. Non-uniform motion deblurring is therefore a complicated, ill-posed problem; a toy sketch of such spatially varying blur follows.
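For illustration only (this sketch is not taken from the patent), the following NumPy snippet shows the point above: under non-uniform motion blur, different regions of the sharp image are convolved with different motion kernels, whereas uniform blur uses one kernel everywhere. The half-and-half split and the kernel lengths and angles are arbitrary assumptions for demonstration.

import numpy as np
from scipy.ndimage import convolve

def motion_kernel(length, angle_deg, size=15):
    """Simple linear motion-blur kernel of the given length and angle."""
    k = np.zeros((size, size))
    c = size // 2
    a = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, num=4 * size):
        x = int(round(c + t * np.cos(a)))
        y = int(round(c + t * np.sin(a)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()

def nonuniform_blur(img, split=0.5):
    """Blur the left and right halves of a grayscale image with different
    kernels -- a crude stand-in for spatially varying motion blur."""
    out = img.astype(float).copy()
    cut = int(img.shape[1] * split)
    out[:, :cut] = convolve(out[:, :cut], motion_kernel(7, 0))   # horizontal streak
    out[:, cut:] = convolve(out[:, cut:], motion_kernel(7, 90))  # vertical streak
    return out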
For example, Chinese patent publication CN111275637A, "a non-uniform motion blurred image adaptive restoration method based on attention model", designs a conditional generative adversarial network combined with an attention mechanism. Its generator is an encoder-decoder structure: densely connected networks extract features in the encoding stage, raising feature utilization and strengthening feature propagation, and a visual attention mechanism lets the network adaptively adjust its parameters for different input images and remove blur dynamically. By adding attention and assigning corresponding weights to regions with different degrees of blur, the method improves the deblurring effect over conventional restoration methods. However, its input is a single image, so the image information has a single acquisition source: information is easily lost during deblurring, causing irreversible effects, and the sharpness of the restored image still needs improvement.
Disclosure of Invention
The invention provides a method and a device for restoring a non-uniform motion blur super-resolution image. They address the irreversible restoration process and unclear restoration results caused by the loss of image information when that information comes from a single source, and they overcome the limited information content and unsatisfactory restoration effect encountered when restoring non-uniform motion blurred images.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A non-uniform motion blur super-resolution image restoration method comprises the following steps:
S1, constructing a data set:
S1-1, calculating the rotation angle that yields images with maximum complementarity;
S1-2, acquiring highly complementary high-resolution blurred images and a high-resolution sharp image;
S1-3, preprocessing the images to construct the data set;
S2, inputting the preprocessed data set into a generator to obtain a preliminary restored image;
S3, distinguishing the preliminary restored image from the real image through a discriminator to obtain a discrimination result, wherein the discriminator is a Markov discrimination network;
S4, optimizing the generative adversarial network, which comprises the generator and the discriminator, with a loss function to obtain the best-performing network and the optimal restored image;
and S5, outputting the restoration result.
Preferably, step S1-1 comprises the following sub-steps:
S1-1-1, setting up a non-uniform motion blur image acquisition device in which a rotating prism is placed directly between the target object and the camera, the target object, rotating prism, and camera lying on the same optical axis;
S1-1-2, passing the target object's light beam through the rotating prism into the camera lens, and capturing one image at each rotation step of the prism to obtain 360 original images;
S1-1-3, preprocessing the 360 original images and computing the image combination with maximum complementarity and its rotation angle.
Preferably, step S1-1-3 comprises:
1) performing Gaussian filtering on the 360 original images using MATLAB;
2) obtaining the high-frequency component h_i of each blurred image by subtracting the Gaussian-filtered image from the blurred image:
h_i = y_i - G_0 * y_i
where y_i is a blurred image and G_0 is a two-dimensional Gaussian filtering convolution operator;
3) obtaining the gradient images of the high-frequency component h_i in the horizontal and vertical directions:
hd_ix = h_i * d_x,  hd_iy = h_i * d_y,  i = 1, 2
where d_x and d_y are the horizontal and vertical derivative operators;
4) obtaining the global gradient image hd_i of each blurred image:
hd_i = sqrt(hd_ix^2 + hd_iy^2)
5) binarizing the global gradient image hd_i to obtain the effective complementarity feature T_i(x, y) of the blurred image:
T_i(x, y) = 1 if hd_i(x, y) >= k_i, and 0 otherwise
where k_i is the binarization threshold;
for the binarization, the Otsu algorithm is first used to compute the optimal threshold of the original image: the Otsu algorithm assumes the image consists of a foreground region and a background region, traverses the range [0, 255], compares the gray-level histograms of the foreground and background regions in each resulting segmentation, and takes the gray-level threshold that maximizes the between-class variance as the binarization threshold k_i;
the gradient image of the blurred image is normalized and the threshold adjusted according to the optimal threshold of the original image, the threshold generally being chosen in the range k_i ∈ [0.2, 0.5];
6) combining the binary images of the 360 acquired blurred images in pairs, for a total of 360^2 combinations, and computing the complementarity of each combination:
C(A, B): [complementarity formula shown as an equation image in the original; it is computed from the binary images T(A) and T(B) over the M × N pixel grid]
where T(A) and T(B) are respectively the binary images of the two blurred images and M and N are the length and width of the image; MATLAB is finally used to obtain the image combination with maximum complementarity and its rotation angle.
Preferably, in step S1-2, the highly complementary high-resolution blurred images and the high-resolution sharp image are obtained as follows:
S1-2-1, monitoring the motion state of the target object;
S1-2-2, when motion of the target object is detected, having the camera capture several frames of sharp pictures, then controlling the rotating prism to rotate to the rotation angle of maximum complementarity, after which the camera captures the non-uniform motion blurred images;
S1-2-3, finally obtaining the complementary high-resolution blurred images and multiple frames of high-resolution sharp images by shooting, and retaining all the complementary blurred images and the last frame's high-resolution sharp image.
Preferably, in step S1-3, the acquired highly complementary high-resolution blurred image pair is downsampled by a factor of two to obtain a highly complementary low-resolution blurred image pair, and the high-resolution sharp images and low-resolution blurred image pairs are assembled into a data set.
Preferably, the data set comprises 1000 acquired high-resolution sharp images and 1000 pairs of corresponding low-resolution blurred images; the sharp images have a resolution of 1280 × 1024 and the low-resolution blurred images 640 × 512.
Preferably, the generator is obtained by jointly optimizing a perceptual loss function, an adversarial loss function, an edge loss function, and an MSE loss function, the jointly optimized loss function being:
L = L_adv + L_p + L_edge + L_MSE
Preferably, the optimization method in step S4 is as follows:
the difference in edge features between the preliminary restored image and the sharp image is measured by the edge loss function:
L_edge = (1 / (W·H)) Σ_x,y ( |∇_h S - ∇_h G(b)| + |∇_v S - ∇_v G(b)| )
wherein S and G(b) are respectively the real sharp image and the preliminary restored image, W and H are the length and width of the image, and ∇_h and ∇_v are gradient operations along the horizontal and vertical directions respectively;
optimization is also performed by the MSE loss function:
L_MSE = (1 / N) Σ (L - S)^2
wherein L and S are respectively the secondary restored image generated by the multi-scale feature extraction module and the real sharp image, and N is the number of elements of S and L;
the perceptual loss function measures the overall difference between the features of the generated secondary restored image and those of the corresponding real sharp image, and the adversarial loss function makes the generated high-quality image hard to distinguish from the real sharp image;
the perceptual loss function is formulated as:
L_p = (1 / (W_i,j · H_i,j)) Σ_x,y ( φ_i,j(I_S) - φ_i,j(G(I_B)) )^2
the adversarial loss function is formulated as:
L_adv = Σ_{n=1}^{N} -D(G(I_B))
wherein I_B denotes the input blurred image (since the network input is two blurred images with high complementarity, the two images are fused as I_B), I_S is the real sharp image, G represents the generator, D the discriminator, N the number of training images in a batch, and φ_i,j a feature map of the VGG19 network.
The invention also discloses a non-uniform motion blur super-resolution image restoration device comprising an image acquisition mechanism, a memory, a processor, and a computer program stored in the memory and executable on the processor to perform the non-uniform motion blur super-resolution image restoration method.
Preferably, the image acquisition mechanism comprises a beam splitter, a motion sensing sensor, rotating prisms, a high-speed camera, and a first controller and a second controller. The high-speed camera comprises a lens and a shutter; the rotating prisms are respectively arranged in the transmission and reflection light paths of the beam splitter and are coaxially arranged; the first controller and the second controller are STM32 single-chip microcomputers and respectively receive the signals transmitted by the sensor; the first controller is connected to the shutter of the high-speed camera and controls the shutter action; the second controller is connected to the two rotating prisms and controls their rotation; and the first controller and the second controller work cooperatively.
The invention has the following characteristics and beneficial effects:
1. The invention addresses the problem of non-uniform motion deblurring and constructs a new data set consisting of complementary image pairs and sharp images. To acquire the complementary images and sharp images effectively, a new image acquisition device is built. On the basis of the rotation angle that yields the maximally complementary images, computed with the image complementarity formula, the motion sensing sensor and the controllers precisely and efficiently coordinate signal transmission and prism rotation, so that a sharp image and a complementary blurred image pair of the target scene can be obtained; after processing, the new data set is finally constructed.
2. Deformable convolution is added to the network in place of ordinary convolution. The regular sampling lattice of ordinary convolution makes it difficult for the network to adapt to geometric deformation, whereas the convolution kernel in deformable convolution changes adaptively with the sampling position, letting the network adapt to geometric deformation in the image; this overcomes the insufficiently accurate feature extraction caused by the fixed kernels of traditional convolution and avoids information redundancy. In addition, deformable convolution is used in the upsampling layer, so the network can reconstruct the restored image more accurately.
3. In the image restoration method, the attention residual block (RAM) combines channel attention and spatial attention in sequential connection. The feature maps input to channel attention are divided into two groups and a skip connection is used: one group passes through channel attention to acquire weights, while the other group is carried over by the skip connection and fused with the weighted group before entering spatial attention, so that the spatial attention module receives more information, the loss of spatial information is alleviated, and the computation of the network is reduced. Unlike previous spatial attention designs, GN is adopted in place of BN and deformable convolution replaces standard convolution, generating more accurate attention maps while avoiding the batch-size limitation.
4. The image restoration method realizes a multi-scale residual block composed mainly of dilated convolutions connected in sequence with dilation rates set to 1, 2, and 3, mining deep image information while avoiding information omission during convolution and reducing the computation of the network. Skip connections are added and a fusion module is introduced, fusing information of different levels to obtain multi-scale information and accelerate network convergence.
5. Unlike previous generative adversarial networks, which take only one blurred image as network input, so that the image information source is single, image information is lost for various reasons during restoration, and the restoration effect is unsatisfactory, the image pair with maximum complementarity is used as the network input. This enriches the sources of image information, retains more scene information, improves the sharpness of the restored image, and reduces the irreversibility of the restoration process. For the problem of fusing complementary image information, an adaptive residual block is proposed that combines image information from the edge feature extraction module, the multi-scale feature extraction module, and the preceding sub-pixel convolution through deformable convolution and a spatial attention mechanism, making full use of the information complementarity and effectively reconstructing a high-resolution sharp image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a method of an embodiment of the present invention.
FIG. 2 is a schematic diagram of an image acquisition unit in an embodiment of the present invention.
FIG. 3 is a schematic block diagram of an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a generator in an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a discriminator in an embodiment of the present invention.
FIG. 6 is a block diagram of an edge feature extraction module and its sub-modules according to an embodiment of the present invention.
FIG. 7 is a diagram of a multi-scale feature extraction module and its sub-modules in an embodiment of the present invention.
FIG. 8 is a block diagram of multi-stream feature fusion and super-resolution reconstruction in an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The invention provides a non-uniform motion blur super-resolution image restoration method, as shown in figure 1, comprising the following steps:
S1, constructing a data set:
S1-1, calculating the rotation angle that yields images with maximum complementarity;
S1-2, acquiring highly complementary high-resolution blurred images and a high-resolution sharp image;
S1-3, preprocessing the images to construct the data set;
S2, inputting the preprocessed data set into a generator to obtain a preliminary restored image.
Specifically, as shown in figs. 3 and 4, the generator includes an edge feature extraction module, a multi-scale feature extraction module, and a multi-stream feature fusion and super-resolution reconstruction module; the multi-scale feature extraction module is connected in parallel with the edge feature extraction module, and both connect to the multi-stream feature fusion and super-resolution reconstruction module. The generator extracts image features in two branches, fuses the feature information from the different sources, performs super-resolution on the blurred images, and finally completes the image deblurring task to obtain a pleasing sharp image.
Further, as shown in figs. 6-8, for image preprocessing the network randomly crops the blurred images, of resolution 640 × 512, to 256 × 256 pixels. The low-resolution blurred image pair (blurred image 1 and blurred image 2) is then input respectively to the edge feature extraction module and the multi-scale feature extraction module of the generator; a sketch of the paired crop follows.
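A minimal sketch of this preprocessing step, under the assumption (implied but not stated in the text) that the two complementary blurred images must be cropped at the same location so they stay aligned:

import numpy as np

def paired_random_crop(img1, img2, size=256, rng=None):
    """Crop the same random size x size window out of both blurred images."""
    rng = rng or np.random.default_rng()
    h, w = img1.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return (img1[top:top + size, left:left + size],
            img2[top:top + size, left:left + size])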
The edge feature extraction module adopts an asymmetric U-net network structure; its specific structure is shown in fig. 6(a). Blurred image 1 is input, and features are extracted by a 7 × 7 convolutional layer with stride 1 and padding 3 (zero-padded), yielding 64 feature maps, which are then GN-regularized and activated.
The invention provides an attention residual block (RAM), whose specific structure is shown in fig. 6(b). The module consists of channel attention, spatial attention, and deformable convolution; it can focus on severely blurred regions of the image, generate a corresponding weight map, and guide the network to extract the edge information of the blurred image. The RAM combines channel attention and spatial attention in sequential connection. Because image motion blur is the result of convolving spatial pixels with a convolution kernel, the channel attention mechanism is applied globally from a spatial perspective and focuses on which feature maps are more important, while the spatial attention mechanism focuses on which parts of the feature maps, i.e., where, are more important; placing spatial attention after channel attention is therefore more conducive to focusing attention on severely blurred regions. In addition, unlike the CBAM block, which acquires image spatial information by reducing the number of channels and then applying convolution, at the cost of spatial information lost to global pooling, a skip connection is added between channel attention and spatial attention: the feature maps input to channel attention are divided into two groups, one group passes through channel attention to acquire weights, and the other group is carried over by the skip connection and fused with the weighted group before entering spatial attention. The spatial attention module thus receives more information, the loss of spatial information is alleviated, and the computation of the network is reduced. Also unlike previous spatial attention designs, GN is adopted in place of BN and deformable convolution replaces standard convolution, generating more accurate attention maps while avoiding the batch-size limitation. Three RAMs are connected in sequence, forming a residual-within-residual structure that helps extract clean image information, as shown in fig. 6(c). The feature map is input to the residual block and downsampled after the RAM, which helps reduce the loss of image spatial information. A residual block with an added skip connection is placed at the downsampling layer, as shown in fig. 6(d). The upsampling layer adds pixel shuffling to avoid loss of image information and redundant information. In addition, skip connections are added between downsampling and upsampling layers of the same scale to accelerate network convergence. Feature maps of different scales and a preliminary restored image are obtained; a schematic sketch of the RAM data flow follows.
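The following NumPy sketch illustrates only the split/skip/fuse topology of the RAM described above. It is a simplification under stated assumptions: plain sigmoid gates stand in for the learned attention convolutions, and group normalization and the deformable convolutions are omitted.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Gate each channel by a weight from its global average (squeeze-excite style)."""
    w = sigmoid(x.mean(axis=(1, 2)))        # one weight per channel, shape (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    """Gate every channel by a single (H, W) map pooled across channels."""
    m = sigmoid(x.mean(axis=0, keepdims=True))
    return x * m

def ram_block(x):
    """x: feature maps of shape (C, H, W). One group takes channel attention;
    the other group is carried over by a skip connection; both are fused and
    passed through spatial attention, with an outer residual connection."""
    c = x.shape[0] // 2
    weighted, skipped = channel_attention(x[:c]), x[c:]
    fused = np.concatenate([weighted, skipped], axis=0)
    return x + spatial_attention(fused)

The design point is visible in ram_block: the half that bypasses channel attention reaches spatial attention unreduced, which is how the text argues the loss of spatial information is alleviated.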
Blurred image 2 is input to the multi-scale feature extraction module, which comprises multi-scale residual blocks and deformable convolution; the network structure is shown in fig. 7(a). The module adopts a symmetric U-net structure with mirrored skip connections between the downsampling and upsampling layers: the downsampling path consists of three multi-scale residual blocks and a convolution layer with stride 2, and the upsampling path consists of pixel shuffling and deformable convolution. Each multi-scale residual block is composed of three 3 × 3 dilated convolutions and a 1 × 1 output layer for feature fusion, with dilation rates of 1, 2, and 3 respectively; setting the dilation rates of the consecutively arranged dilated convolutions in this sawtooth pattern avoids the information gaps that dilation would otherwise produce during convolution. Unlike the conventional parallel arrangement, the dilated convolutions are connected in series, mining deep image information while reducing the computation of the network. To better use information of different levels, skip connections are added and a fusion module is introduced to fuse the features extracted at different receptive fields into multi-scale information and accelerate network convergence; the multi-scale residual block structure is shown in fig. 7(b). Deformable convolution and pixel shuffling are added to the decoder network, unlike the upsampling adopted by a traditional decoder: the deformable convolution, whose structure is shown in fig. 7(c), adapts its kernel to the sampling position so the network accommodates geometric deformation, and pixel shuffling avoids image information loss and redundant information during upsampling. Feature maps of different scales and a preliminary restored image are obtained; a sketch of the dilated convolutions follows.
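A minimal sketch of the dilated (atrous) convolution idea behind the block above: a dilated kernel is an ordinary kernel with zeros inserted between its taps, enlarging the receptive field without extra parameters. The serial rates 1, 2, 3 follow the text; the mean at the end is only a stand-in for the block's learned 1 × 1 fusion layer.

import numpy as np
from scipy.ndimage import convolve

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between the taps of a 2-D kernel."""
    if rate == 1:
        return k
    kh, kw = k.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1))
    out[::rate, ::rate] = k
    return out

def multiscale_features(x, k):
    """Serially apply dilation rates 1, 2, 3 to a 2-D feature map and fuse."""
    feats = []
    y = x
    for rate in (1, 2, 3):
        y = convolve(y, dilate_kernel(k, rate))
        feats.append(y)
    return np.mean(feats, axis=0)  # stand-in for the 1x1 fusion layer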
Finally, the feature maps obtained from the edge feature extraction module and the multi-scale feature extraction module are input to the multi-stream feature fusion and super-resolution reconstruction module, which realizes feature fusion and super-resolution reconstruction. The preliminary restored images and the per-scale feature maps from the two modules have sizes 256 × 256, 128 × 128, and 64 × 64 respectively. The multi-stream feature fusion and super-resolution reconstruction module comprises four adaptive residual blocks, three sub-pixel convolution layers, one convolution layer with LReLU, and one convolution layer with tanh; its structure is shown in fig. 8(a). The proposed adaptive residual block consists of deformable convolution and a spatial attention mechanism; it better exploits the spatial information of the image and adaptively fuses features from the different streams. A sub-pixel convolution layer, i.e., pixel shuffling, then performs image super-resolution while avoiding checkerboard artifacts; the adaptive residual block structure is shown in fig. 8(b). A minimal sketch of the pixel-shuffle step follows.
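Pixel shuffling, the core of the sub-pixel convolution layer, rearranges (C·r^2, H, W) feature maps into (C, H·r, W·r). The NumPy version below follows the usual definition, which is assumed to be the one intended here:

import numpy as np

def pixel_shuffle(x, r):
    """x: feature maps of shape (C * r * r, H, W); returns (C, H * r, W * r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)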
S3, distinguishing the preliminary restored image from the real image through the discriminator, which is a Markov discrimination network, to obtain a discrimination result.
Specifically, as shown in fig. 5, the discriminator adopts a Markov discrimination network whose inputs are a real sharp image and the generator's restored image. The discriminator structure comprises 5 convolution layers that extract image features; except for the last layer, each layer includes a 4 × 4 convolution with stride 2, an instance normalization layer, and a Leaky ReLU layer.
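A quick sanity check of the discriminator's feature-map sizes, assuming padding 1 and stride 2 for all five 4 × 4 convolutions (the text says the last layer differs, so the final line is only indicative):

def conv_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of a strided convolution."""
    return (size + 2 * pad - kernel) // stride + 1

size = 256
for layer in range(1, 6):
    size = conv_out(size)
    print(f"layer {layer}: {size} x {size}")
# prints 128, 64, 32, 16, 8 for a 256 x 256 input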
S4, optimizing the generative adversarial network, which comprises the generator and the discriminator, with a loss function to obtain the best-performing network and the optimal restored image;
and S5, outputting the restoration result.
Specifically, step S1-1 comprises the following sub-steps:
S1-1-1, setting up a non-uniform motion blur image acquisition device in which a rotating prism is placed directly between the target object and the camera, the target object, rotating prism, and camera lying on the same optical axis;
S1-1-2, passing the target object's light beam through the rotating prism into the camera lens, and capturing one image at each rotation step of the prism to obtain 360 original images;
S1-1-3, preprocessing the 360 original images and computing the image combination with maximum complementarity and its rotation angle.
Further, step S1-1-3 comprises:
1) performing Gaussian filtering on the 360 original images using MATLAB;
2) obtaining the high-frequency component h_i of each blurred image by subtracting the Gaussian-filtered image from the blurred image:
h_i = y_i - G_0 * y_i
where y_i is a blurred image and G_0 is a two-dimensional Gaussian filtering convolution operator;
3) obtaining the gradient images of the high-frequency component h_i in the horizontal and vertical directions:
hd_ix = h_i * d_x,  hd_iy = h_i * d_y,  i = 1, 2
where d_x and d_y are the horizontal and vertical derivative operators;
4) obtaining the global gradient image hd_i of each blurred image:
hd_i = sqrt(hd_ix^2 + hd_iy^2)
5) binarizing the global gradient image hd_i to obtain the effective complementarity feature T_i(x, y) of the blurred image:
T_i(x, y) = 1 if hd_i(x, y) >= k_i, and 0 otherwise
where k_i is the binarization threshold;
for the binarization, the Otsu algorithm (maximum inter-class variance method) is first used to compute the optimal threshold of the original image: the Otsu algorithm assumes the image consists of a foreground region and a background region, traverses the range [0, 255], compares the gray-level histograms of the foreground and background regions in each resulting segmentation, and takes the gray-level threshold that maximizes the between-class variance as the binarization threshold k_i;
the gradient image of the blurred image is normalized and the threshold adjusted according to the optimal threshold of the original image, the threshold generally being chosen in the range k_i ∈ [0.2, 0.5];
6) combining the binary images of the 360 acquired blurred images in pairs, for a total of 360^2 combinations, and computing the complementarity of each combination:
C(A, B): [complementarity formula shown as an equation image in the original; it is computed from the binary images T(A) and T(B) over the M × N pixel grid]
where T(A) and T(B) are respectively the binary images of the two blurred images and M and N are the length and width of the image; MATLAB is finally used to obtain the image combination with maximum complementarity and its rotation angle. A sketch of this pipeline follows.
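A minimal NumPy/scikit-image sketch of steps 1) to 6), with its assumptions called out: the Gaussian sigma, the use of Sobel derivatives for d_x and d_y, and the union-coverage complementarity measure are illustrative choices, since the patent shows its exact formulas only as equation images.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from skimage.filters import threshold_otsu

def binary_feature(y, sigma=2.0):
    """Binary map T_i of where a blurred image retains high-frequency detail."""
    h = y - gaussian_filter(y, sigma)                  # high-frequency component h_i
    hd = np.hypot(sobel(h, axis=1), sobel(h, axis=0))  # global gradient image hd_i
    hd = hd / (hd.max() + 1e-12)                       # normalize
    k = np.clip(threshold_otsu(hd), 0.2, 0.5)          # Otsu threshold, clipped to [0.2, 0.5]
    return hd >= k                                     # effective feature T_i(x, y)

def complementarity(ta, tb):
    """Assumed measure: fraction of the M x N pixels where at least one of the
    two binary maps retains high-frequency detail (union coverage)."""
    m, n = ta.shape
    return np.logical_or(ta, tb).sum() / (m * n)

Over the 360 captured images, one would evaluate complementarity(binary_feature(a), binary_feature(b)) for every pair and keep the pair, and hence the rotation angle, with the largest score.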
Specifically, in step S1-2, the highly complementary high-resolution blurred images and the high-resolution sharp image are obtained as follows:
S1-2-1, monitoring the motion state of the target object;
S1-2-2, when motion of the target object is detected, having the camera capture several frames of sharp pictures, then controlling the rotating prism to rotate to the rotation angle of maximum complementarity, after which the camera captures the non-uniform motion blurred images;
S1-2-3, finally obtaining the complementary high-resolution blurred images and multiple frames of high-resolution sharp images by shooting, and retaining all the complementary blurred images and the last frame's high-resolution sharp image.
It can be understood that the motion of the target object is monitored by the motion sensing sensor 4, which sends signals to the first controller and the second controller; the first controller controls the action of the shutter 102 of the high-speed camera 1, the second controller controls the rotation of the rotating prisms, and there is a set time difference between the signals acted upon by the two controllers. When the motion sensing sensor 4 detects motion of the target object, it signals both controllers, which act in succession at that set interval. The first controller acts first, rapidly driving the shutter 102 to capture several frames of sharp pictures; at this moment the rotating prisms are at 0 degrees, i.e., the second controller has not yet acted to rotate them. When the interval has elapsed, the second controller rapidly signals the two prisms to rotate to the rotation angle of maximum complementarity, while the first controller keeps the camera shutter open for a certain exposure time to capture the non-uniform motion blurred images. Two blurred images and several sharp frames are finally obtained, and all blurred images and the last sharp frame are retained.
Further, in step S1-3, the obtained highly complementary high-resolution blurred image pair is downsampled by a factor of two to obtain a highly complementary low-resolution blurred image pair, and the high-resolution sharp images and low-resolution blurred image pairs are assembled into a data set.
The data set comprises 1000 acquired high-resolution sharp images and 1000 pairs of corresponding low-resolution blurred images; the sharp images have a resolution of 1280 × 1024 and the low-resolution blurred images 640 × 512, as in the sketch below.
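A sketch of the two-fold downsampling used to produce the 640 × 512 low-resolution blurred images from 1280 × 1024 captures; the patent does not specify the resampling filter, so 2 × 2 average pooling is assumed here.

import numpy as np

def downsample_2x(img):
    """Halve both spatial dimensions by 2 x 2 average pooling (filter assumed)."""
    h = img.shape[0] - img.shape[0] % 2
    w = img.shape[1] - img.shape[1] % 2
    x = img[:h, :w].astype(float)
    if x.ndim == 2:                # promote grayscale to a single channel
        x = x[..., None]
    out = x.reshape(h // 2, 2, w // 2, 2, x.shape[-1]).mean(axis=(1, 3))
    return out[..., 0] if out.shape[-1] == 1 else out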
The invention further provides that the generator is jointly optimized by a perceptual loss function, an adversarial loss function, an edge loss function, and an MSE loss function; the joint loss function of the generative adversarial network is:
L = L_adv + L_p + L_edge + L_MSE
Specifically, the optimization method of step S4 is as follows:
The difference in edge features between the preliminary restored image and the sharp image is measured by the edge loss function:
L_edge = (1 / (W·H)) Σ_x,y ( |∇_h S - ∇_h G(b)| + |∇_v S - ∇_v G(b)| )
wherein S and G(b) are respectively the real sharp image and the preliminary restored image generated by the edge feature extraction module, W and H are the length and width of the image, and ∇_h and ∇_v are gradient operations along the horizontal and vertical directions respectively.
In this technical solution, the edge feature extraction module is optimized by an edge loss function built on the Sobel operator, which measures the difference in edge features between the restored image and the sharp image. The edge loss takes the real image and the restored image as input and is computed from the image gradients; it optimizes the edge feature extraction module to extract more edge detail features and improves the module's performance.
Optimization is also performed by the MSE loss function:
L_MSE = (1 / N) Σ (L - S)^2
wherein L and S are respectively the preliminary restored image generated by the multi-scale feature extraction module and the real sharp image, and N is the number of elements of S and L.
In this technical solution, the MSE loss function is chosen to optimize the multi-scale feature extraction module, helping the network extract more usable features and texture details from the blurred image.
The perceptual loss function and the adversarial loss function are optimized as well: the perceptual loss function measures the overall difference between the features of the generated preliminary restored image and those of the corresponding real sharp image, while the adversarial loss function makes the generated high-quality image hard to distinguish from the real sharp image.
The perceptual loss function is formulated as:
L_p = (1 / (W_i,j · H_i,j)) Σ_x,y ( φ_i,j(I_S) - φ_i,j(G(I_B)) )^2
The adversarial loss function is formulated as:
L_adv = Σ_{n=1}^{N} -D(G(I_B))
wherein I_B denotes the input blurred image (since the network input is two blurred images with high complementarity, the two images are fused as I_B), I_S is the real sharp image, G represents the generator, D the discriminator, N the number of training images in a batch, and φ_i,j a feature map of the VGG19 network.
In this technical solution, the perceptual loss and the adversarial loss are chosen to optimize the generative adversarial network and constrain the generator's restored images. The perceptual loss function L_p measures the overall difference between the generated image and the features of the corresponding real sharp image; its inputs are the real sharp image and the restored image, and it is computed using a VGG19 feature layer as the difference between the generated data and the original real data.
WGAN-GP is used as the adversarial loss; L_adv makes the generated high-quality image indistinguishable from the real image and has proven robust to the choice of generator. A sketch of these loss terms follows.
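A NumPy sketch of the four loss terms and their sum L = L_adv + L_p + L_edge + L_MSE. The VGG19 feature extractor, the critic D, the per-term weights (taken as 1 here), and the WGAN-GP gradient-penalty term on the critic side are all outside this sketch and are assumptions or stand-ins.

import numpy as np
from scipy.ndimage import sobel

def edge_loss(sharp, restored):
    """Mean absolute difference of horizontal and vertical image gradients."""
    return (np.abs(sobel(sharp, axis=1) - sobel(restored, axis=1)) +
            np.abs(sobel(sharp, axis=0) - sobel(restored, axis=0))).mean()

def mse_loss(restored, sharp):
    return ((restored - sharp) ** 2).mean()

def perceptual_loss(feat_sharp, feat_restored):
    """MSE between feature maps (assumed to come from a VGG19 layer)."""
    return ((feat_sharp - feat_restored) ** 2).mean()

def adv_loss(critic_scores_on_fake):
    """WGAN-style generator loss: raise the critic's score on generated images."""
    return -np.mean(critic_scores_on_fake)

def joint_loss(sharp, restored, feat_sharp, feat_restored, critic_scores):
    return (adv_loss(critic_scores) + perceptual_loss(feat_sharp, feat_restored) +
            edge_loss(sharp, restored) + mse_loss(restored, sharp))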
It can be understood that the self-constructed data set is used to train and test the network; the sharp and blurred images in the data set are flipped and cropped to expand the database, preventing overfitting of the network and enhancing its adaptability.
The present invention follows the training procedure set forth in WGAN: in each optimization step, the discriminator is trained five times, then the generator once. Momentum-based optimizers such as Adam can cause instability in this setting, so RMSProp or SGD is used instead; here a mini-batch stochastic gradient descent method is chosen and the RMSProp solver is applied. The initial learning rate of the generator and discriminator is set to 1e-4, the batch size to 1, and the number of epochs to 300, with the learning rate decayed linearly to 0 over the last 150 epochs; experiments found that the model so obtained performs better on the test set, without significant differences in experimental effectiveness. The decay schedule is sketched below.
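The stated schedule as a small helper, assuming a constant 1e-4 rate for the first 150 of 300 epochs and a linear decay to 0 over the last 150:

def learning_rate(epoch, base=1e-4, total=300, decay_start=150):
    """Constant rate, then linear decay to zero over the remaining epochs."""
    if epoch < decay_start:
        return base
    return base * (total - epoch) / (total - decay_start)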
SSIM (structural similarity) and PSNR (peak signal-to-noise ratio) are selected to measure the difference between the restored image and the real sharp image, and these two values serve as evaluation indexes to verify the effectiveness of the model.
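PSNR is simple to state explicitly; SSIM has a longer closed form, and an implementation is available as skimage.metrics.structural_similarity.

import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)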
The invention also discloses a non-uniform motion blur super-resolution image restoration device, as shown in fig. 2, comprising an image acquisition mechanism, a memory, a processor, and a computer program stored in the memory and executable on the processor to perform the non-uniform motion blur super-resolution image restoration method.
Further, the image acquisition mechanism comprises a beam splitter, a motion sensing sensor, rotating prisms, a high-speed camera, and a first controller and a second controller. The high-speed camera comprises a lens and a shutter; the rotating prisms are respectively arranged in the transmission and reflection light paths of the beam splitter and are coaxially arranged; the first controller and the second controller are STM32 single-chip microcomputers and respectively receive the signals transmitted by the sensor; the first controller is connected to the shutter of the high-speed camera and controls the shutter action; the second controller is connected to the two rotating prisms and controls their rotation; and the first controller and the second controller work cooperatively.
In this technical solution, the motion sensing sensor monitors the motion of the target object; when motion is detected, the sensor sends signals to the first controller and the second controller, which act in succession at a set interval. The first controller acts first, rapidly driving the shutter to capture several frames of sharp pictures; at this moment the rotating prisms are at 0 degrees, i.e., the second controller has not yet acted to rotate them. When the interval has elapsed, the second controller rapidly signals the two prisms to rotate to the rotation angle of maximum complementarity, while the first controller keeps the camera shutter open for a certain exposure time to capture the non-uniform motion blurred images. Two blurred images and several sharp frames are finally obtained, and all blurred images and the last sharp frame are retained. Finally, the blurred images are downsampled by a factor of two to obtain low-resolution images, which together with the high-resolution sharp images form the data set of the invention.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments, including the components, without departing from the principles and spirit of the invention, and still fall within the scope of the invention.

Claims (10)

1. A non-uniform motion blur super-resolution image restoration method, characterized by comprising the following steps:
S1, constructing a data set:
S1-1, calculating the rotation angle that yields images with maximum complementarity;
S1-2, acquiring highly complementary high-resolution blurred images and a high-resolution sharp image;
S1-3, preprocessing the images to construct the data set;
S2, inputting the preprocessed data set into a generator to obtain a preliminary restored image;
S3, distinguishing the preliminary restored image from the real image through a discriminator to obtain a discrimination result, wherein the discriminator is a Markov discrimination network;
S4, optimizing the generative adversarial network, which comprises the generator and the discriminator, with a loss function to obtain the best-performing network and the optimal restored image;
and S5, outputting the restoration result.
2. The method for restoring a non-uniform motion blur super-resolution image as claimed in claim 1, wherein step S1-1 comprises the following sub-steps:
S1-1-1, setting up a non-uniform motion blur image acquisition device in which a rotating prism is placed directly between the target object and the camera, the target object, rotating prism, and camera lying on the same optical axis;
S1-1-2, passing the target object's light beam through the rotating prism into the camera lens, and capturing one image at each rotation step of the prism to obtain 360 original images;
S1-1-3, preprocessing the 360 original images and computing the image combination with maximum complementarity and its rotation angle.
3. The method for restoring a non-uniform motion blur super-resolution image as claimed in claim 2, wherein step S1-1-3 comprises:
1) performing Gaussian filtering on the 360 original images using MATLAB;
2) obtaining the high-frequency component h_i of each blurred image by subtracting the Gaussian-filtered image from the blurred image:
h_i = y_i - G_0 * y_i
where y_i is a blurred image and G_0 is a two-dimensional Gaussian filtering convolution operator;
3) obtaining the gradient images of the high-frequency component h_i in the horizontal and vertical directions:
hd_ix = h_i * d_x,  hd_iy = h_i * d_y,  i = 1, 2
where d_x and d_y are the horizontal and vertical derivative operators;
4) obtaining the global gradient image hd_i of each blurred image:
hd_i = sqrt(hd_ix^2 + hd_iy^2)
5) binarizing the global gradient image hd_i to obtain the effective complementarity feature T_i(x, y) of the blurred image:
T_i(x, y) = 1 if hd_i(x, y) >= k_i, and 0 otherwise
where k_i is the binarization threshold;
for the binarization, the Otsu algorithm is used to compute the optimal threshold of the original image, wherein the Otsu algorithm assumes the image consists of a foreground region and a background region, traverses the range [0, 255], compares the gray-level histograms of the foreground and background regions in each resulting segmentation, and takes the gray-level threshold that maximizes the between-class variance as the binarization threshold k_i;
the gradient image of the blurred image is normalized and the threshold adjusted according to the optimal threshold of the original image, the threshold generally being chosen in the range k_i ∈ [0.2, 0.5];
6) combining the binary images of the 360 acquired blurred images in pairs, for a total of 360^2 combinations, and computing the complementarity of each combination:
C(A, B): [complementarity formula shown as an equation image in the original; it is computed from the binary images T(A) and T(B) over the M × N pixel grid]
where T(A) and T(B) are respectively the binary images of the two blurred images and M and N are the length and width of the image; MATLAB is finally used to obtain the image combination with maximum complementarity and its rotation angle.
4. The method for restoring a non-uniform motion blur super-resolution image as claimed in claim 3, wherein in step S1-2 the highly complementary high-resolution blurred images and the high-resolution sharp image are obtained as follows:
S1-2-1, monitoring the motion state of the target object;
S1-2-2, when motion of the target object is detected, having the camera capture several frames of sharp pictures, then controlling the rotating prism to rotate to the rotation angle of maximum complementarity, after which the camera captures the non-uniform motion blurred images;
S1-2-3, finally obtaining the complementary high-resolution blurred images and multiple frames of high-resolution sharp images by shooting, and retaining all the complementary blurred images and the last frame's high-resolution sharp image.
5. The method for restoring a non-uniform motion blur super-resolution image as claimed in claim 1, wherein in step S1-3 the obtained highly complementary high-resolution blurred image pair is downsampled by a factor of two to obtain a highly complementary low-resolution blurred image pair, and the high-resolution sharp images and low-resolution blurred image pairs are assembled into a data set.
6. The method of claim 5, wherein the data set comprises 1000 high-resolution sharp images and 1000 pairs of corresponding low-resolution blurred images; the sharp images have a resolution of 1280 × 1024 and the low-resolution blurred images 640 × 512.
7. The method for restoring a non-uniform motion blur super-resolution image as claimed in claim 1, wherein the generator is obtained by jointly optimizing a perceptual loss function, an adversarial loss function, an edge loss function, and an MSE loss function, the jointly optimized loss function being:
L = L_adv + L_p + L_edge + L_MSE
8. The non-uniform motion blur super-resolution image restoration method according to claim 7, wherein the optimization method in step S4 comprises:
measuring the difference in edge features between the preliminary restored image and the sharp image by the edge loss function:
L_edge = (1 / (W·H)) Σ_x,y ( |∇_h S - ∇_h G(b)| + |∇_v S - ∇_v G(b)| )
wherein S and G(b) are respectively the real sharp image and the preliminary restored image, W and H are the length and width of the image, and ∇_h and ∇_v are gradient operations along the horizontal and vertical directions respectively;
optimizing by the MSE loss function:
L_MSE = (1 / N) Σ (L - S)^2
wherein L and S are respectively the preliminary restored image and the real sharp image, and N is the number of elements of S and L;
optimizing the perceptual loss function and the adversarial loss function, wherein the perceptual loss function measures the overall difference between the features of the generated preliminary restored image and those of the corresponding real sharp image, and the adversarial loss function makes the generated high-quality image hard to distinguish from the real sharp image;
the perceptual loss function being formulated as:
L_p = (1 / (W_i,j · H_i,j)) Σ_x,y ( φ_i,j(I_S) - φ_i,j(G(I_B)) )^2
the adversarial loss function being formulated as:
L_adv = Σ_{n=1}^{N} -D(G(I_B))
wherein I_B denotes the input blurred image (since the network input is two blurred images with high complementarity, the two images are fused as I_B), I_S is the real sharp image, G denotes the generator, D the discriminator, and N the number of training images in a batch.
9. A non-uniform motion blur super-resolution image restoration device, comprising an image acquisition mechanism, a memory, a processor, and a computer program stored in the memory and executable on the processor to perform the non-uniform motion blur super-resolution image restoration method according to any one of claims 1 to 8.
10. The non-uniform motion blur super-resolution image restoration device according to claim 9, wherein the image acquisition mechanism comprises a beam splitter, a motion sensing sensor, rotating prisms, a high-speed camera, and a first controller and a second controller; the high-speed camera comprises a lens and a shutter; the rotating prisms are respectively arranged in the transmission and reflection light paths of the beam splitter and coaxially arranged; the first controller and the second controller are STM32 single-chip microcomputers and respectively receive the signals transmitted by the sensor; the first controller is connected to the shutter of the high-speed camera and controls the shutter action; the second controller is connected to the two rotating prisms and controls their rotation; and the first controller and the second controller work cooperatively.
CN202210280228.5A 2022-03-22 2022-03-22 Non-uniform motion blur super-resolution image restoration method and device Pending CN114820299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210280228.5A CN114820299A (en) 2022-03-22 2022-03-22 Non-uniform motion blur super-resolution image restoration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210280228.5A CN114820299A (en) 2022-03-22 2022-03-22 Non-uniform motion blur super-resolution image restoration method and device

Publications (1)

Publication Number Publication Date
CN114820299A 2022-07-29

Family

ID=82530995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210280228.5A Pending CN114820299A (en) 2022-03-22 2022-03-22 Non-uniform motion blur super-resolution image restoration method and device

Country Status (1)

Country Link
CN (1) CN114820299A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024098188A1 (en) * 2022-11-07 2024-05-16 京东方科技集团股份有限公司 Visual analysis method of image restoration model, apparatus and electronic device


Similar Documents

Publication Publication Date Title
Dong et al. Multi-scale boosted dehazing network with dense feature fusion
Liu et al. Video super-resolution based on deep learning: a comprehensive survey
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
Wang et al. End-to-end view synthesis for light field imaging with pseudo 4DCNN
CN114092330B (en) Light-weight multi-scale infrared image super-resolution reconstruction method
CN110490919A (en) A kind of depth estimation method of the monocular vision based on deep neural network
CN111343367B (en) Billion-pixel virtual reality video acquisition device, system and method
CN110232653A (en) The quick light-duty intensive residual error network of super-resolution rebuilding
An et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers
CN114862732B (en) Synthetic aperture imaging method integrating event camera and traditional optical camera
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
Pu et al. Robust high dynamic range (hdr) imaging with complex motion and parallax
CN111476745B (en) Multi-branch network and method for motion blur super-resolution
CN111681166A (en) Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit
CN108182669A (en) A kind of Super-Resolution method of the generation confrontation network based on multiple dimension of pictures
CN112446835B (en) Image restoration method, image restoration network training method, device and storage medium
CN113298718A (en) Single image super-resolution reconstruction method and system
CN114841856A (en) Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN114757862B (en) Image enhancement progressive fusion method for infrared light field device
CN114820299A (en) Non-uniform motion blur super-resolution image restoration method and device
Tang et al. Structure-embedded ghosting artifact suppression network for high dynamic range image reconstruction
CN111583345B (en) Method, device and equipment for acquiring camera parameters and storage medium
CN116389912B (en) Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera
CN117237207A (en) Ghost-free high dynamic range light field imaging method for dynamic scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination