US11798139B2 - Noise-adaptive non-blind image deblurring - Google Patents
- Publication number
- US11798139B2
- Authority
- US
- United States
- Prior art keywords
- neural network
- image
- input image
- implementing
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/001—Image restoration
- G06T5/003—Deblurring; Sharpening
-
- G06T5/70—
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/001—Image restoration
- G06T5/002—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/001—Image restoration
- G06T5/005—Retouching; Inpainting; Scratch removal
-
- G06T5/60—
-
- G06T5/77—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20008—Globally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the subject disclosure relates generally to image deblurring and, more particularly, to noise-adaptive non-blind image deblurring.
- a vehicle may include many sensors that provide information about the vehicle and its environment.
- An exemplary sensor is a camera. Images obtained by one or more cameras of a vehicle may be used to perform semi-autonomous or autonomous operation, for example.
- An image obtained by a camera may be blurred for a variety of reasons, including the movement or vibration of the camera.
- the source of the blurring may be well known based on known movement of the vehicle or calibration performed for the camera. This facilitates non-blind image deblurring.
- a blurred image generally includes noise as well as blurring. Accordingly, it is desirable to provide noise-adaptive non-blind image deblurring.
- a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image.
- the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
- the method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
- the implementing the first neural network results in one parameter that is a regularization parameter.
- the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
- the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
- the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
- the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
- the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
- a non-transitory computer-readable storage medium stores instructions which, when processed by processing circuitry, cause the processing circuitry to implement a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise.
- the method includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image.
- the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
- the method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
- the implementing the first neural network results in one parameter that is a regularization parameter.
- the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
- the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
- the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
- the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
- the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
- a vehicle in yet another exemplary embodiment, includes a camera to obtain an input image that includes blur and noise.
- the vehicle also includes processing circuitry to implement a first neural network on the input image to obtain one or more parameters and to perform regularized deconvolution to obtain a deblurred image from the input image.
- the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
- the processing circuitry also implements a second neural network to remove artifacts from the deblurred image and provide an output image.
- the processing circuitry implements the first neural network to obtain one parameter that is a regularization parameter or to obtain two or more parameters that are weights corresponding with a set of predefined regularization parameters.
- the processing circuitry trains the first neural network and the second neural network individually or together in an end-to-end arrangement.
- the processing circuitry obtains a point spread function that defines the blur in the input image.
- the processing circuitry obtains the point spread function from one or more sensors of the vehicle that measure a movement of the vehicle or from a calibration of the camera.
- the first neural network obtains a one-dimensional vector of singular values from the input image and implements a one-dimensional residual convolutional neural network (CNN).
- FIG. 1 is a block diagram of a vehicle in which noise-adaptive non-blind image deblurring is performed according to one or more embodiments;
- FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 3 shows components of a training process of a system that performs noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 4 shows the architecture of the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 5 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 6 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 7 shows additional processes needed to generate ground truth in order to train the first neural network when the blur is in two dimensions;
- FIG. 8 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
- FIG. 9 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments; and
- FIG. 10 is a block diagram of the system to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- Non-blind deblurring of blurred images refers to the scenario in which the source of the blurring and a model of the smear are known. Even when the function or model of the smear is known, non-blind deblurring is an unstable (ill-posed) problem, and boundary conditions must be imposed to address artifacts. That is, prior deblurring processes may introduce artifacts. Additionally, noise may be amplified if the deblurring process is not regularized. A prior approach addresses known or fixed noise in the non-blind deblurring process. Specifically, a joint training procedure is undertaken to determine both the parameters for the regularized deconvolution and the weights of a convolutional neural network (CNN).
- a first neural network (e.g., deep neural network) infers a noise-dependent regularization parameter used in the regularized deconvolution process to produce a deblurred image with artifacts.
- the first neural network provides a regularization parameter value λ.
- the first neural network provides weighting associated with each value in a predefined array of regularization parameter values λ. Using the correct regularization parameter value λ during regularized deconvolution ensures that noise in the input (blurred) image is not amplified in an uncontrollable fashion in the deblurred image.
- a second neural network removes artifacts from the deblurred image.
- a correct value of the regularization parameter λ provided by the first neural network ensures that the value is neither too small to be useful (i.e., the output image is too noisy) nor so large that the output image is still blurry. Such a separate first neural network is not used in the prior approach.
- FIG. 1 is a block diagram of a vehicle 100 in which noise-adaptive non-blind image deblurring is performed.
- the exemplary vehicle 100 shown in FIG. 1 is an automobile 101 .
- Two exemplary cameras 110 are shown to obtain images from a front of the vehicle 100 .
- Each of the cameras 110 may be a color camera, a grayscale camera, or any other imaging device that operates in the visible or infrared spectrum.
- the images obtained with the cameras 110 may be one-, two-, or three-dimensional images that serve as a blurred input image 210 ( FIG. 2 ).
- the vehicle 100 is also shown to include a controller 120 and additional sensors 130 , 140 .
- the additional sensors 130 (e.g., inertial measurement unit, wheel speed sensor, gyroscope, accelerometer) measure movement of the vehicle 100, while the additional sensors 140 (e.g., lidar system, radar system) provide information about the environment of the vehicle 100.
- the controller 120 may use information from one or more of the sensors 130 , 140 and cameras 110 to perform semi-autonomous or autonomous operation of the vehicle 100 .
- the controller 120 performs noise-adaptive non-blind image deblurring on blurred input images 210 obtained by one or more cameras 110 .
- a camera 110 may include a controller to perform the processing.
- the noise-adaptive non-blind image deblurring requires knowledge of the source of the blurring.
- the source of the blurring may be motion of the vehicle 100 , which is indicated by parameters obtained by the sensors 130 of the vehicle 100 , or may be inherent to the camera 110 , as determined by calibration of the camera 110 .
- the non-blind aspect of the deblurring process is known and not further detailed herein.
- the controller 120 and any controller of a camera 110 may include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
- FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments.
- An input image 210 is a blurred image with noise.
- a deblurred image 220 is obtained.
- This deblurred image 220 may include artifacts 225 .
- a second neural network 375 is then implemented (at block 370 ( FIG. 3 )) to obtain an output image 230 with the artifacts 225 removed from the deblurred image 220 .
- FIG. 3 shows components of a training process 300 of a system 301 that performs noise-adaptive non-blind image deblurring according to one or more embodiments.
- the first neural network 355 (implemented at block 350 ) facilitates obtaining noise-dependent regularization parameters.
- a single value of a regularization parameter λ or weightings corresponding with a predefined array of values of regularization parameters λ are provided by the first neural network 355 according to alternate embodiments.
- the regularization parameter λ is used to control noise in the regularized deconvolution (at block 360 ) that provides a deblurred image 220 .
- the second neural network 375 (implemented at block 370 ) facilitates removing artifacts from the deblurred image 220 to obtain the output image 230 .
- obtaining a sharp image 315 refers to obtaining an image, whether in real time or from a database, without blurring or noise.
- the training process 300 uses a large set of the sharp images Im 315 over many iterations.
- the sharp image Im 315 represents the ground truth used in the training process 300 . That is, ideally, the output image 230 will be very close to this sharp image Im 315 .
- performing corruption refers to generating noise N and a point spread function (PSF), both of which are applied to the sharp image Im 315 to generate the input image 210 (indicated as I B ) to the system 301 .
- Each neural network may be trained individually or may be trained together in a process known as end-to-end training. Exemplary training processes for the first neural network 355 or for end-to-end training of the full system 301 are discussed with reference to FIGS. 5 - 9 .
- the PSF output at block 320 represents potential sources of blurring of an image obtained by a camera 110 in a vehicle 100 .
- the PSF may be based on motion parameters obtained by the sensors 130 of the vehicle 100 or may be measured based on calibration of the camera 110 when the blurring is inherent to the camera 110 .
- the PSF is used to determine blur (i.e., generate a blur kernel matrix K B ).
- the system 301 obtains the blurred image (at block 340 ) from a camera 110 and determines blur (at block 330 ) based on information from sensors 130 or the camera 110 . Because the blur is determined (at block 330 ) through the known PSF, the system 301 performs non-blind deblurring. Because the noise N is not known, the system 301 performs noise-adaptive deblurring.
- obtaining the blurred input image I B 210 at block 340 , involves applying the blur (determined at block 330 ) to the sharp image Im 315 and adding the noise N. This is an artificial process to create the input image I B 210 . In the trained system 301 , noise and blur are part of the camera 110 output.
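The corruption used to synthesize training inputs (blocks 320 through 340) can be sketched as follows. This is a minimal illustration, assuming a uniform horizontal smear and Gaussian noise; the image size, kernel length, and noise level are made-up values, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for a sharp training image Im (values in [0, 1))
Im = rng.random((64, 64))

# hypothetical PSF-derived blur: a uniform 7-pixel horizontal smear
blur_len = 7
kernel = np.ones(blur_len) / blur_len

# apply the one-dimensional blur row by row, then add Gaussian noise N
blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=Im
)
noise_sigma = 0.02
I_B = blurred + rng.normal(0.0, noise_sigma, size=Im.shape)
```

In the trained system the camera output already contains blur and noise; a synthesis like this is only needed to build (input, ground truth) pairs for training.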
- implementing a first neural network 355 results in determining the regularization parameter λ.
- implementing the first neural network 355 may result in the output of a regularization parameter λ value or may result in the output of weights corresponding with a predefined set of regularization parameter λ values. In the latter case, the weights of the set add to 1.
- the architecture 400 of the first neural network 355 is discussed with reference to FIG. 4 and training of the first neural network 355 is discussed with reference to FIGS. 5 - 7 .
- regularized deconvolution to generate the deblurred image 220 may be performed according to alternate embodiments.
- the first neural network 355 (at block 350 ) is assumed to provide a regularization parameter λ value rather than weights.
- a Tikhonov regularized deconvolution (EQS. 2 through 4, in which T indicates transpose) may be used when the input image I B 210 evidences a one-dimensional blur (e.g., horizontal blur).
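A minimal numerical sketch of the Tikhonov path of EQS. 2 through 4, assuming a small banded blur matrix K_B built from made-up kernel taps and an arbitrary λ (none of these values come from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# hypothetical banded blur matrix K_B for a one-dimensional (horizontal) blur
taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
K_B = sum(t * np.eye(n, k=k) for k, t in zip(range(-2, 3), taps))

x_sharp = rng.random(n)                       # one row of a sharp image
b = K_B @ x_sharp + rng.normal(0, 0.01, n)    # blurred row plus noise

# EQ. 2: K_B = U S V^T
U, s, Vt = np.linalg.svd(K_B)

# EQ. 3: regularized inverse V S (S^2 + lam^2 I)^-1 U^T, with diagonal S
lam = 0.05
K_inv_reg = Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T

# EQ. 4: the deblurred row
x_deblurred = K_inv_reg @ b
```

The λ² term keeps the small singular values from amplifying the noise; with λ set to 0 the sketch reduces to a plain (unregularized) pseudo-inverse.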
- a Wiener deconvolution may be performed.
- Ĩ_B(k⃗) = Ĩ_m(k⃗) K̃_B(k⃗) + N [EQ. 5]
- Ĩ_DB(k⃗) = FFT(I_DB) [EQ. 6]
- the parameters shown in EQ. 5 result from a fast Fourier transform (FFT). The equations are in Fourier space, as indicated by the vector k⃗, rather than in real space.
- an FFT is performed on the input image I B 210 to obtain ⁇ B .
- the deblurred image I DB 220 is obtained as:
- Ĩ_DB(k⃗) = Ĩ_B(k⃗) K̃_B*(k⃗) / (|K̃_B(k⃗)|² + λ²) [EQ. 7]
- the deblurred image I DB 220 is obtained by performing an inverse FFT (IFFT) on Ĩ_DB(k⃗).
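EQS. 5 through 7 can be sketched with NumPy FFTs. The point spread function, noise level, and λ below are illustrative assumptions, and the FFT implies circular boundary conditions in this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in sharp image and a hypothetical PSF (5-pixel horizontal smear)
Im = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[0, :5] = 1.0 / 5.0

K_tilde = np.fft.fft2(psf)                              # K̃_B(k)
I_B = np.real(np.fft.ifft2(np.fft.fft2(Im) * K_tilde))  # blur per EQ. 5
I_B += rng.normal(0.0, 0.01, I_B.shape)                 # noise N

lam = 0.05                          # value the first network would supply
I_B_tilde = np.fft.fft2(I_B)        # Ĩ_B(k), per EQ. 6's convention

# EQ. 7: regularized (Wiener-style) deconvolution in Fourier space
I_DB_tilde = I_B_tilde * np.conj(K_tilde) / (np.abs(K_tilde) ** 2 + lam**2)
I_DB = np.real(np.fft.ifft2(I_DB_tilde))  # IFFT recovers the deblurred image
```

Near the zeros of K̃_B the denominator is dominated by λ², which is exactly where an unregularized inverse filter would blow the noise up.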
- implementing the second neural network 375 on the deblurred image I DB 220 results in the output image 230 .
- the image enhancement neural network that removes artifacts from the deblurred image I DB 220 , indicated as the second neural network 375 , is well-known and is not detailed herein. End-to-end training, which refers to training the first neural network 355 and the second neural network 375 together, is discussed with reference to FIGS. 8 and 9 .
- a mean square error (MSE) may be obtained between the output image 230 provided by the system 301 and the sharp image Im 315 to ascertain the effectiveness of the noise-adaptive non-blind deblurring performed by the system 301 .
- FIG. 4 shows the architecture 400 of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the first neural network 355 is a one-dimensional residual CNN.
- the input to the first neural network 355 is the input image I B 210 , which is a blurred image with noise, and the output N out may be a value of the regularization parameter λ or may be a set of weights that correspond with values of a set of predefined regularization parameters λ.
- a singular value decomposition (SVD) is performed on the input image I B 210 , at 401 , to obtain a one-dimensional vector of image singular value (SV) logarithms.
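The input construction at 401 can be sketched as follows; treating the blurred image itself as the matrix that is decomposed, and the exact log guard, are assumptions about the details:

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-in blurred, noisy input image I_B
I_B = rng.random((64, 64))

# the 1-D feature vector fed to the residual CNN: log singular values
singular_values = np.linalg.svd(I_B, compute_uv=False)  # descending order
log_sv = np.log(singular_values + 1e-12)                # guard against log(0)
```

The tail of the singular-value spectrum is where an additive noise floor shows up most clearly, which is presumably why this compact representation suffices for inferring a noise-dependent λ.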
- the first convolutional layer 405 converts the input to 64 feature vectors.
- the next four stages 410 are cascades of five residual blocks. While five residual blocks are indicated for each stage 410 , the exemplary architecture 400 does not preclude other numbers of residual blocks in alternate embodiments.
- each residual block 420 includes “Conv1d,” which refers to a filter sliding along the data across one dimension, “BatchNorm,” which refers to a batch normalization layer, and “ReLU,” which refers to a rectified linear unit.
- the number of filters N f may be 64, 128, 256, or 512, as indicated for the different stages 410 .
- 1024 feature vectors are fed into a fully connected layer “FC” to produce the output N out .
- FIGS. 5 and 6 detail the training of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- end-to-end training refers to training both neural networks according to the arrangement shown in FIG. 3 .
- the first neural network 355 may be trained separately from the second neural network 375 .
- the ground truth is obtained by a function Q(λ), as detailed.
- FIG. 5 shows a process flow 500 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the process flow 500 shown in FIG. 5 is used with one-dimensional blur and when the output of the first neural network 355 is a regularization parameter λ value.
- the previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image I B 210 are not discussed again.
- a set of images is obtained as regularized deconvolution results for a set of values of the regularization parameter λ.
- the function Q(λ) selects the optimal regularization parameter λ_opt from among the set of values of the regularization parameter λ.
- the function Q(λ), at block 520 , obtains a mean square distance (MSD) between each of the set of images (generated with the set of values of the regularization parameter λ) and the sharp image Im 315 and selects the regularization parameter λ corresponding with the image that results in the minimum MSD as λ_opt.
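The ground-truth selection by Q(·) described above can be sketched as a simple sweep; the candidate λ grid, blur model, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32

# hypothetical banded blur matrix (same one-dimensional blur model as above)
taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
K_B = sum(t * np.eye(n, k=k) for k, t in zip(range(-2, 3), taps))

x_sharp = rng.random(n)                       # ground-truth row
b = K_B @ x_sharp + rng.normal(0, 0.02, n)    # blurred, noisy row

U, s, Vt = np.linalg.svd(K_B)
candidate_lams = np.logspace(-3, 0, 25)       # candidate regularization values

def deblur(lam):
    # Tikhonov regularized deconvolution for one candidate lambda
    return Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T @ b

# Q(.): mean square distance to the sharp image for each candidate,
# keeping the minimizer as lambda_opt
msd = np.array([np.mean((deblur(lam) - x_sharp) ** 2) for lam in candidate_lams])
lam_opt = candidate_lams[int(np.argmin(msd))]
```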
- the input image I B 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2.
- implementing the first neural network 355 results in a single regularization parameter λ_reg.
- the regularization parameter λ_reg and the optimal regularization parameter λ_opt are compared on a log scale based on MSE.
- the process flow 500 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355 .
- FIG. 6 shows a process flow 600 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the process flow 600 shown in FIG. 6 is used with one-dimensional blur and when the output of the first neural network 355 is a set of weights corresponding with a set of predefined values of regularization parameters λ.
- previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image I B 210 are not discussed again.
- a set of images is obtained as regularized deconvolution results. Each image in the set results from a particular set of weights corresponding with a set of predefined regularization parameter λ values.
- the function Q(λ) selects the set of weights that results in an image with a minimum MSD relative to the sharp image Im 315 .
- the SoftMin function then rescales the weights to ensure that they are between 0 and 1.
- the result is the weighting function g(λ) such that the area under the curve will add to 1.
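The SoftMin rescaling can be sketched as follows; the per-λ distances and the sharpness constant β are made up for illustration and are not values from the patent:

```python
import numpy as np

# hypothetical per-lambda mean square distances (one per predefined lambda)
msd = np.array([0.80, 0.35, 0.10, 0.22, 0.60])
beta = 10.0  # assumed sharpness: larger beta concentrates weight on the best lambda

g = np.exp(-beta * msd)   # smaller distance -> larger weight
g = g / g.sum()           # rescale so the weights add to 1
```

The result g plays the role of the weighting function g(λ): every weight lies in (0, 1) and the weights sum to 1, with the most weight on the λ whose deconvolution was closest to the sharp image.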
- the input image I B 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2 (as discussed with reference to FIG. 5 ).
- implementing the first neural network 355 results in a set of weights indicated as the function f(λ).
- weighted sums of images, formed according to the functions g(λ) and f(λ) respectively, are compared.
- the process flow 600 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355 .
- FIG. 7 shows additional processes 710 needed to generate ground truth in order to train the first neural network 355 when the blur is in two dimensions.
- the additional processes 710 include performing an FFT on the sharp image Im 315 that is an input to the additional processes 710 .
- the additional processes 710 also include IFFTs at their outputs.
- regularized deconvolution results are obtained in the Fourier space.
- the additional processes 710 may be used to train a first neural network 355 for two-dimensional blur whether the first neural network 355 provides a single regularization parameter λ or weights for a predefined set of regularization parameter λ values.
- FIG. 8 shows an exemplary process flow 800 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the exemplary process flow 800 is used when the first neural network 355 outputs a regularization parameter λ.
- the exponential at block 810 of the log-scale result provides the regularization parameter λ used by the regularized deconvolution at block 360 .
- the bypass 820 is a pretraining bypass and allows bypassing the second neural network 375 (implemented at block 370 ). This facilitates a comparison, at block 380 , of the deblurred image I DB 220 with the sharp image Im 315 .
- the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350 ) and the second neural network 375 (implemented at block 370 ) is verified as part of the overall system 301 .
- FIG. 9 shows an exemplary process flow 900 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the exemplary process flow 900 is used when the first neural network 355 outputs a set of weights corresponding to a predefined set of regularization parameter λ values.
- implementing the first neural network 355 , at block 350 , results in the weights as a function f(λ) of the predefined set of regularization parameter λ values.
- a set of deconvolved images is generated, with each deconvolved image resulting from a different one of the predefined set of regularization parameter λ values.
- a weighted sum of the deconvolved images (from block 610 ) is obtained based on the weights obtained from the first neural network 355 .
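The weighted combination at blocks 610 and 910 can be sketched as follows; the number of predefined λ values, the image shapes, and the weight vector f are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# one deconvolved image per predefined lambda value (shapes illustrative)
deconvolved = rng.random((5, 64, 64))

# hypothetical weights f(lambda) output by the first neural network
f = np.array([0.05, 0.15, 0.55, 0.20, 0.05])

# weighted sum over the lambda axis yields the single deblurred image
I_DB = np.tensordot(f, deconvolved, axes=1)
```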
- the pretraining bypass 920 in FIG. 9 allows bypassing the second neural network 375 (implemented at block 370 ). This facilitates a comparison, at block 380 , of the deblurred image I DB 220 with the sharp image Im 315 .
- the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350 ) and the second neural network 375 (implemented at block 370 ) is verified as part of the overall system 301 .
- FIG. 10 is a block diagram of the system 301 to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
- the system 301 may be implemented by processing circuitry of the controller 120 of the vehicle 100 , for example.
- a camera 110 provides the blurred input image I B 210 .
- the camera 110 itself and/or sensors 130 that indicate motion of the vehicle 100 provide the PSF that indicates the cause of the blur and facilitates non-blind deblurring.
- implementing a first neural network 355 provides a regularization parameter λ or weights corresponding to a predefined set of regularization parameter λ values.
- the first neural network 355 facilitates control of the noise in the input image I B 210 (i.e., noise-adaptive deblurring).
- regularized deconvolution provides a deblurred image I DB 220 .
- implementing the second neural network 375 facilitates removing artifacts from the deblurred image I DB 220 to generate the output image 230 .
- This output image 230 may be displayed in the vehicle 100 or used for object detection and classification.
Abstract
Description
I_B = Im * K_B + N [EQ. 1]
K_B = U S Vᵀ [EQ. 2]
[K_B]_REG⁻¹ = V S (S² + λ² I)⁻¹ Uᵀ [EQ. 3]
I_DB ≅ I_B [K_B]_REG⁻¹ [EQ. 4]
Ĩ_B(k⃗) = Ĩ_m(k⃗) K̃_B(k⃗) + N [EQ. 5]
Ĩ_DB(k⃗) = FFT(I_DB) [EQ. 6]
The parameters shown in EQ. 5 result from a fast Fourier transform (FFT). That is, because two-dimensional blur rather than one-dimensional blur must be considered, the equations are in Fourier space, as indicated by the vector k⃗, rather than in real space. For example, an FFT is performed on the input image I_B 210 to obtain Ĩ_B. The deblurred image I_DB 220 is obtained as:
Ĩ_DB(k⃗) = Ĩ_B(k⃗) K̃_B*(k⃗) / (|K̃_B(k⃗)|² + λ²) [EQ. 7]
Based on EQ. 8, the deblurred image I_DB 220 is obtained by performing an inverse FFT (IFFT) on Ĩ_DB(k⃗).
Claims (17)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/099,995 US11798139B2 (en) | 2020-11-17 | 2020-11-17 | Noise-adaptive non-blind image deblurring |
CN202110509845.3A CN114511451A (en) | 2020-11-17 | 2021-05-11 | Noise adaptive non-blind image deblurring |
DE102021114064.1A DE102021114064A1 (en) | 2020-11-17 | 2021-05-31 | Noise-adaptive non-blind image sharpening |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/099,995 US11798139B2 (en) | 2020-11-17 | 2020-11-17 | Noise-adaptive non-blind image deblurring |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220156892A1 US20220156892A1 (en) | 2022-05-19 |
US11798139B2 true US11798139B2 (en) | 2023-10-24 |
Family
ID=81345826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/099,995 Active 2041-09-03 US11798139B2 (en) | 2020-11-17 | 2020-11-17 | Noise-adaptive non-blind image deblurring |
Country Status (3)
Country | Link |
---|---|
US (1) | US11798139B2 (en) |
CN (1) | CN114511451A (en) |
DE (1) | DE102021114064A1 (en) |
Citations (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070165961A1 (en) * | 2006-01-13 | 2007-07-19 | Juwei Lu | Method And Apparatus For Reducing Motion Blur In An Image |
US20080137978A1 (en) * | 2006-12-07 | 2008-06-12 | Guoyi Fu | Method And Apparatus For Reducing Motion Blur In An Image |
WO2008106282A1 (en) * | 2007-02-28 | 2008-09-04 | Microsoft Corporation | Image deblurring with blurred/noisy image pairs |
DE102008036334A1 (en) * | 2008-08-04 | 2009-04-09 | Daimler Ag | Method for operating a vehicle (e.g., a car), involving determining a point spread function from a motion parameter of the vehicle and reconstructing images captured by an image detection unit based on that point spread function |
US20110033130A1 (en) * | 2009-08-10 | 2011-02-10 | Eunice Poon | Systems And Methods For Motion Blur Reduction |
US20110158541A1 (en) * | 2009-12-25 | 2011-06-30 | Shinji Watanabe | Image processing device, image processing method and program |
KR101181161B1 (en) * | 2011-05-19 | 2012-09-17 | Korea Advanced Institute of Science and Technology (KAIST) | An apparatus and a method for deblurring image blur caused by camera ego motion |
US20140348441A1 (en) * | 2012-03-29 | 2014-11-27 | Nikon Corporation | Algorithm for minimizing latent sharp image cost function and point spread function with a spatial mask in a fidelity term |
US20140355901A1 (en) * | 2012-03-29 | 2014-12-04 | Nikon Corporation | Algorithm for minimizing latent sharp image cost function and point spread function cost function with a spatial mask in a regularization term |
US20160070979A1 (en) * | 2014-09-05 | 2016-03-10 | Huawei Technologies Co., Ltd. | Method and Apparatus for Generating Sharp Image Based on Blurry Image |
CN105447828A (en) * | 2015-11-23 | 2016-03-30 | Wuhan Institute of Technology | Single-viewpoint image deblurring method performing one-dimensional deconvolution along the motion blur path |
CN106485685A (en) * | 2016-08-30 | 2017-03-08 | Chongqing University | Vehicle-mounted high-quality imaging method based on two-step restoration |
US20170191945A1 (en) * | 2016-01-01 | 2017-07-06 | Kla-Tencor Corporation | Systems and Methods for Defect Detection Using Image Reconstruction |
US20180089809A1 (en) * | 2016-09-27 | 2018-03-29 | Nikon Corporation | Image deblurring with a multiple section, regularization term |
US20180158175A1 (en) * | 2016-12-01 | 2018-06-07 | Almalence Inc. | Digital correction of optical system aberrations |
CN108198151A (en) * | 2018-02-06 | 2018-06-22 | Southeast University | Star map deblurring method based on an improved frequency-domain Richardson-Lucy (RL) deconvolution algorithm |
CN108416752A (en) * | 2018-03-12 | 2018-08-17 | Sun Yat-sen University | Method for removing motion blur from images based on a generative adversarial network |
CN109636733A (en) * | 2018-10-26 | 2019-04-16 | Huazhong University of Science and Technology | Fluorescent image deconvolution method and system based on deep neural network |
US20190122378A1 (en) * | 2017-04-17 | 2019-04-25 | The United States Of America, As Represented By The Secretary Of The Navy | Apparatuses and methods for machine vision systems including creation of a point cloud model and/or three dimensional model based on multiple images from different perspectives and combination of depth cues from camera motion and defocus with various applications including navigation systems, and pattern matching systems as well as estimating relative blur between images for use in depth from defocus or autofocusing applications |
US20190205614A1 (en) * | 2018-01-03 | 2019-07-04 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object |
US10360664B2 (en) * | 2017-01-12 | 2019-07-23 | Postech Academy-Industry Foundation | Image processing apparatus and method using machine learning |
US20200090322A1 (en) * | 2018-09-13 | 2020-03-19 | Nvidia Corporation | Deep neural network processing for sensor blindness detection in autonomous machine applications |
US20200097772A1 (en) * | 2018-09-25 | 2020-03-26 | Honda Motor Co., Ltd. | Model parameter learning device, control device, and model parameter learning method |
US20200160490A1 (en) * | 2018-11-20 | 2020-05-21 | Idemia Identity & Security France | Method for deblurring an image |
US20200193570A1 (en) * | 2017-09-05 | 2020-06-18 | Sony Corporation | Image processing device, image processing method, and program |
CN112241669A (en) * | 2019-07-18 | 2021-01-19 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target identification method, device, system and equipment, and storage medium |
CN108632502B (en) * | 2017-03-17 | 2021-04-30 | 深圳开阳电子股份有限公司 | Image sharpening method and device |
US20210142146A1 (en) * | 2019-11-13 | 2021-05-13 | Micron Technology, Inc. | Intelligent image sensor stack |
US20210152735A1 (en) * | 2019-11-14 | 2021-05-20 | Microsoft Technology Licensing, Llc | Image restoration for through-display imaging |
US20210183015A1 (en) * | 2018-09-13 | 2021-06-17 | Samsung Electronics Co., Ltd. | Image processing apparatus and operation method thereof |
KR20210099456A (en) * | 2020-02-04 | 2021-08-12 | LG Electronics Inc. | Image processor, artificial intelligence apparatus and method for generating image data by enhancing specific function |
WO2021169136A1 (en) * | 2020-02-28 | 2021-09-02 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and apparatus, and electronic device and storage medium |
CN114092416A (en) * | 2021-11-04 | 2022-02-25 | Shanghai Institute of Special Equipment Inspection and Technical Research | DR blurred image blind deconvolution restoration method and system |
TW202211154A (en) * | 2020-08-07 | 2022-03-16 | Nanotronics Imaging, Inc. | Deep learning model for noise reduction in low snr imaging conditions |
US20220245776A1 (en) * | 2021-02-01 | 2022-08-04 | Microsoft Technology Licensing, Llc | Simultaneously correcting image degradations of multiple types in an image of a face |
- 2020-11-17: US US17/099,995, granted as US11798139B2 (Active)
- 2021-05-11: CN CN202110509845.3A, published as CN114511451A (Pending)
- 2021-05-31: DE DE102021114064.1A, published as DE102021114064A1 (Pending)
Non-Patent Citations (1)
Title |
---|
Jin et al., "Deep Convolutional Neural Network for Inverse Problems in Imaging", IEEE Transactions on Image Processing, vol. 26, No. 9, Sep. 2017, pp. 4509-4522. |
Also Published As
Publication number | Publication date |
---|---|
DE102021114064A1 (en) | 2022-05-19 |
US20220156892A1 (en) | 2022-05-19 |
CN114511451A (en) | 2022-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210350168A1 (en) | Image segmentation method and image processing apparatus | |
Li et al. | An all-in-one network for dehazing and beyond | |
Li et al. | Aod-net: All-in-one dehazing network | |
Zhu et al. | Removing atmospheric turbulence via space-invariant deconvolution | |
CN109035319B (en) | Monocular image depth estimation method, device, apparatus, program, and storage medium | |
US20150254814A1 (en) | Globally dominant point spread function estimation | |
CN110874827B (en) | Turbulent image restoration method and device, terminal equipment and computer readable medium | |
CN113409200B (en) | System and method for image deblurring in a vehicle | |
Lau et al. | Variational models for joint subsampling and reconstruction of turbulence-degraded images | |
CN112215773A (en) | Local motion deblurring method and device based on visual saliency and storage medium | |
CN113793285A (en) | Ultrafast restoration method and system for pneumatic optical effect target twin image | |
US11263773B2 (en) | Object detection apparatus, object detection method, computer program product, and moving object | |
CN110111261B (en) | Adaptive balance processing method for image, electronic device and computer readable storage medium | |
CN113344800B (en) | System and method for training non-blind image deblurring module | |
US11798139B2 (en) | Noise-adaptive non-blind image deblurring | |
CN112465712B (en) | Motion blur star map restoration method and system | |
KR101362183B1 (en) | Depth image noise removal apparatus and method based on camera pose | |
CN116012265B (en) | Infrared video denoising method and device based on time-space domain adaptive filtering | |
JP7034837B2 (en) | 3D convolution arithmetic unit, visual odometry system, and 3D convolution program | |
Braun et al. | Direct tracking from compressive imagers: A proof of concept | |
KR102342940B1 (en) | Method for One-Step L0 Smoothing via Deep Gradient Prior | |
CN114037636A (en) | Multi-frame blind restoration method for correcting image by adaptive optical system | |
López-Martínez et al. | Blind adaptive method for image restoration using microscanning | |
Javaran | Blur length estimation in linear motion blurred images using evolutionary algorithms | |
Zappa et al. | Estimation and compensation of motion blur for the reduction of uncertainty in DIC measurements of flexible bodies |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GM GLOBAL TECHNOLOGY OPERATIONS LLC, MICHIGAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SLUTSKY, MICHAEL;REEL/FRAME:054388/0121 Effective date: 20201116 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |