US11798139B2 - Noise-adaptive non-blind image deblurring - Google Patents

Noise-adaptive non-blind image deblurring

Info

Publication number
US11798139B2
US11798139B2 (application US17/099,995)
Authority
US
United States
Prior art keywords
neural network
image
input image
implementing
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/099,995
Other versions
US20220156892A1 (en)
Inventor
Michael Slutsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GM Global Technology Operations LLC filed Critical GM Global Technology Operations LLC
Priority to US17/099,995
Assigned to GM Global Technology Operations LLC (Assignor: SLUTSKY, MICHAEL)
Priority to CN202110509845.3A
Priority to DE102021114064.1A
Publication of US20220156892A1
Application granted
Publication of US11798139B2
Active legal status
Adjusted expiration

Classifications

    • G06T5/73 Deblurring; Sharpening (G06T5/003)
    • G06T5/70 Denoising; Smoothing (G06T5/002)
    • G06T5/77 Retouching; Inpainting; Scratch removal (G06T5/005)
    • G06T5/60
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20008 Globally adaptive
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the subject disclosure relates generally to image deblurring and, more particularly, to noise-adaptive non-blind image deblurring.
  • a vehicle may include many sensors that provide information about the vehicle and its environment.
  • An exemplary sensor is a camera. Images obtained by one or more cameras of a vehicle may be used to perform semi-autonomous or autonomous operation, for example.
  • An image obtained by a camera may be blurred for a variety of reasons, including the movement or vibration of the camera.
  • the source of the blurring may be well known based on known movement of the vehicle or calibration performed for the camera. This facilitates non-blind image deblurring.
  • a blurred image generally includes noise as well as blurring. Accordingly, it is desirable to provide noise-adaptive non-blind image deblurring.
  • a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image.
  • the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
  • the method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
  • the implementing the first neural network results in one parameter that is a regularization parameter.
  • the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
  • the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
  • the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
  • the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
  • the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
  • a non-transitory computer-readable storage medium stores instructions which, when processed by processing circuitry, cause the processing circuitry to implement a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise.
  • the method includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image.
  • the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
  • the method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
  • the implementing the first neural network results in one parameter that is a regularization parameter.
  • the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
  • the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
  • the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
  • the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
  • the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
  • In yet another exemplary embodiment, a vehicle includes a camera to obtain an input image that includes blur and noise.
  • the vehicle also includes processing circuitry to implement a first neural network on the input image to obtain one or more parameters and to perform regularized deconvolution to obtain a deblurred image from the input image.
  • the regularized deconvolution uses the one or more parameters to control noise in the deblurred image.
  • the processing circuitry also implements a second neural network to remove artifacts from the deblurred image and provide an output image.
  • the processing circuitry implements the first neural network to obtain one parameter that is a regularization parameter or to obtain two or more parameters that are weights corresponding with a set of predefined regularization parameters.
  • the processing circuitry trains the first neural network and the second neural network individually or together in an end-to-end arrangement.
  • the processing circuitry obtains a point spread function that defines the blur in the input image.
  • the processing circuitry obtains the point spread function from one or more sensors of the vehicle that measure a movement of the vehicle or from a calibration of the camera.
  • the first neural network obtains a one-dimensional vector of singular values from the input image and implements a one-dimensional residual convolutional neural network (CNN).
  • FIG. 1 is a block diagram of a vehicle in which noise-adaptive non-blind image deblurring is performed according to one or more embodiments;
  • FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 3 shows components of a training process of a system that performs noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 4 shows the architecture of the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 5 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 6 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 7 shows additional processes needed to generate ground truth in order to train the first neural network when the blur is in two dimensions;
  • FIG. 8 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
  • FIG. 9 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments; and
  • FIG. 10 is a block diagram of the system to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • Non-blind deblurring of blurred images refers to the scenario in which the source of the blurring and a model of the smear are known. Even when the function or model of the smear is known, non-blind deblurring is an unstable problem, and boundary conditions must be imposed to address artifacts. That is, prior deblurring processes may introduce artifacts. Additionally, noise may be amplified if the deblurring process is not regularized. A prior approach addresses known or fixed noise in the non-blind deblurring process. Specifically, a joint training procedure is undertaken to determine both the parameters for the regularized deconvolution and the weights of a convolutional neural network (CNN).
  • a first neural network (e.g., deep neural network) infers a noise-dependent regularization parameter used in the regularized deconvolution process to produce a deblurred image with artifacts.
  • the first neural network provides a regularization parameter value λ.
  • the first neural network provides a weighting associated with each value in a predefined array of regularization parameter values λ. Using the correct regularization parameter value λ during regularized deconvolution ensures that noise in the input (blurred) image is not amplified in an uncontrollable fashion in the deblurred image.
  • a second neural network removes artifacts from the deblurred image.
  • a correct value of the regularization parameter λ provided by the first neural network is neither so small that the output image remains too noisy nor so large that the output image remains blurry. This separate, first neural network is not used in the prior approach.
  • FIG. 1 is a block diagram of a vehicle 100 in which noise-adaptive non-blind image deblurring is performed.
  • the exemplary vehicle 100 shown in FIG. 1 is an automobile 101 .
  • Two exemplary cameras 110 are shown to obtain images from a front of the vehicle 100 .
  • Each of the cameras 110 may be a color camera, a grayscale camera, or any other imaging device that operates in the visible or infrared spectrum.
  • the images obtained with the cameras 110 may be one, two, or three-dimensional images that serve as a blurred input image 210 ( FIG. 2 ).
  • the vehicle 100 is also shown to include a controller 120 and additional sensors 130 , 140 .
  • the additional sensors 130 (e.g., inertial measurement unit, wheel speed sensor, gyroscope, accelerometer) provide information about the movement of the vehicle 100.
  • the additional sensors 140 (e.g., lidar system, radar system) provide information about the environment around the vehicle 100.
  • the controller 120 may use information from one or more of the sensors 130 , 140 and cameras 110 to perform semi-autonomous or autonomous operation of the vehicle 100 .
  • the controller 120 performs noise-adaptive non-blind image deblurring on blurred input images 210 obtained by one or more cameras 110 .
  • a camera 110 may include a controller to perform the processing.
  • the noise-adaptive non-blind image deblurring requires knowledge of the source of the blurring.
  • the source of the blurring may be motion of the vehicle 100 , which is indicated by parameters obtained by the sensors 130 of the vehicle 100 , or may be inherent to the camera 110 , as determined by calibration of the camera 110 .
  • the non-blind aspect of the deblurring process is known and not further detailed herein.
  • the controller 120 and any controller of a camera 110 may include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments.
  • An input image 210 is a blurred image with noise.
  • a deblurred image 220 is obtained.
  • This deblurred image 220 may include artifacts 225 .
  • a second neural network 375 is then implemented (at block 370 ( FIG. 3 )) to obtain an output image 230 with the artifacts 225 removed from the deblurred image 220 .
  • FIG. 3 shows components of a training process 300 of a system 301 that performs noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the first neural network 355 (implemented at block 350 ) facilitates obtaining noise-dependent regularization parameters.
  • a single value of a regularization parameter λ, or weightings corresponding with a predefined array of values of regularization parameters λ, is provided by the first neural network 355 according to alternate embodiments.
  • the regularization parameter λ is used to control noise in the regularized deconvolution (at block 360) that provides a deblurred image 220.
  • the second neural network 375 (implemented at block 370 ) facilitates removing artifacts from the deblurred image 220 to obtain the output image 230 .
  • obtaining a sharp image 315 refers to obtaining an image, whether in real time or from a database, without blurring or noise.
  • the training process 300 uses a large set of the sharp images Im 315 over many iterations.
  • the sharp image Im 315 represents the ground truth used in the training process 300 . That is, ideally, the output image 230 will be very close to this sharp image Im 315 .
  • performing corruption refers to generating noise N and a point spread function (PSF), both of which are applied to the sharp image Im 315 to generate the input image 210 (indicated as I B ) to the system 301 .
  • Each neural network may be trained individually or may be trained together in a process known as end-to-end training. Exemplary training processes for the first neural network 355 or for end-to-end training of the full system 301 are discussed with reference to FIGS. 5 - 9 .
  • the PSF output at block 320 represents potential sources of blurring of an image obtained by a camera 110 in a vehicle 100 .
  • the PSF may be based on motion parameters obtained by the sensors 130 of the vehicle 100 or may be measured based on calibration of the camera 110 when the blurring is inherent to the camera 110 .
  • the PSF is used to determine blur (i.e., generate a blur kernel matrix K B ).
  • the system 301 obtains the blurred image (at block 340) from a camera 110 and determines blur (at block 330) based on information from sensors 130 or the camera 110. Because the blur is determined (at block 330) through the known PSF, the system 301 performs non-blind deblurring. Because the noise N is not known, the system 301 performs noise-adaptive deblurring.
  • obtaining the blurred input image I B 210 at block 340 , involves applying the blur (determined at block 330 ) to the sharp image Im 315 and adding the noise N. This is an artificial process to create the input image I B 210 . In the trained system 301 , noise and blur are part of the camera 110 output.
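This training-time corruption step can be sketched in a few lines of Python; the PSF, noise level, and test image below are illustrative assumptions rather than values from the disclosure:

```python
import numpy as np

def corrupt(sharp, psf, noise_sigma, seed=None):
    """Simulate the corruption step (blocks 320-340): blur the sharp image
    with the known PSF, then add noise N of unknown strength.
    The Gaussian noise model here is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    # Circular convolution with the PSF via the FFT (blur kernel K_B).
    K_B = np.fft.fft2(psf, s=sharp.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K_B))
    return blurred + rng.normal(0.0, noise_sigma, sharp.shape)

# Example: an impulse image smeared by a 5-pixel horizontal motion blur.
sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0
psf = np.ones((1, 5)) / 5.0
I_B = corrupt(sharp, psf, noise_sigma=0.01, seed=0)
```

The same routine, driven by a large set of sharp images Im 315, supplies the blurred, noisy inputs I B 210 used across training iterations.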
  • implementing a first neural network 355 results in determining the regularization parameter λ.
  • implementing the first neural network 355 may result in the output of a regularization parameter λ value or may result in the output of weights corresponding with a predefined set of regularization parameter λ values. In the latter case, the weights of the set add to 1.
  • the architecture 400 of the first neural network 355 is discussed with reference to FIG. 4 and training of the first neural network 355 is discussed with reference to FIGS. 5 - 7 .
  • regularized deconvolution to generate the deblurred image 220 may be performed according to alternate embodiments.
  • the first neural network 355 (at block 350) is assumed to provide a regularization parameter λ value rather than weights.
  • a Tikhonov regularized deconvolution may be used when the input image I B 210 evidences a one-dimensional blur (e.g., horizontal blur).
  • T indicates transpose.
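For the one-dimensional case, the Tikhonov-regularized deconvolution can be sketched via the SVD of the blur kernel matrix K B; the specific kernel, image, and function names below are illustrative assumptions:

```python
import numpy as np

def tikhonov_deblur(I_B, K_B, lam):
    """Tikhonov-regularized deconvolution for a one-dimensional blur:
    I_DB = V diag(s / (s^2 + lam^2)) U^T I_B, where K_B = U diag(s) V^T
    is the SVD of the blur kernel matrix. The matrix form is a sketch
    reconstructed from the surrounding description, not the patent's EQs."""
    U, s, Vt = np.linalg.svd(K_B)
    filt = s / (s**2 + lam**2)          # regularized inverse singular values
    return Vt.T @ (filt[:, None] * (U.T @ I_B))

# Example: a vertical 3-tap smoothing blur applied column-wise, then undone.
n = 8
K_B = 0.25 * np.eye(n, k=-1) + 0.5 * np.eye(n) + 0.25 * np.eye(n, k=1)
I_m = np.random.default_rng(0).random((n, n))   # stand-in sharp image
I_DB = tikhonov_deblur(K_B @ I_m, K_B, lam=1e-3)
```

With a small λ and no noise, the reconstruction is nearly exact; as λ grows, the filter `s / (s^2 + lam^2)` suppresses the small singular values that would otherwise amplify noise.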
  • a Wiener deconvolution may be performed.
  • Ĩ_B(k) = Ĩ_m(k)·K̃_B(k) + Ñ  [EQ. 5]
  • Ĩ_DB(k) = FFT(I_DB)  [EQ. 6]
  • the equations are in the Fourier space, as indicated by vector k, rather than in real space.
  • an FFT is performed on the input image I B 210 to obtain Ĩ_B.
  • the deblurred image I DB 220 is obtained as:
  • Ĩ_DB(k) = Ĩ_B(k)·K̃_B*(k) / (|K̃_B(k)|² + λ²)  [EQ. 7]
  • the deblurred image I DB 220 is obtained by performing an inverse FFT (IFFT) on Ĩ_DB(k).
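A minimal sketch of EQs. 5 through 7 in numpy, assuming a simple horizontal box PSF (the function and variable names are illustrative):

```python
import numpy as np

def wiener_deblur(I_B, psf, lam):
    """Regularized (Wiener-type) deconvolution per EQ. 7:
    I_DB(k) = I_B(k) * conj(K_B(k)) / (|K_B(k)|^2 + lam^2),
    followed by an inverse FFT back to real space."""
    I_B_f = np.fft.fft2(I_B)                       # FFT of the blurred input
    K_f = np.fft.fft2(psf, s=I_B.shape)            # blur kernel in Fourier space
    I_DB_f = I_B_f * np.conj(K_f) / (np.abs(K_f)**2 + lam**2)
    return np.real(np.fft.ifft2(I_DB_f))           # IFFT yields I_DB

# Example: blur an impulse with a 5-pixel box PSF, then deblur it.
img = np.zeros((16, 16)); img[8, 8] = 1.0
psf = np.ones((1, 5)) / 5.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deblur(blurred, psf, lam=1e-3)
```

The λ² term in the denominator is what keeps near-zero values of K̃_B(k) from amplifying noise without bound.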
  • implementing the second neural network 375 on the deblurred image I DB 220 results in the output image 230 .
  • the image enhancement neural network that removes artifacts from the deblurred image I DB 220, indicated as the second neural network 375, is well known and is not detailed herein. End-to-end training, which refers to training the first neural network 355 and the second neural network 375 together, is discussed with reference to FIGS. 8 and 9 .
  • a mean square error (MSE) may be obtained between the output image 230 provided by the system 301 and the sharp image Im 315 to ascertain the effectiveness of the noise-adaptive non-blind deblurring performed by the system 301 .
  • FIG. 4 shows the architecture 400 of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the first neural network 355 is a one-dimensional residual CNN.
  • the input to the first neural network 355 is the input image I B 210 , which is a blurred image with noise, and the output N out may be a value of the regularization parameter λ or may be a set of weights corresponding with the values of a set of predefined regularization parameters λ.
  • A singular value decomposition (SVD) is performed on the input image I B 210 , at 401 , to obtain a one-dimensional vector of image singular value (SV) logarithms.
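The singular-value feature extraction of block 401 might be sketched as follows; the epsilon guard and the input size are added assumptions:

```python
import numpy as np

def sv_log_features(I_B):
    """Build the 1-D input of the first neural network: the logarithms of
    the singular values of the blurred, noisy input image (block 401).
    The small epsilon is an assumption to keep log() finite for
    rank-deficient images."""
    s = np.linalg.svd(I_B, compute_uv=False)   # singular values, descending
    return np.log(s + 1e-12)

# Example: feature vector for a random stand-in image.
feats = sv_log_features(np.random.default_rng(0).normal(size=(64, 64)))
```

Collapsing the 2-D image to its singular-value spectrum is what allows the subsequent network to be a one-dimensional residual CNN.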
  • the first convolutional layer 405 converts the input to 64 feature vectors.
  • the next four stages 410 are cascades of five residual blocks. While five residual blocks are indicated for each stage 410 , the exemplary embodiment of the architecture 400 does not preclude other numbers of subunits in alternate embodiments.
  • each cascade 420 includes “Conv1d,” which refers to a filter sliding along the data across one dimension, “BatchNorm,” which refers to a batch normalization type of layer, and “ReLU,” which refers to a rectified linear unit.
  • the number of filters N f may be 64, 128, 256, or 512, as indicated for the different stages 410 .
  • 1024 feature vectors are fed into a fully connected layer “FC” to produce the output N out .
  • FIGS. 5 and 6 detail the training of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • end-to-end training refers to training both neural networks according to the arrangement shown in FIG. 3 .
  • the first neural network 355 may be trained separately from the second neural network 375 .
  • the ground truth is obtained by a function Q(λ), as detailed.
  • FIG. 5 shows a process flow 500 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the process flow 500 shown in FIG. 5 is used with one-dimensional blur and when the output of the first neural network 355 is a regularization parameter λ value.
  • the previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image I B 210 are not discussed again.
  • a set of images is obtained as regularized deconvolution results for a set of values of the regularization parameter λ.
  • the function Q(λ) selects the optimal regularization parameter λ, λopt, from among the set of values of the regularization parameter λ.
  • the function Q(λ), at block 520 , obtains a mean square distance (MSD) between each of the set of images (generated with the set of values of the regularization parameter λ) and the sharp image Im 315 and selects, as λopt, the regularization parameter λ corresponding with the image that results in the minimum MSD.
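A sketch of this selection, assuming a Tikhonov deconvolution for the one-dimensional case; the candidate grid, kernel, and names are illustrative assumptions:

```python
import numpy as np

def tikhonov_deblur(I_B, K_B, lam):
    # Tikhonov-regularized deconvolution via the SVD of the blur matrix.
    U, s, Vt = np.linalg.svd(K_B)
    return Vt.T @ ((s / (s**2 + lam**2))[:, None] * (U.T @ I_B))

def select_lambda_opt(I_B, K_B, sharp, lam_grid):
    """Sketch of the function Q (block 520): deconvolve the input with each
    candidate regularization parameter, measure the mean square distance
    (MSD) to the sharp ground-truth image, and return the minimizer."""
    msd = [np.mean((tikhonov_deblur(I_B, K_B, lam) - sharp) ** 2)
           for lam in lam_grid]
    return lam_grid[int(np.argmin(msd))]

# Noise-free input: the smallest candidate lambda should win, since
# regularization only adds bias when there is no noise to suppress.
n = 8
K_B = 0.25 * np.eye(n, k=-1) + 0.5 * np.eye(n) + 0.25 * np.eye(n, k=1)
sharp = np.random.default_rng(1).random((n, n))
lam_opt = select_lambda_opt(K_B @ sharp, K_B, sharp, [1e-4, 1e-2, 1.0])
```

With noise added to the input, the minimizing λ moves up the grid, which is exactly the noise dependence the first neural network 355 is trained to predict.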
  • the input image I B 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2.
  • implementing the first neural network 355 results in a single regularization parameter λreg.
  • the regularization parameter λreg and the optimal regularization parameter λopt are compared on a log scale based on MSE.
  • the process flow 500 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355 .
  • FIG. 6 shows a process flow 600 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the process flow 600 shown in FIG. 6 is used with one-dimensional blur and when the output of the first neural network 355 is a set of weights corresponding with a set of predefined values of regularization parameters λ.
  • previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image I B 210 are not discussed again.
  • a set of images is obtained as regularized deconvolution results. Each image in the set results from a particular set of weights corresponding with a set of predefined regularization parameter λ values.
  • the function Q(λ) selects the set of weights that results in an image with a minimum MSD relative to the sharp image Im 315 .
  • the SoftMin function then rescales the weights to ensure that they are between 0 and 1.
  • the result is the weighting function g(λ) such that the area under the curve will add to 1.
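One plausible construction of g(λ), assuming the SoftMin is applied to the per-λ MSD values; the temperature parameter is an added assumption not stated in the disclosure:

```python
import numpy as np

def softmin_weights(msd, temperature=1.0):
    """SoftMin over per-lambda mean square distances: deconvolution results
    that lie closer to the sharp ground truth receive larger weights, and
    the weights are rescaled to lie in (0, 1) and sum to 1, giving the
    weighting function g(lambda)."""
    z = -np.asarray(msd) / temperature
    z -= z.max()                      # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Example: the middle candidate has the smallest MSD, so it dominates.
g = softmin_weights([0.10, 0.02, 0.30])
```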
  • the input image I B 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2 (as discussed with reference to FIG. 5 ).
  • implementing the first neural network 355 results in a set of weights indicated as the function f(λ).
  • weighted sums of images, computed according to the functions g(λ) and f(λ), are compared.
  • the process flow 600 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355 .
  • FIG. 7 shows additional processes 710 needed to generate ground truth in order to train the first neural network 355 when the blur is in two dimensions.
  • the additional processes 710 include performing an FFT on the sharp image Im 315 that is an input to the additional processes 710 .
  • the additional processes 710 also include IFFTs at the outputs of the additional processes 710 .
  • regularized deconvolution results are obtained in the Fourier space.
  • the additional processes 710 may be used to train a first neural network 355 for two-dimensional blur whether the first neural network 355 provides a single regularization parameter λ or weights for a predefined set of regularization parameter λ values.
  • FIG. 8 shows an exemplary process flow 800 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the exemplary process flow 800 is used when the first neural network 355 outputs a regularization parameter λ.
  • taking the exponential, at block 810 , of the log result provides the regularization parameter λ used by the regularized deconvolution at block 360 .
  • the bypass 820 is a pretraining bypass and allows bypassing the second neural network 375 (implemented at block 370 ). This facilitates a comparison, at block 380 , of the deblurred image I DB 220 with the sharp image Im 315 .
  • the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350 ) and the second neural network 375 (implemented at block 370 ) is verified as part of the overall system 301 .
  • FIG. 9 shows an exemplary process flow 900 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the exemplary process flow 900 is used when the first neural network 355 outputs a set of weights corresponding to a predefined set of regularization parameter λ values.
  • implementing the first neural network 355 , at block 350 , results in the weights as a function f(λ) of the predefined set of regularization parameter λ values.
  • a set of deconvolved images is generated, with each deconvolved image resulting from a different one of the predefined set of regularization parameter λ values.
  • a weighted sum of the deconvolved images (from block 610 ) is obtained based on the weights obtained from the first neural network 355 .
  • the pretraining bypass 920 in FIG. 9 allows bypassing the second neural network 375 (implemented at block 370 ). This facilitates a comparison, at block 380 , of the deblurred image I DB 220 with the sharp image Im 315 .
  • the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350 ) and the second neural network 375 (implemented at block 370 ) is verified as part of the overall system 301 .
  • FIG. 10 is a block diagram of the system 301 to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
  • the system 301 may be implemented by processing circuitry of the controller 120 of the vehicle 100 , for example.
  • a camera 110 provides the blurred input image I B 210 .
  • the camera 110 itself and/or sensors 130 that indicate motion of the vehicle 100 provide the PSF that indicates the cause of the blur and facilitates non-blind deblurring.
  • implementing a first neural network 355 provides a regularization parameter λ or weights corresponding to a predefined set of regularization parameter λ values.
  • the first neural network 355 facilitates control of the noise in the input image I B 210 (i.e., noise-adaptive deblurring).
  • regularized deconvolution provides a deblurred image I DB 220 .
  • implementing the second neural network 375 facilitates removing artifacts from the deblurred image I DB 220 to generate the output image 230 .
  • This output image 230 may be displayed in the vehicle 100 or used for object detection and classification.

Abstract

Systems and methods to perform noise-adaptive non-blind deblurring on an input image that includes blur and noise involve implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. A method includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.

Description

INTRODUCTION
The subject disclosure relates generally to image deblurring and, more particularly, to noise-adaptive non-blind image deblurring.
A vehicle (e.g., automobile, truck, farm equipment, construction equipment, automated factory equipment) may include many sensors that provide information about the vehicle and its environment. An exemplary sensor is a camera. Images obtained by one or more cameras of a vehicle may be used to perform semi-autonomous or autonomous operation, for example. An image obtained by a camera may be blurred for a variety of reasons, including the movement or vibration of the camera. In the vehicle application, the source of the blurring may be well known based on known movement of the vehicle or calibration performed for the camera. This facilitates non-blind image deblurring. However, a blurred image generally includes noise as well as blurring. Accordingly, it is desirable to provide noise-adaptive non-blind image deblurring.
SUMMARY
In one exemplary embodiment, a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. The method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
In addition to one or more of the features described herein, the implementing the first neural network results in one parameter that is a regularization parameter.
In addition to one or more of the features described herein, the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
In addition to one or more of the features described herein, the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
In addition to one or more of the features described herein, the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
In addition to one or more of the features described herein, the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
In addition to one or more of the features described herein, the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
In another exemplary embodiment, a non-transitory computer-readable storage medium stores instructions which, when processed by processing circuitry, cause the processing circuitry to implement a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise. The method includes implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. The method also includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
In addition to one or more of the features described herein, the implementing the first neural network results in one parameter that is a regularization parameter.
In addition to one or more of the features described herein, the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
In addition to one or more of the features described herein, the method also includes training the first neural network and the second neural network individually or together in an end-to-end arrangement.
In addition to one or more of the features described herein, the method also includes obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
In addition to one or more of the features described herein, the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
In addition to one or more of the features described herein, the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN).
In yet another exemplary embodiment, a vehicle includes a camera to obtain an input image that includes blur and noise. The vehicle also includes processing circuitry to implement a first neural network on the input image to obtain one or more parameters and to perform regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. The processing circuitry also implements a second neural network to remove artifacts from the deblurred image and provide an output image.
In addition to one or more of the features described herein, the processing circuitry implements the first neural network to obtain one parameter that is a regularization parameter or to obtain two or more parameters that are weights corresponding with a set of predefined regularization parameters.
In addition to one or more of the features described herein, the processing circuitry trains the first neural network and the second neural network individually or together in an end-to-end arrangement.
In addition to one or more of the features described herein, the processing circuitry obtains a point spread function that defines the blur in the input image.
In addition to one or more of the features described herein, the processing circuitry obtains the point spread function from one or more sensors of the vehicle that measure a movement of the vehicle or from a calibration of the camera.
In addition to one or more of the features described herein, the first neural network obtains a one-dimensional vector of singular values from the input image and implements a one-dimensional residual convolutional neural network (CNN).
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, advantages and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:
FIG. 1 is a block diagram of a vehicle in which noise-adaptive non-blind image deblurring is performed according to one or more embodiments;
FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 3 shows components of a training process of a system that performs noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 4 shows the architecture of the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 5 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 6 shows a process flow for training the first neural network used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 7 shows additional processes needed to generate ground truth in order to train the first neural network when the blur is in two dimensions;
FIG. 8 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments;
FIG. 9 shows an exemplary process flow for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments; and
FIG. 10 is a block diagram of the system to perform noise-adaptive non-blind image deblurring according to one or more embodiments.
DETAILED DESCRIPTION
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
As previously noted, an image obtained by a camera may be blurred. In a vehicle application, movement or vibration of the camera may cause blurring of images obtained by the camera. Non-blind deblurring refers to the scenario in which the source of the blurring and a model of the smear are known. Even when the function or model of the smear is known, non-blind deblurring is an unstable problem, and boundary conditions must be imposed to address artifacts. That is, prior deblurring processes may introduce artifacts. Additionally, noise may be amplified if the deblurring process is not regularized. A prior approach addresses known or fixed noise in the non-blind deblurring process. Specifically, a joint training procedure is undertaken to determine both the parameters for the regularized deconvolution and the weights of a convolutional neural network (CNN).
Embodiments of the systems and methods detailed herein relate to noise-adaptive non-blind image deblurring. A first neural network (e.g., deep neural network) infers a noise-dependent regularization parameter used in the regularized deconvolution process to produce a deblurred image with artifacts. According to an exemplary embodiment, the first neural network provides a regularization parameter value λ. According to another exemplary embodiment, the first neural network provides a weighting associated with each value in a predefined array of regularization parameter values λ. Using the correct regularization parameter value λ during regularized deconvolution ensures that noise in the input (blurred) image is not amplified in an uncontrollable fashion in the deblurred image. Then a second neural network (e.g., CNN) removes artifacts from the deblurred image. A correct value of the regularization parameter λ provided by the first neural network is neither too small to be useful (i.e., the output image is too noisy) nor so large that the output image is still blurry. This separate, first neural network is not used in the prior approach.
In accordance with an exemplary embodiment, FIG. 1 is a block diagram of a vehicle 100 in which noise-adaptive non-blind image deblurring is performed. The exemplary vehicle 100 shown in FIG. 1 is an automobile 101. Two exemplary cameras 110 are shown that obtain images from the front of the vehicle 100. Each of the cameras 110 may be a color camera, a grayscale camera, or any other imaging device that operates in the visible or infrared spectrum. The images obtained with the cameras 110 may be one-, two-, or three-dimensional images that serve as a blurred input image 210 (FIG. 2).
The vehicle 100 is also shown to include a controller 120 and additional sensors 130, 140. The additional sensors 130 (e.g., inertial measurement unit, wheel speed sensor, gyroscope, accelerometer) obtain information about the vehicle 100 while the additional sensors 140 (e.g., lidar system, radar system) obtain information about its surroundings. The controller 120 may use information from one or more of the sensors 130, 140 and cameras 110 to perform semi-autonomous or autonomous operation of the vehicle 100.
According to one or more embodiments, the controller 120 performs noise-adaptive non-blind image deblurring on blurred input images 210 obtained by one or more cameras 110. Alternately, a camera 110 may include a controller to perform the processing. In either case, the noise-adaptive non-blind image deblurring requires knowledge of the source of the blurring. The source of the blurring may be motion of the vehicle 100, which is indicated by parameters obtained by the sensors 130 of the vehicle 100, or may be inherent to the camera 110, as determined by calibration of the camera 110. The non-blind aspect of the deblurring process is known and not further detailed herein. The controller 120 and any controller of a camera 110 may include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
FIG. 2 shows exemplary images that illustrate the process of noise-adaptive non-blind image deblurring according to one or more embodiments. An input image 210 is a blurred image with noise. Based on regularized deconvolution (at block 360 (FIG. 3 )) using a regularization parameter λ obtained with a first neural network 355 (implemented at block 350 (FIG. 3 )), a deblurred image 220 is obtained. This deblurred image 220 may include artifacts 225. A second neural network 375 is then implemented (at block 370 (FIG. 3 )) to obtain an output image 230 with the artifacts 225 removed from the deblurred image 220.
FIG. 3 shows components of a training process 300 of a system 301 that performs noise-adaptive non-blind image deblurring according to one or more embodiments. As detailed and noted with reference to FIG. 2 , the first neural network 355 (implemented at block 350) facilitates obtaining noise-dependent regularization parameters. A single value of a regularization parameter λ or weightings corresponding with a predefined array of values of regularization parameters λ are provided by the first neural network 355 according to alternate embodiments. In either form, the regularization parameter λ is used to control noise in the regularized deconvolution (at block 360) that provides a deblurred image 220. The second neural network 375 (implemented at block 370) facilitates removing artifacts from the deblurred image 220 to obtain the output image 230.
At block 310, obtaining a sharp image 315 (indicated as Im) refers to obtaining an image, whether in real time or from a database, without blurring or noise. To be clear, the training process 300 uses a large set of the sharp images Im 315 over many iterations. The sharp image Im 315 represents the ground truth used in the training process 300. That is, ideally, the output image 230 will be very close to this sharp image Im 315.
At block 320, performing corruption refers to generating noise N and a point spread function (PSF), both of which are applied to the sharp image Im 315 to generate the input image 210 (indicated as IB) to the system 301. Each neural network may be trained individually, or the two may be trained together in a process known as end-to-end training. Exemplary training processes for the first neural network 355 or for end-to-end training of the full system 301 are discussed with reference to FIGS. 5-9. The PSF output at block 320 represents potential sources of blurring of an image obtained by a camera 110 in a vehicle 100. The PSF may be based on motion parameters obtained by the sensors 130 of the vehicle 100 or may be measured based on calibration of the camera 110 when the blurring is inherent to the camera 110.
At block 330, the PSF is used to determine blur (i.e., generate a blur kernel matrix KB). The system 301 obtains the blurred image (at block 340) from a camera 110 and determines blur (at block 330) based on information from sensors 130 or the camera 110. Because the blur is determined (at block 330) through the known PSF, the system 301 performs non-blind deblurring. Because the noise N is not known, the system 301 performs noise-adaptive deblurring. As FIG. 3 indicates, obtaining the blurred input image IB 210, at block 340, involves applying the blur (determined at block 330) to the sharp image Im 315 and adding the noise N. This is an artificial process to create the input image IB 210. In the trained system 301, noise and blur are part of the camera 110 output. The input image IB 210 is given by:
IB = Im * KB + N  [EQ. 1]
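The corruption model of EQ. 1 can be sketched numerically as follows. This is an illustrative example only: the function name, the circular boundary handling, and the Gaussian noise model are assumptions made for the sketch, not part of the disclosure.

```python
import numpy as np

def corrupt(sharp, kernel, noise_sigma, rng=None):
    # Illustrative sketch of EQ. 1: IB = Im * KB + N.
    # Each row is circularly convolved with a 1-D (horizontal) blur kernel,
    # then additive Gaussian noise is applied; all names are hypothetical.
    rng = np.random.default_rng(0) if rng is None else rng
    k_f = np.fft.fft(kernel, n=sharp.shape[1])
    blurred = np.real(np.fft.ifft(np.fft.fft(sharp, axis=1) * k_f, axis=1))
    return blurred + rng.normal(0.0, noise_sigma, size=sharp.shape)

# Example: 5-tap horizontal motion blur on a small test image.
sharp = np.eye(8)
kernel = np.ones(5) / 5.0
blurred_noisy = corrupt(sharp, kernel, noise_sigma=0.01)
```

In the trained system this corruption happens physically in the camera; the sketch only mimics it for training-data generation.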
At block 350, implementing a first neural network 355 results in determining the regularization parameter λ. According to alternate embodiments, implementing the first neural network 355, at block 350, may result in the output of a regularization parameter λ value or may result in the output of weights corresponding with a predefined set of regularization parameter λ values. In the latter case, the weights of the set add to 1. The architecture 400 of the first neural network 355 is discussed with reference to FIG. 4 and training of the first neural network 355 is discussed with reference to FIGS. 5-7 .
At block 360, regularized deconvolution to generate the deblurred image 220, based on the input image IB 210 and the regularization parameter λ, may be performed according to alternate embodiments. For explanatory purposes, the first neural network 355 (at block 350) is assumed to provide a regularization parameter λ value rather than weights. According to an exemplary embodiment, a Tikhonov regularized deconvolution may be used when the input image IB 210 evidences a one-dimensional blur (e.g., horizontal blur). In this case, the blur kernel matrix KB (determined at block 330) is subject to a singular value decomposition (SVD) to generate decomposition matrices:
KB = U S Vᵀ  [EQ. 2]
In EQ. 2, T indicates transpose. Then, at block 360, the deblurred image 220, indicated as IDB, is obtained, based on the decomposition matrices from EQ. 2 and the regularization parameter λ from block 350, as follows:
[KB]REG⁻¹ = V S (S² + λ²I)⁻¹ Uᵀ  [EQ. 3]
IDB ≅ IB [KB]REG⁻¹  [EQ. 4]
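EQs. 2-4 can be illustrated with a small NumPy sketch. The variable names and the example blur matrix are assumptions made for illustration, not taken from the disclosure; note that (S² + λ²I)⁻¹ is diagonal and is therefore inverted elementwise.

```python
import numpy as np

def tikhonov_deblur(I_B, K_B, lam):
    # Sketch of EQs. 2-4: SVD of the blur kernel matrix, then a
    # regularized inverse applied to the blurred image rows.
    U, s, Vt = np.linalg.svd(K_B)                # EQ. 2: KB = U S V^T
    s_reg = s / (s**2 + lam**2)                  # diagonal of S (S^2 + lam^2 I)^-1
    K_reg_inv = Vt.T @ np.diag(s_reg) @ U.T      # EQ. 3: [KB]REG^-1
    return I_B @ K_reg_inv                       # EQ. 4: IDB = IB [KB]REG^-1

# Example: a well-conditioned 1-D blur acting on image rows (noise-free).
n = 16
K_B = 0.6 * np.eye(n) + 0.4 * np.roll(np.eye(n), 1, axis=1)
I_m = np.random.default_rng(0).random((4, n))
I_B = I_m @ K_B
I_DB = tikhonov_deblur(I_B, K_B, lam=1e-3)
```

With no noise, a small λ recovers the sharp image almost exactly; with noise, λ trades noise amplification against residual blur, which is the quantity the first neural network infers.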
At block 360, according to an alternate embodiment, when the input image IB 210 includes two-dimensional blur, a Wiener deconvolution may be performed. In this case,
ĨB(k) = Ĩm(k) K̃B(k) + N  [EQ. 5]
ĨDB(k) = FFT(IDB)  [EQ. 6]
The parameters shown in EQ. 5 result from a fast Fourier transform (FFT). That is, because two-dimensional blur rather than one-dimensional blur must be considered, the equations are in the Fourier space, as indicated by vector k, rather than in real space. For example, an FFT is performed on the input image IB 210 to obtain ĨB. The deblurred image IDB 220 is obtained as:
ĨDB(k) = ĨB(k) K̃B*(k) / (|K̃B(k)|² + λ²)  [EQ. 7]
ĨDB(k) = ĨB(k) K̃B⁻¹(k, λ) ≅ Ĩm(k)  [EQ. 8]
Based on EQ. 8, the deblurred image IDB 220 is obtained by performing an inverse FFT (IFFT) on ĨDB(k).
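A minimal NumPy sketch of the Wiener-type deconvolution of EQ. 7 follows; the periodic boundary assumption and all names are illustrative, not part of the disclosure.

```python
import numpy as np

def wiener_deblur(I_B, psf, lam):
    # Sketch of EQ. 7: divide the blurred spectrum by the blur transfer
    # function, with lam^2 regularizing frequencies where K is near zero.
    K = np.fft.fft2(psf, s=I_B.shape)            # K~_B(k), blur transfer function
    I_DB_f = np.fft.fft2(I_B) * np.conj(K) / (np.abs(K)**2 + lam**2)
    return np.real(np.fft.ifft2(I_DB_f))         # IFFT back to image space

# Example: 3x3 box blur applied and removed (noise-free, so a small lam works).
rng = np.random.default_rng(1)
I_m = rng.random((32, 32))
psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9.0
I_B = np.real(np.fft.ifft2(np.fft.fft2(I_m) * np.fft.fft2(psf)))
I_DB = wiener_deblur(I_B, psf, lam=1e-4)
```

Without the λ² term, the division by |K̃B(k)|² blows up wherever the transfer function is near zero, which is exactly the noise amplification the regularization parameter controls.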
At block 370, implementing the second neural network 375 on the deblurred image IDB 220 results in the output image 230. The image enhancement neural network that removes artifacts from the deblurred image IDB 220, indicated as the second neural network 375, is well-known and is not detailed herein. End-to-end training, which refers to training the first neural network 355 and the second neural network 375 together, is discussed with reference to FIGS. 8 and 9. As FIG. 3 indicates, a mean square error (MSE) may be obtained between the output image 230 provided by the system 301 and the sharp image Im 315 to ascertain the effectiveness of the noise-adaptive non-blind deblurring performed by the system 301.
FIG. 4 shows the architecture 400 of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The first neural network 355 is a one-dimensional residual CNN. The input to the first neural network 355 is the input image IB 210, which is a blurred image with noise, and the output Nout may be a value of the regularization parameter λ or may be a set of weights corresponding to the values of a set of predefined regularization parameters λ. An SVD is performed on the input image IB 210, at 401, to obtain a one-dimensional vector of image singular value (SV) logarithms. The first convolutional layer 405 converts the input to 64 feature vectors. The next four stages 410 are cascades of five residual blocks. While five residual blocks are indicated for each stage 410, the exemplary architecture 400 does not preclude other numbers of residual blocks in alternate embodiments.
The known operations that are part of each cascade 420, as indicated for the exemplary cascade 420 in FIG. 4, include “Conv1d,” which refers to a filter sliding along the data across one dimension, “BatchNorm,” which refers to a batch normalization layer, and “ReLU,” which refers to a rectified linear unit. The number of filters Nf may be 64, 128, 256, or 512, as indicated for the different stages 410. As indicated at 430, a feature-number-doubling convolutional layer and a max-pooling layer follow each cascade 420. At the output, 1024 feature vectors are fed into a fully connected layer “FC” to produce the output Nout.
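The input-preparation step at 401 — reducing the image to a one-dimensional vector of singular-value logarithms — can be sketched as follows. The small epsilon guarding log(0) is an assumption added for robustness, not part of the disclosure.

```python
import numpy as np

# Sketch of step 401 in FIG. 4: the network input is the 1-D vector of
# logarithms of the image singular values, not the 2-D image itself.
rng = np.random.default_rng(2)
I_B = rng.random((64, 64))                   # stand-in for a blurred, noisy image
sv = np.linalg.svd(I_B, compute_uv=False)    # singular values, non-increasing
sv_log = np.log(sv + 1e-12)                  # epsilon avoids log(0); assumption
```

The singular-value spectrum summarizes how blur and noise redistribute image energy, which is why a one-dimensional network suffices downstream.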
FIGS. 5 and 6 detail the training of the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. As previously noted, end-to-end training refers to training both neural networks according to the arrangement shown in FIG. 3 . According to alternate embodiments shown in FIG. 5 or FIG. 6 , the first neural network 355 may be trained separately from the second neural network 375. When the first neural network 355 is trained individually, the ground truth is obtained by a function Q(λ), as detailed.
FIG. 5 shows a process flow 500 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The process flow 500 shown in FIG. 5 is used with one-dimensional blur and when the output of the first neural network 355 is a regularization parameter λ value. The previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image IB 210 are not discussed again. At block 510, a set of images is obtained as regularized deconvolution results for a set of values of the regularization parameter λ. At block 520, the function Q(λ) selects the optimal regularization parameter λ, λopt, from among the set of values of the regularization parameter λ. That is, the function Q(λ), at block 520, obtains a mean square distance (MSD) between each of the set of images (generated with the set of values of the regularization parameter λ) and the sharp image Im 315 and selects the regularization parameter λ corresponding with the image that results in the minimum MSD as λopt.
At block 530, the input image IB 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2. At block 350, according to the exemplary embodiment, implementing the first neural network 355 results in a single regularization parameter λreg. For a more precise comparison, the logarithms of the regularization parameter λreg and the optimal regularization parameter λopt are compared based on MSE. The process flow 500 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355.
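The Q(λ) ground-truth step can be sketched as a simple sweep over candidate values; the candidate λ grid and all names are illustrative assumptions, not from the disclosure.

```python
import numpy as np

def select_lambda_opt(I_B, K_B, I_m, lambdas):
    # Sketch of Q(lambda): deconvolve with each candidate lambda and keep
    # the one whose result is closest (minimum MSD) to the sharp image.
    U, s, Vt = np.linalg.svd(K_B)
    best_lam, best_msd = None, np.inf
    for lam in lambdas:
        I_DB = I_B @ (Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T)
        msd = np.mean((I_DB - I_m) ** 2)      # mean square distance
        if msd < best_msd:
            best_lam, best_msd = lam, msd
    return best_lam

# Example: with a noise-free blurred image, the smallest candidate wins.
n = 16
K_B = 0.6 * np.eye(n) + 0.4 * np.roll(np.eye(n), 1, axis=1)
I_m = np.random.default_rng(3).random((4, n))
lam_opt = select_lambda_opt(I_m @ K_B, K_B, I_m, [1e-3, 1e-2, 1e-1])
```

With noise added to the blurred image, the minimum-MSD λ moves to an intermediate value, which is the noise-dependent target the first neural network learns.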
FIG. 6 shows a process flow 600 for training the first neural network 355 used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The process flow 600 shown in FIG. 6 is used with one-dimensional blur and when the output of the first neural network 355 is a set of weights corresponding with a set of predefined values of regularization parameters λ. As in the discussion of FIG. 5 , previously described processes at blocks 310 to 340 to obtain the sharp image Im 315 and the blurred input image IB 210 are not discussed again. At block 610, a set of images is obtained as regularized deconvolution results. Each image in the set results from a particular set of weights corresponding with a set of predefined regularization parameter λ values.
At block 620, the function Q(λ) selects the set of weights that results in an image with a minimum MSD relative to the sharp image Im 315. The SoftMin function then rescales the weights to ensure that they are between 0 and 1. The result is the weighting function g(λ) such that the area under the curve is 1. At block 530, the input image IB 210 is subjected to an SVD to generate decomposition matrices, similarly to EQ. 2 (as discussed with reference to FIG. 5). At block 640, the weighted sums of images obtained according to the functions g(λ) and f(λ) are compared. The process flow 600 may be repeated for a large set of the sharp images Im 315 to train the first neural network 355.
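The SoftMin rescaling can be sketched as a softmax of the negated scores; the temperature parameter is an assumption added for the sketch, not specified in the disclosure.

```python
import numpy as np

def softmin_weights(scores, temperature=1.0):
    # Rescale per-lambda scores so the weights lie in (0, 1) and sum to 1;
    # lower scores (better matches to the sharp image) get larger weights.
    z = -np.asarray(scores, dtype=float) / temperature
    z -= z.max()                 # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

g = softmin_weights([0.5, 0.1, 0.9])   # middle entry has the lowest MSD
```

The resulting g(λ) plays the role of the ground-truth weighting function against which the network output f(λ) is compared at block 640.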
FIG. 7 shows additional processes 710 needed to generate ground truth in order to train the first neural network 355 when the blur is in two dimensions. The previously discussed processes will not be detailed again. The additional processes 710 include performing an FFT on the sharp image Im 315 that is an input to the additional processes 710. The additional processes 710 also include IFFTs at the outputs of the additional processes 710. At block 720, regularized deconvolution results are obtained in the Fourier space. The additional processes 710 may be used to train a first neural network 355 for two-dimensional blur whether the first neural network 355 provides a single regularization parameter λ or weights for a predefined set of regularization parameter λ values.
FIG. 8 shows an exemplary process flow 800 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The exemplary process flow 800 is used when the first neural network 355 outputs a regularization parameter λ. The exponential at block 810 of the log result provides the regularization parameter λ used by the regularization deconvolution at block 360. The bypass 820 is a pretraining bypass and allows bypassing the second neural network 375 (implemented at block 370). This facilitates a comparison, at block 380, of the deblurred image IDB 220 with the sharp image Im 315. When the bypass 820 is not used, the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350) and the second neural network 375 (implemented at block 370) is verified as part of the overall system 301.
FIG. 9 shows an exemplary process flow 900 for end-to-end training of the neural networks used to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The exemplary process flow 900 is used when the first neural network 355 outputs a set of weights corresponding to a predefined set of regularization parameter λ values. As indicated in FIG. 9 , implementing the first neural network 355, at block 350, results in the weights as a function f(λ) of the predefined set of regularization parameter λ values. At block 610, a set of deconvolved images is generated with each deconvolved image resulting from a different one of the predefined set of regularization parameter λ values. At block 910, a weighted sum of the deconvolved images (from block 610) is obtained based on the weights obtained from the first neural network 355.
Like the bypass 820 in FIG. 8 , the pretraining bypass 920 in FIG. 9 allows bypassing the second neural network 375 (implemented at block 370). This facilitates a comparison, at block 380, of the deblurred image IDB 220 with the sharp image Im 315. When the bypass 920 is not used, the output image 230 is compared with the sharp image Im 315 such that the result of both the first neural network 355 (implemented at block 350) and the second neural network 375 (implemented at block 370) is verified as part of the overall system 301.
FIG. 10 is a block diagram of the system 301 to perform noise-adaptive non-blind image deblurring according to one or more embodiments. The system 301 may be implemented by processing circuitry of the controller 120 of the vehicle 100, for example. A camera 110 provides the blurred input image IB 210. The camera 110 itself and/or sensors 130 that indicate motion of the vehicle 100 provide the PSF that indicates the cause of the blur and facilitates non-blind deblurring. At block 350, implementing a first neural network 355 provides a regularization parameter λ or weights corresponding to a predefined set of regularization parameter λ values. The first neural network 355 facilitates control of the noise in the input image IB 210 (i.e., noise-adaptive deblurring). At block 360, regularized deconvolution provides a deblurred image IDB 220. At block 370, implementing the second neural network 375 facilitates removing artifacts from the deblurred image IDB 220 to generate the output image 230. This output image 230 may be displayed in the vehicle 100 or used for object detection and classification.
While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.

Claims (17)

What is claimed is:
1. A method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise, the method comprising:
implementing, using processing circuitry, a first neural network on the input image to obtain one or more parameters, wherein the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN);
performing, using the processing circuitry, regularized deconvolution to obtain a deblurred image from the input image, wherein the regularized deconvolution uses the one or more parameters to control noise in the deblurred image; and
implementing, using the processing circuitry, a second neural network to remove artifacts from the deblurred image and provide an output image.
2. The method according to claim 1, wherein the implementing the first neural network results in one parameter that is a regularization parameter.
3. The method according to claim 1, wherein the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
4. The method according to claim 1, further comprising training the first neural network and the second neural network individually or together in an end-to-end arrangement.
5. The method according to claim 1, further comprising obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
6. The method according to claim 5, wherein the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
7. A non-transitory computer-readable storage medium storing instructions which, when processed by processing circuitry, cause the processing circuitry to implement a method of performing noise-adaptive non-blind deblurring on an input image that includes blur and noise, the method comprising:
implementing a first neural network on the input image to obtain one or more parameters, wherein the implementing the first neural network includes obtaining a one-dimensional vector of singular values from the input image and implementing a one-dimensional residual convolutional neural network (CNN);
performing regularized deconvolution to obtain a deblurred image from the input image, wherein the regularized deconvolution uses the one or more parameters to control noise in the deblurred image; and
implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
8. The non-transitory computer-readable storage medium according to claim 7, wherein the implementing the first neural network results in one parameter that is a regularization parameter.
9. The non-transitory computer-readable storage medium according to claim 7, wherein the implementing the first neural network results in two or more parameters that are weights corresponding with a set of predefined regularization parameters.
10. The non-transitory computer-readable storage medium according to claim 7, further comprising training the first neural network and the second neural network individually or together in an end-to-end arrangement.
11. The non-transitory computer-readable storage medium according to claim 7, further comprising obtaining, by the processing circuitry, a point spread function that defines the blur in the input image.
12. The non-transitory computer-readable storage medium according to claim 11, wherein the input image is obtained by a camera in a vehicle and the point spread function is obtained from one or more sensors of the vehicle or from the camera based on a calibration.
13. A vehicle comprising:
a camera configured to obtain an input image that includes blur and noise; and
processing circuitry configured to implement a first neural network on the input image to obtain one or more parameters, wherein the first neural network is configured to obtain a one-dimensional vector of singular values from the input image and implement a one-dimensional residual convolutional neural network (CNN), to perform regularized deconvolution to obtain a deblurred image from the input image, wherein the regularized deconvolution uses the one or more parameters to control noise in the deblurred image, and to implement a second neural network to remove artifacts from the deblurred image and provide an output image.
14. The vehicle according to claim 13, wherein the processing circuitry is configured to implement the first neural network and obtain one parameter that is a regularization parameter or obtain two or more parameters that are weights corresponding with a set of predefined regularization parameters.
15. The vehicle according to claim 13, wherein the processing circuitry is configured to train the first neural network and the second neural network individually or together in an end-to-end arrangement.
16. The vehicle according to claim 13, wherein the processing circuitry is configured to obtain a point spread function that defines the blur in the input image.
17. The vehicle according to claim 16, wherein the processing circuitry is configured to obtain the point spread function from one or more sensors of the vehicle that measure a movement of the vehicle or from a calibration of the camera.
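The independent claims recite the same three-stage pipeline: a first neural network takes a one-dimensional vector of singular values of the blurred input image and produces one or more regularization parameters, a regularized deconvolution uses those parameters to control noise amplification, and a second neural network removes residual artifacts. The sketch below illustrates the two classical stages of that pipeline under stated assumptions: the singular-value vector is computed with a standard SVD, and Tikhonov-style Fourier-domain deconvolution is used as one common instance of "regularized deconvolution" (the claims do not fix a specific formula). The fixed `lam` stands in for the value the claimed 1-D residual CNN would predict, and all function names are illustrative, not from the patent.

```python
import numpy as np

def singular_value_vector(image: np.ndarray) -> np.ndarray:
    """1-D vector of singular values of the (blurred, noisy) input image.

    In the claims, this vector is the input to a 1-D residual CNN that
    predicts the regularization parameter(s); only the SVD is shown here.
    """
    return np.linalg.svd(image, compute_uv=False)

def tikhonov_deconvolve(blurred: np.ndarray, psf: np.ndarray, lam: float) -> np.ndarray:
    """Fourier-domain Tikhonov-regularized deconvolution (one possible choice).

    X = conj(H) * Y / (|H|^2 + lam), where H is the transfer function of the
    point spread function; lam trades sharpness against noise amplification.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Demo: blur a synthetic image with a known PSF, then deconvolve.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
g = np.array([0.2, 0.6, 0.2])           # separable 3x3 blur kernel
psf = np.outer(g, g)
H = np.fft.fft2(psf, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))  # circular blur

features = singular_value_vector(blurred)  # would feed the claimed 1-D CNN
lam = 1e-6                                 # stand-in for the CNN's prediction
recovered = tikhonov_deconvolve(blurred, psf, lam)
```

With no added noise and a small `lam`, the deconvolution recovers the sharp image almost exactly; with noise present, a larger predicted `lam` suppresses the amplified high-frequency components at the cost of some sharpness, which is the trade-off the claimed first network adapts per image. The claimed second (artifact-removal) network has no classical analogue and is omitted.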
US17/099,995 2020-11-17 2020-11-17 Noise-adaptive non-blind image deblurring Active 2041-09-03 US11798139B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/099,995 US11798139B2 (en) 2020-11-17 2020-11-17 Noise-adaptive non-blind image deblurring
CN202110509845.3A CN114511451A (en) 2020-11-17 2021-05-11 Noise adaptive non-blind image deblurring
DE102021114064.1A DE102021114064A1 (en) 2020-11-17 2021-05-31 Noise-adaptive non-blind image sharpening

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/099,995 US11798139B2 (en) 2020-11-17 2020-11-17 Noise-adaptive non-blind image deblurring

Publications (2)

Publication Number Publication Date
US20220156892A1 US20220156892A1 (en) 2022-05-19
US11798139B2 true US11798139B2 (en) 2023-10-24

Family

ID=81345826

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/099,995 Active 2041-09-03 US11798139B2 (en) 2020-11-17 2020-11-17 Noise-adaptive non-blind image deblurring

Country Status (3)

Country Link
US (1) US11798139B2 (en)
CN (1) CN114511451A (en)
DE (1) DE102021114064A1 (en)

Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070165961A1 (en) * 2006-01-13 2007-07-19 Juwei Lu Method And Apparatus For Reducing Motion Blur In An Image
US20080137978A1 (en) * 2006-12-07 2008-06-12 Guoyi Fu Method And Apparatus For Reducing Motion Blur In An Image
WO2008106282A1 (en) * 2007-02-28 2008-09-04 Microsoft Corporation Image deblurring with blurred/noisy image pairs
DE102008036334A1 (en) * 2008-08-04 2009-04-09 Daimler Ag Method for operating a vehicle, e.g. a car, in which a point spread function is determined from a movement parameter of the vehicle and images captured by an image detection unit are reconstructed based on the point spread function
US20110033130A1 (en) * 2009-08-10 2011-02-10 Eunice Poon Systems And Methods For Motion Blur Reduction
US20110158541A1 (en) * 2009-12-25 2011-06-30 Shinji Watanabe Image processing device, image processing method and program
KR101181161B1 (en) * 2011-05-19 2012-09-17 한국과학기술원 An apparatus and a method for deblurring image blur caused by camera ego motion
US20140348441A1 (en) * 2012-03-29 2014-11-27 Nikon Corporation Algorithm for minimizing latent sharp image cost function and point spread function with a spatial mask in a fidelity term
US20140355901A1 (en) * 2012-03-29 2014-12-04 Nikon Corporation Algorithm for minimizing latent sharp image cost function and point spread function cost function with a spatial mask in a regularization term
US20160070979A1 (en) * 2014-09-05 2016-03-10 Huawei Technologies Co., Ltd. Method and Apparatus for Generating Sharp Image Based on Blurry Image
CN105447828A (en) * 2015-11-23 2016-03-30 武汉工程大学 Single-viewpoint image deblurring method for carrying out one-dimensional deconvolution along motion blur path
CN106485685A (en) * 2016-08-30 2017-03-08 重庆大学 Vehicle-mounted high-quality imaging method based on two-step restoration
US20170191945A1 (en) * 2016-01-01 2017-07-06 Kla-Tencor Corporation Systems and Methods for Defect Detection Using Image Reconstruction
US20180089809A1 (en) * 2016-09-27 2018-03-29 Nikon Corporation Image deblurring with a multiple section, regularization term
US20180158175A1 (en) * 2016-12-01 2018-06-07 Almalence Inc. Digital correction of optical system aberrations
CN108198151A (en) * 2018-02-06 2018-06-22 东南大学 Star-image deblurring method based on an improved frequency-domain Richardson-Lucy (RL) deconvolution algorithm
CN108416752A (en) * 2018-03-12 2018-08-17 中山大学 Method for removing motion blur from images based on a generative adversarial network
CN109636733A (en) * 2018-10-26 2019-04-16 华中科技大学 Fluorescent image deconvolution method and system based on deep neural network
US20190122378A1 (en) * 2017-04-17 2019-04-25 The United States Of America, As Represented By The Secretary Of The Navy Apparatuses and methods for machine vision systems including creation of a point cloud model and/or three dimensional model based on multiple images from different perspectives and combination of depth cues from camera motion and defocus with various applications including navigation systems, and pattern matching systems as well as estimating relative blur between images for use in depth from defocus or autofocusing applications
US20190205614A1 (en) * 2018-01-03 2019-07-04 Samsung Electronics Co., Ltd. Method and apparatus for recognizing object
US10360664B2 (en) * 2017-01-12 2019-07-23 Postech Academy-Industry Foundation Image processing apparatus and method using machine learning
US20200090322A1 (en) * 2018-09-13 2020-03-19 Nvidia Corporation Deep neural network processing for sensor blindness detection in autonomous machine applications
US20200097772A1 (en) * 2018-09-25 2020-03-26 Honda Motor Co., Ltd. Model parameter learning device, control device, and model parameter learning method
US20200160490A1 (en) * 2018-11-20 2020-05-21 Idemia Identity & Security France Method for deblurring an image
US20200193570A1 (en) * 2017-09-05 2020-06-18 Sony Corporation Image processing device, image processing method, and program
CN112241669A (en) * 2019-07-18 2021-01-19 杭州海康威视数字技术股份有限公司 Target identification method, device, system and equipment, and storage medium
CN108632502B (en) * 2017-03-17 2021-04-30 深圳开阳电子股份有限公司 Image sharpening method and device
US20210142146A1 (en) * 2019-11-13 2021-05-13 Micron Technology, Inc. Intelligent image sensor stack
US20210152735A1 (en) * 2019-11-14 2021-05-20 Microsoft Technology Licensing, Llc Image restoration for through-display imaging
US20210183015A1 (en) * 2018-09-13 2021-06-17 Samsung Electronics Co., Ltd. Image processing apparatus and operation method thereof
KR20210099456A (en) * 2020-02-04 2021-08-12 엘지전자 주식회사 Image processor, artificial intelligence apparatus and method for generating image data by enhancing specific function
WO2021169136A1 (en) * 2020-02-28 2021-09-02 深圳市商汤科技有限公司 Image processing method and apparatus, and electronic device and storage medium
CN114092416A (en) * 2021-11-04 2022-02-25 上海市特种设备监督检验技术研究院 DR blurred image blind deconvolution restoration method and system
TW202211154A (en) * 2020-08-07 2022-03-16 美商奈米創尼克影像公司 Deep learning model for noise reduction in low snr imaging conditions
US20220245776A1 (en) * 2021-02-01 2022-08-04 Microsoft Technology Licensing, Llc Simultaneously correcting image degradations of multiple types in an image of a face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin et al., "Deep Convolutional Neural Network for Inverse Problems in Imaging", IEEE Transactions on Image Processing, vol. 26, No. 9, Sep. 2017, pp. 4509-4522.

Also Published As

Publication number Publication date
DE102021114064A1 (en) 2022-05-19
US20220156892A1 (en) 2022-05-19
CN114511451A (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US20210350168A1 (en) Image segmentation method and image processing apparatus
Li et al. An all-in-one network for dehazing and beyond
Li et al. Aod-net: All-in-one dehazing network
Zhu et al. Removing atmospheric turbulence via space-invariant deconvolution
CN109035319B (en) Monocular image depth estimation method, monocular image depth estimation device, monocular image depth estimation apparatus, monocular image depth estimation program, and storage medium
US20150254814A1 (en) Globally dominant point spread function estimation
CN110874827B (en) Turbulent image restoration method and device, terminal equipment and computer readable medium
CN113409200B (en) System and method for image deblurring in a vehicle
Lau et al. Variational models for joint subsampling and reconstruction of turbulence-degraded images
CN112215773A (en) Local motion deblurring method and device based on visual saliency and storage medium
CN113793285A (en) Ultrafast restoration method and system for pneumatic optical effect target twin image
US11263773B2 (en) Object detection apparatus, object detection method, computer program product, and moving object
CN110111261B (en) Adaptive balance processing method for image, electronic device and computer readable storage medium
CN113344800B (en) System and method for training non-blind image deblurring module
US11798139B2 (en) Noise-adaptive non-blind image deblurring
CN112465712B (en) Motion blur star map restoration method and system
KR101362183B1 (en) Depth image noise removal apparatus and method based on camera pose
CN116012265B (en) Infrared video denoising method and device based on time-space domain adaptive filtering
JP7034837B2 (en) 3D convolution arithmetic unit, visual odometry system, and 3D convolution program
Braun et al. Direct tracking from compressive imagers: A proof of concept
KR102342940B1 (en) Method for One-Step L0 Smoothing via Deep Gradient Prior
CN114037636A (en) Multi-frame blind restoration method for correcting image by adaptive optical system
López-Martínez et al. Blind adaptive method for image restoration using microscanning
Javaran Blur length estimation in linear motion blurred images using evolutionary algorithms
Zappa et al. Estimation and compensation of motion blur for the reduction of uncertainty in DIC measurements of flexible bodies

Legal Events

Code Title: Description
AS Assignment: Owner name: GM GLOBAL TECHNOLOGY OPERATIONS LLC, MICHIGAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SLUTSKY, MICHAEL;REEL/FRAME:054388/0121; Effective date: 20201116
FEPP Fee payment procedure: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP Information on status, patent application and granting procedure in general: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status, patent application and granting procedure in general: NON FINAL ACTION MAILED
STPP Information on status, patent application and granting procedure in general: FINAL REJECTION MAILED
STPP Information on status, patent application and granting procedure in general: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP Information on status, patent application and granting procedure in general: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
STPP Information on status, patent application and granting procedure in general: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF Information on status, patent grant: PATENTED CASE