CN113344800A - System and method for training a non-blind image deblurring module - Google Patents

System and method for training a non-blind image deblurring module

Info

Publication number
CN113344800A
Authority
CN
China
Prior art keywords
image
module
deconvolution
regularization
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110230767.3A
Other languages
Chinese (zh)
Other versions
CN113344800B (en)
Inventor
M. Slutsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GM Global Technology Operations LLC filed Critical GM Global Technology Operations LLC
Publication of CN113344800A publication Critical patent/CN113344800A/en
Application granted granted Critical
Publication of CN113344800B publication Critical patent/CN113344800B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73
    • G06T5/60
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • G06T2207/10024 Color image
    • G06T2207/10048 Infrared image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20172 Image enhancement details

Abstract

Methods and systems for training a non-blind deblurring module are disclosed. Unblurred test images and blurred test images are received, wherein each blurred test image is related to a corresponding one of the unblurred test images by a blur kernel term and a noise term. A regularization deconvolution sub-module and a convolutional neural network are co-trained by adjusting regularization parameters of the regularization deconvolution function and weights of the convolutional neural network so as to minimize a cost function representing the difference between each deblurred output image and the corresponding unblurred test image.

Description

System and method for training a non-blind image deblurring module
Technical Field
The present disclosure relates generally to non-blind image deblurring, and more particularly to methods and systems for training a non-blind image deblurring module.
Background
The image captured by a camera may be blurred for a variety of reasons. For example, the camera may be moving or shaking during image capture. Image blur may also be caused by optical aberrations. Chromatic blur is also common, arising because different wavelengths are refracted to different degrees. Non-blind deconvolution techniques are known whereby a sharper, deblurred output image is obtained by processing a blurred input image. According to such deconvolution techniques, a blurred input image is transformed into a deblurred output image using a blur kernel. The blur kernel may be determined from a point spread function representing the nature of the expected blur effect. In the case where the camera is attached to a moving vehicle, the point spread function can be derived from knowledge of the vehicle motion, and the deblurring kernel determined based on the point spread function. That is, blur sources are generally well understood in imaging, and the blur process can be well modeled using Point Spread Functions (PSFs), either measured directly or derived from knowledge of the blur physics.
Most blurred images include noise as well as blur, and the noise further complicates the deblurring problem. Typical techniques usually remove the blur but introduce other imperfections. Brute-force direct application of deep neural networks may be successful, but only for relatively weak blur. In addition, because deconvolution behavior varies greatly with the noise level and with the blur kernel, purely deep-learning-based methods are limited in accuracy and computation speed.
Deblurring of an image can generally be reduced to deconvolution using a blur kernel. Deconvolution is an ill-posed inverse problem and should therefore be regularized. The regularization parameter increases the stability of the solution. However, optimizing the regularization parameter is a difficult task: if regularization is too strong, the output remains blurred, while regularization that is too weak results in noise amplification.
Accordingly, it is desirable to provide systems and methods for non-blind image deblurring that operate efficiently by using regularized deconvolution techniques in which the regularization parameter is efficiently and optimally selected to improve the deblurred output image. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
Disclosure of Invention
According to an exemplary embodiment, a method of training a non-blind deblurring module is provided. The non-blind deblurring module includes a regularization deconvolution sub-module and a convolutional neural network sub-module. The regularization deconvolution sub-module is configured to perform a regularization deconvolution function on the blurred input image to produce a deconvolution image that may have image artifacts. The convolutional neural network sub-module is configured to receive the deconvolved image as an input to a convolutional neural network and remove image artifacts, thereby providing a deblurred output image. The method includes receiving, via at least one processor, an unblurred test image and a blurred test image. Each blurred test image is correlated with a corresponding one of the unblurred test images by a blur kernel term and a noise term. The method includes co-training, via at least one processor, a regularized deconvolution sub-module and a convolutional neural network. Co-training includes adjusting the regularization parameters of the regularized deconvolution function and the weights of the convolutional neural network to minimize a cost function representing the difference between each deblurred output image and a corresponding one of the unblurred test images, thereby providing trained regularization parameters, trained weights, and a trained non-blind deblurring module. The method further comprises the following steps: receiving, via the at least one processor, a blurred input image from an imaging device; deblurring, via the at least one processor, the blurred input image using a trained non-blind deblurring module; and outputting, via the at least one processor, the deblurred output image.
In an embodiment, the deconvolution function is a wiener deconvolution function. In an embodiment, the deconvolution function is a Tikhonov-regularized deconvolution function.
In an embodiment, the method includes deblurring the blurred input image using a trained non-blind deblurring module, thereby producing a deblurred output image. The regularization deconvolution sub-module performs a regularization deconvolution function on the blurred input image to produce a deconvolution image that may have image artifacts, the regularization deconvolution function including trained regularization parameters, and a convolutional neural network sub-module processes the deconvolved image through a convolutional neural network to remove the image artifacts, the convolutional neural network including trained weights.
In an embodiment, the convolutional neural network outputs a residual, and the trained non-blind deblurring module adds the residual to the deconvolved image, thereby producing a deblurred output image.
In an embodiment, the method includes adjusting, via the at least one processor, the regularization parameters and the weights using a back propagation algorithm. In an embodiment, the back propagation algorithm adjusts the regularization parameter based on the gradient fed back from the CNN and on the derivative, with respect to the regularization parameter, of the deconvolved image that may have image artifacts.
In an embodiment, the at least one processor receives the unblurred test images and artificially generates the blurred test images from the unblurred test images using a blur kernel function and a noise function.
In an embodiment, a blurred input image is received from an imaging device mounted to a vehicle.
In an embodiment, a vehicle includes a vehicle controller. The method includes a vehicle controller controlling at least one vehicle function based on the deblurred output image.
According to another exemplary embodiment, a system for training a non-blind deblurring module is provided. The system includes a non-blind deblurring module that includes a regularized deconvolution sub-module and a convolutional neural network sub-module. The regularization deconvolution sub-module is configured to perform a regularization deconvolution function on the blurred input image to produce a deconvolution image that may have image artifacts. The convolutional neural network sub-module is configured to receive the deconvolved image as an input to a convolutional neural network and remove image artifacts, thereby providing a deblurred output image. The system includes at least one processor configured to execute program instructions. The program instructions are configured to cause at least one processor to receive an unblurred test image and a blurred test image. Each blurred test image is correlated with a corresponding one of the unblurred test images by a blur kernel term and a noise term. The program instructions are configured to cause the at least one processor to co-train the regularizing deconvolution sub-module and the convolutional neural network by adjusting regularization parameters of the regularizing deconvolution function and weights of the convolutional neural network to minimize a cost function. The cost function represents the difference between each deblurred output image and a corresponding one of the unblurred test images, providing trained regularization parameters, trained weights, and a trained non-blind deblurring module. The program instructions also cause the at least one processor to: receiving a blurred input image from the imaging device, deblurring the blurred input image using a trained non-blind deblurring module, and outputting a deblurred output image.
In an embodiment, the deconvolution function is a wiener deconvolution function. In an embodiment, the deconvolution function is a Tikhonov-regularized deconvolution function.
In an embodiment, the trained non-blind deblurring module is configured to deblur the blurred input image, thereby producing a deblurred output image. The regularization deconvolution sub-module is configured to perform a regularization deconvolution function on the blurred input image to produce a deconvolution image that may have image artifacts using trained regularization parameters. The convolutional neural network sub-module is configured to process the deconvolved image through a convolutional neural network to remove image artifacts using the trained weights.
In an embodiment, the convolutional neural network is configured to output residuals, and the trained non-blind deblurring module is configured to add the residuals to the deconvolved image, thereby producing a deblurred output image.
In an embodiment, the program instructions are configured to cause the at least one processor to adjust the regularization parameters and the weights using a back propagation algorithm. In an embodiment, the program instructions are configured to cause the at least one processor to adjust the regularization parameter based on the gradient fed back from the CNN and on the derivative, with respect to the regularization parameter, of the deconvolved image that may have image artifacts.
In an embodiment, the program instructions are configured to cause the at least one processor to receive an unblurred test image and to artificially generate a blurred test image on the unblurred test image using a blur kernel function and a noise function.
In an embodiment, the system comprises a vehicle. The vehicle includes a camera and a non-blind deblurring module. The non-blind deblurring module is configured to receive a blurred input image from the camera.
In an embodiment, a vehicle includes a control module configured to control at least one vehicle function based on a deblurred output image.
Drawings
The present disclosure will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
FIG. 1 is a functional block diagram of a system for non-blind deblurring, according to an exemplary embodiment;
FIG. 2 is a functional block diagram of data processing in a regularized deconvolution sub-module in accordance with an illustrative embodiment;
FIG. 3 is a functional block diagram of data processing in another regularized deconvolution sub-module in accordance with an illustrative embodiment;
FIG. 4 is a functional block diagram of a system for training a non-blind deblurring module, according to an exemplary embodiment;
FIG. 5 is a functional block diagram representing data transformations processed in a non-blind deblurring module in accordance with an illustrative embodiment;
FIG. 6 is a flowchart of a method of training and using a non-blind deblurring module, according to an example embodiment; and
FIG. 7 shows a blurred input image, a deconvolved image, and a deblurred output image produced using the system of FIG. 1, according to an exemplary embodiment.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses thereof. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, alone or in any combination, including but not limited to: an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Embodiments of the disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, embodiments of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure can be practiced in conjunction with any number of systems, and that the systems described herein are merely exemplary embodiments of the disclosure.
For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the disclosure.
Described herein are systems and methods for non-blind image deblurring based on a hybrid system combining a classical regularized deconvolution module and a Convolutional Neural Network (CNN). The parameters of the regularization module are determined along with the CNN weights by a co-training procedure. The training data include any relevant image database as well as system-generated blurred and noisy images. It has been found that the systems and methods disclosed herein can produce artifact-free image deblurring even for almost arbitrarily strong blur in the input image. The systems and methods disclosed herein are relevant to a variety of sensing scenarios: long-exposure motion blur, chromatic aberration, and the like.
FIG. 1 shows a system 10 for non-blind image deblurring. The system 10 includes a vehicle 12, an imaging device 14 mounted to the vehicle 12, vehicle sensors 16, a vehicle controller 18, and a usage processing system 26. The usage processing system 26 includes a non-blind deblurring module 34 implemented by a processor 70 and computer program instructions 74 stored in a memory 72. In an embodiment, the imaging device 14 is configured to capture an image as the blurred input image 24, which is deblurred using a regularization deconvolution function 44 and a Convolutional Neural Network (CNN) 42 included in the non-blind deblurring module 34 to produce the deblurred output image 20. The deblurred output image 20 is used by the vehicle controller 18 to control one or more functions of the vehicle 12.
The system 10 is shown in the context of (e.g., contained within) a vehicle 12, particularly an automobile. However, the system 10 is useful in other vehicle environments, such as aircraft, marine vessels, and the like. The system 10 is also applicable outside of vehicle environments, to any electronic device that captures images prone to blur, such as mobile phones, cameras, and tablet devices. The present disclosure relates particularly, but not exclusively, to blur due to motion, which typically occurs in a vehicular environment, particularly at night or at other times of extended exposure time. The system 10 is also useful for sources of blur other than motion, such as blur caused by optical aberrations or color filtering.
In various embodiments, the vehicle 12 is an autonomous vehicle, and the system 10 is incorporated into the autonomous vehicle 12. However, the system 10 is useful in any type of vehicle (autonomous or otherwise) that includes an imaging device 14 producing images prone to blurring. The autonomous vehicle 12 is, for example, a vehicle that is automatically controlled to transport passengers from one location to another. The vehicle 12 is depicted in the illustrated embodiment as a passenger car, but it should be understood that any other vehicle, including motorcycles, trucks, Sport Utility Vehicles (SUVs), Recreational Vehicles (RVs), boats, airplanes, etc., may be used. In the exemplary embodiment, the autonomous vehicle 12 corresponds to a so-called level four or level five automation system. A level four system indicates "high automation," referring to the driving-mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A level five system indicates "full automation," referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.
In an embodiment, the vehicle 12 includes a vehicle controller 18 configured to control one or more vehicle functions based on images from the imaging device 14. The vehicle controller 18 may include one or more advanced driver assistance systems configured to provide electronic driving assistance based on images from the imaging device 14. The vehicle controller 18 may include an autonomous or semi-autonomous driving system configured to control the vehicle 12 through one or more actuation systems (e.g., propulsion, braking, and steering systems) based on input images from the imaging device 14. In all such embodiments, better deblurring of the input image allows the vehicle controller 18 to control the vehicle 12 more safely.
According to various embodiments, the system 10 includes the imaging device 14 (e.g., a front, rear, or side mounted camera), the vehicle sensors 16, the vehicle controller 18, and the usage processing system 26. The usage processing system 26 is configured, by programming instructions 74 executed on the processor 70 (as described further below), to receive the blurred input image 24 from the imaging device 14 and to perform on it the regularization deconvolution function 44, which depends on the regularization parameter. The resulting deconvolved image 40 is passed through a Convolutional Neural Network (CNN) 42 to remove any image artifacts resulting from the deconvolution function 44. According to the present disclosure, the weights of the CNN 42 and the regularization parameter of the deconvolution function 44 are co-trained; that is, the regularization parameter is part of the back propagation chain when training the CNN 42. The co-trained network layers include the CNN 42 layers and the deconvolution layer.
With continued reference to FIG. 1, the usage processing system 26 includes at least one processor 70, memory 72, and the like. The processor 70 may execute the program instructions 74 stored in the memory 72. The processor 70 may refer to a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a special purpose processor on which the methods and functions according to the present disclosure are performed. The memory 72 may be comprised of volatile and/or non-volatile storage media. For example, the memory 72 may include Read Only Memory (ROM) and/or Random Access Memory (RAM). The memory 72 stores at least one instruction executed by the processor 70 to implement the blocks, modules, and method steps described herein. Although the modules 28, 30, and 34 are shown separately from the processor 70, the memory 72, and the programming instructions 74, this is purely for illustration; in practice, the modules 28, 30, and 34 are implemented by programming instructions 74 stored in the memory 72 and executed by the one or more processors 70 of the usage processing system 26.
The imaging device 14 is any suitable camera or video device that produces images. For purposes of this disclosure, an image is assumed to include blur (and is therefore labeled as a blurred input image 24) due to motion blur or other types of sources of blur. The imaging device 14 may be a color imaging device or a grayscale imaging device. The imaging device 14 may operate in the visible spectrum and/or the infrared spectrum. The imaging device 14 may produce a one-, two-, or three-dimensional (1D, 2D, or 3D) image that is used as the blurred input image 24.
The vehicle sensors 16 include various sensors used by the vehicle controller 18 to control operation of the vehicle 12. Of particular relevance to the present disclosure are speed sensors (e.g., wheel speed sensors), acceleration sensors (accelerometers and gyroscopes), and other vehicle sensors 16 that provide data 22 indicative of sensed parameters of vehicle motion. As further described herein, the motion parameter data 22 is used by the usage processing system 26 to determine a Point Spread Function (PSF). The point spread function determined by the processing system is used in the deconvolution function 44 to deblur the blurred input image 24. Although much of this disclosure is described in terms of a PSF determined dynamically from the motion parameter data 22, the disclosed systems and methods may be applied to other applications as long as the blur model is known. Sometimes the blur is inherent in the imaging device 14 itself, and the imaging device can be calibrated to measure the PSF directly, without reference to external sensing data. The present disclosure may also find use in such applications.
The blur of the input image can be mathematically represented by the following equation:
$I_B = I \ast K_B$  (Equation 1)

where $\ast$ denotes convolution (expressible as multiplication by the blur matrix), I_B is the blurred input image 24, I is the unknown, unblurred image corresponding to the blurred input image 24, and K_B is a matrix (blur kernel) that models a Point Spread Function (PSF) describing the nature of the blur in the blurred input image 24. Since the present disclosure relates to non-blind deblurring, the PSF is assumed to be known, and the blur kernel K_B can thus be derived from the PSF. PSFs for all blur modes are known in the art, including blur caused by motion of the imaging device 14 during exposure. In principle, the inverse of the blur kernel, $K_B^{-1}$, multiplied by the blurred input image 24 would resolve the unblurred image I. However, noise in the blurred input image 24 makes such direct deconvolution impractical. During deconvolution, the noise component is amplified in an uncontrolled manner, which may result in a deconvolved image that is less sharp (more blurred) than the original blurred input image 24. One solution to this noise amplification problem is to deblur the blurred input image 24 using a regularized inverse of the blur kernel.
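As an illustration of Equation 1, the forward blur model can be simulated in a few lines. The following is a minimal sketch, not code from the patent: the function name and default noise level are assumptions, and the additive noise term anticipates the noise model discussed below.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_image(clear_image: np.ndarray, psf: np.ndarray,
               noise_sigma: float = 0.01) -> np.ndarray:
    """Simulate I_B = I * K_B (Equation 1) plus an additive noise term."""
    blurred = convolve2d(clear_image, psf, mode="same", boundary="symm")
    noise = np.random.normal(0.0, noise_sigma, size=blurred.shape)
    return blurred + noise
```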
Such regularized deconvolution functions are known in the art, and two examples are provided below. A regularized deconvolution function relies on a regularization parameter λ to mitigate the effects of noise. The regularization parameter λ has a large influence on deblurring quality: if λ is too low, noise may have a significant impact on the output image; if λ is too high, residual blur remains in the output image. According to the present disclosure, the regularization parameter λ is determined as part of a co-training process together with the CNN weights of the CNN 42.
Referring to FIG. 1, the regularization deconvolution sub-module 36 receives the blurred input image 24 and operates the regularization deconvolution function 44 on it. The regularization deconvolution function 44 includes a regularized inverse blur kernel (matrix) that is determined based on the trained regularization parameter λ. The usage processing system 26 includes a point spread function module 28 that receives the motion parameter data 22, including at least velocity and optionally acceleration data, to determine Point Spread Function (PSF) data 31 representing the PSF. The PSF data 31 may vary depending on vehicle motion (e.g., the faster the vehicle, the greater the spread or blur defined by the PSF) and on camera data 76 representing relevant camera parameters (e.g., exposure time) obtained from the imaging device 14. The PSF data 31 is determined by the point spread function module 28, which includes a modeling function for determining the expected PSF based on the motion parameter data 22 and the camera data 76. The blur kernel determination module 30 converts the PSF defined in the PSF data 31 into matrix form and outputs corresponding blur kernel data 32. Where motion-induced blur is not the source of blur, the PSF need not be determined in this way.
The regularization deconvolution sub-module 36 receives the blur kernel data 32 representing the blur kernel K_B and utilizes it in the regularized deconvolution of the blurred input image 24 to generate the deconvolved image 40. It should be understood that many PSFs, and many methods of determining them, may be used, depending on the nature of the blur source. Although the present disclosure is largely described in terms of motion blur and in association with vehicular applications, other sources of blur, and thus other ways of determining the PSF, may be incorporated into the present disclosure depending on the application. Therefore, the point spread function module 28 need not rely on the motion parameter data 22 or the camera data 76 (specifically, the exposure time). For example, when imaging over a wide range of wavelengths (e.g., more than one color), the combined image often blurs because each of the multiple wavelength bands is refracted to a different degree by the optics of the imaging device 14. A point spread function may be defined to reverse this chromatic blur.
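For the motion-blur case, a simple constant-velocity model gives a box-shaped PSF whose length is set by vehicle speed, exposure time, and pixel scale. The sketch below is an assumed illustration of the kind of computation the point spread function module 28 might perform; the function name, argument names, and the constant-velocity assumption are not taken from the patent.

```python
import numpy as np

def motion_psf_1d(speed_mps: float, exposure_s: float,
                  meters_per_pixel: float, max_len: int = 64) -> np.ndarray:
    """Box PSF for 1D motion blur: the scene smears uniformly over the
    pixels traversed during the exposure (constant-velocity assumption)."""
    smear_px = int(round(speed_mps * exposure_s / meters_per_pixel))
    length = max(1, min(max_len, smear_px))
    return np.ones(length) / length  # normalized so image energy is preserved
```

Note the scaling built into this model: doubling either the speed or the exposure time doubles the smear length, which is why the PSF data varies with both the motion parameter data 22 and the camera data 76.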
FIG. 2 shows an exemplary data flow diagram of the regularization deconvolution function 44 used by the regularization deconvolution sub-module 36 in the case of one-dimensional motion blur. In this case, the regularization deconvolution function 44 is a Tikhonov-regularized deconvolution function. Following FIG. 2, the PSF module 28 generates PSF data 31 representing a PSF, which is converted into the blur kernel K_B defined by the blur kernel data 32. Singular Value Decomposition (SVD) is performed on the blur kernel according to Equation 2:

$K_B = U S V^T$  (Equation 2)

The regularized inverse of the blur kernel is then:

$K_{B,\lambda}^{-1} = V \, S \, (S^2 + \lambda^2 \mathrm{Id})^{-1} \, U^T$  (Equation 3)

where Id denotes the identity matrix, so that I (the unblurred version of the blurred input image 24) is estimated as:

$I = V \, S \, (S^2 + \lambda^2 \mathrm{Id})^{-1} \, U^T I_B$  (Equation 4)

where I_B is the blurred input image 24.

With continued reference to FIG. 2, since the regularized inverse of the blur kernel K_B is being applied, the regularization deconvolution function 44 is composed of the decomposition matrices 80, 82, 84. The matrix 82 is a function of the trained regularization parameter λ 78 and has the form $S (S^2 + \lambda^2 \mathrm{Id})^{-1}$. The blurred input image 24 is multiplied by the decomposition matrices 80, 82, 84 as part of the Tikhonov-regularized deconvolution function 44 to provide the deconvolved image 40.
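Equations 2 through 4 translate directly into code. The sketch below assumes the 1D blur is expressed as an n x n blur matrix K_B acting on the columns of the image, as in the matrix formulation above; the function name is hypothetical.

```python
import numpy as np

def tikhonov_deconvolve(I_B: np.ndarray, K_B: np.ndarray, lam: float) -> np.ndarray:
    """I = V S (S^2 + lam^2 Id)^-1 U^T I_B (Equations 2-4)."""
    U, s, Vt = np.linalg.svd(K_B)      # K_B = U S V^T (Equation 2)
    filt = s / (s**2 + lam**2)         # diagonal of S (S^2 + lam^2 Id)^-1
    return Vt.T @ (filt[:, None] * (U.T @ I_B))
```

For singular values s much larger than λ the filter approaches 1/s (plain inversion); for s much smaller than λ it approaches zero. This is exactly the noise-suppression behavior that the regularization parameter controls.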
FIG. 3 shows an exemplary data flow diagram of the regularization deconvolution function 44 used by the regularization deconvolution sub-module 36 in the case of 2D motion blur. In this case, the regularization deconvolution function 44 is a Wiener-regularized deconvolution function, defined by the following equations:

$I = \mathcal{F}^{-1}\left[ \hat{K}_{B,\lambda}^{-1} \, \hat{I}_B \right]$  (Equation 5)

$\hat{K}_{B,\lambda}^{-1} = \dfrac{\hat{K}_B^{*}}{|\hat{K}_B|^2 + \lambda^2}, \qquad \hat{I}_B = \mathcal{F}(I_B)$  (Equation 6)

$\hat{K}_B = \mathcal{F}(K_B)$  (Equation 7)

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fast Fourier Transform (FFT) and its inverse, and $\hat{K}_B^{*}$ is the complex conjugate of $\hat{K}_B$. Following FIG. 3, the PSF module 28 generates PSF data 31 representing the PSF, which is converted into the blur kernel K_B defined by the blur kernel data 32. The blur kernel K_B undergoes a first Fast Fourier Transform (FFT) operation 88, as required by Equation 7. Similarly, the blurred input image 24 is subjected to a second FFT operation 90 and is then, at operation 92, multiplied by the regularized inverse of the blur kernel, $\hat{K}_{B,\lambda}^{-1}$, which is a function of the trained regularization parameter λ 78. The Wiener-regularized deconvolution function also includes an inverse FFT operation 94 applied to the output of operation 92. The deconvolved image 40 is thereby output by the regularization deconvolution sub-module 36.
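A NumPy rendering of Equations 5 through 7 is compact, since the FFT operations 88, 90, 92, and 94 map one-to-one onto code. This is a sketch under the usual periodic-boundary assumption of FFT-based deconvolution; the function name is hypothetical.

```python
import numpy as np

def wiener_deconvolve(I_B: np.ndarray, psf: np.ndarray, lam: float) -> np.ndarray:
    """Wiener-regularized deconvolution (Equations 5-7)."""
    K_hat = np.fft.fft2(psf, s=I_B.shape)             # FFT of kernel (op 88, Eq. 7)
    I_hat = np.fft.fft2(I_B)                          # FFT of blurred image (op 90)
    W = np.conj(K_hat) / (np.abs(K_hat)**2 + lam**2)  # regularized inverse (op 92)
    return np.real(np.fft.ifft2(W * I_hat))           # inverse FFT (op 94)
```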
The deconvolved image 40 will typically have artifacts, which are inherent in the regularized deconvolution process. For this reason, the non-blind deblurring module 34 includes a CNN sub-module 38 having a CNN 42 configured to be trained to remove any image artifacts in the deconvolved image 40. In one embodiment, the CNN 42 is a residual U-net. The CNN 42 is trained to remove artifacts while the regularization deconvolution sub-module 36 is trained to determine the optimal regularization parameter λ.
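The patent does not give the architecture of the residual U-net, so the sketch below is a deliberately small stand-in (one downsampling stage, no skip concatenations, grayscale input) that shows only the residual structure: the network predicts the artifact residual 48, which is added back to the deconvolved image 40 at the summing function 46 of FIG. 4. The choice of PyTorch and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualArtifactCNN(nn.Module):
    """Tiny stand-in for the residual U-net CNN 42: it outputs the artifact
    residual 48, which the caller adds to the deconvolved image 40."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, deconvolved: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(deconvolved))  # residual 48
```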
FIG. 4 provides a functional block diagram of a training processing system 50 for training the non-blind deblurring module 34, according to an exemplary embodiment. The training processing system 50 includes an image database 56 that serves as a source of unblurred test images 60. The point spread function generation module 52 generates an artificial PSF contained in the artificial PSF data 54. The artificial PSF is generated to be similar to the PSFs that would occur during use of the non-blind deblurring module 34. For example, in a vehicular application, randomized PSFs representing 1D or 2D motion blur are generated based on the exposure times and motion parameters likely to be encountered during operation of the vehicle 12. The noise generation module 62 provides artificial noise data 64 representative of the noise to be applied to the unblurred test image 60. The noise generation module 62 may utilize a Gaussian function. The blurred image generation module 58 receives the unblurred test image 60, the artificial PSF data 54, and the artificial noise data 64, and generates a blurred test image 66 corresponding to the unblurred test image 60 but including blur based on the artificial PSF data 54 and noise based on the artificial noise data 64.
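The data-generation path of FIG. 4 can be sketched as follows. The speed, exposure, noise, and pixel-scale ranges are illustrative assumptions rather than values from the patent; motion_psf_1d and blur_image refer to the earlier sketches.

```python
import numpy as np

def make_training_pair(clear_image: np.ndarray, rng: np.random.Generator):
    """Pair an unblurred test image 60 with an artificial blurred test image 66."""
    speed = rng.uniform(5.0, 30.0)       # m/s, assumed plausible vehicle speeds
    exposure = rng.uniform(0.01, 0.05)   # s, assumed long (e.g., night) exposures
    psf = motion_psf_1d(speed, exposure, meters_per_pixel=0.02)
    sigma = rng.uniform(0.0, 0.02)       # Gaussian noise level (noise data 64)
    blurred = blur_image(clear_image, psf[None, :], noise_sigma=sigma)
    return clear_image, blurred, psf
```

With rng = np.random.default_rng(0), repeated calls yield a reproducible stream of training pairs.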
The training processing system 50 includes a processor 91 and a memory 93 that stores programming instructions 95. The processor 91 executes the programming instructions 95 to carry out the training process described below and to implement the various modules of the training processing system 50. During the training process, the regularization deconvolution sub-module 36 is configured to receive the blurred test image 66 and deconvolve it (as described elsewhere herein) into a sharp but potentially artifact-laden deconvolved image 40. Exemplary artifacts include unpleasant ringing artifacts near strong edges. The CNN sub-module 38 is configured to receive the deconvolved image and generate an output residual 48 after passing the deconvolved image through the layers of the CNN 42. Once the CNN 42 has been trained, the residual 48 represents the image artifacts. The residual 48 and the deconvolved image 40 are combined at a summing function 46 to produce the deblurred output image 20. The training process performed by the training processing system 50 adjusts the weights of the CNN 42 and the regularization parameter λ so that the deblurred output image 20 matches the unblurred test image 60 as closely as possible. Specifically, the cost function module 96 implements a cost function 100 to generate cost data 98 representing the difference between the deblurred output image 20 and the unblurred test image 60.
The training processing system 50 co-adjusts the weights of the CNN 42 and the regularization parameter λ of the regularization deconvolution sub-module 36 in an iterative process to minimize the cost data 98. In an embodiment, the training processing system 50 utilizes a back propagation algorithm. For each pair of unblurred test image 60 and deblurred output image 20, the back propagation algorithm calculates the gradient of the cost function 100 with respect to the weights of the CNN 42 and with respect to the regularization parameter λ. This gradient method co-trains the layers of the CNN 42 and the regularized deconvolution layer, updating the weights and the regularization parameter λ so as to minimize the cost function 100 (e.g., to minimize the cost value defined by the cost data 98). The back propagation algorithm computes the gradient of the cost function 100 with respect to each weight and to the regularization parameter λ by the chain rule, one layer at a time, iterating backward from the last layer of the CNN 42 (closest to the output) to the first layer of the CNN 42, and ending with the regularized deconvolution layer.
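In a modern autodiff framework, the co-training described above amounts to registering λ as a trainable parameter of the deconvolution layer, so the same backward pass that updates the CNN weights also updates λ. The following is a minimal PyTorch sketch, assuming an MSE cost, a hypothetical data loader of (clear, blurred, psf) batches shaped (N, 1, H, W), and the ResidualArtifactCNN stand-in from the earlier sketch; it illustrates the mechanism rather than the patented implementation.

```python
import torch
import torch.nn as nn

class TrainableWienerDeconv(nn.Module):
    """Regularized deconvolution layer 102 with lambda as a learnable parameter."""
    def __init__(self, lam0: float = 0.05):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(lam0))  # regularization parameter

    def forward(self, blurred: torch.Tensor, psf: torch.Tensor) -> torch.Tensor:
        K = torch.fft.fft2(psf, s=blurred.shape[-2:])
        B = torch.fft.fft2(blurred)
        W = torch.conj(K) / (K.abs() ** 2 + self.lam ** 2)
        return torch.fft.ifft2(W * B).real           # deconvolved image 40

deconv, cnn = TrainableWienerDeconv(), ResidualArtifactCNN()
opt = torch.optim.SGD(list(deconv.parameters()) + list(cnn.parameters()), lr=1e-3)

for clear, blurred, psf in loader:                   # hypothetical data loader
    deconvolved = deconv(blurred, psf)               # deconvolution layer 102
    deblurred = deconvolved + cnn(deconvolved)       # residual 48 + summing 46
    loss = nn.functional.mse_loss(deblurred, clear)  # cost function 100 (assumed MSE)
    opt.zero_grad()
    loss.backward()                                  # gradient also reaches lambda
    opt.step()                                       # co-trains weights and lambda
```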
The layer-based view of training the non-blind deblurring module 34 is shown in FIG. 5. The blurred test image 66 passes through the regularized deconvolution layer 102 to produce the deconvolved image 40, which then passes through the CNN layers 104 to produce the residual 48, which is added to the deconvolved image 40 to provide the deblurred output image 20. In accordance with the present disclosure, the CNN layers 104 and the regularized deconvolution layer 102 are trained by adjusting the weights of the CNN layers 104 and the regularization parameter λ of the regularized deconvolution layer 102 during the same back propagation process.
In a standard gradient-based cost function optimization scheme (e.g., SGD optimization), the update relationship for any Neural Network (NN) parameter θ (from step n-1 to step n) is:

$\theta_n = \theta_{n-1} - \eta \dfrac{\partial L}{\partial \theta}$  (Equation 9)

where L is the cost function of the neural network and η is the learning rate. In general, since the dependence of the cost function on any neural network parameter θ is expressed through a chain of nested functions, the required gradient is calculated by a back propagation algorithm. Specifically, if:

$L = g_0(g_1(g_2(\dots(g_N(\theta)))))$  (Equation 10)

then

$\dfrac{\partial L}{\partial \theta} = \dfrac{\partial g_0}{\partial g_1} \, \dfrac{\partial g_1}{\partial g_2} \cdots \dfrac{\partial g_N}{\partial \theta}$  (Equation 11)
With respect to the present disclosure, the total cost calculated by the cost function module 96, as a function of the regularization parameter λ, is:

$L = L(J_{DB}(\lambda))$  (Equation 12)

That is, the cost function 100 is a function of the deconvolved image J_DB 40, which is itself a function of the regularization parameter λ. The derivative of the cost L with respect to the regularization parameter λ, as required by Equation 9, is:

$\dfrac{\partial L}{\partial \lambda} = \dfrac{\partial L}{\partial J_{DB}} \, \dfrac{\partial J_{DB}}{\partial \lambda}$  (Equation 13)

The factor $\partial L / \partial J_{DB}$ represents the back propagation from the CNN 42, i.e., the accumulation of the layer-by-layer derivatives calculated by back propagation through the CNN layers 104. This input gradient is fed back from the CNN 42 and represents the change of the cost function 100 with respect to a change in the deconvolved image J_DB. The term is multiplied by $\partial J_{DB} / \partial \lambda$, which represents the change of the deconvolved image J_DB 40 with respect to a change in the regularization parameter λ.
In the case of Tikhonov-regularized deconvolution, differentiating Equation 4 (equivalently, Equation 3) with respect to λ gives:

$\dfrac{\partial J_{DB}}{\partial \lambda} = -2\lambda \, V \, S \, (S^2 + \lambda^2 \mathrm{Id})^{-2} \, U^T I_B$  (Equation 14)

The update scheme for the regularization parameter is then:

$\lambda_n = \lambda_{n-1} - \eta \, \dfrac{\partial L}{\partial J_{DB}} \, \dfrac{\partial J_{DB}}{\partial \lambda}$  (Equation 15)

which can be derived from the combination of Equations 9 and 14, via the chain rule of Equation 13.
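Equations 14 and 15 can also be implemented directly, for cases where an explicit update is preferred over framework autodiff. The sketch below uses the same SVD formulation as Equations 2 through 4; dL_dJ stands for the gradient fed back from the CNN 42, and both function names are hypothetical.

```python
import numpy as np

def dJ_dlam_tikhonov(U, s, Vt, I_B, lam):
    """Equation 14: derivative of the Tikhonov-deconvolved image w.r.t. lambda."""
    dfilt = -2.0 * lam * s / (s**2 + lam**2) ** 2  # d/dlam of s / (s^2 + lam^2)
    return Vt.T @ (dfilt[:, None] * (U.T @ I_B))

def lambda_step(lam, dL_dJ, U, s, Vt, I_B, lr=1e-3):
    """Equation 15: one gradient-descent update of the regularization parameter."""
    grad = np.sum(dL_dJ * dJ_dlam_tikhonov(U, s, Vt, I_B, lam))
    return lam - lr * grad
```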
In the case of regularized Wiener deconvolution:

$\dfrac{\partial J_{DB}}{\partial \lambda} = \mathcal{F}^{-1}\left[ \dfrac{-2\lambda \, \hat{K}_B^{*}}{\left( |\hat{K}_B|^2 + \lambda^2 \right)^2} \, \hat{I}_B \right]$  (Equation 16)

For both Equations 15 and 16, the gradient obtained by back propagation through the CNN 42 allows calculation of $\partial L / \partial J_{DB}$, which is multiplied by the change of the deconvolved image 40 with respect to a change in the regularization parameter to provide the value of the regularization parameter for the current step or iteration of the training processing system 50. The training processing system 50 repeats these steps in order to minimize the cost function 100 with respect to both the CNN weights and the regularization parameter. The training processing system 50 outputs the non-blind deblurring module 34 with the trained CNN weights 106 loaded into the CNN 42 and the trained regularization parameter 78 loaded into the deconvolution function 44.
FIG. 6 is a flowchart of a method 200 for training and using the non-blind deblurring module 34, according to an example embodiment. According to an exemplary embodiment, the method 200 may be implemented in connection with the vehicle 12 of fig. 1 and the use and training processing systems 26, 50 of fig. 1 and 4.
The method 200 includes a step 210 of receiving a blurred input image. In use, the blurred input image 24 is received from the imaging device 14, which may be associated with the vehicle 12 but may also be associated with another device or apparatus. In training, the blurred test image 66 may be artificially generated by the blurred image generation module 58, which incorporates the noise data 64 and the artificial PSF data 54 into the unblurred test image 60. It is further contemplated that the unblurred test images 60 and the blurred test images 66 are received as pairs of blurred and corresponding unblurred images, without the need to artificially generate the blurred images.
In step 220, the blurred input image 24, 66 is deconvolved by the regularization deconvolution sub-module 36, which applies the deconvolution function 44 to provide the deconvolved image 40. The deconvolution function 44 includes the regularization parameter λ, which is being optimized during training and which, during use, has already been optimally selected according to the training method described herein.
In step 230, the deconvolved image 40 is received by the convolutional neural network sub-module 38 and passed through the CNN 42 to determine the artifacts to be removed. The weights of the CNN 42 are being optimized during training and are fixed at their trained values during use.
In step 240, the deblurred image is output by the non-blind deblurring module 34. The CNN 42 generates a residual 48 that is added to the deconvolved image 40 to generate the deblurred output image 20.
Step 250, performed during use, includes controlling a vehicle function by the vehicle controller 18 based on the deblurred output image 20. Other, non-vehicular applications of the presently disclosed non-blind deblurring techniques are also contemplated.
According to step 260, during training, the cost function 100 is evaluated by the cost function module 96 based on the difference between the unblurred test image 60 and the deblurred output image 20. In step 270, the training processing system 50 jointly adjusts the weights of the CNN 42 and the regularization parameter of the deconvolution function 44 so as to minimize the cost (defined by the cost data 98) calculated by the cost function 100. In an embodiment, the method 200 includes iteratively adjusting the regularization parameter λ to minimize the cost, wherein each iteration includes multiplying the gradient back-propagated through the layers of the CNN by the derivative of the deconvolved image 40 with respect to the regularization parameter λ. The back-propagated gradient represents the derivative of the cost function with respect to the deconvolved image 40.
According to the systems and methods described herein, the regularization parameter λ can be trained to provide enhanced deblurring of the input image in a processing-efficient manner. Example results of the non-blind deblurring module 34, with CNN weights and regularization parameter λ determined according to the co-training scheme described herein, are shown in FIG. 7. Row 1 shows three different blurred input images A1, B1, and C1 with 2D motion blur. These images are operated on by the deconvolution function 44 (regularized Wiener deconvolution) with its associated trained regularization parameter λ. The resulting deconvolved images A2, B2, and C2 are shown in row 2. These images are sharper and clearer but contain some artifacts; ringing artifacts are particularly visible in image C2. After passing through the CNN 42, clear, sharp, and artifact-free deblurred output images A3, B3, and C3 are provided, as shown in row 3. These output images may be used for further processing steps, such as display on a user interface or input to a machine vision control function. A machine vision control function will be better able to identify features based on the deblurred output images of row 3 than would be feasible based on the blurred input images of row 1. This is particularly useful in vehicle control systems, where improved deblurring improves safety.
It will be understood that the disclosed methods, systems, and vehicles may differ from those depicted in the figures and described herein. For example, the vehicle 12, the usage and training processes 26, 50, and the system 10 and/or various components thereof may differ from those depicted in fig. 1-5 and described in connection therewith. Additionally, it will be recognized that certain steps of the method 200 may differ from those shown in FIG. 6. It will similarly be appreciated that certain steps of the above-described method may occur simultaneously or in a different order than that shown in fig. 6.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the appended claims and the legal equivalents thereof.

Claims (10)

1. A method for training a non-blind deblurring module,
wherein the non-blind deblurring module comprises a regularization deconvolution sub-module and a convolutional neural network sub-module,
the regularization deconvolution sub-module configured to perform a regularization deconvolution function on the blurred input image to produce a deconvolved image that may have image artifacts, and the convolutional neural network sub-module configured to receive the deconvolved image as an input to a convolutional neural network and remove the image artifacts, thereby providing a deblurred output image,
the method comprises the following steps:
receiving, via at least one processor, an unblurred test image and a blurred test image, wherein each blurred test image is associated with a corresponding one of the unblurred test images by a blur kernel term and a noise term;
co-training a regularization deconvolution sub-module and a convolutional neural network via at least one processor by adjusting regularization parameters of a regularization deconvolution function and weights of the convolutional neural network to minimize a cost function representing a difference between each deblurred output image and a corresponding one of the unblurred test images, thereby providing trained regularization parameters, trained weights, and a trained non-blind deblurring module;
receiving, via the at least one processor, a blurred input image from an imaging device;
deblurring, via the at least one processor, the blurred input image using a trained non-blind deblurring module; and
outputting, via the at least one processor, the deblurred output image.
2. The method of claim 1, wherein the deconvolution function is a wiener deconvolution function.
3. The method of claim 1, wherein the deconvolution function is a Tikhonov-regularized deconvolution function.
4. The method of claim 1, wherein adjusting, via the at least one processor, the regularization parameters and the weights uses a back propagation algorithm.
5. The method of claim 4, wherein the back propagation algorithm adjusts the regularization parameter based on gradients that have been fed back from the CNNs and derivatives of the deconvolved image that may have image artifacts relative to the regularization parameter.
6. The method of claim 1, wherein the at least one processor receives the unblurred test images and artificially generates the blurred test images from the unblurred test images using a blur kernel function and a noise function.
7. The method of claim 1, wherein the blurred input image is received from an imaging device mounted to a vehicle.
8. The method of claim 7, wherein the vehicle comprises a vehicle controller, and the method comprises controlling, by the vehicle controller, at least one vehicle function based on the deblurred output image.
9. A system for training a non-blind deblurring module, comprising:
a non-blind deblurring module comprising a regularization deconvolution sub-module and a convolutional neural network sub-module, wherein the regularization deconvolution sub-module is configured to perform a regularization deconvolution function on the blurred input image to produce a deconvolved image that may have image artifacts, and
wherein the convolutional neural network sub-module is configured to receive the deconvolved image as an input to the convolutional neural network and remove image artifacts, thereby providing a deblurred output image;
an imaging device; and
at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to:
receiving an unblurred test image and blurred test images, wherein each blurred test image is associated with a corresponding one of the unblurred test images by a blur kernel term and a noise term;
co-training the regularization deconvolution sub-module and the convolutional neural network by adjusting regularization parameters of the regularization deconvolution function and weights of the convolutional neural network to minimize a cost function representing a difference between each deblurred output image and a corresponding one of the unblurred test images, thereby providing trained regularization parameters, trained weights, and a trained non-blind deblurring module;
receiving a blurred input image from the imaging device;
deblurring the blurred input image by using the trained non-blind deblurring module; and
the output image with blur is output.
10. The system of claim 9, wherein the program instructions are configured to cause the at least one processor to adjust the regularization parameters and weights using a back propagation algorithm.
CN202110230767.3A 2020-03-02 2021-03-02 System and method for training non-blind image deblurring module Active CN113344800B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/806,135 2020-03-02
US16/806,135 US11354784B2 (en) 2020-03-02 2020-03-02 Systems and methods for training a non-blind image deblurring module

Publications (2)

Publication Number Publication Date
CN113344800A true CN113344800A (en) 2021-09-03
CN113344800B CN113344800B (en) 2023-09-29

Family

ID=77270921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110230767.3A Active CN113344800B (en) 2020-03-02 2021-03-02 System and method for training non-blind image deblurring module

Country Status (3)

Country Link
US (1) US11354784B2 (en)
CN (1) CN113344800B (en)
DE (1) DE102021102663A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11354784B2 (en) * 2020-03-02 2022-06-07 GM Global Technology Operations LLC Systems and methods for training a non-blind image deblurring module
CN114549361B (en) * 2022-02-28 2023-06-30 齐齐哈尔大学 Image motion blur removing method based on improved U-Net model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008106282A1 (en) * 2007-02-28 2008-09-04 Microsoft Corporation Image deblurring with blurred/noisy image pairs
WO2016183716A1 (en) * 2015-05-15 2016-11-24 北京大学深圳研究生院 Method and system for image deblurring
KR101871098B1 (en) * 2017-01-12 2018-06-25 포항공과대학교 산학협력단 Apparatus and method for image processing
CN108230223A (en) * 2017-12-28 2018-06-29 清华大学 Light field angle super-resolution rate method and device based on convolutional neural networks
CN109345474A (en) * 2018-05-22 2019-02-15 南京信息工程大学 Image motion based on gradient field and deep learning obscures blind minimizing technology
CN110490822A (en) * 2019-08-11 2019-11-22 浙江大学 The method and apparatus that image removes motion blur

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108475415B (en) * 2015-12-21 2022-05-27 商汤集团有限公司 Method and system for image processing
CN107292842B (en) * 2017-06-15 2020-08-07 北京大学深圳研究生院 Image deblurring method based on prior constraint and outlier suppression
US11475536B2 (en) * 2018-02-27 2022-10-18 Portland State University Context-aware synthesis for video frame interpolation
CN108876833A (en) * 2018-03-29 2018-11-23 北京旷视科技有限公司 Image processing method, image processing apparatus and computer readable storage medium
US11257191B2 (en) * 2019-08-16 2022-02-22 GE Precision Healthcare LLC Systems and methods for deblurring medical images using deep neural network
WO2021118270A1 (en) * 2019-12-11 2021-06-17 Samsung Electronics Co., Ltd. Method and electronic device for deblurring blurred image
US11354784B2 (en) * 2020-03-02 2022-06-07 GM Global Technology Operations LLC Systems and methods for training a non-blind image deblurring module

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008106282A1 (en) * 2007-02-28 2008-09-04 Microsoft Corporation Image deblurring with blurred/noisy image pairs
WO2016183716A1 (en) * 2015-05-15 2016-11-24 北京大学深圳研究生院 Method and system for image deblurring
KR101871098B1 (en) * 2017-01-12 2018-06-25 포항공과대학교 산학협력단 Apparatus and method for image processing
CN108230223A (en) * 2017-12-28 2018-06-29 清华大学 Light field angle super-resolution rate method and device based on convolutional neural networks
CN109345474A (en) * 2018-05-22 2019-02-15 南京信息工程大学 Image motion based on gradient field and deep learning obscures blind minimizing technology
CN110490822A (en) * 2019-08-11 2019-11-22 浙江大学 The method and apparatus that image removes motion blur

Also Published As

Publication number Publication date
DE102021102663A1 (en) 2021-09-02
US11354784B2 (en) 2022-06-07
US20210272248A1 (en) 2021-09-02
CN113344800B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
KR101871098B1 (en) Apparatus and method for image processing
CN113409200B (en) System and method for image deblurring in a vehicle
CN113344800B (en) System and method for training non-blind image deblurring module
JP7079445B2 (en) Model parameter learning device, control device and model parameter learning method
US8411145B2 (en) Vehicle periphery monitoring device, vehicle periphery monitoring program and vehicle periphery monitoring method
US7574122B2 (en) Image stabilizing device
JP2019028616A (en) Identification apparatus
US8606035B2 (en) Image processing apparatus and image processing method
US11188777B2 (en) Image processing method, image processing apparatus, learnt model manufacturing method, and image processing system
FR2988191B1 (en) FILTERING METHOD AND FILTER DEVICE FOR SENSOR DATA
CN110895807A (en) System for evaluating image, operation assisting method and working equipment
US11263773B2 (en) Object detection apparatus, object detection method, computer program product, and moving object
JP6462557B2 (en) Vehicle pitch angle estimation device
CN115147826A (en) Image processing system and method for automobile electronic rearview mirror
EP1943626B1 (en) Enhancement of images
JP5388059B2 (en) Object detection method and object detection apparatus based on background image estimation
KR101877741B1 (en) Apparatus for detecting edge with image blur
Zhao et al. An improved image deconvolution approach using local constraint
US20220222528A1 (en) Method for Making a Neural Network More Robust in a Function-Specific Manner
US11798139B2 (en) Noise-adaptive non-blind image deblurring
CN114037636A (en) Multi-frame blind restoration method for correcting image by adaptive optical system
JP6808753B2 (en) Image correction device and image correction method
JP2022049261A (en) Information processor and information processing method
JP6748003B2 (en) Image processing device
Lahouli et al. Accelerating existing non-blind image deblurring techniques through a strap-on limited-memory switched Broyden method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant