CN106991650B - Image deblurring method and device

Image deblurring method and device

Info

Publication number
CN106991650B
Authority
CN
China
Prior art keywords
dvs
deblurring
picture
image
edge
Prior art date
Legal status
Active
Application number
CN201610039224.2A
Other languages
Chinese (zh)
Other versions
CN106991650A (en)
Inventor
郭萍
王强
朴根柱
李圭彬
Current Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd and Samsung Electronics Co Ltd
Priority to CN201610039224.2A (CN106991650B)
Priority to KR1020160107863A (KR102563750B1)
Priority to US15/356,808 (US10062151B2)
Publication of CN106991650A
Application granted
Publication of CN106991650B
Legal status: Active

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4092Edge or detail enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Abstract

The application discloses an image deblurring method and device, comprising: acquiring a blurred picture and the set of DVS events recorded within its exposure time; and deblurring the blurred picture according to the DVS event set. By applying the method and device, the deblurring effect on the image can be improved.

Description

Image deblurring method and device
Technical Field
The present application relates to image processing technologies, and in particular, to a method and an apparatus for deblurring an image.
Background
When a camera takes a picture, the picture may be blurred for various reasons, such as hand shake. To remove or reduce the image blur caused by camera motion, the captured blurred picture is typically deblurred to obtain a sharper picture.
At present, existing deblurring approaches achieve a good result only on a small class of images. The reason is that the prior art performs deblurring by deconvolution, and deconvolution is inherently an ill-posed problem, since neither the sharp image nor the cause of its blurring is known in advance. Because the cause of the blur is unknown, many methods make assumptions about the sharp image to be solved; when the actual image does not match these assumptions, the deblurring method fails.
Disclosure of Invention
The application provides an image deblurring method and device based on a dynamic vision sensor (DVS), which can improve the image deblurring effect.
To this end, the application adopts the following technical solutions:
an image deblurring method, comprising:
acquiring a blurred picture to be processed and a set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture;
and carrying out deblurring processing on the blurred picture according to the DVS event set.
Preferably, performing the deblurring processing on the blurred picture according to the DVS event set includes:
estimating a DVS edge estimation image from the DVS event set, and performing the deblurring processing according to the DVS edge estimation image.
Preferably, the deblurring processing performed according to the DVS edge estimation image includes one of the following:
aligning the blurred picture with the DVS edge estimation image, and performing the deblurring processing according to the alignment result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached a preset deblurring effect and a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image by using the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
Preferably, performing the deblurring processing on the blurred picture according to the DVS event set includes:
estimating a camera motion trajectory within the exposure time and a DVS edge estimation image from the DVS event set, and performing the deblurring processing according to the camera motion trajectory and the DVS edge estimation image.
Preferably, the deblurring processing performed according to the camera motion trajectory and the DVS edge estimation image includes one of the following:
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached a preset deblurring effect, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
Preferably, performing the deblurring processing according to the alignment result includes: deblurring the currently aligned blurred picture by deconvolution according to the currently aligned DVS edge estimation image, and taking the sharp picture obtained in this iteration as the deblurring result;
and if the error between the sharp pictures of the current and previous iterations is smaller than a preset threshold, determining that the preset deblurring effect has been reached; otherwise, determining that it has not been reached.
Preferably, estimating the camera motion trajectory from the DVS event set includes: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, and determining the camera motion trajectory within the exposure time from the pictures imaged in the N time slices.
Preferably, the camera motion trajectory is determined as follows: for the pictures imaged in every two consecutive time slices, determining the camera displacement within those two time slices from the positional relation of the DVS events, and connecting all camera displacements of every two consecutive time slices within the exposure time in chronological order to form the camera motion trajectory within the exposure time.
Preferably, estimating the DVS edge estimation image from the DVS event set includes: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, spatially aligning and superimposing the pictures imaged in all time slices, and computing the skeleton map of the superimposed picture as the DVS edge estimation image.
Preferably, the pictures of any two time slices are spatially aligned as follows: for the pictures A and B of any two time slices, calculate

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x + Δx, y + Δy)]²

where (x, y) are the two-dimensional coordinates of a pixel in a picture, A(x, y) and B(x, y) denote the values of that pixel in pictures A and B respectively, (Δx, Δy) is the two-dimensional displacement required to align picture B to picture A, and argmin(·) denotes the argument at which (·) attains its minimum; then shift picture B and/or picture A according to the computed (Δx, Δy) to align pictures A and B.
Preferably, spatially aligning and superimposing the pictures imaged in all time slices includes one of the following:
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in chronological order, successively aligning and superimposing each later picture onto the earlier picture;
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in reverse chronological order, successively aligning and superimposing each earlier picture onto the later picture.
Preferably, the blur kernel used in the deblurring processing is computed as

k = argmin_k ‖vec(∇I ⊗ k − ∇C)‖₂² + λ₁‖vec(E ⊗ k − ∇C)‖₂² + λ₂‖vec(k)‖₁

where vec(·) rearranges a matrix into a vector, ∇ denotes the gradient operation, ⊗ denotes convolution, I is the sharp picture after deblurring, C is the currently aligned blurred picture, E is the currently aligned DVS edge estimation image, ‖·‖₂ denotes the 2-norm of a vector, ‖·‖₁ denotes the 1-norm of a vector, λ₁ and λ₂ are two preset weights, and argmin(·) denotes the argument at which (·) attains its minimum.
Preferably, determining the average edge map includes:
computing, for the sharp picture of the current iteration, an edge map in each segment direction of the camera motion trajectory, superimposing all the edge maps, and computing the skeleton map of the result as the average edge map.
An image deblurring apparatus, comprising an acquisition module and an image deblurring module, wherein:
the acquisition module is configured to acquire a blurred picture to be processed and a set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture;
and the image deblurring module is configured to deblur the blurred picture according to the DVS event set.
According to the above technical solutions, DVS events recorded by a DVS sensor are introduced to deblur the blurred picture. Specifically, a camera motion trajectory may be estimated and used for edge estimation of the blurred image; meanwhile, DVS edge estimation is performed from the DVS events, and the blurred image, or its estimated edge image, is aligned with the DVS edge estimation result and participates in the deblurring processing. Through the above processing, the deblurring effect can be improved.
Drawings
FIG. 1 is a schematic diagram of a basic flow of an image deblurring method according to the present application;
FIG. 2 is a schematic diagram of the relationship between the imaging of a DVS event set and the pictures imaged within each time slice;
FIG. 3 is a schematic diagram of camera displacement in two consecutive time slices;
FIG. 4 is a schematic diagram of DVS edge estimation;
FIG. 5 is a schematic diagram of the RGB-DVS image alignment process;
FIG. 6 is a schematic diagram of calculating a sharp picture;
FIG. 7 is a schematic diagram of the generation of an average edge map;
FIG. 8 is a schematic diagram of the basic structure of an image deblurring apparatus according to the present application;
FIG. 9 is a schematic structural diagram of an image deblurring module in the image deblurring apparatus according to the present application;
FIG. 10 is a schematic diagram illustrating the effect of the image deblurring method in the present application compared with the prior art.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
A DVS sensor is an ultra-high-speed vision sensor with microsecond temporal resolution that records and outputs DVS events representing the changes occurring in its field of view between microsecond time units. Since the locations where changes occur are usually edges in the image, when the DVS sensor is rigidly fixed to an ordinary camera, a relatively sharp edge image can be obtained even under fast motion of the ordinary camera. In the present application, this relatively sharp edge image obtained by the DVS sensor participates in the deblurring of the blurred picture taken by the ordinary camera, which greatly improves the deblurring effect.
The most basic image deblurring method of the present application includes the following steps:
Step 1: acquire a blurred picture to be processed and the set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture.
Step 2: deblur the blurred picture according to the DVS event set.
In step 2, preferably, a DVS edge estimation image may be estimated from the DVS event set, and the deblurring may then be performed according to the DVS edge estimation image. Alternatively, both the camera motion trajectory and the DVS edge estimation image may be estimated from the DVS event set, and the deblurring performed according to both.
Fig. 1 is a schematic flow chart of an image deblurring method in the present application. Here, processing of RGB images will be described as an example. As shown in fig. 1, the method includes:
Step 101: acquire the blurred picture to be processed and the DVS event set recorded by the DVS sensor within the exposure time of the blurred picture.
An RGB blurred picture taken by an ordinary camera is acquired. The DVS sensor is fixed together with the ordinary camera used for shooting, so that it records from the same viewing angle as the ordinary camera, and the recorded DVS events reflect the motion of the shooting camera. The DVS event set within the exposure time of the blurred picture is acquired; it reflects the camera motion within that exposure time.
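For concreteness, the sketch below shows one plausible in-memory representation of a DVS event stream and the selection of the events falling within the exposure window; the structured-array layout and the field names (x, y, t, p) are illustrative assumptions, not something the patent specifies.

```python
import numpy as np

# Hypothetical layout of a DVS event stream: pixel coordinates (x, y),
# a microsecond timestamp t, and a polarity p. The field names are
# illustrative assumptions only.
EVENT_DTYPE = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.int64), ("p", np.int8)])

def events_in_exposure(events, t_start_us, t_end_us):
    """Select the DVS events recorded within the exposure time of the
    blurred picture (step 101)."""
    mask = (events["t"] >= t_start_us) & (events["t"] < t_end_us)
    return events[mask]
```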
Step 102: estimate the camera motion trajectory within the exposure time and the DVS edge estimation image from the DVS event set.
The exposure time is divided into N time slices in chronological order; N is a preset positive integer that can be set as needed, and the specific basis for setting it is not limited in this application. The DVS event set within the exposure time contains a number of DVS events, each carrying a timestamp. According to the divided time slices and the timestamps, the DVS events falling within the same time slice are imaged separately into a picture; the specific imaging method may be any realizable one, which this application does not limit. Fig. 2 shows the relationship between the imaging of the DVS event set and the pictures imaged within the individual time slices.
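A minimal sketch of this slicing-and-imaging step follows, assuming the event representation from the previous sketch; accumulating per-pixel event counts is just one simple realizable imaging method, chosen here for illustration.

```python
import numpy as np

def image_time_slices(events, t_start_us, t_end_us, n_slices, height, width):
    """Split the exposure time into n_slices chronological time slices and
    image the events of each slice separately into one accumulation image
    (step 102). Event coordinates are assumed to lie within (height, width)."""
    edges = np.linspace(t_start_us, t_end_us, n_slices + 1)
    slices = []
    for i in range(n_slices):
        m = (events["t"] >= edges[i]) & (events["t"] < edges[i + 1])
        img = np.zeros((height, width), dtype=np.float32)
        np.add.at(img, (events["y"][m], events["x"][m]), 1.0)
        slices.append(img)
    return slices
```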
First, consider how the blurred picture arises and what the imaged pictures of DVS events look like. If the subject is regarded as stationary during the exposure time, the blur is caused by the motion of the camera itself relative to the stationary subject. On this basis, the picture imaged within each time slice is a relatively sharp edge image of the scene under the fast relative motion of the camera within that short time range. Pictures of different time slices show the same subject, so the contours of their edge images should be the same; they differ only by a relative displacement within the frame, and this displacement exactly represents the relative motion of the camera between the time slices. Meanwhile, the pictures imaged in different time slices represent the same edge image at different time points, but the edges of any single picture may be inaccurate or incomplete because of noise and other influences. If the pictures of all N time slices are brought back to the same time point according to the camera motion trajectory, and the N pictures at that time point are then superimposed, a sharper edge map can be obtained.
In short, the pictures of two different time slices should contain the same shape, and the change of the position of that shape within the frame represents a linear motion of the camera between the two time slices. Moving one picture backwards along this motion yields the picture at the time of the other; superimposing the two then yields an edge map with an enhanced effect.
Based on the above analysis, in the present application the camera motion trajectory and the DVS edge estimation image within the exposure time are determined from the pictures imaged within the N time slices.
Specifically, the camera motion trajectory within the exposure time may be determined as follows: for the pictures imaged in every two consecutive time slices, determine the camera displacement within those two time slices from the positional relation of the DVS events, and connect all camera displacements of every two consecutive time slices within the exposure time in chronological order to form the camera motion trajectory within the exposure time. The camera displacement within two consecutive time slices is shown in fig. 3.
Here, an exemplary way of determining the camera displacement within two consecutive time slices is given: for the pictures A and B of any two consecutive time slices, calculate

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x + Δx, y + Δy)]²    (1)

where (x, y) are the two-dimensional coordinates of a pixel, and A(x, y) and B(x, y) denote the values of that pixel in pictures A and B respectively; (Δx, Δy) is the two-dimensional displacement of the camera from the time point represented by picture B to the time point represented by picture A, from which the motion vector shown in fig. 3 is obtained. Here it is assumed that the camera moves linearly within two consecutive time slices. Connecting the motion vectors of all pairs of consecutive time slices in chronological order yields the camera trajectory within the exposure time.
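For illustration, the sketch below realizes formula (1) by brute-force search over a small displacement window and chains the per-pair displacements into a trajectory; the window size max_shift is an assumption, and np.roll's wrap-around at the borders is accepted for brevity.

```python
import numpy as np

def displacement(a, b, max_shift=10):
    """Formula (1): find the shift (dx, dy) of picture b that minimizes the
    sum of squared differences against picture a (brute-force search)."""
    best, best_dxy = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            cost = np.sum((a - shifted) ** 2)
            if cost < best:
                best, best_dxy = cost, (dx, dy)
    return best_dxy

def camera_trajectory(slice_images):
    """Connect the displacements of every two consecutive time slices in
    chronological order into a camera trajectory (cumulative 2-D points)."""
    traj = [(0, 0)]
    for a, b in zip(slice_images[:-1], slice_images[1:]):
        dx, dy = displacement(a, b)
        px, py = traj[-1]
        traj.append((px + dx, py + dy))
    return traj
```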
The DVS edge estimation image may be determined as follows: spatially align and superimpose the pictures imaged in all time slices, and compute the skeleton map of the superimposed picture as the DVS edge estimation image, as shown in fig. 4.
When performing the spatial alignment, the pictures of all time slices may be aligned to the same time slice and then superimposed together; or the picture of one time slice may be aligned to that of another, the two superimposed, the superimposed result then aligned to yet another time slice, and so on. Formula (1) above may be used here to calculate the two-dimensional displacement (Δx, Δy) between the pictures of any two time slices, which is the displacement required to align picture B to picture A. Picture B may be shifted by (Δx, Δy) to align to picture A, or picture A may be shifted by (−Δx, −Δy) to align to picture B, or pictures A and B may both be shifted and aligned to another time slice between them.
In addition, since formula (1) assumes linear motion between the two time slices, while in fact the deviation of the linear model from the actual motion may grow as the time interval increases, making the estimate less accurate, it is preferable when aligning and superimposing to compute (Δx, Δy) for every two pictures of consecutive time slices and to align and superimpose the later (or earlier) picture onto the earlier (or later) one successively in chronological order. Such processing increases the accuracy of the estimation.
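A minimal sketch of this sequential align-and-superimpose procedure follows, reusing the displacement helper from the previous sketch; the binarization threshold before skeletonization is an illustrative assumption, and skimage.morphology.skeletonize is used as one readily available skeleton operator.

```python
import numpy as np
from skimage.morphology import skeletonize

def dvs_edge_estimate(slice_images):
    """Align each later slice picture onto the accumulation of the earlier
    ones in chronological order, superimpose, and take the skeleton map of
    the superposition as the DVS edge estimation image (fig. 4)."""
    acc = slice_images[0].astype(np.float32).copy()
    for b in slice_images[1:]:
        dx, dy = displacement(acc, b)        # formula (1); see earlier sketch
        acc += np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    binary = acc > 0.5 * acc.max()           # illustrative threshold
    return skeletonize(binary).astype(np.float32)
```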
After this step, an iterative process begins to achieve the image deblurring.
Step 103: align the acquired blurred picture with the DVS edge estimation image to obtain the currently aligned blurred picture and the currently aligned DVS edge estimation image.
The RGB-DVS image alignment process is shown in fig. 5. First, the RGB image and the DVS edge estimation image are separately undistorted. Corresponding key points are then found in the two images, and finally the two images are aligned by computing an affine transformation model. The specific procedures of undistortion, determination of corresponding key points, and model-based image alignment are all prior art and are not repeated here. Including the undistortion in this step is a preferred mode; if undistortion is omitted because of implementation complexity or other considerations, the key points may be determined directly for image alignment. In the alignment, the RGB image may be aligned to the DVS edge estimation image, or the DVS edge estimation image to the RGB image. Preferably, the RGB image is aligned to the DVS edge estimation image, which gives a better alignment effect.
In addition, in the first iteration, the RGB blurred image and the DVS edge estimation image are aligned directly; in later iterations, they are aligned according to the RGB edge estimation image (the average edge map) determined in the previous iteration, which improves the registration accuracy between RGB and DVS.
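One plausible realization of this key-point-plus-affine alignment is sketched below with OpenCV; the choice of the ORB detector, the Hamming brute-force matcher, and the RANSAC-based estimateAffine2D are illustrative assumptions; the patent only requires corresponding key points and an affine transformation model.

```python
import cv2
import numpy as np

def align_rgb_to_dvs(rgb_blur, dvs_edge):
    """Find corresponding key points in the blurred RGB picture and the DVS
    edge estimation image, fit an affine model, and warp the RGB picture
    onto the DVS image (step 103; undistortion omitted for brevity)."""
    gray = cv2.cvtColor(rgb_blur, cv2.COLOR_BGR2GRAY)
    edge8 = (255 * dvs_edge / max(float(dvs_edge.max()), 1e-6)).astype(np.uint8)
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray, None)
    kp2, des2 = orb.detectAndCompute(edge8, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = dvs_edge.shape[:2]
    return cv2.warpAffine(rgb_blur, M, (w, h))
```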
Step 104: according to the currently aligned DVS edge estimation image, deblur the currently aligned blurred picture using deconvolution to obtain the sharp picture of this iteration.
The image deblurring processing estimates the blur kernel of the image and realizes deblurring using deconvolution, as shown in fig. 6. The currently aligned DVS edge estimation image E is used when calculating the blur kernel. Specifically, the deblurring process includes:
computing a blur kernel k such that

k = argmin_k ‖vec(∇I ⊗ k − ∇C)‖₂² + λ₁‖vec(E ⊗ k − ∇C)‖₂² + λ₂‖vec(k)‖₁    (2)

where vec(·) rearranges a matrix into a vector, ∇ denotes the gradient operation, ⊗ denotes convolution, I is the processed sharp RGB picture, C is the currently aligned blurred picture, E is the currently aligned DVS edge estimation image, ‖·‖₂ denotes the 2-norm of a vector, ‖·‖₁ denotes the 1-norm of a vector, λ₁ and λ₂ are two preset weights, and argmin(·) denotes the argument at which (·) attains its minimum;

after the blur kernel k is found, the sharp RGB picture I is found as

I = argmin_I ‖vec(I ⊗ k − B′)‖₂² + ‖vec(∇I)‖₁    (3)

where B′ is the currently aligned blurred picture.
The blur kernel estimation and the sharp picture estimation are carried out alternately until the set number of alternations is reached or the sharp picture no longer changes, and the computed I is taken as the sharp picture obtained in this iteration.
In this step, I is computed in the same way as in the prior art, while the blur kernel k is computed based on the currently aligned DVS edge estimation image E. Introducing E into the computation of the blur kernel reflects the blur condition of the image more completely, so that the computed sharp picture I is closer to the original scene.
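For illustration, the sketch below realizes the alternation between the kernel step and the sharp-image step. It keeps only the quadratic (2-norm) parts of formulas (2) and (3), solving each in closed form by Fourier-domain division with Tikhonov regularization, and reduces gradients to their magnitudes; the helper names and default weights are assumptions, so this is a rough sketch of the alternation rather than the patent's exact solver.

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude via forward differences (last row/column replicated)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def tikhonov_solve(filters, targets, lam, shape):
    """Closed-form minimizer over x of sum_i ||f_i (*) x - t_i||^2 + lam ||x||^2,
    computed per frequency by FFT division (circular boundary conditions)."""
    num = np.zeros(shape, dtype=complex)
    den = np.full(shape, lam, dtype=complex)
    for f, t in zip(filters, targets):
        F = np.fft.fft2(f, shape)
        num += np.conj(F) * np.fft.fft2(t, shape)
        den += np.abs(F) ** 2
    return np.real(np.fft.ifft2(num / den))

def deblur_iteration(C, E, n_alt=5, lam1=1.0, lam2=0.01):
    """Alternate the kernel step of formula (2) and the sharp-image step of
    formula (3), quadratic terms only. C: currently aligned blurred picture,
    E: currently aligned DVS edge estimation image (both 2-D float arrays)."""
    I, shape = C.astype(np.float64), C.shape
    s1 = np.sqrt(lam1)   # sqrt so the weight enters the squared cost as lam1
    k = None
    for _ in range(n_alt):
        gC = grad_mag(C)
        # Kernel step: grad(I) (*) k ~ grad(C) and E (*) k ~ grad(C).
        k = tikhonov_solve([grad_mag(I), s1 * E], [gC, s1 * gC], lam2, shape)
        # Sharp-image step: I (*) k ~ C (the picture written B' in the text).
        I = tikhonov_solve([k], [C], lam2, shape)
    return I, k
```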
The process may end here: the image has been deblurred according to the DVS edge estimate. If the processing capacity permits, the following steps may preferably be continued to optimize the deblurring result through an iterative procedure.
Step 105: judge whether the sharp picture of the current iteration is the same as that of the previous iteration; if so, take it as the deblurring result and end the process; otherwise, execute step 106.
When the sharp pictures of two successive iterations remain unchanged, the best deblurring effect is considered to have been reached; the sharp picture is output and the deblurring ends. Otherwise, the RGB edge map is computed for the next iteration. Whether the sharp pictures of the two iterations are the same may be determined as follows: if the error between the current sharp picture and that of the previous iteration is smaller than a preset threshold, they are considered the same; otherwise, they are considered different. Of course, other determination methods may also be adopted, and this application is not limited thereto.
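As one concrete realization of this sameness test (the mean-squared-error form and the threshold value are illustrative assumptions):

```python
import numpy as np

def converged(sharp_now, sharp_prev, threshold=1e-3):
    """Step 105: regard the sharp pictures of two successive iterations as
    'the same' when their mean squared error is below a preset threshold."""
    return np.mean((sharp_now - sharp_prev) ** 2) < threshold
```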
Step 106: determine the average edge map from the sharp picture of the current iteration, then return to step 103, with the average edge map serving as the basis for aligning the RGB blurred image with the DVS edge estimation image in the next iteration.
This step generates an oriented edge map of the RGB image, referred to here as the average edge map. Specifically, edge maps of the sharp picture estimated in the current iteration are computed in a set of directions, and these edge maps are then averaged into the average edge map. The set directions may be several randomly designated directions or, preferably, the directions of the camera motion trajectory.
The average edge map is then used as the basis for aligning the RGB image with the DVS edge estimation image in the next iteration. Specifically, the average edge map may be compared with the DVS edge estimation image in the next iteration to determine the alignment displacement, after which the RGB blurred image and/or the DVS edge estimation image are moved into alignment, improving the registration accuracy between the RGB and DVS images. The alignment of the RGB image with the DVS edge estimation image according to the average edge map may be implemented in an existing manner, which this application does not limit.
When the set directions are the directions of the camera motion trajectory, the average edge map may be generated as follows: segment the camera motion trajectory within the exposure time, compute the edge map of this iteration's sharp picture in each segment direction, superimpose all the obtained edge maps, and compute the skeleton map of the result as the average edge map.
In the simplest case, the segmentation of the camera motion trajectory may directly follow the segments formed when the trajectory was constructed in step 102. The generation of the average edge map may be as shown in fig. 7.
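A minimal sketch of this generation procedure follows, reusing the trajectory produced by the camera_trajectory sketch above; the one-pixel directional difference filter, the binarization threshold, and the reuse of skimage's skeletonize are illustrative choices, not the patent's prescribed operators.

```python
import numpy as np
from skimage.morphology import skeletonize

def directional_edges(sharp, dx, dy):
    """Edge response of the sharp picture along one trajectory segment
    direction (dx, dy): a one-pixel difference along the rounded unit
    vector (border wrap-around ignored for brevity)."""
    n = np.hypot(dx, dy)
    ux, uy = (dx / n, dy / n) if n > 0 else (1.0, 0.0)
    shifted = np.roll(np.roll(sharp, round(uy), axis=0), round(ux), axis=1)
    return np.abs(sharp - shifted)

def average_edge_map(sharp, trajectory):
    """Superimpose the edge maps of this iteration's sharp picture taken in
    every segment direction of the camera trajectory, then skeletonize."""
    acc = np.zeros_like(sharp, dtype=np.float32)
    for (x0, y0), (x1, y1) in zip(trajectory[:-1], trajectory[1:]):
        acc += directional_edges(sharp, x1 - x0, y1 - y0)
    binary = acc > 0.5 * acc.max()   # illustrative threshold
    return skeletonize(binary).astype(np.float32)
```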
The above is a specific implementation of the image deblurring method in this application. In this iterative flow, the iteration ends when the sharp pictures of two successive iterations are the same, giving the final deblurring result. More simply, a maximum number of iterations may be set: iteration repeats until the maximum is reached, upon which the process ends with the final deblurring result. Or the two criteria may be combined, comparing the sharp pictures of successive iterations while also setting a maximum number of iterations, with the iteration ending as soon as either condition is met. The maximum number of iterations may be set according to actual needs and the processing capability of the device, which this application does not limit.
In the deblurring processing, the DVS events recorded by the DVS sensor are introduced: the camera motion trajectory is estimated and used for edge estimation of the blurred image; meanwhile, DVS edge estimation is performed from the DVS events, and the blurred image, or its estimated edge image, is aligned with the DVS edge estimation result and participates in the deblurring. Through the above processing, the camera motion trajectory and the DVS edge estimation image provide a motion hypothesis closer to reality, thereby improving the deblurring effect.
The application also provides an image deblurring device that can be used to implement the deblurring method shown in fig. 1. Fig. 8 is a schematic diagram of the basic structure of the image deblurring device; as shown in fig. 8, the device includes an acquisition module and an image deblurring module.
The acquisition module is configured to acquire the blurred picture to be processed and the DVS event set recorded by the DVS sensor within the exposure time of the blurred picture. The image deblurring module is configured to deblur the blurred picture according to the DVS event set.
More preferably, the image deblurring module may further include a trajectory estimation sub-module, an image registration sub-module, an image deblurring sub-module, and an average edge map generation sub-module, as shown in fig. 9.
The trajectory estimation sub-module is configured to estimate the camera motion trajectory within the exposure time and the DVS edge estimation image from the DVS event set, and in particular to implement the processing of steps 101-102.
The image registration sub-module is configured to align the blurred picture with the DVS edge estimation image to obtain the currently aligned blurred picture and the currently aligned DVS edge estimation image, and in particular to implement the processing of step 103.
The image deblurring sub-module is configured to deblur the currently aligned blurred picture by deconvolution according to the currently aligned DVS edge estimation image to obtain the current sharp picture; if the error between the current sharp picture and that of the previous iteration is smaller than a preset threshold, the sharp picture is output as the deblurring result; otherwise, the current sharp picture is output to the average edge map generation sub-module. This sub-module in particular implements the processing of steps 104-105.
The average edge map generation sub-module is configured to determine the average edge map from the sharp picture input by the image deblurring sub-module and to input the average edge map to the image registration sub-module as the basis for the next alignment of the blurred picture with the DVS edge estimation image. Preferably, the average edge map in the direction of the camera motion trajectory is determined. This sub-module in particular implements the processing of step 106.
Deblurring experiments were carried out with the deblurring method of this application and with an existing deblurring method, and the results were compared; fig. 10 is a schematic comparison of the present application with the existing deblurring method. As can be seen from fig. 10, the deblurring effect obtained by the present application is better.
The above description gives only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (24)

1. An image deblurring method, comprising:
acquiring a blurred picture to be processed and a set of dynamic vision sensor (DVS) events recorded by a DVS sensor within the exposure time of the blurred picture;
estimating a DVS edge estimation image from the DVS event set, wherein estimating the DVS edge estimation image from the DVS event set comprises: spatially aligning and superimposing the pictures imaged in all time slices, and computing the skeleton map of the superimposed picture as the DVS edge estimation image;
and performing deblurring processing on the blurred picture according to the DVS edge estimation image.
2. The method of claim 1, wherein the deblurring processing performed according to the DVS edge estimation image comprises one of the following:
aligning the blurred picture with the DVS edge estimation image, and performing the deblurring processing according to the alignment result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached a preset deblurring effect and a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image by using the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
3. The method of claim 1, wherein,
when the DVS edge estimation image is estimated from the DVS event set, a camera motion trajectory within the exposure time is further estimated from the DVS event set;
and the deblurring processing of the blurred picture according to the DVS edge estimation image comprises: performing the deblurring processing according to the camera motion trajectory and the DVS edge estimation image.
4. The method of claim 3, wherein the deblurring processing performed according to the camera motion trajectory and the DVS edge estimation image comprises one of the following:
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached a preset deblurring effect, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
5. The method of claim 2 or 4, wherein performing the deblurring processing according to the alignment result comprises: deblurring the currently aligned blurred picture by deconvolution according to the currently aligned DVS edge estimation image, and taking the sharp picture obtained in this iteration as the deblurring result;
and if the error between the sharp pictures of the current and previous iterations is smaller than a preset threshold, determining that the preset deblurring effect has been reached; otherwise, determining that it has not been reached.
6. The method of claim 3, wherein estimating the camera motion trajectory from the DVS event set comprises: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, and determining the camera motion trajectory within the exposure time from the pictures imaged in the N time slices.
7. The method of claim 6, wherein determining the camera motion trajectory comprises: for the pictures imaged in every two consecutive time slices, determining the camera displacement within those two time slices from the positional relation of the DVS events, and connecting all camera displacements of every two consecutive time slices within the exposure time in chronological order to form the camera motion trajectory within the exposure time.
8. The method of claim 1 or 3, wherein spatially aligning and superimposing the pictures imaged in all time slices according to the DVS event set comprises: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, and spatially aligning and superimposing the pictures imaged in all time slices.
9. The method of claim 8, wherein the pictures of any two time slices are spatially aligned as follows: for the pictures A and B of any two time slices, calculate

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x + Δx, y + Δy)]²

where (x, y) are the two-dimensional coordinates of a pixel in a picture, A(x, y) and B(x, y) denote the values of that pixel in pictures A and B respectively, (Δx, Δy) is the two-dimensional displacement required to align picture B to picture A, and argmin(·) denotes the argument at which (·) attains its minimum; and shift picture B and/or picture A according to the computed (Δx, Δy) to align pictures A and B.
10. The method of claim 8, wherein spatially aligning and superimposing the pictures imaged in all time slices comprises one of the following:
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in chronological order, successively aligning and superimposing each later picture onto the earlier picture;
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in reverse chronological order, successively aligning and superimposing each earlier picture onto the later picture.
11. The method of claim 5, wherein the blur kernel used in the deblurring processing is computed as

k = argmin_k ‖vec(∇I ⊗ k − ∇C)‖₂² + λ₁‖vec(E ⊗ k − ∇C)‖₂² + λ₂‖vec(k)‖₁

where vec(·) rearranges a matrix into a vector, ∇ denotes the gradient operation, ⊗ denotes convolution, I is the sharp picture after deblurring, C is the currently aligned blurred picture, E is the currently aligned DVS edge estimation image, ‖·‖₂ denotes the 2-norm of a vector, ‖·‖₁ denotes the 1-norm of a vector, λ₁ and λ₂ are two preset weights, and argmin(·) denotes the argument at which (·) attains its minimum.
12. The method of claim 5, wherein determining the average edge map comprises:
computing, for the sharp picture of the current iteration, an edge map in each segment direction of the camera motion trajectory, superimposing all the edge maps, and computing the skeleton map of the result as the average edge map.
13. An image deblurring apparatus, comprising an acquisition module and an image deblurring module, wherein:
the acquisition module is configured to acquire a blurred picture to be processed and a set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture;
the image deblurring module is configured to estimate a DVS edge estimation image from the DVS event set and to perform deblurring processing on the blurred picture according to the DVS edge estimation image;
wherein estimating the DVS edge estimation image from the DVS event set comprises: spatially aligning and superimposing the pictures imaged in all time slices, and computing the skeleton map of the superimposed picture as the DVS edge estimation image.
14. The apparatus of claim 13, wherein the image deblurring module performs the deblurring processing according to the DVS edge estimation image in one of the following manners:
aligning the blurred picture with the DVS edge estimation image, and performing the deblurring processing according to the alignment result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached a preset deblurring effect and a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image by using the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
15. The apparatus of claim 13, wherein the image deblurring module further comprises a trajectory estimation sub-module and an image deblurring sub-module;
the trajectory estimation sub-module is configured to estimate a camera motion trajectory within the exposure time from the DVS event set;
and the image deblurring sub-module is configured to perform the deblurring processing according to the camera motion trajectory and the DVS edge estimation image.
16. The apparatus of claim 15, wherein the image deblurring sub-module performs the deblurring processing according to the camera motion trajectory and the DVS edge estimation image in one of the following manners:
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached a preset deblurring effect, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect, and outputting the deblurring result;
aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when a set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the set maximum number of iterations is reached, and outputting the deblurring result;
and aligning the blurred picture with the DVS edge estimation image, performing the deblurring processing according to the alignment result, determining an average edge map in the direction of the camera motion trajectory from the deblurring result when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not been reached, realigning the blurred picture with the DVS edge estimation image according to the average edge map, repeating until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached, and outputting the deblurring result.
17. The apparatus of claim 14 or 16, wherein the image deblurring module further comprises an image registration sub-module and an average edge map generation sub-module;
the image registration sub-module is configured to align the blurred picture with the DVS edge estimation image to obtain the currently aligned blurred picture and the currently aligned DVS edge estimation image;
the image deblurring sub-module is configured to deblur the currently aligned blurred picture by deconvolution according to the currently aligned DVS edge estimation image to obtain the current sharp picture, and, if the error between the current sharp picture and that of the previous iteration is smaller than a preset threshold, to output the sharp picture as the deblurring result, or otherwise to output the current sharp picture to the average edge map generation sub-module;
and the average edge map generation sub-module is configured to determine the average edge map from the sharp picture input by the image deblurring sub-module and to input the average edge map to the image registration sub-module as the basis for the next alignment of the blurred picture with the DVS edge estimation image.
18. The apparatus of claim 15, wherein the trajectory estimation sub-module estimating the camera motion trajectory from the DVS event set comprises: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, and determining the camera motion trajectory within the exposure time from the pictures imaged in the N time slices.
19. The apparatus of claim 18, wherein determining the camera motion trajectory comprises: for the pictures imaged in every two consecutive time slices, determining the camera displacement within those two time slices from the positional relation of the DVS events, and connecting all camera displacements of every two consecutive time slices within the exposure time in chronological order to form the camera motion trajectory within the exposure time.
20. The apparatus of claim 13 or 15, wherein the image deblurring module spatially aligning and superimposing the pictures imaged in all time slices according to the DVS event set comprises: dividing the exposure time into N time slices in chronological order, separately imaging the DVS events that fall within the same time slice into a picture, and spatially aligning and superimposing the pictures imaged in all time slices.
21. The apparatus of claim 20, wherein the pictures of any two time slices are spatially aligned as follows: for the pictures A and B of any two time slices, calculate

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x + Δx, y + Δy)]²

where (x, y) are the two-dimensional coordinates of a pixel in a picture, A(x, y) and B(x, y) denote the values of that pixel in pictures A and B respectively, (Δx, Δy) is the two-dimensional displacement required to align picture B to picture A, and argmin(·) denotes the argument at which (·) attains its minimum; and shift picture B and/or picture A according to the computed (Δx, Δy) to align pictures A and B.
22. The apparatus of claim 20, wherein spatially aligning and superimposing the pictures imaged in all time slices comprises one of the following:
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in chronological order, successively aligning and superimposing each later picture onto the earlier picture;
computing (Δx, Δy) for the pictures of every two consecutive time slices, and, in reverse chronological order, successively aligning and superimposing each earlier picture onto the later picture.
23. The apparatus of claim 17, wherein the blur kernel used in the deblurring processing is computed as

k = argmin_k ‖vec(∇I ⊗ k − ∇C)‖₂² + λ₁‖vec(E ⊗ k − ∇C)‖₂² + λ₂‖vec(k)‖₁

where vec(·) rearranges a matrix into a vector, ∇ denotes the gradient operation, ⊗ denotes convolution, I is the sharp picture after deblurring, C is the currently aligned blurred picture, E is the currently aligned DVS edge estimation image, ‖·‖₂ denotes the 2-norm of a vector, ‖·‖₁ denotes the 1-norm of a vector, λ₁ and λ₂ are two preset weights, and argmin(·) denotes the argument at which (·) attains its minimum.
24. The apparatus of claim 17, wherein the average edge map generation sub-module determines the average edge map by:
computing, for the sharp picture of the current iteration, an edge map in each segment direction of the camera motion trajectory, superimposing all the edge maps, and computing the skeleton map of the result as the average edge map.
CN201610039224.2A 2016-01-21 2016-01-21 Image deblurring method and device Active CN106991650B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201610039224.2A CN106991650B (en) 2016-01-21 2016-01-21 Image deblurring method and device
KR1020160107863A KR102563750B1 (en) 2016-01-21 2016-08-24 Method and Device of Image Deblurring
US15/356,808 US10062151B2 (en) 2016-01-21 2016-11-21 Image deblurring method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610039224.2A CN106991650B (en) 2016-01-21 2016-01-21 Image deblurring method and device

Publications (2)

Publication Number Publication Date
CN106991650A CN106991650A (en) 2017-07-28
CN106991650B (en) 2020-09-15

Family

ID=59414291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610039224.2A Active CN106991650B (en) 2016-01-21 2016-01-21 Image deblurring method and device

Country Status (2)

Country Link
KR (1) KR102563750B1 (en)
CN (1) CN106991650B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451973B (en) * 2017-07-31 2020-05-22 西安理工大学 Motion blurred image restoration method based on rich edge region extraction
CN108629751A (en) * 2018-05-08 2018-10-09 深圳市唯特视科技有限公司 A kind of image deblurring method always to be made a variation based on weight weighted graph
KR102508992B1 (en) 2018-06-19 2023-03-14 삼성디스플레이 주식회사 Image processing device and image processing method
KR102435519B1 (en) * 2018-06-20 2022-08-24 삼성전자주식회사 Method and apparatus for processing 360 degree image
KR20200029270A (en) 2018-09-10 2020-03-18 삼성전자주식회사 Electronic apparatus for recognizing an object and controlling method thereof
KR102584501B1 (en) * 2018-10-05 2023-10-04 삼성전자주식회사 Method for recognizing object and autonomous driving device therefor
CN110428397A (en) * 2019-06-24 2019-11-08 武汉大学 A kind of angular-point detection method based on event frame
CN111340733B (en) * 2020-02-28 2022-07-26 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111369482B (en) * 2020-03-03 2023-06-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113724142B (en) * 2020-05-26 2023-08-25 杭州海康威视数字技术股份有限公司 Image Restoration System and Method
CN113784014B (en) * 2020-06-04 2023-04-07 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN113923319B (en) * 2021-12-14 2022-03-08 成都时识科技有限公司 Noise reduction device, noise reduction method, chip, event imaging device and electronic equipment
CN114494085B (en) * 2022-04-14 2022-07-15 季华实验室 Video stream restoration method, system, electronic device and storage medium
CN116527407B (en) * 2023-07-04 2023-09-01 贵州毅丹恒瑞医药科技有限公司 Encryption transmission method for fundus image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533263A (en) * 2012-07-03 2014-01-22 三星电子株式会社 Image sensor chip, operation method, and system having the same
KR20140042016A (en) * 2012-09-26 2014-04-07 삼성전자주식회사 Proximity sensor and proximity sensing method using design vision sensor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201937736U (en) * 2007-04-23 2011-08-17 德萨拉技术爱尔兰有限公司 Digital camera
US9696812B2 (en) * 2013-05-29 2017-07-04 Samsung Electronics Co., Ltd. Apparatus and method for processing user input using motion of object
US9349165B2 (en) * 2013-10-23 2016-05-24 Adobe Systems Incorporated Automatically suggesting regions for blur kernel estimation


Also Published As

Publication number Publication date
CN106991650A (en) 2017-07-28
KR102563750B1 (en) 2023-08-04
KR20170087814A (en) 2017-07-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant