CN110677556A - Image deblurring method based on camera positioning - Google Patents

Image deblurring method based on camera positioning

Info

Publication number
CN110677556A
CN110677556A (application CN201910711598.8A)
Authority
CN
China
Prior art keywords
image
depth
camera
deblurring
fuzzy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910711598.8A
Other languages
Chinese (zh)
Other versions
CN110677556B (en)
Inventor
颜成钢
李明珠
孙垚棋
张继勇
张勇东
沈韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201910711598.8A
Publication of CN110677556A
Application granted
Publication of CN110677556B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an image deblurring method based on camera positioning. The method comprises two stages: a blurred-image deblurring stage and a deblurred-image feature extraction stage. In the deblurring stage, the depth image corresponding to the current blurred camera frame provides the depth information of the scene, from which the three-dimensional coordinates of the scene points are obtained. Camera motion information over the exposure time, including translation and rotation, is obtained from an inertial measurement unit (IMU). From these data the blur kernel of a selected image block is calculated, and a deconvolution operation with this kernel yields the deblurred image. In the feature extraction stage, ORB features are extracted from the deblurred image and used for the subsequent SLAM process. Because only the selected block is deblurred, the amount of computation is reduced to a certain extent and the running speed is improved.

Description

Image deblurring method based on camera positioning
Technical Field
The invention belongs to the field of computer vision and particularly relates to an image deblurring method based on camera positioning, aimed at motion blur caused by camera motion during the exposure time.
Background
In visual SLAM, camera shake during the exposure time blurs the captured image. Motion blur makes it difficult to associate data with reconstructed landmarks and to reconstruct new features, so the SLAM system may fail to localize or reconstruct.
Image deblurring is an important direction in the field of image restoration and has wide application in real life. In SLAM, a deblurred image provides more features, so the system can better perform data association and reconstruct new features, the SLAM process can continue successfully in subsequent frames, and the robustness of visual SLAM to motion blur is improved.
High-quality methods for removing motion blur have been developed in recent decades, but most require a large amount of computation, making them difficult to apply to the images of a visual SLAM system. Fergus et al. achieved good single-image restoration based on statistics of natural image gradients and a variational Bayesian method, but parameter estimation makes it very slow, and restoration with some large blur kernels is not stable enough. Building on this, Qi Shan et al. analyzed the origin of ringing artifacts and proposed global and local priors, improving both the accuracy of blur-kernel estimation and the suppression of ringing; however, the algorithm converges slowly, the results depend strongly on parameter settings, and the time complexity is high.
The invention mainly studies an image deblurring algorithm based on camera positioning: it calculates the camera motion that produces the motion blur using a depth image and an inertial measurement unit (IMU), and then deblurs an image block using that motion. Because the algorithm deblurs only blocks of the image, the amount of computation is small and the running speed is significantly improved.
Disclosure of Invention
The invention provides an image deblurring method based on camera positioning, aiming at image blurring generated by camera shake in exposure time.
The image deblurring system is mainly divided into two stages: a blurred image deblurring stage and a deblurred image feature extraction stage.
A blurred image deblurring stage:
According to the method, the depth information of the captured scene is obtained from the depth image acquired by the depth camera of the visual SLAM system, from which the three-dimensional coordinates of the scene points are obtained. Motion information of the camera over the exposure time, including translation and rotation, is obtained from an inertial measurement unit (IMU). From these data the blur kernel of the selected image block is calculated, and Lucy-Richardson deconvolution with this kernel yields the deblurred image.
A deblurred image feature extraction stage:
In this stage, ORB features are extracted from the image produced by the blurred-image deblurring stage, and the extracted features are used in the subsequent SLAM process.
The method is implemented according to the following steps:
Step 1: obtain, through an inertial measurement unit (IMU), the camera motion trajectory information that produces the image blur, including translation and rotation, and from it calculate the motion of the camera during the exposure time.
Step 2: acquire the depth image corresponding to the blurred image using a depth camera, and preprocess it by filtering and denoising. Obtain the depth information of the scene points from the processed depth image, and from it construct the three-dimensional coordinates L of the scene points.
Step 3: calculate the blur kernel of the selected image block using the camera motion information obtained in step 1 and the three-dimensional coordinates L of the scene points obtained in step 2.
Step 4: deblur the selected image block by the Lucy-Richardson (LR) deconvolution algorithm using the blur kernel obtained in step 3.
Step 5: extract feature points of the deblurred image using the ORB feature extraction method, and perform the subsequent SLAM process with the extracted feature points.
The method of the invention has the advantages and beneficial results that:
1. without deblurring, it is difficult to obtain sufficient localization function and the accuracy of visual SLAM is greatly reduced. The invention proposes to combine the visual SLAM with an image deblurring algorithm, and to use the information obtained in the visual SLAM for estimating a motion blur kernel by considering motion blur. By deblurring the image, the data association in visual SLAM is greatly enhanced and the localization function can be performed robustly even for blurred scenes.
2. The depth information and the camera motion information of the scene points are obtained by utilizing the depth image and the inertial measurement unit, and then the fuzzy core of the image block is calculated. From the camera motion and the three-dimensional spatial coordinates of the scene points, the motion blur kernel of the selected block can be easily predicted without any complex image processing algorithms.
3. The present invention pre-computes the blur kernel, so the deblurring problem is called non-blind deconvolution, which is simpler and faster than most blind deconvolution methods.
4. Depth information of a scene is obtained using a depth image, thereby eliminating non-uniform blur caused by camera motion and depth variation of the scene. The use of accelerometers and gyroscopes of the inertial measurement unit to improve the blurred image may help improve the accuracy and efficiency of deblurring.
5. The invention only carries out deblurring on the selected block, reduces the calculated amount to a certain extent and improves the operation speed.
Drawings
FIG. 1 is a system flow diagram of the present invention;
Detailed Description
The present invention will be described in detail with reference to specific embodiments.
The image deblurring method based on camera positioning is implemented according to the following steps.
Step 1: divide the blurred image into small blocks of 5 × 5. Then obtain the camera motion information, including translation and rotation, from the inertial measurement unit (IMU) to get the motion P_k of the camera during the exposure time:
$$P_k = T_1 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}, \qquad R \in SO(3),\ t \in \mathbb{R}^3$$
where k denotes the k-th frame image and T_1 is the transformation matrix between two successive frames, whose data come from the inertial measurement unit; R is a 3 × 3 orthogonal matrix representing the rotation, and t is a three-dimensional vector whose components are the translations along the x, y, and z directions. ℝ³ denotes the set of real three-dimensional vectors, and SO(3) is the special orthogonal group, i.e. the group of rotation matrices.
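Step 1 can be sketched in code. The snippet below assembles the homogeneous transform T_1 = [[R, t], [0^T, 1]] from a rotation and a translation; the `rotation_z` helper and the numeric values are illustrative stand-ins for quantities integrated from real IMU readings, not values from the patent:

```python
import numpy as np

def make_transform(R, t):
    """Assemble the 4x4 homogeneous transform T1 = [[R, t], [0, 1]]
    from a rotation R (3x3, in SO(3)) and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rotation_z(theta):
    """Rotation about the z axis by theta radians; a stand-in for a
    rotation integrated from gyroscope readings."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# P_k for frame k: the inter-frame motion T1 measured by the IMU.
P_k = make_transform(rotation_z(0.02), np.array([0.01, 0.0, 0.005]))
```

The same 4 × 4 form is what gets multiplied against the homogeneous scene point L in step 3.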
Step 2: preprocess the depth image by filtering and denoising. Obtain the depth information of the scene point, i.e. the Z-axis coordinate of its three-dimensional position, from the processed depth image. Using the intrinsic parameters of the depth camera, convert the image coordinates of a point in the scene into its three-dimensional coordinate L in the depth-camera coordinate system:
$$Z_{k\_depth} \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_k \\ Y_k \\ Z_{k\_depth} \end{bmatrix}$$
where (u_k, v_k) is a point on the blurred image of the k-th frame and Z_{k_depth} is the depth value at (u_k, v_k) on the corresponding depth image. f_x, f_y, c_x, c_y are the intrinsic parameters of the depth camera, and the matrix they form is called the intrinsic matrix of the depth camera. (X_k, Y_k, Z_{k_depth}) are the coordinates of (u_k, v_k) in the depth-camera coordinate system, i.e. L = (X_k, Y_k, Z_{k_depth}, 1)^T.
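A minimal sketch of the back-projection in step 2, inverting the pinhole model with f_x, f_y, c_x, c_y; the intrinsic values below are hypothetical, not taken from the patent:

```python
import numpy as np

def back_project(u, v, z_depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z_depth into the
    depth-camera frame; returns the homogeneous coordinate L."""
    X = (u - cx) * z_depth / fx
    Y = (v - cy) * z_depth / fy
    return np.array([X, Y, z_depth, 1.0])

# Illustrative intrinsics (hypothetical values).
fx = fy = 525.0
cx, cy = 319.5, 239.5
L = back_project(320.0, 240.0, 2.0, fx, fy, cx, cy)
```

In practice the depth image would first be filtered and denoised, as the step describes, before its values are read out here.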
Step 3: from the camera motion P_k obtained in step 1 and the three-dimensional coordinate L of the scene point obtained in step 2, calculate (u_{k-1}, v_{k-1}). Then calculate the blur kernel of the selected block in the blurred image, defined as k = k(l, φ), where l is the blur amount and φ the blur direction, computed by the following formulas:
$$Z_{k-1\_depth} \begin{bmatrix} u_{k-1} \\ v_{k-1} \\ 1 \end{bmatrix} = K_1\, h(P_k L)$$
$$l = \sqrt{(u_{k-1} - u_k)^2 + (v_{k-1} - v_k)^2}, \qquad \phi = \arctan\frac{v_{k-1} - v_k}{u_{k-1} - u_k}$$
where K_1 is the intrinsic matrix of the depth camera, Z_{k-1_depth} is the depth of the scene point in frame k-1, and h(·) is the mapping function from homogeneous to inhomogeneous coordinates.
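The blur amount l and direction φ, and a kernel built from them, can be sketched as follows. The patent does not spell out how the kernel is rasterised from (l, φ), so the linear motion-blur rasterisation below is a common assumption rather than the patent's exact procedure:

```python
import numpy as np

def blur_amount_and_direction(uk, vk, uk1, vk1):
    """Blur amount l = length of the pixel displacement between frames,
    direction phi = its angle (the l and phi of the blur kernel k(l, phi))."""
    du, dv = uk1 - uk, vk1 - vk
    return np.hypot(du, dv), np.arctan2(dv, du)

def linear_motion_kernel(l, phi):
    """Rasterise a normalised linear motion-blur kernel of length l
    at angle phi on an odd-sized grid covering the streak."""
    n = max(3, int(np.ceil(l)) | 1)  # odd size
    k = np.zeros((n, n))
    c = n // 2
    steps = max(int(np.ceil(l)), 1)
    for s in np.linspace(-l / 2, l / 2, 2 * steps + 1):
        x = int(round(c + s * np.cos(phi)))
        y = int(round(c + s * np.sin(phi)))
        if 0 <= x < n and 0 <= y < n:
            k[y, x] = 1.0
    return k / k.sum()  # kernel must integrate to 1

# A point that moved 4 px right and 3 px down between frames.
l, phi = blur_amount_and_direction(100.0, 100.0, 104.0, 103.0)
kernel = linear_motion_kernel(l, phi)
```

The kernel is normalised so that convolution with it conserves image intensity, which the deconvolution in step 4 relies on.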
Step 4: deblur the block with the Lucy-Richardson (LR) deconvolution algorithm using the blur kernel obtained in step 3.
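A minimal Lucy-Richardson iteration as used in step 4; this is the textbook form of the algorithm, not the patent's exact implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Minimal Lucy-Richardson deconvolution: the estimate is repeatedly
    multiplied by the correlation of (observed / re-blurred) with the PSF."""
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic check: blur an impulse with a 3x3 box PSF, then deconvolve.
sharp = np.zeros((21, 21))
sharp[10, 10] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Because the kernel is known here (non-blind deconvolution, as the disclosure notes), a few dozen multiplicative iterations suffice, which is far cheaper than blind methods that must estimate the kernel as well.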
Step 5: extract feature points from the deblurred block of the resulting image with the ORB feature extraction method, and use the extracted features for the subsequent SLAM process.
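ORB combines a FAST keypoint detector with a rotation-aware BRIEF binary descriptor, matched by Hamming distance. A toy BRIEF-style descriptor is sketched below; the seeded random sampling pattern is a stand-in for ORB's learned pattern, and the keypoint is assumed given:

```python
import numpy as np

def brief_descriptor(image, keypoint, n_bits=128, patch=15, seed=0):
    """Toy BRIEF-style binary descriptor (the descriptor half of ORB):
    each bit compares the intensities of one pixel pair sampled in a
    patch around the keypoint."""
    rng = np.random.default_rng(seed)   # fixed pattern shared by all keypoints
    y, x = keypoint
    half = patch // 2
    offsets = rng.integers(-half, half + 1, size=(n_bits, 4))
    bits = np.empty(n_bits, dtype=np.uint8)
    for i, (dy1, dx1, dy2, dx2) in enumerate(offsets):
        bits[i] = image[y + dy1, x + dx1] < image[y + dy2, x + dx2]
    return bits

def hamming(a, b):
    """ORB descriptors are matched by Hamming distance."""
    return int(np.count_nonzero(a != b))

img = np.random.default_rng(1).random((64, 64))
d1 = brief_descriptor(img, (32, 32))
d2 = brief_descriptor(img, (32, 32))
```

Matching such binary descriptors between the deblurred frame and the map is what restores the data association that motion blur had broken.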

Claims (4)

1. An image deblurring method based on camera positioning is characterized by comprising the following steps:
step 1, obtaining camera motion track information for generating image blur through an inertial measurement unit, wherein the camera motion track information comprises translation and rotation information, and further calculating the motion of a camera in exposure time;
step 2, acquiring a depth image corresponding to the blurred image by using a depth camera, and performing filtering and denoising preprocessing on the depth image; obtaining depth information of the scene points by using the processed depth image, and further constructing the three-dimensional coordinates L of the scene points;
step 3, calculating a blur kernel of the selected image block by using the camera motion information obtained in step 1 and the three-dimensional coordinates L of the scene points obtained in step 2;
step 4, deblurring the selected image block by a Lucy-Richardson (LR) deconvolution algorithm using the blur kernel obtained in step 3;
step 5, extracting feature points of the deblurred image by using an ORB feature extraction method, and performing the subsequent SLAM process by using the extracted feature points.
2. The method according to claim 1, wherein the step 1 is implemented as follows:
dividing the blurred image into small blocks of 5 × 5; then obtaining the camera motion information, including translation and rotation, by using the inertial measurement unit, to obtain the motion P_k of the camera during the exposure time:
$$P_k = T_1 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}, \qquad R \in SO(3),\ t \in \mathbb{R}^3$$
where k denotes the k-th frame image and T_1 is the transformation matrix between two successive frames, whose data come from the inertial measurement unit; R is a 3 × 3 orthogonal matrix representing the rotation, and t is a three-dimensional vector whose components are the translations along the x, y, and z directions; ℝ³ denotes the set of real three-dimensional vectors, and SO(3) is the special orthogonal group, i.e. the group of rotation matrices.
3. The method according to claim 2, wherein the step 2 is implemented as follows:
preprocessing the depth image, including filtering and denoising; obtaining the depth information of the scene point, namely the Z-axis coordinate of its three-dimensional position, by using the processed depth image; converting the image coordinates of a point in the scene into the three-dimensional coordinate L in the depth-camera coordinate system by using the intrinsic parameters of the depth camera:
$$Z_{k\_depth} \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_k \\ Y_k \\ Z_{k\_depth} \end{bmatrix}$$
wherein (u_k, v_k) is a point on the blurred image of the k-th frame and Z_{k_depth} is the depth value at (u_k, v_k) on the corresponding depth image; f_x, f_y, c_x, c_y are the intrinsic parameters of the depth camera, and the matrix they form is called the intrinsic matrix of the depth camera; (X_k, Y_k, Z_{k_depth}) are the coordinates of (u_k, v_k) in the depth-camera coordinate system, i.e. L = (X_k, Y_k, Z_{k_depth}, 1)^T.
4. The image deblurring method based on camera positioning as claimed in claim 3, wherein the step 3 is implemented as follows:
using the camera motion P_k obtained in step 1 and the three-dimensional coordinate L of the scene point obtained in step 2 to calculate (u_{k-1}, v_{k-1}); then calculating the blur kernel of the selected block in the blurred image, the blur kernel being defined as k = k(l, φ), wherein l is the blur amount and φ is the blur direction, calculated by the following formulas:
$$Z_{k-1\_depth} \begin{bmatrix} u_{k-1} \\ v_{k-1} \\ 1 \end{bmatrix} = K_1\, h(P_k L)$$
$$l = \sqrt{(u_{k-1} - u_k)^2 + (v_{k-1} - v_k)^2}$$
$$\phi = \arctan\frac{v_{k-1} - v_k}{u_{k-1} - u_k}$$
wherein K_1 is the intrinsic matrix of the depth camera, Z_{k-1_depth} is the depth of the scene point in frame k-1, and h(·) is the mapping function from homogeneous to inhomogeneous coordinates.
CN201910711598.8A 2019-08-02 2019-08-02 Image deblurring method based on camera positioning Active CN110677556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711598.8A CN110677556B (en) 2019-08-02 2019-08-02 Image deblurring method based on camera positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910711598.8A CN110677556B (en) 2019-08-02 2019-08-02 Image deblurring method based on camera positioning

Publications (2)

Publication Number Publication Date
CN110677556A true CN110677556A (en) 2020-01-10
CN110677556B CN110677556B (en) 2021-09-28

Family

ID=69068676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910711598.8A Active CN110677556B (en) 2019-08-02 2019-08-02 Image deblurring method based on camera positioning

Country Status (1)

Country Link
CN (1) CN110677556B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170070673A1 (en) * 2013-03-15 2017-03-09 Pelican Imaging Corporation Systems and Methods for Synthesizing High Resolution Images Using Image Deconvolution Based on Motion and Depth Information
CN103440624A (en) * 2013-08-07 2013-12-11 华中科技大学 Image deblurring method and device based on motion detection
CN108876897A (en) * 2018-04-20 2018-11-23 杭州电子科技大学 The quickly scene three-dimensional reconstruction method under movement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hyeoungho Bae et al.: "Accurate Motion Deblurring Using Camera Motion Tracking and Scene Depth", 2013 IEEE Workshop on Applications of Computer Vision (WACV) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942430A (en) * 2019-10-08 2020-03-31 杭州电子科技大学 Method for improving motion blur robustness of TOF camera
CN112069980A (en) * 2020-09-03 2020-12-11 三一专用汽车有限责任公司 Obstacle recognition method, obstacle recognition system, and storage medium
CN112069980B (en) * 2020-09-03 2022-01-25 三一专用汽车有限责任公司 Obstacle recognition method, obstacle recognition system, and storage medium
CN113222863A (en) * 2021-06-04 2021-08-06 中国铁道科学研究院集团有限公司 High-speed railway operation environment based video self-adaptive deblurring method and device
CN113222863B (en) * 2021-06-04 2024-04-16 中国铁道科学研究院集团有限公司 Video self-adaptive deblurring method and device based on high-speed railway operation environment
US11704777B2 (en) 2021-08-27 2023-07-18 Raytheon Company Arbitrary motion smear modeling and removal
CN113643217A (en) * 2021-10-15 2021-11-12 广州市玄武无线科技股份有限公司 Video motion blur removing method and device, terminal equipment and readable storage medium
CN113643217B (en) * 2021-10-15 2022-03-29 广州市玄武无线科技股份有限公司 Video motion blur removing method and device, terminal equipment and readable storage medium

Also Published As

Publication number Publication date
CN110677556B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN110677556B (en) Image deblurring method based on camera positioning
Lee et al. Copy-and-paste networks for deep video inpainting
CN110163818B (en) Low-illumination video image enhancement method for maritime unmanned aerial vehicle
CN106991650B (en) Image deblurring method and device
CN107749987B (en) Digital video image stabilization method based on block motion estimation
US9406108B2 (en) Deblurring of an image from a sequence of images
KR101839617B1 (en) Method and apparatus for removing non-uniform motion blur using multiframe
CN112164011B (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
Zhu et al. Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring
CN111028166B (en) Video deblurring method based on iterative neural network
CN113793285B (en) Ultrafast restoration method and system for pneumatic optical effect target twin image
Song et al. An adaptive l 1–l 2 hybrid error model to super-resolution
CN109767389B (en) Self-adaptive weighted double-norm remote sensing image blind super-resolution reconstruction method based on local and non-local combined prior
CN112509144A (en) Face image processing method and device, electronic equipment and storage medium
Kim et al. Dynamic scene deblurring using a locally adaptive linear blur model
CN110930324A (en) Fuzzy star map restoration method
JP2009111921A (en) Image processing device and image processing method
CN111028168B (en) High-energy flash image deblurring method containing noise blur
CN112365516A (en) Virtual and real occlusion processing method in augmented reality
Zhao et al. Infrared image deblurring based on generative adversarial networks
CN116704123A (en) Three-dimensional reconstruction method combined with image main body extraction technology
Zhu et al. Learning Spatio-Temporal Sharpness Map for Video Deblurring
Peng et al. PDRF: progressively deblurring radiance field for fast scene reconstruction from blurry images
Mohan Adaptive super-resolution image reconstruction with lorentzian error norm
CN113902847A (en) Monocular depth image pose optimization method based on three-dimensional feature constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant