CN113222863B - Video self-adaptive deblurring method and device based on high-speed railway operation environment - Google Patents


Info

Publication number
CN113222863B
Authority
CN
China
Prior art keywords
coordinate system
camera
image
matrix
initial coordinate
Prior art date
Legal status
Active
Application number
CN202110625709.0A
Other languages
Chinese (zh)
Other versions
CN113222863A (en)
Inventor
王凡
赵宏伟
刘俊博
王胜春
赵鑫欣
武斯全
苏文婧
曹佳伟
方玥
王乐
Current Assignee
China Academy of Railway Sciences Corp Ltd CARS
Infrastructure Inspection Institute of CARS
Beijing IMAP Technology Co Ltd
Original Assignee
China Academy of Railway Sciences Corp Ltd CARS
Infrastructure Inspection Institute of CARS
Beijing IMAP Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Academy of Railway Sciences Corp Ltd CARS, Infrastructure Inspection Institute of CARS, and Beijing IMAP Technology Co Ltd
Priority to CN202110625709.0A
Publication of CN113222863A
Application granted
Publication of CN113222863B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose


Abstract

The invention discloses a video self-adaptive deblurring method and device based on the high-speed railway operating environment. The method comprises: obtaining camera motion parameters; determining the rotation matrix and translation matrix of the camera based on the camera motion parameters; determining a sparse resampling matrix from the rotation and translation matrices of the camera; obtaining a blurred image; and, based on the sparse resampling matrix and the blurred image, obtaining a restored image with a spatially varying deconvolution algorithm. The invention can restore higher-quality images, helps improve the quality of onboard high-speed railway operating-environment video, and provides effective data support for the safety inspection of the high-speed railway operating environment.

Description

Video self-adaptive deblurring method and device based on high-speed railway operation environment
Technical Field
The invention relates to the technical field of image processing, in particular to a video self-adaptive deblurring method and device based on a high-speed railway running environment.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
China's high-speed railway network has long mileage and large span, and the natural environment along its lines is highly complex. Periodic safety inspection of the operating environment helps find potential line hazards in time and safeguards the safe operation of high-speed trains. At present, automatic detection from video data using technologies such as computer vision and artificial intelligence is the main technical means for high-speed railway operating-environment safety inspection. High-speed railway operating-environment video is collected in two ways:
(1) All-weather monitoring of specific scenes with cameras installed beside the high-speed railway. Video images obtained this way are of higher quality, which benefits subsequent safety inspection tasks. However, the natural environment along the line is complex, and some sections lack the conditions for erecting cameras, so operating-environment information for the whole line cannot be obtained;
(2) Shooting environment video with an onboard camera while the high-speed train runs. This method can acquire operating-environment information for the whole line, but because the train travels at high speed, the captured video images suffer motion blur whose degree is positively correlated with the train's speed, which makes the subsequent operating-environment safety detection task very difficult. In addition, affected by track irregularity, the high-speed train inevitably vibrates while running, the onboard camera vibrates with it, and defocus blur results.
To adaptively remove the motion blur and defocus blur caused by onboard-camera motion under high-speed driving conditions, existing image motion-blur restoration methods are applied, with the aim of collecting high-quality high-speed railway operating-environment video. Deconvolution is a classical image motion-blur restoration approach, divided into blind and non-blind deconvolution according to whether the point spread function (Point Spread Function, PSF) is known. Blind deconvolution methods are generally stochastic methods based on a Bayesian framework; they exploit the statistical characteristics of local image content and prior knowledge of the imaging system, and solve for the PSF and the restored image by Bayesian inference, e.g., minimum mean-square error, hidden-variable, maximum a posteriori, and variational methods. Non-blind deconvolution methods generally estimate the PSF iteratively from strong-edge prediction, the image frequency domain, and image-content priors, then deconvolve with algorithms such as Richardson-Lucy, first-order primal-dual optimization, or hyper-Laplacian priors to obtain the restored image.
In recent years, deep convolutional neural networks (Deep Convolutional Neural Network, DCNN) have achieved many excellent results in image deblurring, such as the DeblurGAN series, EDVR, and PSS-NSC. The basic principle is to reconstruct a restored image from a blurred image with a conditional generative adversarial network (Conditional Generative Adversarial Networks, C-GAN), measure the similarity between the restored image and a sharp reference image as the network loss, and iteratively update the parameters of each network layer until that similarity is optimal.
However, when existing methods are applied to onboard high-speed railway operating-environment video, two problems arise:
(1) Classical image deblurring methods generally assume spatially invariant image blur and estimate the PSF from prior knowledge of the specific scene, image content, etc., to obtain the restored image. However, the environment along a high-speed railway changes continuously, and the blur degree of the video images is also affected by train speed and track smoothness, so the resulting image blur is spatially varying, and adaptive deblurring is difficult for existing methods;
(2) DCNN-based image deblurring methods require manually constructing a large number of sharp-blurred image pairs as training samples for the network model. Under high-speed driving conditions, however, it is difficult to obtain sharp reference images for building a training set.
Disclosure of Invention
The embodiment of the invention provides a video self-adaptive deblurring method based on a high-speed railway running environment, which comprises the following steps:
obtaining a camera motion parameter;
determining a rotation matrix and a translation matrix of the camera based on the camera motion parameters;
determining a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
obtaining a blurred image;
based on the sparse resampling matrix and the blurred image, obtaining a restored image with a spatially varying deconvolution algorithm.
The embodiment of the invention also provides a video self-adaptive deblurring device based on the high-speed railway running environment, which comprises the following steps:
the camera motion parameter obtaining module is used for obtaining camera motion parameters;
the camera rotation matrix and translation matrix determining module is used for determining a camera rotation matrix and a translation matrix based on camera motion parameters;
the sparse resampling matrix determining module is used for determining a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
the blurred image obtaining module is used for obtaining a blurred image;
the restored image obtaining module is used for obtaining a restored image with a spatially varying deconvolution algorithm based on the sparse resampling matrix and the blurred image.
The embodiment of the invention also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the video self-adaptive deblurring method based on the high-speed railway running environment when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the steps of the video self-adaptive deblurring method based on the high-speed railway running environment.
In the embodiment of the invention, camera motion parameters are obtained; the rotation matrix and translation matrix of the camera are determined based on the camera motion parameters; a sparse resampling matrix is determined from the rotation and translation matrices; a blurred image is obtained; and, based on the sparse resampling matrix and the blurred image, a spatially varying deconvolution algorithm produces the restored image. Spatially varying image blur can thus be removed, providing effective data support for high-speed railway operating-environment safety inspection.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort. In the drawings:
FIG. 1 is a flowchart of a video adaptive deblurring method based on a high-speed railway operating environment in an embodiment of the invention;
FIG. 2 is an original image of a video of a high-speed railway operating environment in an embodiment of the invention;
FIG. 3 is a flowchart of a method for adaptively deblurring a video of a specific high-speed railway operating environment in an embodiment of the present invention;
FIG. 4 is a comparison chart of deblurring effects of a video of a high-speed railway running environment in an embodiment of the invention;
FIG. 5 is a BRISQUE index contrast test result of an original image and a restored image in the embodiment of the present invention;
fig. 6 is a block diagram of a video adaptive deblurring device based on a high-speed railway running environment in an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
Fig. 1 is a flowchart of a video adaptive deblurring method based on a high-speed railway operating environment in an embodiment of the present invention, and as shown in fig. 1, the method includes:
step 101: obtaining a camera motion parameter;
step 102: determining a rotation matrix and a translation matrix of the camera based on the camera motion parameters;
step 103: determining a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
step 104: obtaining a blurred image;
step 105: based on the sparse resampling matrix and the blurred image, obtaining a restored image with a spatially varying deconvolution algorithm.
Specifically, the processing flow of the video self-adaptive deblurring method based on the high-speed railway running environment is shown in fig. 3. The method measures the camera's motion parameters with an inertial measurement unit and from them computes the camera's motion trajectory to obtain the translation and rotation matrices; it then computes the sparse resampling matrix, and finally feeds the sparse resampling matrix together with the blurred image into a spatially varying deconvolution algorithm to obtain the restored image.
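The data flow just described (steps 101 to 105) can be sketched as follows. This is a minimal illustration with invented function and variable names, not the patent's implementation; the resampling and deconvolution stages are elaborated in the sections below, so here they are only stubbed to show how IMU measurements feed the rest of the pipeline.

```python
import numpy as np

def estimate_motion(gyro, accel, dt):
    """Crude stand-in for the IMU-based trajectory estimation (steps 101-102):
    integrate angular velocity for orientation and acceleration for position."""
    angles = np.cumsum(gyro * dt, axis=0)        # rough angular position per sample
    velocity = np.cumsum(accel * dt, axis=0)
    position = np.cumsum(velocity * dt, axis=0)
    return angles, position

def deblur_frame(blurred, gyro, accel, dt=1e-3):
    """Per-frame flow (steps 101-105), heavily simplified: the sparse
    resampling matrix and the deconvolution itself are omitted here."""
    angles, position = estimate_motion(gyro, accel, dt)
    return angles, position, blurred

# Toy IMU stream: no rotation, constant upward acceleration (gravity only).
gyro = np.zeros((10, 3))
accel = np.tile([0.0, 0.0, 9.81], (10, 1))
angles, position, _ = deblur_frame(np.zeros((4, 4)), gyro, accel)
```

The point of the sketch is only the ordering: motion parameters are turned into a trajectory first, and everything downstream consumes that trajectory.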
In an embodiment of the invention, the spatially varying image blur model is as follows.
In general, spatially invariant image blur is regarded as the result of a point spread function and additive white Gaussian noise acting together on a sharp image, and can be formalized as:

B = K ⊗ I + N (1)

where I represents the latent sharp image, K the point spread function, N the noise, B the known blurred image, and ⊗ denotes convolution.
However, movement of the camera during the exposure time results in spatially varying image blur, such as: (1) defocus blur due to focal-length variation of the image plane; (2) motion blur due to camera translational motion; (3) motion blur due to the roll, yaw, and pitch movements of the camera.
In a real scene, the light intensity at a point (X, Y, Z) at time t is projected through the camera projection matrix P_t onto the image plane at (u_t, v_t), and the result at that point is the pixel value. The process can be expressed as:

(u_t, v_t, 1)^T = P_t (X, Y, Z, 1)^T (2)
during the exposure time, if the camera is in translational or rotational motion, P t Can change over time, resulting in the same point in the scene being projected to a different location on the image plane at each instant, producing image blur. The projected trajectory of the point is the PSF.
The camera projection matrix P_t can be expressed as the product of the camera intrinsic matrix A, the standard perspective projection matrix C, and the extrinsic matrix E_t:

P_t = A C E_t (3)
where E_t is the time-dependent extrinsic matrix composed of the camera rotation matrix R_t and translation matrix T_t. According to the planar homography, the point (u_0, v_0) at time t = 0 can be mapped to any other instant t:

(u_t, v_t, 1)^T = H_t(d) (u_0, v_0, 1)^T (4)

H_t(d) = A (R_t + T_t M^T / d) A^{-1} (5)

where d represents a depth value and M is a unit vector orthogonal to the image plane.
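The point mapping of Eq. (4) can be illustrated numerically. Note the concrete form of H_t(d) used below, H_t(d) = A (R_t + T_t M^T / d) A^{-1}, is the standard plane-induced homography and is assumed here (it matches the variables the text defines, but the patent's exact Eq. (5) is not reproduced on this page); the intrinsics and motion values are toy numbers.

```python
import numpy as np

def homography(A, R_t, T_t, d, M=np.array([0.0, 0.0, 1.0])):
    """Assumed form of H_t(d): plane-induced homography for a plane at
    depth d with normal M, camera intrinsics A, motion (R_t, T_t)."""
    return A @ (R_t + np.outer(T_t, M) / d) @ np.linalg.inv(A)

def map_point(H, u0, v0):
    """Eq. (4): map a pixel from time 0 to time t, then dehomogenize."""
    p = H @ np.array([u0, v0, 1.0])
    return p[0] / p[2], p[1] / p[2]

A = np.diag([1000.0, 1000.0, 1.0])   # toy intrinsics, fx = fy = 1000 px
R = np.eye(3)                        # no rotation during exposure
T = np.array([0.01, 0.0, 0.0])       # 1 cm sideways translation
H = homography(A, R, T, d=10.0)      # scene plane 10 m away
u_t, v_t = map_point(H, 512.0, 384.0)   # pixel drifts ~1 px horizontally
```

A 1 cm translation against a 10 m deep scene shifts the pixel by about fx * 0.01 / 10 = 1 pixel, which is exactly the per-instant displacement that accumulates into the PSF.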
Thus, given an image I_0 at time t = 0, the image I_t at a subsequent time t can be expressed as:

I_t = F_t(d) I_0 (6)

where F_t(d) is a sparse resampling matrix (blur sampling matrix) realizing the image warping and resampling caused by the planar homography. Each row of F_t(d) contains weight parameters; the value at the image-plane point (u_t, v_t) can be computed by bilinear interpolation of the source point (u_0, v_0, 1)^T = H_t(d)^{-1} (u_t, v_t, 1)^T.
Thus, the spatially varying image blur model can be expressed as:

B = (1/S) ∫_0^S F_t(d) I dt + N (7)

where formula (7) integrates over time t and S represents the total exposure time.
for step 103: equations (4) and (5) are based on rotation and translation matrices to calculate the single point from t 0 Mapping relationship H to t t (d)。F t (d) Then it is a mapping of the entire image.
The method for measuring the movement track of the camera based on the inertia measuring unit comprises the following steps:
based on the above spatially varying image blur model, removing spatially varying image blur requires solving 4 unknowns, i.e., the camera rotation matrix R during the exposure time t Translation matrix T t The scene depth d and the camera internal reference matrix A, wherein A can be obtained through camera calibration.
The blurring of the video of the vehicle-mounted high-speed railway running environment is caused by the movement of the camera, so that the rotation matrix R can be calculated according to the movement track of the camera in the exposure time t And a translation matrix T t . Based on the motion parameters, the invention provides a camera motion track measuring method based on an inertia measuring unit, and a camera rotation matrix and a camera translation matrix are calculated according to the motion parameters measured by the inertia measuring unit.
The inertial measurement unit is a strapdown inertial navigation device, generally consisting of a 3-axis gyroscope and a 3-axis accelerometer, which measures the carrier's angular velocity and acceleration along 3 spatial coordinate axes, i.e., the motion parameters. In the onboard high-speed railway operating-environment video acquisition system, the camera and the inertial measurement unit are bolted to the train body and can be regarded as rigidly connected, i.e., the motion parameters measured by the inertial measurement unit simultaneously represent those of the train body and the onboard camera.
The angular velocity measured by the inertial measurement unit is the rotation rate of the camera expressed in the time-t coordinate system, and the measured acceleration is the sum, in the time-t coordinate system, of the translational acceleration, the centripetal acceleration produced by rotation, and the gravitational acceleration. Specifically, within the exposure time [0, t], the motion parameters measured by the inertial measurement unit are:

ω_t^t = ^tR_i ω_t^i (8)
a_t^t = ^tR_i (a_t^i + g + α_t^i × r + ω_t^i × (ω_t^i × r)) (9)

where ω_t^t is the measured angular velocity at time t in the time-t coordinate system (i.e., the current coordinate system); ^tR_i is the rotation matrix from the initial coordinate system (denoted i) to the time-t coordinate system; ω_t^i is the angular velocity at time t in the initial coordinate system; a_t^t is the measured acceleration in the time-t coordinate system; α_t^i is the angular acceleration at time t in the initial coordinate system; g is the camera gravitational acceleration; a_t^i is the acceleration at time t in the initial coordinate system; and r is the vector from the accelerometer to the center of rotation.
The measured angular velocities are sequentially integrated and rotated into the initial coordinate system, from which the angular position θ_t^i of the camera in the initial coordinate system at each instant can be calculated, thereby obtaining the camera rotation matrix:

θ_t^i = ^iR_{t-1} ω_{t-1}^{t-1} Δt + θ_{t-1}^i (10)
^tR_i = f(θ_t^i) (11)

where θ_t^i is the angular position at time t in the initial coordinate system; ^iR_{t-1} is the rotation matrix from the time-(t-1) coordinate system to the initial coordinate system; ω_{t-1}^{t-1} is the angular velocity at time t-1 in the time-(t-1) coordinate system; Δt is the integration time step; θ_{t-1}^i is the angular position at time t-1 in the initial coordinate system; and f(·) converts an angular position into a rotation matrix.
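The gyro-integration of Eq. (10) and the conversion f(·) of Eq. (11) can be sketched as follows. The patent does not fix the form of f(·); Rodrigues' formula is one standard choice and is assumed here, as are all the names in the snippet.

```python
import numpy as np

def rotation_from_angles(theta):
    """One possible f(.) of Eq. (11): angle vector -> rotation matrix
    via Rodrigues' formula (axis = theta / |theta|, angle = |theta|)."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    k = theta / angle
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def integrate_gyro(omega_body, dt):
    """Eq. (10): rotate each body-frame angular velocity into the initial
    frame with the previous rotation estimate, accumulate, and convert."""
    theta = np.zeros(3)            # initial angular position is 0
    R_to_initial = np.eye(3)       # initial rotation matrix is the identity
    rotations = []
    for w in omega_body:
        theta = theta + (R_to_initial @ w) * dt
        R_to_initial = rotation_from_angles(theta)
        rotations.append(R_to_initial)
    return rotations

# Constant yaw rate of 0.1 rad/s for 1 s -> a 0.1 rad rotation about z.
omega = np.tile([0.0, 0.0, 0.1], (1000, 1))
R_final = integrate_gyro(omega, dt=1e-3)[-1]
```

For a pure yaw the body and initial frames agree on the z-axis, so the integration is exact here; in general the per-step rotation of ω into the initial frame is what keeps the accumulated angles consistent.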
The acceleration in the initial coordinate system can then be calculated from the rotation matrix:

a_t^i = ^iR_t a_t^t (12)

where a_t^i is the acceleration in the initial coordinate system and a_t^t is the measured acceleration in the time-t coordinate system.
The measured accelerations are then sequentially integrated and rotated into the initial coordinate system, giving the relative velocity v_t^i and relative position p_t^i of the camera at each instant in the initial coordinate system:

v_t^i = (a_{t-1}^i - g) Δt + v_{t-1}^i (13)
p_t^i = v_{t-1}^i Δt + p_{t-1}^i (14)

where p_t^i is the relative position at time t in the initial coordinate system; a_{t-1}^i is the acceleration at time t-1 in the initial coordinate system; g is the camera gravitational acceleration; Δt is the integration time step; v_{t-1}^i is the relative velocity at time t-1 in the initial coordinate system; and p_{t-1}^i is the relative position at time t-1 in the initial coordinate system.
In the invention, because the camera's motion frequency is high, the measured acceleration can be regarded as normally distributed about the constant gravity, so the mean of the accelerations is the gravitational acceleration:

g = mean over t of a_t^i (15)

where a_t^i is the acceleration at time t in the initial coordinate system.
Thus, the position of the camera at each instant in the initial coordinate system is calculated as above, yielding the camera translation matrix T_t (formula (16)).
Note that at time t = 0 the initial angular position θ_0^i, initial velocity v_0^i, and initial position p_0^i are all set to 0, and the initial rotation matrix is the identity matrix.
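Equations (13) to (15) can be sketched together: estimate gravity as the mean acceleration, subtract it, and integrate twice. The snippet assumes (with invented names) that the accelerations have already been rotated into the initial frame per Eq. (12).

```python
import numpy as np

def integrate_accel(accel_initial_frame, dt):
    """Eqs. (13)-(15) in spirit: gravity = mean acceleration (Eq. 15),
    then velocity (Eq. 13) and relative position (Eq. 14) by Euler
    integration, with zero initial velocity and position."""
    g = accel_initial_frame.mean(axis=0)        # Eq. (15)
    v = np.zeros(3)
    p = np.zeros(3)
    positions = []
    for a in accel_initial_frame:
        v = v + (a - g) * dt                    # Eq. (13)
        p = p + v * dt                          # Eq. (14)
        positions.append(p.copy())
    return np.array(positions)

# A symmetric 10 Hz vibration superimposed on gravity: after the mean
# (gravity) is removed, the displacement stays a small oscillation.
t = np.linspace(0.0, 1.0, 1000)
acc = np.stack([np.zeros_like(t), np.zeros_like(t),
                9.81 + 0.5 * np.sin(2 * np.pi * 10 * t)], axis=1)
traj = integrate_accel(acc, dt=1e-3)
```

The gravity-as-mean assumption is exactly why the high motion frequency matters: over the averaging window the vibration must cancel, leaving only g.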
Spatially varying deconvolution algorithm:
the invention performs image deconvolution based on a Bayesian framework, and utilizes a known blurred image B, a resampling matrix F and a noise level sigma 2 The optimal restored image I' is solved. According to the bayesian criterion, the solving process can be expressed as maximization of the posterior probability distribution, which is equivalent to minimizing the sum of negative log-likelihoods:
P(I′ | B, F) = P(B | I′) P(I′) / P(B) (17)
argmax_{I′} P(I′ | B) = argmin_{I′} (L(B | I′) + L(I′)) (18)

where P(I′ | B, F) is the posterior probability of the sharp image given the blurred image and the resampling matrix; P(B | I′) is the likelihood of the blurred image given the sharp image; P(I′) is the prior probability of the sharp image; P(B) is the probability of the blurred image; P(I′ | B) is the posterior probability of the sharp image given the blurred image; L(B | I′) is the negative log-likelihood of the blurred image given the sharp image; and L(I′) is the negative log-prior of the sharp image.
According to the spatially varying image blur model, the negative log-likelihood of the blurred image given the sharp image is rewritten as:

L(B | I′) = ‖B − F(d) I′‖₂² / σ² (19)
to guarantee sparsity, L (I') needs to satisfy the superlaplace distribution, i.e., the sparse gradient penalty term:
wherein λ represents a penalty weighting factor, which penalty guarantees a complexSparsity of original image gradients;representing the gradient of the restored image I'.
Note that the scene depth d is still unknown, so when solving for the optimal restored image I′ the scene depth must be computed implicitly. Specifically, for a point (u, v) in the image plane, its potential projection end point (u′, v′) is searched for in image space, and the depth d can be calculated as:

d = u/u′ = v/v′ (21)

Finally, the objective is optimized by least squares to obtain the optimal restored image. The optimization objective function can be rewritten as:

I′ = argmin_{I′} ( ‖B − F(d) I′‖₂² / σ² + λ ‖∇I′‖^α ) (22)
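A simplified least-squares solve of an objective in the shape of Eq. (22) is sketched below. Note a deliberate substitution: the patent's penalty is hyper-Laplacian (α < 1), which is not a linear problem; here α = 2 is used instead so the objective becomes Tikhonov-regularized least squares solvable with LSQR. All names (F, D, lam) and the 1-D gradient layout are illustrative, not the patent's.

```python
import numpy as np
from scipy.sparse import identity, diags, vstack
from scipy.sparse.linalg import lsqr

def deblur_least_squares(F, blurred, lam=0.01):
    """Solve min ||F I' - B||^2 + lam ||D I'||^2 by stacking the data term
    and the (quadratic, alpha = 2) gradient penalty into one LSQR system."""
    n = F.shape[1]
    # forward-difference gradient operator D (1-D signal for simplicity)
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    A = vstack([F, np.sqrt(lam) * D])
    b = np.concatenate([blurred, np.zeros(n - 1)])
    return lsqr(A, b)[0]

# With F = identity and a tiny penalty, the restored signal matches the
# input up to negligible smoothing.
n = 32
F = identity(n, format="csr")
signal = np.sin(np.linspace(0, np.pi, n))
restored = deblur_least_squares(F, signal, lam=1e-4)
```

With a real F_t(d) from the resampling construction above, the same system inverts the warp; recovering the true α < 1 penalty would require an outer reweighting loop (e.g. IRLS) around this linear solve.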
the invention is illustrated by means of specific examples.
The onboard high-speed railway operating-environment video acquisition system records video images at a resolution of 2048 × 1536 and a frame rate of 25 FPS; a captured original video image is shown in fig. 2. Clearly, at a detected train speed of 301 km/h the image exhibits severe motion blur and requires deblurring.
Based on the video self-adaptive deblurring method for the high-speed railway running environment shown in figs. 1 and 3, restored images were obtained. With a high-speed comprehensive inspection train as the platform, tests were carried out on an actual high-speed railway line, and a statistical-characteristics deblurring method was selected for comparison with the method of the invention, to verify its effectiveness and superiority.
The comparison results are shown in fig. 4: row 1 shows video images at 50-100 km/h, row 2 at 100-200 km/h, row 3 at 200-300 km/h, and row 4 above 300 km/h; images at different speed grades exhibit different degrees of blur. To make the deblurring effect easier to observe, a partial region of each image is shown enlarged.
As the comparison shows, the method of the invention effectively removes motion blur from video images at all speed grades. Compared with the statistical-characteristics deblurring method, it removes image blur better and restores more image detail, which is more useful for the safety inspection task of the high-speed railway running environment.
Because a sharp reference image cannot be obtained, common full-reference image quality indices such as PSNR and RMSE cannot be used to evaluate the deblurring effect. Therefore, the no-reference BRISQUE index is used to evaluate the quality of the restored images and to compare with the statistical-characteristics deblurring method. Specifically, 20 original video images (5 per speed grade) were selected and deblurred with both the method of the invention and the statistical-characteristics deblurring method to obtain restored images, and the BRISQUE index of each image was then calculated. The comparison results are shown in fig. 5; a lower BRISQUE value indicates better image quality.
According to the comparison test results, the quality of the original images differs greatly across speed grades: the lower the speed grade, the lower the degree of blur and the better the image quality. At low speed the difference in deblurring effect between the method of the invention and the statistical-characteristics method is not obvious, but at high speed the method of the invention still maintains a lower BRISQUE index and recovers higher-quality images, fully demonstrating its effectiveness and superiority.
The results of the two comparison tests show that the method is more suitable for removing motion blur of the video image of the vehicle-mounted high-speed railway running environment, and the purpose of collecting high-quality running environment video data is achieved.
Compared with the existing method, the method has lower calculation complexity, can adaptively remove image blurring of different speed grades, can still keep lower BRISQUE index at high speed, recovers images with higher quality, is beneficial to improving the quality of video of the vehicle-mounted high-speed railway running environment, and provides effective data guarantee for the high-speed railway running environment safety inspection task.
The embodiment of the invention also provides a video self-adaptive deblurring device based on the high-speed railway running environment, as described in the following embodiment. Because the principle of the device for solving the problems is similar to that of the video self-adaptive deblurring method based on the high-speed railway operation environment, the implementation of the device can be referred to the implementation of the video self-adaptive deblurring method based on the high-speed railway operation environment, and the repetition is omitted.
Fig. 6 is a block diagram of a video adaptive deblurring device based on a high-speed railway running environment according to an embodiment of the present invention, and as shown in fig. 6, the device includes:
a camera motion parameter obtaining module 02 for obtaining camera motion parameters;
the rotation matrix and translation matrix determining module 04 of the camera is used for determining the rotation matrix and translation matrix of the camera based on the camera motion parameters;
a sparse resampling matrix determining module 06, configured to determine a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
a blurred image obtaining module 08 for obtaining a blurred image;
the restored image obtaining module 10 is configured to obtain a restored image with a spatially varying deconvolution algorithm based on the sparse resampling matrix and the blurred image.
In the embodiment of the invention, the camera motion parameters comprise angular velocity and acceleration;
the angular velocity is the rotation rate of the camera in the current coordinate system;
the acceleration is the sum, in the current coordinate system, of the translational acceleration, the centripetal acceleration produced by rotation, and the gravitational acceleration.
In the embodiment of the present invention, the rotation matrix and translation matrix determining module 04 of the camera is specifically configured to:
and after the angular speed is integrated and rotated to the initial coordinate system in sequence, calculating the angular position of the initial coordinate system at each moment, and determining the rotation matrix of the camera according to the angular position.
In the embodiment of the present invention, the rotation matrix and translation matrix determining module 04 of the camera is specifically configured to:
determining the rotation matrix of the camera based on the camera motion parameters according to:

θ_t^i = ^iR_{t-1} ω_{t-1}^{t-1} Δt + θ_{t-1}^i
^tR_i = f(θ_t^i)

where θ_t^i is the angular position at time t in the initial coordinate system; ^iR_{t-1} is the rotation matrix from the time-(t-1) coordinate system to the initial coordinate system; ω_{t-1}^{t-1} is the angular velocity at time t-1 in the time-(t-1) coordinate system; Δt is the integration time step; θ_{t-1}^i is the angular position at time t-1 in the initial coordinate system; ^tR_i is the rotation matrix from the initial coordinate system to the time-t coordinate system; f(·) converts an angular position into a rotation matrix; ω_t^t is the angular velocity at time t in the time-t coordinate system; and ω_t^i is the angular velocity at time t in the initial coordinate system.
In the embodiment of the present invention, the rotation matrix and translation matrix determining module 04 of the camera is specifically configured to:
determining the acceleration in the initial coordinate system from the rotation matrix of the camera and the acceleration in the camera motion parameters;
successively integrating the acceleration in the initial coordinate system to determine the relative velocity and relative position in the initial coordinate system at each moment;
and determining the translation matrix of the camera from the relative velocity and relative position in the initial coordinate system at each moment.
In the embodiment of the present invention, the rotation matrix and translation matrix determining module 04 of the camera is specifically configured to:
determining a translation matrix of the camera based on the camera motion parameters according to the following formula:
a_t^i = ^iR_t · ã_t^t − g − ω_t^i × (ω_t^i × r) − α_t^i × r
v_t^i = v_{t-1}^i + a_{t-1}^i · Δt
p_t^i = p_{t-1}^i + v_{t-1}^i · Δt

wherein p_t^i represents the relative position at time t in the initial coordinate system; ^tR_i represents the rotation matrix from the initial coordinate system to the time t coordinate system; ω_t^i represents the angular velocity at time t in the initial coordinate system; a_{t-1}^i represents the acceleration at time t-1 in the initial coordinate system; g represents the gravitational acceleration of the camera; Δt represents the integration time step; v_{t-1}^i represents the relative velocity at time t-1 in the initial coordinate system; p_{t-1}^i represents the relative position at time t-1 in the initial coordinate system; a_t^i represents the acceleration at time t in the initial coordinate system; ^iR_t represents the rotation matrix from the time t coordinate system to the initial coordinate system; ã_t^t represents the measured acceleration in the time t coordinate system; r represents the vector from the accelerometer to the center of rotation; α_t^i represents the angular acceleration at time t in the initial coordinate system.
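The double integration above (rotate each measured acceleration into the initial frame, remove gravity, then integrate twice) can be sketched as follows. This is a hedged Euler sketch, not the patented implementation: the centripetal and angular-acceleration corrections involving the accelerometer-to-rotation-center offset are assumed already compensated in the input, and all names are illustrative.

```python
import numpy as np

def integrate_translation(accel_body, rotations, g, dt):
    """Double-integrate accelerometer readings into camera positions in
    the initial coordinate frame.

    accel_body[k] : measured acceleration in the frame-k coordinate system
    rotations[k]  : rotation matrix, frame-k -> initial frame
    g             : gravity vector expressed in the initial frame
    dt            : integration time step
    """
    v = np.zeros(3)  # relative velocity in the initial frame
    p = np.zeros(3)  # relative position in the initial frame
    positions = [p.copy()]
    for a_body, R in zip(accel_body, rotations):
        a_init = R @ a_body - g   # rotate to initial frame, remove gravity
        p = p + v * dt            # position uses the previous velocity
        v = v + a_init * dt
        positions.append(p.copy())
    return positions
```

For example, with gravity already zeroed, a constant unit acceleration along x for two unit time steps moves the camera to x = 1 (position lags velocity by one step in this forward-Euler scheme).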
In the embodiment of the present invention, the restored image obtaining module 10 is specifically configured to:
based on the sparse resampling matrix and the blurred image, the restored image is obtained by using a spatially varying deconvolution algorithm according to the following formulas:

I' = argmin_{I', d} [ L(B|I') + L(I') ]
L(B|I') = ||B − F(d)·I'||² / (2σ²)
L(I') = λ · Σ_{(u,v)} |∇I'(u, v)|

wherein L(B|I') represents the negative log likelihood of the blurred image with respect to the sharp image; B represents the blurred image; I' represents the restored image; F(d) represents the sparse resampling matrix based on the scene depth d, whose entries map each coordinate point (u, v) in the blurred image to its projection end point (u', v') in image space; σ² represents the noise level; L(I') represents the negative log likelihood of the sharp image; d represents the scene depth; λ represents the penalty term weight coefficient; ∇I' represents the gradient of the restored image I'.
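The minimisation above can be sketched in linear-algebra form. The snippet below is a simplified stand-in, not the patented algorithm: it replaces the sparse-gradient prior with a quadratic (Tikhonov-style) gradient penalty so the problem reduces to one sparse least-squares solve, treats the image as a flattened vector, and assumes a precomputed sparse resampling matrix F; all names are illustrative.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def deconvolve(F, b, lam=0.1):
    """Restore a flattened sharp image i from a flattened blurred image b
    by minimising ||b - F i||^2 + lam * ||grad i||^2 in the least-squares
    sense. F is the sparse resampling matrix (each row spreads sharp
    pixels along the projected blur path)."""
    n = F.shape[1]
    # forward-difference gradient operator used as the penalty term
    D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                     shape=(n - 1, n))
    # stack the data term and the weighted smoothness term
    A = sparse.vstack([F, np.sqrt(lam) * D]).tocsr()
    rhs = np.concatenate([b, np.zeros(n - 1)])
    return lsqr(A, rhs)[0]
```

As a sanity check, when F is the identity (no blur) and the penalty weight is tiny, the restored image is essentially the input.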
The embodiment of the invention also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the video self-adaptive deblurring method based on the high-speed railway running environment when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the steps of the video self-adaptive deblurring method based on the high-speed railway running environment.
In the embodiment of the invention, the camera motion parameters are obtained; a rotation matrix and a translation matrix of the camera are determined based on the camera motion parameters; a sparse resampling matrix is determined from the rotation matrix and the translation matrix; a blurred image is obtained; and a restored image is obtained from the sparse resampling matrix and the blurred image by using a spatially varying deconvolution algorithm. Spatially varying blurred images can thus be restored, providing effective data assurance for high-speed railway operation environment safety inspection.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit the scope of the invention to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. A video self-adaptive deblurring method based on a high-speed railway operation environment is characterized by comprising the following steps:
obtaining a camera motion parameter;
determining a rotation matrix and a translation matrix of the camera based on the camera motion parameters;
determining a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
obtaining a blurred image;
obtaining a restored image based on the sparse resampling matrix and the blurred image by using a spatially varying deconvolution algorithm;
wherein, based on the sparse resampling matrix and the blurred image, the restored image is obtained by using a spatially varying deconvolution algorithm according to the following formulas:

I' = argmin_{I', d} [ L(B|I') + L(I') ]
L(B|I') = ||B − F(d)·I'||² / (2σ²)
L(I') = λ · Σ_{(u,v)} |∇I'(u, v)|

wherein L(B|I') represents the negative log likelihood of the blurred image with respect to the sharp image; B represents the blurred image; I' represents the restored image; F(d) represents the sparse resampling matrix based on the scene depth d, whose entries map each coordinate point (u, v) in the blurred image to its projection end point (u', v') in image space; σ² represents the noise level; L(I') represents the negative log likelihood of the sharp image; d represents the scene depth; λ represents the penalty term weight coefficient; ∇I' represents the gradient of the restored image I'.
2. The high-speed railway running environment-based video adaptive deblurring method according to claim 1, wherein the camera motion parameters include angular velocity and acceleration;
the angular velocity is the rotational angular velocity of the camera in the current coordinate system;
the acceleration is the sum, in the current coordinate system, of the translational acceleration, the centripetal acceleration generated by rotation, and the gravitational acceleration.
3. The high-speed railway operation environment-based video adaptive deblurring method according to claim 2, wherein determining a rotation matrix of the camera based on the camera motion parameters comprises:
the angular velocity is successively integrated and rotated into the initial coordinate system, the angular position in the initial coordinate system at each moment is calculated, and the rotation matrix of the camera is determined from the angular position.
4. The high-speed railway operation environment-based video adaptive deblurring method according to claim 3, wherein the rotation matrix of the camera is determined based on the camera motion parameters according to the following formulas:

ω_t^i = ^iR_t · ω_t^t
θ_t^i = θ_{t-1}^i + ^iR_{t-1} · ω_{t-1}^{t-1} · Δt
^tR_i = f(θ_t^i)

wherein θ_t^i represents the angular position at time t in the initial coordinate system; ^iR_{t-1} represents the rotation matrix from the time t-1 coordinate system to the initial coordinate system; ω_{t-1}^{t-1} represents the angular velocity at time t-1 in the time t-1 coordinate system; Δt represents the integration time step; θ_{t-1}^i represents the angular position at time t-1 in the initial coordinate system; ^tR_i represents the rotation matrix from the initial coordinate system to the time t coordinate system; f(·) represents the conversion of an angular position into a rotation matrix; ω_t^t represents the angular velocity at time t in the time t coordinate system; ω_t^i represents the angular velocity at time t in the initial coordinate system.
5. The high-speed railway operation environment-based video adaptive deblurring method according to claim 2, wherein determining a translation matrix of the camera based on the camera motion parameters comprises:
determining the acceleration in the initial coordinate system from the rotation matrix of the camera and the acceleration in the camera motion parameters;
successively integrating the acceleration in the initial coordinate system to determine the relative velocity and relative position in the initial coordinate system at each moment;
and determining the translation matrix of the camera from the relative velocity and relative position in the initial coordinate system at each moment.
6. The high-speed railway operation environment-based video adaptive deblurring method according to claim 5, wherein the translation matrix of the camera is determined based on the camera motion parameters according to the following formula:
a_t^i = ^iR_t · ã_t^t − g − ω_t^i × (ω_t^i × r) − α_t^i × r
v_t^i = v_{t-1}^i + a_{t-1}^i · Δt
p_t^i = p_{t-1}^i + v_{t-1}^i · Δt

wherein p_t^i represents the relative position at time t in the initial coordinate system; ^tR_i represents the rotation matrix from the initial coordinate system to the time t coordinate system; ω_t^i represents the angular velocity at time t in the initial coordinate system; a_{t-1}^i represents the acceleration at time t-1 in the initial coordinate system; g represents the gravitational acceleration of the camera; Δt represents the integration time step; v_{t-1}^i represents the relative velocity at time t-1 in the initial coordinate system; p_{t-1}^i represents the relative position at time t-1 in the initial coordinate system; a_t^i represents the acceleration at time t in the initial coordinate system; ^iR_t represents the rotation matrix from the time t coordinate system to the initial coordinate system; ã_t^t represents the measured acceleration in the time t coordinate system; r represents the vector from the accelerometer to the center of rotation; α_t^i represents the angular acceleration at time t in the initial coordinate system.
7. A video adaptive deblurring device based on a high-speed railway operation environment, characterized by comprising:
the camera motion parameter obtaining module is used for obtaining camera motion parameters;
the camera rotation matrix and translation matrix determining module is used for determining a camera rotation matrix and a translation matrix based on camera motion parameters;
the sparse resampling matrix determining module is used for determining a sparse resampling matrix according to the rotation matrix and the translation matrix of the camera;
the blurred image obtaining module is used for obtaining a blurred image;
the restored image obtaining module is used for obtaining a restored image based on the sparse resampling matrix and the blurred image by using a spatially varying deconvolution algorithm;
the restored image obtaining module is specifically configured to obtain the restored image based on the sparse resampling matrix and the blurred image by using a spatially varying deconvolution algorithm according to the following formulas:

I' = argmin_{I', d} [ L(B|I') + L(I') ]
L(B|I') = ||B − F(d)·I'||² / (2σ²)
L(I') = λ · Σ_{(u,v)} |∇I'(u, v)|

wherein L(B|I') represents the negative log likelihood of the blurred image with respect to the sharp image; B represents the blurred image; I' represents the restored image; F(d) represents the sparse resampling matrix based on the scene depth d, whose entries map each coordinate point (u, v) in the blurred image to its projection end point (u', v') in image space; σ² represents the noise level; L(I') represents the negative log likelihood of the sharp image; d represents the scene depth; λ represents the penalty term weight coefficient; ∇I' represents the gradient of the restored image I'.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the high-speed railway operating environment-based video adaptive deblurring method of any one of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the steps of the high speed railway operating environment based video adaptive deblurring method of any of claims 1 to 6.
CN202110625709.0A 2021-06-04 2021-06-04 Video self-adaptive deblurring method and device based on high-speed railway operation environment Active CN113222863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110625709.0A CN113222863B (en) 2021-06-04 2021-06-04 Video self-adaptive deblurring method and device based on high-speed railway operation environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110625709.0A CN113222863B (en) 2021-06-04 2021-06-04 Video self-adaptive deblurring method and device based on high-speed railway operation environment

Publications (2)

Publication Number Publication Date
CN113222863A CN113222863A (en) 2021-08-06
CN113222863B true CN113222863B (en) 2024-04-16

Family

ID=77082955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110625709.0A Active CN113222863B (en) 2021-06-04 2021-06-04 Video self-adaptive deblurring method and device based on high-speed railway operation environment

Country Status (1)

Country Link
CN (1) CN113222863B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073993A (en) * 2010-12-29 2011-05-25 清华大学 Camera self-calibration-based jittering video deblurring method and device
KR101181161B1 (en) * 2011-05-19 2012-09-17 한국과학기술원 An apparatus and a method for deblurring image blur caused by camera ego motion
US9007490B1 (en) * 2013-03-14 2015-04-14 Amazon Technologies, Inc. Approaches for creating high quality images
CN109272456A (en) * 2018-07-25 2019-01-25 大连理工大学 The blurred picture high-precision restoring method of view-based access control model prior information
CN110677556A (en) * 2019-08-02 2020-01-10 杭州电子科技大学 Image deblurring method based on camera positioning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8040382B2 (en) * 2008-01-07 2011-10-18 Dp Technologies, Inc. Method and apparatus for improving photo image quality
US8264553B2 (en) * 2009-11-12 2012-09-11 Microsoft Corporation Hardware assisted image deblurring

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073993A (en) * 2010-12-29 2011-05-25 清华大学 Camera self-calibration-based jittering video deblurring method and device
KR101181161B1 (en) * 2011-05-19 2012-09-17 한국과학기술원 An apparatus and a method for deblurring image blur caused by camera ego motion
US9007490B1 (en) * 2013-03-14 2015-04-14 Amazon Technologies, Inc. Approaches for creating high quality images
CN109272456A (en) * 2018-07-25 2019-01-25 大连理工大学 The blurred picture high-precision restoring method of view-based access control model prior information
CN110677556A (en) * 2019-08-02 2020-01-10 杭州电子科技大学 Image deblurring method based on camera positioning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Robust Stereo Visual-Inertial SLAM Based on Nonlinear Optimization; Lin Huican; Lv Qiang; Wang Guosheng; Wei Heng; Liang Bing; Robot (06); full text *

Also Published As

Publication number Publication date
CN113222863A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
Li et al. Aod-net: All-in-one dehazing network
CN110097509B (en) Restoration method of local motion blurred image
JP5865552B2 (en) Video processing apparatus and method for removing haze contained in moving picture
KR101633377B1 (en) Method and Apparatus for Processing Frames Obtained by Multi-Exposure
US9589328B2 (en) Globally dominant point spread function estimation
Nair et al. At-ddpm: Restoring faces degraded by atmospheric turbulence using denoising diffusion probabilistic models
CN112597864B (en) Monitoring video anomaly detection method and device
CN113034634B (en) Adaptive imaging method, system and computer medium based on pulse signal
CN104282003B (en) Digital blurred image blind restoration method based on gradient screening
CN111861894A (en) Image motion blur removing method based on generating type countermeasure network
US9008453B2 (en) Blur-kernel estimation from spectral irregularities
CN109743495A (en) Video image electronic stability augmentation method and device
CN107220945B (en) Restoration method of multiple degraded extremely blurred image
CN108876807B (en) Real-time satellite-borne satellite image moving object detection tracking method
CN111993429B (en) Improved Gaussian resampling particle filter target tracking method based on affine group
CN113222863B (en) Video self-adaptive deblurring method and device based on high-speed railway operation environment
CN111031258B (en) Lunar vehicle navigation camera exposure parameter determination method and device
CN114972081A (en) Blind restoration-based image restoration method under complex optical imaging condition
CN114399532A (en) Camera position and posture determining method and device
Carbajal et al. Single image non-uniform blur kernel estimation via adaptive basis decomposition
CN113379821B (en) Stable monocular video depth estimation method based on deep learning
CN111369592A (en) Rapid global motion estimation method based on Newton interpolation
Gao et al. Aero-optical image and video restoration based on mean filter and adversarial network
Zheng A Survey on Single Image Deblurring
CN114723611B (en) Image reconstruction model training method, reconstruction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant