CN117714870A - Video stability enhancement method, device and storage medium - Google Patents

Publication number
CN117714870A
Authority
CN
China
Prior art keywords
video
offset
directions
camera
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311562581.3A
Other languages
Chinese (zh)
Inventor
田原
李成城
贾运红
马立森
李小燕
贾曲
陈宁
索艳春
张婷
郭皇煌
董孟阳
李涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Coal Research Institute CCRI
Taiyuan Institute of China Coal Technology and Engineering Group
Shanxi Tiandi Coal Mining Machinery Co Ltd
Original Assignee
China Coal Research Institute CCRI
Taiyuan Institute of China Coal Technology and Engineering Group
Shanxi Tiandi Coal Mining Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Coal Research Institute CCRI, Taiyuan Institute of China Coal Technology and Engineering Group, Shanxi Tiandi Coal Mining Machinery Co Ltd filed Critical China Coal Research Institute CCRI
Priority to CN202311562581.3A
Publication of CN117714870A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The present invention relates to the field of video image processing technologies, and in particular, to a video stabilization method, apparatus, device, and computer storage medium. In the video stabilization method, a camera captures a shaky video image under a vibration working condition while the camera's IMU sensor records its triaxial angular velocity and triaxial acceleration; translation estimation and rotation estimation are then performed, and finally motion compensation yields a stable video image. Because the IMU sensor provides the device with real-time attitude-change data independent of image content, the method adapts well to the image stabilization requirements of working conditions with poor image quality. Because double integration of the IMU's linear acceleration data easily accumulates a large error, the IMU translation estimate is corrected by a vision-feature-based translation method, further improving the image stabilization effect.

Description

Video stability enhancement method, device and storage medium
Technical Field
The present invention relates to the field of video image processing technologies, and in particular, to a video stabilization method, apparatus, device, and computer storage medium.
Background
Vision-based image perception is an important way for unmanned vehicles to perceive changes in the surrounding environment. Severe underground coal-mine road conditions and the vehicle's own vibration cause the on-board camera to capture shaky images, which degrade the accuracy of subsequent image recognition, so the shake must be removed by video stabilization technology. Motion estimation, motion smoothing and motion compensation are the key stages of electronic image stabilization; because image-processing-based stabilization depends heavily on image quality, the stabilization effect of video equipment is poor when the image quality is poor.
Disclosure of Invention
Therefore, the invention aims to solve the technical problem in the prior art that the image stabilization effect is poor when the image quality is poor.
To solve this technical problem, the invention provides a video stability augmentation method comprising the following steps:
acquiring linear acceleration data and angular velocity data of the camera along three directions of an X axis, a Y axis and a Z axis under the vibration working condition by using an IMU sensor built in the camera;
calculating offset in three directions according to the linear acceleration data, and correcting the offset in the three directions by adopting a visual characteristic estimation method;
calculating rotation angles of three directions according to the angular speed data;
and performing motion compensation on the video according to the offset and the rotation angle.
Preferably, before acquiring the linear acceleration data and the angular velocity data of the camera along the three directions of the X axis, the Y axis and the Z axis under the vibration working condition by using the IMU sensor built in the camera, the method includes:
and acquiring a video sequence, judging whether the video is jittered according to the video sequence, and performing subsequent image stabilizing processing when the judgment result is that the video is jittered, otherwise, not performing processing.
Preferably, the determining whether the video is jittered according to the video sequence includes:
respectively calculating gray scale projections of adjacent video frames in the horizontal direction;
calculating the cross correlation between adjacent video frames according to the gray level projection;
calculating offset between adjacent video frames according to the cross-correlation;
when the offset between adjacent video frames is larger than a preset threshold, the video is judged to be jittered.
Preferably, the calculating the offset in three directions according to the linear acceleration data includes:
calculating the time difference between the camera and the IMU sensor by utilizing a space synchronization principle, and carrying out synchronization processing on the camera and the IMU sensor according to the time difference;
and carrying out secondary integration on the linear acceleration data to obtain the offset in the three directions.
Preferably, the correcting the offset in the three directions by using the visual feature estimation method includes:
extracting feature points in the video frames by using a feature point extraction algorithm, performing feature matching on the feature points in adjacent video frames, and constructing a perspective transformation model to describe the motion change relation between the video frames;
calculating a correction offset between video frames according to the perspective transformation model;
and fusing the correction offset, and correcting the offset in the three directions.
Preferably, the calculating rotation angles of three directions according to the angular velocity data includes:
and carrying out integral operation on the angular velocity data to obtain rotation angles of the three directions, and calculating a rotation matrix representing the change of the rotation posture of the camera according to the rotation angles.
Preferably, after performing motion compensation on the video according to the offset and the rotation angle, the method further comprises:
binarizing the video frame after motion compensation, determining a black edge area, and cutting;
the pixels of the clipping region are restored by an interpolation algorithm.
The invention also provides a video stability augmentation device, which comprises:
the data acquisition module is used for acquiring linear acceleration data and angular velocity data of the camera along the X axis, the Y axis and the Z axis under the vibration working condition by utilizing an IMU sensor built in the camera;
the translation estimation module is used for calculating offset in three directions according to the linear acceleration data and correcting the offset in the three directions by adopting a visual characteristic estimation method;
the rotation estimation module is used for calculating rotation angles of three directions according to the angular speed data;
and the motion compensation module is used for performing motion compensation on the video according to the offset and the rotation angle.
The invention also provides a video stability augmentation device, comprising:
a memory for storing a computer program;
and the processor is used for realizing the video stabilization method steps when executing the computer program.
The invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the video stabilization method when being executed by a processor.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the video stability augmentation method, a vibration video image under a vibration working condition is obtained through a camera, triaxial angular speed and triaxial acceleration data of the camera are recorded by using an IMU sensor of the camera, then translation estimation and rotation estimation are carried out, and finally motion compensation is carried out, so that a stable video image is obtained; because the IMU sensor can provide real-time attitude change data for the equipment independent of image content, the method has better adaptability to the image stabilization requirement of the working condition with poor image quality; because linear acceleration data in the IMU is subjected to secondary integration to easily generate the problem of larger accumulated error, the translational motion estimation data of the IMU is corrected by a translational motion method based on visual characteristics, and the image stabilizing effect is further improved; after motion compensation, a large number of undefined pixel points are generated at the edge of an image due to the fact that the compensated image deviates from the original imaging plane, and then black edges are generated.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings, in which:
FIG. 1 is a flow chart of an implementation of a video stabilization method provided by the invention;
fig. 2 is a schematic flow chart of a video stabilization method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the image after binarization.
Detailed Description
The core of the invention is to provide a video stabilization method, a device, equipment and a computer storage medium, which effectively improve the video stabilization effect.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and fig. 2, fig. 1 is a flowchart of an implementation of a video stabilization method provided by the present invention, and fig. 2 is a flowchart of a video stabilization method provided by an embodiment of the present invention; the specific operation steps are as follows:
s101: acquiring linear acceleration data and angular velocity data of the camera along three directions of an X axis, a Y axis and a Z axis under the vibration working condition by using an IMU sensor built in the camera;
in one embodiment, an industrial USB camera with built-in IMU sensor is manufactured with model number Intel RealSense D435i, camera frame rate 30fps/s and resolution 1920×1080; the built-in IMU model in the camera is Boshi BMI055, and can output triaxial angular velocity and triaxial linear acceleration information.
S102: calculating offset in three directions according to the linear acceleration data, and correcting the offset in the three directions by adopting a visual characteristic estimation method;
s103: calculating rotation angles of three directions according to the angular speed data;
s104: and performing motion compensation on the video according to the offset and the rotation angle.
Based on the above embodiment, before acquiring the linear acceleration data and the angular velocity data of the camera along the three directions of the X axis, the Y axis and the Z axis under the vibration working condition by using the IMU sensor built in the camera includes:
acquiring a video sequence, judging whether the video is jittered according to the video sequence, and performing subsequent image stabilizing processing when the judgment result is that the video is jittered, otherwise, not performing processing:
in general, if a stable video sequence is input, gray projection curves between two adjacent frames of images are coincident with each other; if a jittered video is input, the gray projection curves of the jittered video will not overlap, and the difference between the gray projection curves corresponds to the offset between jittered video frames, so that whether the input video sequence generates jittering can be judged based on the difference, which comprises the following specific steps:
A. Determine adjacent jittered video frames in the input video sequence and compute their gray projections in the horizontal direction:
C_0(x) = Σ_{y=1}^{N} I_0(x, y),  x = 1, 2, …, M−1
C_1(x) = Σ_{y=1}^{N} I_1(x, y),  x = 1, 2, …, M−1
wherein I_0(x, y) and I_1(x, y) are two adjacent jittered video frames of size M×N, and C_0(x) and C_1(x) are the gray projection values of the adjacent jittered frames in row x;
B. Compute the cross-correlation between the jittered video frames:
D_C(i, j) = C_1(j + i) − C_0(j)
Corr_C(i) = Σ_j [D_C(i, j)]²
wherein D_C(i, j) is the difference in gray projection between adjacent jittered video frames and Corr_C(i) is the cross-correlation between adjacent jittered frames at offset i;
C. Compute the offset between adjacent jittered video frames: when Corr_C(i) is minimal, the offset between the jittered frames is Δx = argmin_i [Corr_C(i)]
D. Setting an offset threshold value a, and when deltax is larger than a, indicating that a jittered video exists in the input video sequence, starting an image stabilizing algorithm module to perform image stabilizing processing on the jittered video, otherwise, performing no processing.
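The shake-detection steps A–D can be sketched as follows in a minimal NumPy illustration; the search range `max_shift` and the threshold (the patent's unspecified value a) are arbitrary choices here:

```python
import numpy as np

def detect_shake(frame0, frame1, max_shift=10, threshold=2):
    """Gray-projection shake check between two grayscale frames (M x N arrays).
    Returns (is_shaky, row_offset)."""
    # Row-wise gray projections C0(x), C1(x): sum each row over its columns.
    c0 = frame0.sum(axis=1).astype(np.float64)
    c1 = frame1.sum(axis=1).astype(np.float64)

    best_offset, best_corr = 0, np.inf
    for i in range(-max_shift, max_shift + 1):
        # Difference of the projection curves over their overlap at offset i.
        if i >= 0:
            d = c1[i:] - c0[:len(c0) - i]
        else:
            d = c1[:i] - c0[-i:]
        corr = np.mean(d ** 2)  # Corr_C(i): mean squared projection difference
        if corr < best_corr:
            best_corr, best_offset = corr, i

    # Dx = argmin Corr_C(i); shake is declared when |Dx| exceeds the threshold.
    return abs(best_offset) > threshold, best_offset
```

For a frame shifted vertically by a few rows the minimum of the correlation curve lands at that shift, while identical frames yield offset 0 and no shake flag.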
Based on the above embodiments, the present embodiment describes step S102 in detail:
the IMU sensor comprises a triaxial accelerometer, and linear acceleration data of the camera along the X axis, the Y axis and the Z axis can be obtained by obtaining the information of the IMU sensor in the camera under the vibration working condition. The corresponding offset is obtained by twice integrating the linear accelerometer data. However, the accumulated error of the acceleration secondary integration is larger, so that the translational estimation error is larger, and the image stabilizing effect is further poor. Therefore, the translational motion estimation based on the IMU is corrected by adopting a visual characteristic estimation method, and the translational motion estimation with higher precision is obtained. The specific steps of the method can be as follows:
A. Because of trigger and transmission delays, the sampling times of the camera and the IMU do not match their timestamps. The time difference between the camera and the IMU is computed using the space synchronization principle, and the IMU sensor and the camera are synchronized accordingly:
t_IMU = t_cam + t_d
wherein t_cam and t_IMU are the timestamps of the camera and the IMU sensor, respectively; shifting the camera timestamp by t_d synchronizes the camera with the IMU.
B. Twice-integrate the acquired IMU acceleration data to obtain the offsets along the three coordinate axes:
[t_x  t_y  t_z] = ∬ [a_x  a_y  a_z] dt dt
wherein [a_x  a_y  a_z] is the acceleration of the camera along the X, Y and Z axes, and [t_x  t_y  t_z] is the offset obtained by twice integrating the acceleration data;
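The double integration of step B can be sketched as follows; the rectangular integration rule, zero initial velocity, and a fixed sample interval `dt` are simplifying assumptions (the patent does not fix the numerical scheme):

```python
import numpy as np

def double_integrate(accel, dt):
    """Twice-integrate sampled linear acceleration (N x 3 array, one row per
    IMU sample) into displacement [t_x, t_y, t_z] per sample, assuming zero
    initial velocity and a simple rectangular rule."""
    velocity = np.cumsum(np.asarray(accel, dtype=float) * dt, axis=0)  # a -> v
    offset = np.cumsum(velocity * dt, axis=0)                          # v -> s
    return offset
```

For constant acceleration a over N samples this reproduces the discrete analogue of s = ½at², which is also where the patent's noted accumulation of error comes from: any bias in a grows quadratically in the offset.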
C. Estimate the offset of the jittered video frames by the visual method: first extract feature points in the jittered frames with a feature point extraction algorithm, then match the feature points of adjacent frames, and construct a perspective transformation model to describe the motion relation between jittered frames:
Frame_{i+1} = H × Frame_i
wherein Frame_{i+1} and Frame_i are the target frame and the reference frame of the jittered video, respectively. The offset between jittered frames is obtained from the perspective transformation matrix
H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33]
wherein h_13 and h_23 are the offsets of adjacent frame images in the horizontal and vertical directions, respectively. The camera's horizontal and vertical offset can be expressed as T_1 = [h_13  h_23];
D. In order to effectively reduce the influence of noise on translational motion estimation, the translational motion estimation based on a visual method and an IMU method is subjected to fusion correction, and the specific formula is as follows:
T = kT_1 + (1 − k)T_2
wherein T is the fused offset and k ∈ (0, 1) is a scale factor; T_2 is the offset estimated from the IMU acceleration data. Because video stabilization is mainly concerned with the camera offset in the X and Y directions, T_2 can be expressed as
T_2 = [t_x  t_y]
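The fusion correction of step D can be illustrated as follows; `k = 0.5` is an arbitrary choice, since the patent only requires k ∈ (0, 1):

```python
import numpy as np

def fuse_translation(H, t_imu, k=0.5):
    """Fuse the vision-based offset T1 = [h13, h23], read from the 3x3
    perspective matrix H, with the IMU-based offset T2 = [t_x, t_y]:
    T = k*T1 + (1 - k)*T2."""
    t_vision = np.array([H[0, 2], H[1, 2]])   # h13, h23 from the last column
    return k * t_vision + (1.0 - k) * np.asarray(t_imu, dtype=float)
```

A weighted average of the two estimates suppresses the quadratically growing IMU integration drift while the IMU term steadies the vision estimate when image quality is poor.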
Based on the above embodiments, the present embodiment describes in detail step S103:
the IMU sensor comprises three-axis angular velocity data, and the angular velocity data of the IMU acquisition device in three coordinate axis directions is assumed to be [ gyro ] x ,gyro y ,gyro z ]The rotation angles [ theta ] of the equipment in three directions can be obtained by carrying out integral operation on the rotation angles x ,θ y ,θ z ]. To efficiently represent the rotation state of the camera, the change of the rotation posture of the camera is represented by a rotation matrix R, and the specific formula is as follows r=i+sinθ· [ k ]] x +(1-cosθ)[k] 2 x
Wherein I is an identity matrix, and the overall rotation angle of the camera can be expressed as[k] x Representing a cross product matrix of k.
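The rotation estimate can be sketched with Rodrigues' formula as above; treating the whole interval as one integration step (angle = rate × dt) is a simplification of a per-sample integration:

```python
import numpy as np

def rotation_from_gyro(gyro, dt):
    """Integrate gyro rates [gyro_x, gyro_y, gyro_z] over dt into angles and
    build R = I + sin(theta)[k]_x + (1 - cos(theta))[k]_x^2 (Rodrigues)."""
    angles = np.asarray(gyro, dtype=float) * dt      # [theta_x, theta_y, theta_z]
    theta = np.linalg.norm(angles)                   # overall rotation angle
    if theta < 1e-12:
        return np.eye(3)                             # no measurable rotation
    k = angles / theta                               # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])               # cross-product matrix [k]_x
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```

A 90° rotation about the Z axis, for example, maps the X unit vector onto the Y unit vector.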
Based on the above embodiments, the present embodiment describes in detail step S104:
and performing motion compensation on the jittered video frames according to the rotation matrix and the translation matrix estimated by the IMU and the visual method to generate a stable video sequence, wherein the specific compensation method comprises the following steps of:
wherein K is an internal reference matrix of the camera, the camera can be calibrated to obtain,for dithering the motion transformation matrix between video frames, including the rotation and translation amounts of the camera pose, P k P for jittered video frames k_new Is a stable video frame after reverse compensation.
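A rotation-only sketch of the back-compensation P_new = K·T⁻¹·K⁻¹·P applied to pixel coordinates; the translation component of T is omitted here for brevity, and the intrinsic matrix below is a hypothetical example:

```python
import numpy as np

def stabilize_points(points, K, R):
    """Back-compensate pixel coordinates of a shaky frame with the estimated
    rotation R via the homography K @ R^-1 @ K^-1 (rotation-only version of
    P_new = K T^-1 K^-1 P)."""
    H = K @ np.linalg.inv(R) @ np.linalg.inv(K)      # compensating homography
    pts = np.asarray(points, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) # to homogeneous [u, v, 1]
    out = pts_h @ H.T                                # apply H to each point
    return out[:, :2] / out[:, 2:3]                  # dehomogenize to pixels
```

Shaking the points with the forward homography K·R·K⁻¹ and then compensating them round-trips back to the original coordinates.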
Based on the above embodiment, the motion compensation of the video according to the offset and the rotation angle further includes:
binarization processing is carried out on the video frame after the motion compensation, a black edge area is determined, clipping is carried out, and pixels of the clipping area are restored through an interpolation algorithm:
after motion compensation, a large number of undefined pixels are generated at the edges of the image due to the deviation of the compensated image from the original imaging plane, thereby resulting in the generation of black edges. In order to improve the image stabilizing effect, the self-adaptive image repairing method is provided for processing the black edge, accurately positioning the black edge, accurately cutting, repairing the cutting area, and reserving more image information as much as possible.
A. In the compensated image, the gray value of the black-border pixels is 0, while the values of surrounding pixels are generally greater than 0. Exploiting this distribution of pixel values, the image is binarized to separate the black border. The binarization rule is: dst(x, y) = maxVal if src(x, y) > thresh, and dst(x, y) = 0 otherwise, wherein thresh is the set image threshold and maxVal is the maximum gray value 255.
After binarization, the image contains only black and white pixels, as shown in FIG. 3. The horizontal coordinates are determined by traversing the image horizontally for the maximum and minimum positions with pixel value 255, and the vertical coordinates by traversing vertically for the maximum and minimum positions with pixel value 255. The coordinates of the four points a, b, c and d determine the position of the white region, which is then cropped out directionally.
B. Repair of cropped regions
And cutting the determined black edge position, and restoring pixels at the black edge position by an interpolation method to finally obtain a complete image.
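The black-border localization and cropping can be sketched as follows; the interpolation-based repair of the removed border is left out, and the threshold is a hypothetical choice:

```python
import numpy as np

def crop_black_borders(img, thresh=0):
    """Binarize a compensated frame against `thresh` and scan for the extreme
    rows/columns that still contain foreground (value > thresh), i.e. the
    points a, b, c, d bounding the white region; return the crop and its box."""
    mask = img > thresh                        # binarization: border -> False
    rows = np.flatnonzero(mask.any(axis=1))    # rows with any white pixel
    cols = np.flatnonzero(mask.any(axis=0))    # columns with any white pixel
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    return img[top:bottom + 1, left:right + 1], (top, bottom, left, right)
```

Scanning the binary mask instead of the raw frame keeps the bound tight even when the valid region's content is dark but nonzero.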
According to the video stability augmentation method, a camera captures a shaky video image under a vibration working condition while the camera's IMU sensor records its triaxial angular velocity and triaxial acceleration; translation estimation and rotation estimation are then performed, and finally motion compensation yields a stable video image. Because the IMU sensor provides the device with real-time attitude-change data independent of image content, the method adapts well to the image stabilization requirements of working conditions with poor image quality. Because double integration of the IMU's linear acceleration data easily accumulates a large error, the IMU translation estimate is corrected by a vision-feature-based translation method, further improving the image stabilization effect. After motion compensation, the compensated image deviates from the original imaging plane, leaving a large number of undefined pixels at the image edges and hence black borders; these are located by binarization, cropped, and repaired by interpolation, so that a complete stabilized image is retained.
The embodiment of the invention also provides a video stabilization device; the specific apparatus may include:
the data acquisition module is used for acquiring linear acceleration data and angular velocity data of the camera along the X axis, the Y axis and the Z axis under the vibration working condition by utilizing an IMU sensor built in the camera;
the translation estimation module is used for calculating offset in three directions according to the linear acceleration data and correcting the offset in the three directions by adopting a visual characteristic estimation method;
the rotation estimation module is used for calculating rotation angles of three directions according to the angular speed data;
and the motion compensation module is used for performing motion compensation on the video according to the offset and the rotation angle.
The video stabilization device further comprises:
and the jitter analysis module is used for acquiring a video sequence, judging whether the video is jittered according to the video sequence, and carrying out subsequent image stabilizing processing when the judgment result is that the video is jittered, or else, not carrying out processing.
And the image patching module is used for carrying out binarization processing on the video frame after the motion compensation, determining a black edge area, clipping, and recovering pixels of the clipping area through an interpolation algorithm.
The video stability augmentation device of this embodiment is configured to implement the foregoing video stability augmentation method, so the specific implementations of its modules follow the embodiments of that method: the data acquisition module, the translation estimation module, the rotation estimation module, and the motion compensation module implement steps S101, S102, S103 and S104, respectively; their specific implementations are described in the corresponding embodiments above and are not repeated here.
The specific embodiment of the invention also provides video stability augmentation equipment, which comprises the following steps: a memory for storing a computer program; and the processor is used for realizing the steps of the video stabilization method when executing the computer program.
The specific embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with a computer program, and the computer program realizes the steps of the video stabilization method when being executed by a processor.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above embodiments are given by way of illustration only and are not limiting. Other variations and modifications will be apparent to those of ordinary skill in the art in light of the foregoing description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A method for video stabilization, comprising:
acquiring linear acceleration data and angular velocity data of the camera along three directions of an X axis, a Y axis and a Z axis under the vibration working condition by using an IMU sensor built in the camera;
calculating offset in three directions according to the linear acceleration data, and correcting the offset in the three directions by adopting a visual characteristic estimation method;
calculating rotation angles of three directions according to the angular speed data;
and performing motion compensation on the video according to the offset and the rotation angle.
2. The method for video stabilization according to claim 1, wherein the step of obtaining the linear acceleration data and the angular velocity data of the camera along the three directions of the X axis, the Y axis and the Z axis under the vibration condition by using the IMU sensor built in the camera comprises:
and acquiring a video sequence, judging whether the video is jittered according to the video sequence, and performing subsequent image stabilizing processing when the judgment result is that the video is jittered, otherwise, not performing processing.
3. The method of claim 2, wherein determining whether the video is jittered based on the video sequence comprises:
respectively calculating gray scale projections of adjacent video frames in the horizontal direction;
calculating the cross correlation between adjacent video frames according to the gray level projection;
calculating offset between adjacent video frames according to the cross-correlation;
when the offset between adjacent video frames is larger than a preset threshold, the video is judged to be jittered.
4. The video stabilization method according to claim 1, wherein the calculating the offset amounts in three directions from the linear acceleration data includes:
calculating the time difference between the camera and the IMU sensor by utilizing a space synchronization principle, and carrying out synchronization processing on the camera and the IMU sensor according to the time difference;
and carrying out secondary integration on the linear acceleration data to obtain the offset in the three directions.
5. The video stabilization method according to claim 1, wherein correcting the offsets in the three directions by using the visual feature estimation method comprises:
extracting feature points in the video frames by using a feature point extraction algorithm, performing feature matching on the feature points in adjacent video frames, and constructing a perspective transformation model to describe the motion change relationship between the video frames;
calculating a correction offset between the video frames according to the perspective transformation model; and
fusing the correction offset to correct the offsets in the three directions.
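The correction-and-fusion idea can be sketched as follows. This sketch deliberately simplifies the claim: it assumes feature matching has already produced corresponding point pairs, fits only a translation rather than the full perspective (homography) model, and uses an assumed fixed weighting `alpha` for the fusion; a real implementation would estimate a homography robustly (e.g. with RANSAC over the matches).

```python
import numpy as np

def vision_offset(pts_prev, pts_curr):
    """Least-squares translation between matched feature points of two frames
    (a translation-only stand-in for the perspective transformation model).
    pts_prev, pts_curr: (N, 2) arrays of corresponding (x, y) positions."""
    return np.mean(pts_curr - pts_prev, axis=0)

def fuse_offsets(imu_offset, vis_offset, alpha=0.5):
    """Fuse the drift-prone IMU offset with the drift-free visual correction
    offset by weighted averaging; alpha is an assumed tuning weight."""
    return alpha * vis_offset + (1.0 - alpha) * imu_offset
```

The fusion weight trades off IMU responsiveness against the visual estimate's immunity to double-integration drift, which is the motivation given in the abstract for the correction step.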
6. The video stabilization method according to claim 1, wherein calculating the rotation angles in the three directions according to the angular velocity data comprises:
performing an integral operation on the angular velocity data to obtain the rotation angles in the three directions, and calculating, according to the rotation angles, a rotation matrix representing the change of the rotation attitude of the camera.
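A minimal sketch of this step is shown below. It assumes uniformly sampled gyroscope readings, a rectangular-rule integration, and one common Euler-angle convention (R = Rz·Ry·Rx); the patent does not specify the convention, so this ordering is an assumption.

```python
import numpy as np

def integrate_gyro(gyro, dt):
    """Integrate angular velocity (rad/s, shape (N, 3): roll, pitch, yaw rates
    sampled every dt seconds) to rotation angles about the three axes."""
    return np.sum(gyro, axis=0) * dt

def rotation_matrix(roll, pitch, yaw):
    """Rotation matrix for the camera attitude change, R = Rz(yaw) @ Ry(pitch)
    @ Rx(roll) (assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```

For large or fast rotations a quaternion or incremental-rotation update would accumulate less error than summing Euler rates, but the simple form mirrors the claim's "integral operation".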
7. The video stabilization method according to claim 1, wherein after the motion compensation of the video according to the offsets and the rotation angles, the method further comprises:
binarizing the motion-compensated video frame, determining a black-border region, and cropping the black-border region; and
restoring the pixels of the cropped region by using an interpolation algorithm.
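The border clean-up can be sketched as below. This is an illustrative NumPy sketch, not the patented implementation: it assumes a single-channel frame, a fixed binarization threshold, an axis-aligned black border, and nearest-neighbour interpolation where a production pipeline would likely use bilinear or bicubic resampling.

```python
import numpy as np

def crop_black_borders(frame, thresh=10):
    """Binarize the motion-compensated frame and crop to the bounding box of
    the non-black (valid image) region."""
    mask = frame > thresh
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return frame[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def resize_nearest(frame, out_h, out_w):
    """Restore the cropped frame to the original size by nearest-neighbour
    interpolation (stand-in for the claim's interpolation algorithm)."""
    h, w = frame.shape
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return frame[np.ix_(ys, xs)]
```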
8. A video stabilization device, comprising:
a data acquisition module for acquiring linear acceleration data and angular velocity data of a camera along the X axis, the Y axis and the Z axis under a vibration condition by using an IMU sensor built in the camera;
a translation estimation module for calculating offsets in three directions according to the linear acceleration data and correcting the offsets in the three directions by using a visual feature estimation method;
a rotation estimation module for calculating rotation angles in three directions according to the angular velocity data; and
a motion compensation module for performing motion compensation on the video according to the offsets and the rotation angles.
9. A video stabilization device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the video stabilization method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the video stabilization method according to any one of claims 1 to 7.
CN202311562581.3A 2023-11-21 2023-11-21 Video stability enhancement method, device and storage medium Pending CN117714870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311562581.3A CN117714870A (en) 2023-11-21 2023-11-21 Video stability enhancement method, device and storage medium

Publications (1)

Publication Number Publication Date
CN117714870A true CN117714870A (en) 2024-03-15

Family

ID=90145156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311562581.3A Pending CN117714870A (en) 2023-11-21 2023-11-21 Video stability enhancement method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117714870A (en)

Similar Documents

Publication Publication Date Title
CN109902637B (en) Lane line detection method, lane line detection device, computer device, and storage medium
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
US20170339397A1 (en) Stereo auto-calibration from structure-from-motion
EP2757527B1 (en) System and method for distorted camera image correction
CN108827341B (en) Method for determining a deviation in an inertial measurement unit of an image acquisition device
JP2016516977A (en) Generating a 3D model of the environment
CN111415387A (en) Camera pose determining method and device, electronic equipment and storage medium
US20090141043A1 (en) Image mosaicing apparatus for mitigating curling effect
CN109618103B (en) Anti-shake method for unmanned aerial vehicle image transmission video and unmanned aerial vehicle
CN111800589B (en) Image processing method, device and system and robot
US20230298344A1 (en) Method and device for determining an environment map by a server using motion and orientation data
WO2020092051A1 (en) Rolling shutter rectification in images/videos using convolutional neural networks with applications to sfm/slam with rolling shutter images/videos
KR20110089299A (en) Stereo matching process system, stereo matching process method, and recording medium
JP2019109747A (en) Position attitude estimation apparatus, position attitude estimation method, and program
CN117714870A (en) Video stability enhancement method, device and storage medium
CN113763481B (en) Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene
CN111955005A (en) Method and system for processing 360-degree image content
CN113011212B (en) Image recognition method and device and vehicle
JP2021140262A (en) Camera calibration device, camera calibration method, and program
JP6992452B2 (en) Information processing equipment and information processing system
CN113643355A (en) Method and system for detecting position and orientation of target vehicle and storage medium
CN113139456A (en) Electronic equipment state tracking method and device, electronic equipment and control system
JP5056436B2 (en) 3D map generation apparatus and program
JP2006195758A (en) Stereo matching apparatus and method, and program for this method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination