CN108900775A - Real-time electronic image stabilization method for underwater robot - Google Patents
Real-time electronic image stabilization method for underwater robot
- Publication number
- CN108900775A (application CN201810921737.5A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- information
- robot
- interframe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6812—Motion detection based on additional sensors, e.g. acceleration sensors
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
A real-time electronic image stabilization method for an underwater robot comprises the following steps in order: time-aligning the acquired images with the corresponding IMU attitude information and determining their temporal relationship; detecting the swing information of the robot in its motion reference frame with the IMU unit and estimating a state-transition matrix from the swing information; computing the inter-frame correlation between consecutive images using the state-transition matrix; and separating the robot's actual motion from random jitter by filtering. The method is flexible, has low computational complexity, stabilizes video in real time, reduces the local distortion that appears in processed video, and further improves stabilization quality.
Description
Technical field
The present invention relates to intelligent robots and underwater detection, and in particular to a real-time electronic image stabilization method for underwater robots.
Background art
With the continuous development of science and technology and growing market demand, people's desire to explore the underwater world has become ever stronger, directly driving the development of underwater robot technology.
An underwater robot is a complex system integrating subsystems for artificial intelligence, underwater target detection and recognition, data fusion, intelligent control, navigation, and communication; it is an intelligent unmanned platform able to carry out a variety of military and civilian tasks in complex underwater environments. Because underwater robots have far-reaching prospects in marine research and ocean development, and will also be widely used in underwater information gathering, precision strike, and "asymmetric intelligent warfare", underwater robot technology is an important and active research field in countries around the world.
Most existing underwater photography is done with handheld devices, and the resulting video frequently exhibits shake; moreover, because handheld shooting underwater involves periodic diving motions, ripple-like periodic disturbances also appear in the footage. Professional underwater photography generally relies on large mechanical anti-shake rigs, which are typically expensive, bulky, and not portable. The pictures collected by cameras mounted on existing underwater robots also suffer from jitter, seriously degrading video quality. In particular, consumer-grade underwater robot technology is still in its infancy, and methods that can effectively capture video from an underwater robot are urgently needed.
Image-processing-based stabilization methods cover both real-time and offline use cases. In the real-time case, to satisfy latency requirements, the inter-frame correlation can generally only be modeled with a low-dimensional (low degree-of-freedom) transformation, so the processed video exhibits local distortion and deformation. Offline processing generally extracts more complex image features, or extracts features block by block, and then estimates a high-dimensional transformation; this reduces the local distortion of the processed video, but the complexity is higher, feature extraction is more easily affected by noise, the estimated motion state is less stable, and the approach applies only offline and cannot satisfy real-time requirements.
In addition, directly applying a Kalman filter, sliding mean filter, or other smoothing filter to estimate the robot's actual motion does not distinguish between the robot's different motion states, so the de-jittering effect is poor when the robot is hovering. Obtaining the robot's random jitter directly from an inertial measurement unit (IMU) sensor is also inaccurate, because the IMU measurements inherently contain measurement noise and, once environmental effects on the sensor are considered, may be subject to environmental noise as well.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a real-time electronic image stabilization method for underwater robots that is flexible, has low computational complexity, stabilizes video in real time, reduces the local distortion that appears in processed video, and further improves stabilization quality.
The present invention provides a real-time electronic image stabilization method for an underwater robot, comprising the following steps in order:
(1) time-align the acquired image frames with the corresponding IMU attitude information, and establish the temporal consistency of the information acquired by the different sensors;
(2) detect the swing information of the robot in its motion reference frame with the IMU sensor unit, and estimate the robot rotation matrix R from the swing information;
(3) using the rotation matrix R from (2) and the matching relationship of feature points between consecutive frames, estimate the relative inter-frame translation d, and thereby obtain the global motion position relative to the start frame;
(4) filter the robot's global position curve along each motion axis to separate the robot's intended motion from random jitter;
(5) along each motion axis, shift the current frame backward by the per-frame displacement difference between the filtered smooth curve and the original global motion curve, so as to cancel the displacement difference and achieve de-jittering; finally, crop the image edges to eliminate the blank regions produced by shifting each frame.
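Step (5) — shifting each frame against the displacement difference and then cropping the blank border the shift introduces — can be sketched as follows. This is an illustrative Python sketch only, not the claimed implementation; integer pixel shifts and the nested-list frame representation are simplifying assumptions.

```python
def shift_frame(frame, dx, dy, fill=0):
    """Translate a 2-D frame by (dx, dy) pixels; vacated pixels take `fill`.

    A stabilizer would call this with the negated per-frame displacement
    difference between the original and the smoothed global motion curve.
    """
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = frame[sy][sx]
    return out

def crop_border(frame, m):
    """Crop an m-pixel border, removing the worst-case blank region."""
    return [row[m:-m] for row in frame[m:-m]]

frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
shifted = shift_frame(frame, 1, 0)   # compensate a 1-pixel leftward jitter
cropped = crop_border(shifted, 1)
```

Cropping every frame by the maximum shift over the whole sequence keeps the output resolution constant across the stabilized video.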
Further, in step (1), time alignment takes the lower sampling rate as the time reference for the sensor sampling instants, or attaches a system timestamp to each sample; the samples from different sensors are then aligned by the nearest-time-difference principle.
Further, in step (2) the swing information is one, two, or all three of the pitch, roll, and yaw directions.
Further, step (2) is specifically: the rotation transformation R of the robot is detected by the IMU unit, with R = R(θpitch)·R(θroll), where θpitch is the pitch angle and θroll is the roll angle.
Further, step (3) includes an estimation equation, which can be expressed as: X' = RX + d = R(θpitch)·R(θroll)·X + d, where d is the translation and X and X' are the state quantities of the previous and current frames, respectively.
Further, in step (3) the inter-frame correlation is computed by a gray-level block matching method, a bitmap-statistics correlation method, a gray-level-statistics correlation method, an optical-flow-based correlation method, or a feature-based correlation method.
Further, in the gray-level block matching method, the blocks are centered on the matched feature points.
Further, the gray-level block matching method is specifically:
1) detect image feature points on the imaging focal planes of the left and right image sensors with a feature point detection algorithm, and match the feature points with a feature matching algorithm using feature descriptors;
2) derive the three-dimensional coordinates of the feature points in the camera coordinate system from the pixel correspondences between matched feature points;
3) construct blocks centered on the matched feature points from the pixel information of the matched feature points projected onto the focal plane;
4) further correct the pixel-level feature matching error using the inter-frame block matching information;
5) update the focal-plane positional relationship of the matched feature points between frames, and derive the low-dimensional inter-frame image transition state from this relationship.
Further, in step (4) the filtering method is region filtering, sliding mean filtering, weighted sliding mean filtering, amplitude-limiting filtering, or particle filtering.
Further, the FAST feature point extraction algorithm is used, but the invention is not limited to it: SIFT, SURF, or ORB feature extraction may also be used, with the BRIEF feature descriptor. Feature points are matched by the direct method; points with large error are removed with the RANSAC algorithm, and the inter-frame relative motion is estimated by optimal estimation.
Further, the original global position curve relative to the initial frame and the smoothed curve are used directly, and the per-frame displacement difference is compensated in reverse against the original global motion. To reduce the global accumulation and estimation error relative to the initial frame, the video to be processed can be divided into N segments; when de-jittering each segment, adjacent segments overlap by M frames, where M is theoretically no greater than the number of frames per segment, and the overlap region is blended by weighted summation. The weight step is 1/M: for each frame of the overlap, the weight of the last M frames of the preceding segment and the weight of the corresponding frame of the following segment sum to 1, so that the weights of the preceding segment's M frames decrease uniformly while those of the following segment's M frames increase uniformly.
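The M-frame crossfade between adjacent segments can be sketched as follows. This is an illustrative Python sketch; the 1/M weight step follows the text, while the exact endpoint convention (the last overlap frame taken entirely from the following segment) is an assumption.

```python
def overlap_weights(M):
    """Uniform crossfade weights with step 1/M: the following segment's
    weight rises from 1/M to 1, the preceding segment's weight falls
    correspondingly, and each pair of weights sums to 1."""
    w_next = [(i + 1) / M for i in range(M)]
    w_prev = [1.0 - w for w in w_next]
    return w_prev, w_next

def blend_overlap(prev_tail, next_head):
    """Weighted sum of the last M frames of one segment with the first M
    frames of the next (frames are scalars here for illustration)."""
    w_prev, w_next = overlap_weights(len(prev_tail))
    return [wp * p + wn * n
            for wp, wn, p, n in zip(w_prev, w_next, prev_tail, next_head)]
```

For example, blending [10, 10, 10, 10] with [20, 20, 20, 20] yields [12.5, 15.0, 17.5, 20.0] — a uniform fade from the preceding segment into the following one.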
The real-time electronic image stabilization method for an underwater robot according to the invention achieves:
1) stabilization both while the underwater robot hovers and while it moves (including start/stop transitions), rather than applying to only one of these situations, so flexibility is high;
2) low computational complexity, by fusing low-dimensional inter-frame image correlation with the IMU sensor, enabling real-time video stabilization;
3) higher stability of the fused high-dimensional inter-frame correlation estimate, reducing the local distortion that appears in the processed video;
4) better separation of the robot's actual motion from random jitter by the filter, by distinguishing the robot's static and moving states, further improving stabilization quality.
Detailed description of the invention
Fig. 1 is a flow chart of the real-time electronic image stabilization method for an underwater robot.
Specific embodiment
Specific implementations of the invention are described in detail below. It must be noted that the following embodiments are only intended to further illustrate the invention and should not be understood as limiting its scope of protection; non-essential modifications and adaptations made to the invention by persons skilled in the art in light of the above disclosure still fall within its scope of protection.
The present invention provides a real-time electronic image stabilization method used on an underwater robot device. The method combines IMU-based attitude measurement with inter-frame image correlation analysis to estimate the global shake of the underwater robot; it analyzes the inter-frame correlation and represents it with a state-transition matrix for fast and effective processing.
Since the object image information measured by the camera mounted on the robot is the pixel information of a two-dimensional image plane, the state-transition matrix [R d] between consecutive frames is estimated directly from the measured information. The estimation equation is X' = RX + d, and the observation equation is λx = K(X + d), where K is the camera projection matrix, λ is a normalization parameter, d is the translation, R is the rotation transformation, and X and X' are the state quantities of the previous and current frames, respectively. To reduce the dimensionality of the state estimated from the inter-frame image information and meet real-time requirements, only the translational state in the robot coordinate frame is estimated: X' = X + d.
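The observation equation λx = K(X + d) is the standard pinhole projection; a minimal numeric sketch follows. The focal length and principal point below are arbitrary example values, not taken from the patent.

```python
def project(K, X):
    """Pinhole projection: lambda * x = K X. Returns the pixel (u, v);
    lambda is the depth component used for normalization."""
    x = [sum(K[i][j] * X[j] for j in range(3)) for i in range(3)]
    lam = x[2]
    return (x[0] / lam, x[1] / lam)

# Example intrinsics: focal length 100 px, principal point (160, 120).
K = [[100.0, 0.0, 160.0],
     [0.0, 100.0, 120.0],
     [0.0, 0.0, 1.0]]

d = [1.0, 0.0, 0.0]     # inter-frame translation
X = [0.0, 0.0, 2.0]     # point on the optical axis at depth 2
u, v = project(K, [a + b for a, b in zip(X, d)])   # observe X + d
```

The translated point projects to (210.0, 120.0): a 1-unit sideways translation at depth 2 moves the pixel 50 px, which is the image-plane displacement the translational model X' = X + d must cancel.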
In general, if the transformation model between two adjacent frames is analyzed purely by image analysis, the dimension of the state-transition matrix determines the complexity of the model. A high-dimensional transformation model describes the relationship between two consecutive frames better, but the higher the dimension, the more complex the image analysis, the more error-prone it becomes, the less stable the estimated state is, and the higher the time complexity. For the embedded platform in an underwater robot, which must stabilize in real time, this is clearly unacceptable. Therefore, for the inter-frame correlation computation the present invention uses gray-level block matching; it should be noted that bitmap-statistics, gray-level-statistics, optical-flow-based, or feature-based correlation methods could also be used.
The gray-level block matching method operates on gray-level images. Unlike typical gray-block matching methods, here the blocks are centered on the matched feature points, and the number of blocks depends on the number of matched features. The specific steps are:
1) detect the image information on the imaging focal planes of the left and right image sensors with a feature point detection algorithm, and match the feature points with a feature matching algorithm using feature descriptors;
2) derive the three-dimensional coordinates of the feature points from the pixel correspondences between matched feature points;
3) construct blocks centered on the matched feature points from the pixel information of the matched feature points projected onto the focal plane;
4) further correct the pixel-level feature matching error using the inter-frame block matching information;
5) update the focal-plane positional relationship of the matched feature points between frames, and derive the low-dimensional inter-frame image transition state from this relationship.
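Step 4) — correcting the pixel-level match by minimizing a block difference around the feature point — can be sketched as follows. This is an illustrative sketch on a synthetic test pattern; the SSD cost, the 5×5 block, and the ±2-pixel search window are assumptions, not prescribed by the text.

```python
def ssd(img_a, img_b, ca, cb, half=2):
    """Sum of squared gray-level differences between blocks centered on
    ca in img_a and cb in img_b (block side = 2*half + 1)."""
    (xa, ya), (xb, yb) = ca, cb
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            diff = img_a[ya + dy][xa + dx] - img_b[yb + dy][xb + dx]
            total += diff * diff
    return total

def refine_match(img_a, img_b, pa, guess_b, search=2, half=2):
    """Search a (2*search+1)^2 neighborhood of the initial feature match
    for the block position minimizing the SSD — step 4) above."""
    gx, gy = guess_b
    best, best_cost = guess_b, float("inf")
    for oy in range(-search, search + 1):
        for ox in range(-search, search + 1):
            cand = (gx + ox, gy + oy)
            cost = ssd(img_a, img_b, pa, cand, half)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Synthetic pattern; the second frame is the first shifted by (2, 1).
pattern = lambda x, y: (7 * x + 13 * y) % 29
img1 = [[pattern(x, y) for x in range(24)] for y in range(24)]
img2 = [[pattern(x - 2, y - 1) for x in range(24)] for y in range(24)]

# Feature at (10, 10); the pixel-level matcher's guess (11, 11) is one
# pixel off the true position (12, 11), and block refinement recovers it.
refined = refine_match(img1, img2, (10, 10), (11, 11))
```

Because each block is anchored at a matched feature, only a handful of small searches are needed per frame pair, which is what keeps this step compatible with real-time operation.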
In addition, the IMU unit detects the swing information in the pitch, roll, and yaw directions in the robot's motion reference frame (or any one or two of them). In practice, the main shake of a robot in water comes from the pitch and roll directions, so the IMU measurements compensate for the inability of the low-dimensional inter-frame correlation analysis to capture the robot's global swing.
The rotation transformation of the robot detected by the IMU unit is R; the Euler angles (pitch, roll, yaw) can be extracted directly from the IMU without any extra operations. Since the dominant rotations are the pitch θpitch and the roll θroll, R = R(θpitch)·R(θroll), and the estimation equation can therefore be expressed as X' = RX + d = R(θpitch)·R(θroll)·X + d, which meets the demands of real-time application.
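A minimal numeric sketch of R = R(θpitch)·R(θroll) and the estimation equation X' = RX + d. This is illustrative only; the axis conventions (pitch about the lateral y-axis, roll about the forward x-axis) are assumptions, since the text fixes no convention.

```python
import math

def rot_x(a):        # roll about the forward x-axis (assumed convention)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):        # pitch about the lateral y-axis (assumed convention)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transform(R, X, d):
    """Estimation equation: X' = R X + d."""
    return [sum(R[i][j] * X[j] for j in range(3)) + d[i] for i in range(3)]

R = matmul(rot_y(0.05), rot_x(0.02))   # small pitch and roll swings
Xp = transform(R, [1.0, 0.0, 2.0], [0.1, 0.0, 0.0])
```

With zero pitch and roll, R reduces to the identity and the equation collapses to the translational model X' = X + d used for real-time operation.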
The present invention accounts for the possibility that the robot acquires the image and the IMU attitude information at inconsistent times. Before applying the stabilization system, this inconsistency must first be resolved into temporal consistency, so that the acquired image is synchronized with the IMU attitude and the correct temporal relationship is established.
Since the IMU sampling frequency is about 200 Hz while the image sensor on the robot samples at 30 Hz, a simple method to guarantee the temporal consistency of image and IMU information is to take the lower sampling rate as the time reference for the sensor sampling instants, or to attach a system timestamp to every sample; the streams from different sensors are then aligned by the nearest-time-difference principle.
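The nearest-time-difference alignment of the ~200 Hz IMU stream with the 30 Hz image stream can be sketched as follows (illustrative Python; assumes both streams carry monotonically increasing timestamps in seconds):

```python
import bisect

def align_nearest(frame_ts, imu_ts):
    """Pair each image frame timestamp with the IMU sample whose
    timestamp has the smallest absolute difference."""
    pairs = []
    for t in frame_ts:
        i = bisect.bisect_left(imu_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_ts)]
        best = min(candidates, key=lambda j: abs(imu_ts[j] - t))
        pairs.append((t, imu_ts[best]))
    return pairs

imu_ts = [k / 200.0 for k in range(50)]     # ~200 Hz IMU samples
frame_ts = [k / 30.0 for k in range(3)]     # 30 Hz image frames
pairs = align_nearest(frame_ts, imu_ts)
```

At these rates the worst-case pairing error is half the IMU period, 2.5 ms, which bounds the attitude error introduced by the alignment.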
The present invention covers not only the multi-sensor-fusion stabilization of the underwater robot while hovering, but also the multi-sensor-fusion stabilization during its motion. During motion, the inter-frame correlation analysis and the IMU measurement yield not only the robot's intended motion but also random swing information; the robot's actual motion and random jitter are separated with a Kalman filter. It should be noted that mean filtering, sliding mean filtering, weighted sliding mean filtering, amplitude-limiting filtering, or particle filtering could also be used.
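As an illustration of this separation, a sliding mean filter splitting a synthetic global position curve into intended motion and random jitter. This is a sketch only: the Kalman filter the text prefers is more involved, and the ramp-plus-alternating-jitter signal below is a made-up example.

```python
def sliding_mean(xs, w=5):
    """Centered sliding mean; the window shrinks at the sequence ends."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

# Synthetic global position: slow ramp (intended motion) + alternating jitter.
path = [0.1 * i + (0.5 if i % 2 == 0 else -0.5) for i in range(40)]
smooth = sliding_mean(path)                       # estimated intended motion
jitter = [p - s for p, s in zip(path, smooth)]    # estimated random jitter
```

Away from the ends, the 5-sample mean of the alternating ±0.5 jitter is ±0.1, so the filtered curve tracks the ramp to within 0.1 while the jitter estimate absorbs the rest; step (5) then shifts each frame by this jitter estimate.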
Considering actual underwater application scenarios, the dominant shake while stationary underwater exhibits a periodic, roughly sinusoidal variation; periodic jitter is rare during motion; and during transitions between the static and moving states the shake is a sinusoidal variation with a larger initial amplitude that decays over time. These states can be divided into three motion types:
1. stationary: a straight line parallel to the time axis, i.e. DP(t) = 0;
2. static-to-motion transition: a conic (quadratic) curve, e.g. D3P(t) = 0;
3. steady motion: a linear curve, i.e. D2P(t) = 0;
where DnP denotes the n-th order derivative of P.
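The three motion types can be distinguished numerically with finite differences standing in for the derivatives DnP (an illustrative sketch; the tolerance and the noise-free signals are assumptions):

```python
def diffs(xs):
    """First-order finite difference, the discrete analogue of DP(t)."""
    return [b - a for a, b in zip(xs, xs[1:])]

def classify_motion(p, tol=1e-9):
    d1 = diffs(p)
    d2 = diffs(d1)
    d3 = diffs(d2)
    if all(abs(v) <= tol for v in d1):
        return "static"         # DP(t) = 0: line parallel to the time axis
    if all(abs(v) <= tol for v in d2):
        return "steady motion"  # D2P(t) = 0: linear position curve
    if all(abs(v) <= tol for v in d3):
        return "transition"     # D3P(t) = 0: quadratic (conic) curve
    return "other"
```

Distinguishing these states is what lets the filter be tuned per state, avoiding the poor hovering-state de-jittering of a single undifferentiated smoother.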
Although exemplary embodiments of the present invention have been described for illustrative purposes, those skilled in the art will understand that various modifications, additions, substitutions, and changes in form and detail can be made without departing from the scope and spirit of the invention disclosed in the appended claims, and all such changes shall belong to the protection scope of the appended claims; the components of the claimed product and the steps of the claimed method may be combined in any form. Accordingly, the description of the disclosed embodiments is not intended to limit the scope of the invention but to describe it; the scope of the invention is defined not by the embodiments above but by the claims and their equivalents.
Claims (10)
1. A real-time electronic image stabilization method for an underwater robot, characterized by comprising the following steps in order:
(1) time-aligning the acquired image frames with the corresponding IMU attitude information, and establishing the temporal consistency of the information acquired by different sensors;
(2) detecting the swing information of the robot in its motion reference frame with the IMU sensor unit, and estimating the robot rotation matrix R from the swing information;
(3) using the rotation matrix R from (2) and the matching relationship of feature points between consecutive frames, estimating the relative inter-frame translation d, and thereby obtaining the global motion position relative to the start frame;
(4) filtering the robot's global position curve along each motion axis, and performing motion compensation and jitter elimination according to the equation of motion;
(5) along each motion axis, shifting the current frame backward by the per-frame displacement difference between the filtered smooth curve and the original global motion curve, so as to cancel the displacement difference and achieve de-jittering; and finally cropping the image edges to eliminate the blank regions produced by shifting each frame.
2. The method as claimed in claim 1, characterized in that: in step (1), time alignment takes the lower sampling rate as the time reference for the sensor sampling instants, or attaches a system timestamp to each sample, and aligns the samples of different sensors by the nearest-time-difference principle.
3. The method as claimed in claim 1, characterized in that: in step (2) the swing information is one, two, or all three of the pitch, roll, and yaw directions.
4. The method as claimed in claim 1, characterized in that: step (2) is specifically: the rotation transformation R of the robot is detected by the IMU unit, with R = R(θpitch)·R(θroll), where θpitch is the pitch angle and θroll is the roll angle.
5. The method as claimed in claim 1, characterized in that: step (3) includes an estimation equation, which can be expressed as: X' = RX + d = R(θpitch)·R(θroll)·X + d, where d is the translation, X and X' are the state quantities of the previous and current frames, respectively, and R is the rotation transformation.
6. The method as claimed in claim 1, characterized in that: in step (3) the inter-frame correlation is computed by a gray-level block matching method, a bitmap-statistics correlation method, a gray-level-statistics correlation method, an optical-flow-based correlation method, or a feature-based correlation method.
7. The method as claimed in claim 6, characterized in that the gray-level block matching method is specifically:
1) detecting image feature points on the imaging focal planes of the left and right image sensors with a feature point detection algorithm, and matching the feature points with a feature matching algorithm using feature descriptors;
2) deriving the three-dimensional coordinate information of the feature points in the camera coordinate system from the pixel correspondences between matched feature points;
3) constructing blocks centered on the matched feature points from the pixel information of the matched feature points projected onto the focal plane;
4) further correcting the pixel-level feature matching error using the inter-frame block matching information;
5) updating the focal-plane positional relationship of the matched feature points between frames, and deriving the low-dimensional inter-frame image transition state from this relationship.
8. The method as claimed in claim 1, characterized in that: in step (4) the filtering method is region filtering, sliding mean filtering, weighted sliding mean filtering, amplitude-limiting filtering, or particle filtering.
9. The motion compensation method as claimed in claim 5, characterized in that: the original global position curve relative to the initial frame and the smoothed curve are used directly, and the per-frame displacement difference is compensated in reverse against the original global motion; to reduce the global accumulation and estimation error relative to the initial frame, the video to be processed can be divided into N segments, with adjacent segments overlapping by M frames when each segment is de-jittered, where M is theoretically no greater than the number of frames per segment; the overlap region is blended by weighted summation with a weight step of 1/M: the weight of each of the last M frames of the preceding segment and the weight of the corresponding frame of the following segment sum to 1, so that the weights of the preceding segment's M frames decrease uniformly while those of the following segment's M frames increase uniformly.
10. The method as claimed in claim 6, characterized in that: the FAST feature point extraction algorithm is used but not exclusively; SIFT, SURF, or ORB feature extraction and the BRIEF feature descriptor may also be used; feature points are matched by the direct method; points with large error are removed with the RANSAC algorithm; and the inter-frame relative motion is estimated by optimal estimation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810921737.5A CN108900775B (en) | 2018-08-14 | 2018-08-14 | Real-time electronic image stabilization method for underwater robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108900775A true CN108900775A (en) | 2018-11-27 |
CN108900775B CN108900775B (en) | 2020-09-29 |
Family
ID=64355018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810921737.5A Active CN108900775B (en) | 2018-08-14 | 2018-08-14 | Real-time electronic image stabilization method for underwater robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108900775B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112414400A (en) * | 2019-08-21 | 2021-02-26 | 浙江商汤科技开发有限公司 | Information processing method and device, electronic equipment and storage medium |
CN113243103A (en) * | 2018-12-26 | 2021-08-10 | 华为技术有限公司 | Imaging apparatus, image stabilization apparatus, imaging method, and image stabilization method |
CN113766121A (en) * | 2021-08-10 | 2021-12-07 | 国网河北省电力有限公司保定供电分公司 | Device and method for keeping image stable based on quadruped robot |
WO2024088125A1 (en) * | 2022-10-27 | 2024-05-02 | 杭州零零科技有限公司 | Image stabilization system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009245542A (en) * | 2008-03-31 | 2009-10-22 | Sony Corp | Information processing device and method, program, and recording/reproducing device |
CN102780846A (en) * | 2012-07-11 | 2012-11-14 | 清华大学 | Electronic image stabilization method based on inertial navigation information |
CN103139568A (en) * | 2013-02-05 | 2013-06-05 | 上海交通大学 | Video image stabilizing method based on sparseness and fidelity restraining |
CN103402056A (en) * | 2013-07-31 | 2013-11-20 | 北京阳光加信科技有限公司 | Compensation processing method and system applied to image capture device |
CN106027852A (en) * | 2016-06-24 | 2016-10-12 | 西北工业大学 | Video image stabilization method for micro/nano-satellite |
CN106375669A (en) * | 2016-09-30 | 2017-02-01 | 重庆零度智控智能科技有限公司 | Image stabilization method and apparatus, and drone |
CN108259736A (en) * | 2016-12-29 | 2018-07-06 | 昊翔电能运动科技(昆山)有限公司 | Holder stability augmentation system and holder increase steady method |
Also Published As
Publication number | Publication date |
---|---|
CN108900775B (en) | 2020-09-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: 518000 806, block B, Jiuzhou electric appliance building, No. 007, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen, Guangdong Patentee after: Shenzhen Yidong Blue Technology Co.,Ltd. Address before: 518000 room 209, building 17, maker Town, No. 1201 Liuxian Avenue, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Patentee before: SHENZHEN NAVA TECHNOLOGY Co.,Ltd. |