CN108900775B - Real-time electronic image stabilization method for underwater robot - Google Patents

Real-time electronic image stabilization method for underwater robot

Info

Publication number
CN108900775B
CN108900775B (application CN201810921737.5A)
Authority
CN
China
Prior art keywords
information
frame
image
time
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810921737.5A
Other languages
Chinese (zh)
Other versions
CN108900775A (en)
Inventor
陶师正
利亚托亨利
安德烈亚斯维迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yidong Blue Technology Co.,Ltd.
Original Assignee
Shenzhen Nava Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Nava Technology Co ltd filed Critical Shenzhen Nava Technology Co ltd
Priority to CN201810921737.5A
Publication of CN108900775A
Application granted
Publication of CN108900775B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 Motion detection
    • H04N23/6812 Motion detection based on additional sensors, e.g. acceleration sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A real-time electronic image stabilization method for an underwater robot comprises the following steps in sequence: time-aligning the acquired images with the correspondingly acquired IMU attitude information and determining their time relationship; detecting swing information in the robot's motion coordinate reference frame through an IMU unit and estimating a state transition matrix from it; computing the inter-frame correlation between the previous and the following image using the state transition matrix; and separating the robot's actual motion information from its random jitter information with a filtering method. The method offers high flexibility and low computational complexity, achieves real-time image stabilization, reduces local distortion and deformation of the processed video image, and further improves image stabilization quality.

Description

Real-time electronic image stabilization method for underwater robot
Technical Field
The invention relates to the field of intelligent robots and underwater detection, in particular to a real-time electronic image stabilizing method for an underwater robot.
Background
At present, with the continuous development of science and technology and huge market demand, people's desire to explore the underwater world grows ever stronger, which directly drives the development of underwater robot technology.
As a complex system, an underwater robot integrates subsystems for artificial intelligence, underwater target detection and identification, data fusion, intelligent control, navigation and communication, and is an intelligent unmanned platform capable of executing various military and civilian tasks in complex underwater environments. Underwater robots have great prospects in maritime research and ocean development and are widely applied to underwater information acquisition, precision strike and "asymmetric information warfare", so underwater robot technology is an important and active field of research and development in countries around the world.
Most existing underwater shooting is done with handheld equipment, so the captured video often shakes; moreover, the periodic diving motion involved in handheld underwater shooting produces ripple-like periodic disturbances. Professional underwater shooting generally relies on large mechanical shake-elimination devices, which are usually expensive, bulky and hard to carry. The images collected by the cameras carried on existing underwater robots also shake, which seriously degrades the quality of the captured video. In particular, consumer-grade underwater robot technology is still in its infancy, and a method that can be effectively applied to video shooting by an underwater robot is urgently needed.
Currently, image stabilization methods based on image processing cover both real-time and offline situations. In the real-time case, to meet the real-time requirement, generally only a low-dimensional (low-degree-of-freedom) inter-frame transformation relation can be analyzed, which can cause local distortion and deformation of the processed video image. In the offline case, more complex image features are usually extracted, or features are extracted block by block, and a higher-dimensional transformation relation is then estimated; this reduces local distortion of the processed video, but the higher the complexity, the more easily feature extraction is disturbed by noise and the less stable the extracted motion state becomes, so such methods only apply to offline processing and cannot meet the real-time requirement.
In addition, directly using a Kalman filter, a sliding-mean filter or another smoothing filter to estimate the actual motion of the robot does not distinguish between the robot's different motion states, and the shake-elimination effect is not ideal when the robot hovers. Obtaining the robot's random jitter information directly from an Inertial Measurement Unit (IMU) sensor is also problematic: the IMU measurements contain measurement noise and, once the influence of environmental factors on the sensor is considered, environmental noise as well, so obtaining the motion state directly from the IMU module is inaccurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a real-time electronic image stabilizing method for an underwater robot, which has high flexibility and low calculation complexity, can stabilize images in real time, reduces the problem of local distortion and deformation of processed video images and further improves the image stabilizing quality.
The invention provides a real-time electronic image stabilizing method for an underwater robot, which sequentially comprises the following steps:
(1) time alignment is carried out on the acquired image frames and the correspondingly acquired IMU attitude information, and the time consistency of the information acquired by different sensors is determined;
(2) detecting swing information under a robot motion coordinate reference system through an IMU sensor unit, and estimating a robot rotation matrix R based on the swing information, wherein the swing information is one, two or three of swing information in three directions of pitching, rolling and yawing;
(3) estimating a relative state transition matrix d between frames by using the rotation matrix R in the step (2) and the matching relation of the feature points between the previous and next frame images, and further obtaining the global motion position state relative to the initial frame;
(4) filtering the global position curve of each axial direction of the robot by using a filtering method, and separating the actual intended motion information and the random jitter information of the robot;
(5) in each motion axis, moving the position of the current frame reversely according to the displacement difference between the filtered smooth curve and the global position curve in each frame so as to offset the displacement difference and eliminate jitter, and finally cropping the image edges to eliminate the blank area generated by moving each frame (see the sketch after these steps);
wherein, the step (3) includes obtaining a motion estimation equation, which can be expressed as: X′ = R·X + d = θ_pitch·θ_roll·X + d, where X and X′ are the state quantities of the preceding and succeeding frame images, θ_pitch is the pitch swing and θ_roll is the roll swing;
in the step (5), the original global motion is compensated reversely by directly using the displacement difference between the original global position curve relative to the initial frame and the smoothed curve in each frame.
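A minimal sketch of steps (4) and (5), assuming a simple moving-average filter, per-frame global positions already expressed in pixels, and an illustrative crop margin (function names and parameter values are not part of the invention):

```python
import numpy as np
import cv2

def moving_average(curve, radius=15):
    """Smooth a 1-D global position curve with a simple moving average."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(curve, (radius, radius), mode='edge')  # avoid shrinking at the ends
    return np.convolve(padded, kernel, mode='valid')

def stabilize(frames, global_xy, crop=20):
    """frames: list of HxW(x3) images; global_xy: (N, 2) global positions per frame."""
    smooth_xy = np.column_stack([moving_average(global_xy[:, k]) for k in range(2)])
    diff = smooth_xy - global_xy                      # displacement difference per frame
    out = []
    for frame, (dx, dy) in zip(frames, diff):
        h, w = frame.shape[:2]
        # reverse translation so each frame lands on the smoothed trajectory
        # (sign convention depends on how the global positions are defined)
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        shifted = cv2.warpAffine(frame, M, (w, h))
        out.append(shifted[crop:h - crop, crop:w - crop])  # crop blank borders
    return out
```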
Further, in the step (1), the time alignment takes the lower sampling rate as the time reference at the sensor sampling instants, or attaches a system time stamp to the sampled information at each sampling instant and aligns the information from different sensors according to the nearest-time-difference principle.
Further, the step (2) is specifically: detecting the rotation matrix R of the robot through the IMU unit, where R = θ_pitch·θ_roll.
Further, the correlation relationship between the image frames calculated in the step (3) adopts a gray module matching-based method, a bitmap statistics-based correlation calculation method, a gray statistics-based correlation calculation method, an optical flow-based correlation calculation method or a feature-based correlation calculation method.
Further, in the gray module matching-based method, each matched block is centered on a matched feature point.
Further, the method based on the gray module matching specifically comprises the following steps:
1) detecting image characteristic point information of imaging focal planes of the left and right image sensors by using a characteristic point detection algorithm, and matching characteristic points by using a characteristic point matching algorithm through a characteristic point description operator;
2) deducing three-dimensional coordinate information of the feature points in a camera coordinate system through pixel relations among the matched feature points;
3) constructing a block taking the matched characteristic points as a center by matching pixel information of a projection focal plane of the characteristic points between frames;
4) further correcting the pixel-level-based feature point matching error through the matching information of the matched blocks between frames;
5) and updating the position relation of the matched characteristic points between frames on the projection focal plane, and deducing the conversion state of the low-dimensional image information between frames by using the relation.
Further, the filtering method in the step (4) adopts a segmented filtering method, a moving average filtering method, a weighted moving average method, a limiting filtering method or a particle filtering method.
Further, a FAST feature point extraction algorithm may be adopted, but is not limiting; SIFT, SURF or ORB feature point extraction algorithms may also be used, together with the BRIEF feature point description operator. Feature points are matched by a direct method, points with larger errors are removed with the RANSAC algorithm, and the relative motion relationship between frames is estimated in an optimal-estimation manner.
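As an illustration of this matching pipeline, the following sketch (not part of the claimed method; it assumes OpenCV and reads ORB as the FAST-plus-BRIEF combination mentioned above) detects and matches feature points between two gray frames and uses RANSAC to discard matches with larger errors before extracting the inter-frame translation:

```python
import cv2
import numpy as np

def interframe_translation(prev_gray, curr_gray, max_features=500):
    """Estimate the relative translation d between two consecutive gray frames."""
    orb = cv2.ORB_create(max_features)                 # FAST corners + rotated BRIEF
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return np.zeros(2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC removes matches with larger errors while fitting a low-dimensional model
    M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None:
        return np.zeros(2)
    return M[:, 2]                                      # translation component d = (dx, dy)
```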
Further, in order to reduce the cumulative calculation and estimation error of the global motion relative to the initial frame, the whole processed video sample may be equally divided into N segments. When eliminating jitter in each segment, two adjacent segments are given an overlap of M frames, where M is theoretically no larger than the total number of frames per segment. The overlapping part is blended by weighted summation: for each overlapped frame, the weight of the last M frames of the preceding segment and the weight of the first M frames of the following segment sum to 1, the former decreasing uniformly (in steps of 1/M) and the latter increasing uniformly.
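One plausible reading of this M-frame weighted overlap is sketched below; the exact weight schedule (stepping by 1/M so that paired weights always sum to 1) is an assumption consistent with the uniform increase and decrease stated above:

```python
import numpy as np

def blend_overlap(prev_tail, next_head):
    """Blend the last M frames of the previous segment with the first M frames
    of the next segment by weighted summation (weights sum to 1 per frame)."""
    M = len(prev_tail)
    assert len(next_head) == M
    blended = []
    for k in range(M):
        w_next = (k + 1) / M          # rises uniformly: 1/M, 2/M, ..., 1
        w_prev = 1.0 - w_next         # falls uniformly, paired weights sum to 1
        frame = w_prev * prev_tail[k].astype(np.float32) \
              + w_next * next_head[k].astype(np.float32)
        blended.append(frame.astype(np.uint8))
    return blended
```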
The real-time electronic image stabilization method for the underwater robot can achieve the following:
1) The image stabilization method is suitable both for the hovering condition and for the motion process of the underwater robot (including the start and stop transition stages), giving high flexibility.
2) By obtaining the low-dimensional inter-frame image correlation and fusing the IMU sensor, the calculation complexity is kept low and real-time video stabilization processing is achieved.
3) The fused, higher-dimensional inter-frame image correlation estimate has high stability, reducing local distortion and deformation of the processed video image.
4) By distinguishing the static and motion states of the robot, a filter can better separate the actual motion and the random jitter of the robot, further improving the image stabilization quality.
Drawings
Fig. 1 is a flow chart of a real-time electronic image stabilization method for an underwater robot.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, the following examples of which are intended to be illustrative only and are not to be construed as limiting the scope of the invention.
The invention provides a real-time electronic image stabilization method for an underwater robot. Applied on an underwater robot device, it estimates the robot's global jitter by combining attitude measurement based on an Inertial Measurement Unit (IMU) with inter-frame image correlation analysis, and achieves fast and effective processing by analyzing the inter-frame correlation relationship and representing it with a state transition matrix.
Considering that the object image information measured by the camera mounted on the robot is presented as pixel information on a two-dimensional image plane, this measured information is used to directly estimate the state transition matrix [R d] between the previous and the following frame. The motion estimation equation is X′ = R·X + d, and the observation equation is λx = K(X + d), where K is the projection parameter of the camera, λ is a normalization parameter, and X and X′ are the state quantities of the preceding and following frame images, respectively. To reduce the dimensionality of the state transition estimated from the inter-frame image information and meet the real-time requirement, only the translation state in the robot coordinate frame is estimated, namely X′ = X + d.
Generally, if the transformation model between two adjacent frames is analyzed only by image analysis, the dimension of the state transition matrix determines the complexity of the transformation model. A high-dimensional transformation model describes the conversion relation between two consecutive frames better, but the higher the dimension, the more complicated the image analysis, the more easily errors occur, the worse the state stability and the higher the time complexity. For the embedded platform in the underwater robot, which must perform image stabilization in real time, this is clearly unacceptable. Therefore, for computing the inter-frame image correlation, the invention adopts a gray module matching-based method; it should be noted that correlation calculation methods based on bitmap statistics, gray statistics, optical flow or features may also be adopted.
The gray module matching-based method works on gray images. It differs from the more common gray block matching methods in that each matched block is centered on a matched feature point, and the number of blocks depends on the number of matched feature points. The specific steps are as follows (a sketch of the block-refinement step is given after the list):
1) Detecting the image information of the imaging focal planes of the left and right image sensors with a feature point detection algorithm, and matching feature points with a feature point matching algorithm through a feature point description operator.
2) Deriving the three-dimensional coordinate information of the feature points from the pixel relation between the matched feature points.
3) Constructing a block centered on each matched feature point by matching the pixel information of the feature points' projection focal plane between frames.
4) Further correcting the pixel-level feature point matching error through the matching information of the matched blocks between frames.
5) Updating the position relation of the matched feature points between frames on the projection focal plane, and deriving the low-dimensional inter-frame image conversion state from this relation.
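A hedged sketch of steps 3) and 4): for each coarsely matched feature point, a block centered on that point in the previous frame is correlated against a small search window in the current frame, and the pixel-level match is refined to the best-correlating position (block and search sizes are illustrative assumptions):

```python
import cv2
import numpy as np

def refine_match(prev_gray, curr_gray, pt_prev, pt_curr, half=8, search=4):
    """Refine a feature-point match with a block centered on the matched point."""
    x0, y0 = map(int, np.round(pt_prev))
    x1, y1 = map(int, np.round(pt_curr))
    h, w = prev_gray.shape
    if not (half <= x0 < w - half and half <= y0 < h - half):
        return pt_curr
    # block around the point in the previous frame
    patch = prev_gray[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    # search window around the coarse match in the current frame
    xs, ys = x1 - half - search, y1 - half - search
    xe, ye = x1 + half + search + 1, y1 + half + search + 1
    if xs < 0 or ys < 0 or xe > w or ye > h:
        return pt_curr
    window = curr_gray[ys:ye, xs:xe]
    res = cv2.matchTemplate(window, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    # corrected position of the block centre in the current frame
    return (xs + max_loc[0] + half, ys + max_loc[1] + half)
```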
In addition, the IMU unit detects the robot's swing information in the pitch, roll and yaw directions (or any one or two of them) in the motion coordinate reference frame. Since the robot's main shaking in water actually comes from the pitch and roll directions, the IMU measurements make up for the inability of the low-dimensional inter-frame image correlation to capture the robot's global shaking information.
The rotation matrix R of the robot is detected through the IMU unit; the Euler angles (pitch, roll and yaw) can be extracted directly from the IMU without further operations. Considering that the predominant rotation consists of the pitch swing θ_pitch and the roll swing θ_roll, R = θ_pitch·θ_roll, so the motion estimation equation can be expressed as X′ = R·X + d = θ_pitch·θ_roll·X + d, which meets the real-time application requirement.
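A minimal sketch of this motion estimation equation, assuming the IMU reports pitch and roll Euler angles in radians, that pitch rotates about the x axis and roll about the y axis (the axis convention must match the actual IMU mounting), and that X is treated as a 3-D state vector for illustration:

```python
import numpy as np

def rotation_from_imu(pitch, roll):
    """R = theta_pitch * theta_roll, built from IMU Euler angles in radians."""
    theta_pitch = np.array([[1, 0, 0],
                            [0, np.cos(pitch), -np.sin(pitch)],
                            [0, np.sin(pitch),  np.cos(pitch)]])   # rotation about x
    theta_roll = np.array([[np.cos(roll), 0, np.sin(roll)],
                           [0, 1, 0],
                           [-np.sin(roll), 0, np.cos(roll)]])      # rotation about y
    return theta_pitch @ theta_roll

def predict_state(X, pitch, roll, d):
    """Motion estimation equation X' = R·X + d."""
    return rotation_from_imu(pitch, roll) @ X + d
```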
The present invention takes into account that the robot's image acquisition and IMU attitude acquisition may not be temporally consistent, so before applying the image stabilization system it is necessary to first resolve this inconsistency, synchronize the acquired images with the IMU attitude, and determine the exact time relationship between them.
Since the sampling frequency of the IMU sensor is about 200 Hz while that of the image sensor on the robot is 30 Hz, a simple way to ensure the time consistency of the image and IMU information is to use the lower sampling rate as the time reference at the sensor sampling instants, or to attach a system time stamp to the sampled information at each sampling instant and align the information from the different sensors according to the nearest-time-difference principle.
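A minimal sketch of the nearest-time-difference alignment, assuming every image frame and IMU sample carries a system timestamp; the helper name and array layout are assumptions:

```python
import numpy as np

def align_imu_to_frames(frame_ts, imu_ts, imu_samples):
    """For each image timestamp, pick the IMU sample with the smallest time difference."""
    frame_ts = np.asarray(frame_ts)        # ~30 Hz image timestamps
    imu_ts = np.asarray(imu_ts)            # ~200 Hz IMU timestamps (sorted)
    idx = np.searchsorted(imu_ts, frame_ts)
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    # choose the nearer of the two neighbouring IMU samples
    prev_closer = (frame_ts - imu_ts[idx - 1]) < (imu_ts[idx] - frame_ts)
    nearest = np.where(prev_closer, idx - 1, idx)
    return [imu_samples[i] for i in nearest]
```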
The invention covers not only a multi-sensor-fusion image stabilization method for the underwater robot in the hovering state, but also one for the robot during underwater motion. During motion, the robot's motion information obtained through inter-frame correlation analysis and IMU measurement also contains random jitter information; the actual motion information and the random jitter information of the robot are separated by a Kalman filtering method, which may also be realized with mean filtering, sliding-mean filtering, weighted moving average, amplitude-limiting filtering or particle filtering.
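For illustration, a one-dimensional constant-velocity Kalman filter applied to one axis of the global position curve might look as follows; the process and measurement noise values q and r are illustrative assumptions, not values taken from the invention:

```python
import numpy as np

def kalman_smooth(positions, q=1e-3, r=1e-1):
    """Separate intended motion from jitter with a 1-D constant-velocity Kalman filter."""
    x = np.array([positions[0], 0.0])              # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])         # constant-velocity transition
    H = np.array([[1.0, 0.0]])                     # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    smoothed = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0])                      # intended motion; z - x[0] is jitter
    return np.array(smoothed)
```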
Considering the actual underwater application, the main shaking in the static underwater state presents a periodic, roughly sinusoidal variation; such periodic shaking is rarely seen during motion; and in the process of switching between the dynamic and static states there is a sinusoidal variation whose initial amplitude is larger and gradually decreases with time. These states can be classified into the following three types of motion (a classification sketch follows the list):
1. At rest: a straight line parallel to the time axis, i.e. DP(t) = 0.
2. In the transition between the static and moving states: a quadratic curve, e.g. D^3P(t) = 0.
3. In the steady motion state: a linear curve, i.e. D^2P(t) = 0, where D^nP denotes the n-th order differential operation.
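The three motion types can be distinguished by finite-difference tests on a window of the global position curve P(t); the thresholds below are illustrative assumptions:

```python
import numpy as np

def classify_motion(P, eps=1e-3):
    """Classify a window of the global position curve P(t) by its finite differences."""
    d1 = np.diff(P, n=1)   # DP(t)
    d2 = np.diff(P, n=2)   # D^2 P(t)
    d3 = np.diff(P, n=3)   # D^3 P(t)
    if np.all(np.abs(d1) < eps):
        return "static"            # straight line parallel to the time axis, DP(t) = 0
    if np.all(np.abs(d2) < eps):
        return "steady motion"     # linear curve, D^2 P(t) = 0
    if np.all(np.abs(d3) < eps):
        return "transition"        # quadratic curve, D^3 P(t) = 0
    return "unclassified"
```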
Although exemplary embodiments of the present invention have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, substitutions and the like can be made in form and detail without departing from the scope and spirit of the invention as disclosed in the accompanying claims, all of which are intended to fall within the scope of the claims, and that various steps in the various sections and methods of the claimed product can be combined together in any combination. Therefore, the description of the embodiments disclosed in the present invention is not intended to limit the scope of the present invention, but to describe the present invention. Accordingly, the scope of the present invention is not limited by the above embodiments, but is defined by the claims or their equivalents.

Claims (8)

1. A real-time electronic image stabilization method for an underwater robot is characterized by sequentially comprising the following steps:
(1) time alignment is carried out on the acquired image frames and the correspondingly acquired IMU attitude information, and the time consistency of the information acquired by different sensors is determined;
(2) detecting swing information under a robot motion coordinate reference system through an IMU sensor unit, and estimating a robot rotation matrix R based on the swing information, wherein the swing information is one, two or three of swing information in three directions of pitching, rolling and yawing;
(3) estimating a relative state transition matrix d between frames by using the rotation matrix R in the step (2) and the matching relation of the characteristic points between the images of the previous frame and the next frame, and further obtaining the global motion position state relative to the initial frame;
(4) filtering the global position curve of each axial direction of the robot by using a filtering method, and performing motion compensation and jitter elimination according to a motion equation;
(5) in each motion axis, the position of the current frame is moved reversely according to the displacement difference of the filtered smooth curve and the global position curve in each frame to offset the displacement difference, so that the purpose of eliminating jitter is achieved, and finally, the edge of the image is cut to eliminate a blank area generated by the movement of each frame;
wherein, the step (3) includes obtaining a motion estimation equation, which can be expressed as: X′ = R·X + d = θ_pitch·θ_roll·X + d, where X and X′ are the state quantities of the preceding and succeeding frame images, respectively, θ_pitch is the pitch swing and θ_roll is the roll swing;
in the step (5), the original global motion is compensated reversely by directly using the displacement difference between the original global position curve relative to the initial frame and the smoothed curve in each frame.
2. The method of claim 1, wherein: in the step (1), the time alignment takes the lower sampling rate as the time reference at the sensor sampling instants, or attaches a system time stamp to the sampled information at each sampling instant and aligns the information from different sensors according to the nearest-time-difference principle.
3. The method of any of claims 1-2, wherein: the step (2) is specifically: detecting the rotation matrix R of the robot through the IMU unit, wherein R = θ_pitch·θ_roll.
4. The method of claim 1, wherein: the image inter-frame correlation relationship in the step (3) is obtained by a gray module matching-based method, a bitmap-statistics-based correlation calculation method, a gray-statistics-based correlation calculation method, an optical-flow-based correlation calculation method or a feature-based correlation calculation method.
5. The method of claim 1, wherein: the method based on the gray module matching specifically comprises the following steps:
1) detecting image characteristic point information of imaging focal planes of the left and right image sensors by using a characteristic point detection algorithm, and matching characteristic points by using a characteristic point matching algorithm through a characteristic point description operator;
2) deducing three-dimensional coordinate information of the feature points in a camera coordinate system through pixel relations among the matched feature points;
3) constructing a block taking the matched characteristic points as a center by matching pixel information of a projection focal plane of the characteristic points between frames;
4) further correcting the pixel-level-based feature point matching error through the matching information of the matched blocks between frames;
5) and updating the position relation of the matched characteristic points between frames on the projection focal plane, and deducing the conversion state of the low-dimensional image information between frames by using the relation.
6. The method of claim 1, wherein: the filtering method in the step (4) adopts a segmented filtering method, a moving average filtering method, a weighted moving average method, an amplitude limiting filtering method or a particle filtering method.
7. The method of claim 1, wherein: in order to reduce the cumulative calculation and estimation error of the global motion relative to the initial frame, the whole processed video sample may be equally divided into N segments; when eliminating jitter in each segment, two adjacent segments are given an overlap of M frames, where M is theoretically no larger than the total number of frames per segment; the overlapping part is blended by weighted summation, in which the weight of each of the last M frames of the preceding segment and the weight of the corresponding frame of the following segment sum to 1, the weight of the last M frames of the preceding segment decreasing uniformly (in steps of 1/M) and the weight of the first M frames of the following segment increasing uniformly.
8. The method of claim 4, wherein: a FAST feature point extraction algorithm is adopted, or SIFT, SURF or ORB feature point extraction algorithms may also be adopted, together with the BRIEF feature point description operator; feature points are matched by a direct method, points with larger errors are removed with the RANSAC algorithm, and the relative motion relationship between frames is estimated in an optimal-estimation manner.
CN201810921737.5A 2018-08-14 2018-08-14 Real-time electronic image stabilization method for underwater robot Active CN108900775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810921737.5A CN108900775B (en) 2018-08-14 2018-08-14 Real-time electronic image stabilization method for underwater robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810921737.5A CN108900775B (en) 2018-08-14 2018-08-14 Real-time electronic image stabilization method for underwater robot

Publications (2)

Publication Number Publication Date
CN108900775A CN108900775A (en) 2018-11-27
CN108900775B true CN108900775B (en) 2020-09-29

Family

ID=64355018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810921737.5A Active CN108900775B (en) 2018-08-14 2018-08-14 Real-time electronic image stabilization method for underwater robot

Country Status (1)

Country Link
CN (1) CN108900775B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020132917A1 (en) * 2018-12-26 2020-07-02 Huawei Technologies Co., Ltd. Imaging device, image stabilization device, imaging method and image stabilization method
CN112414400B (en) * 2019-08-21 2022-07-22 浙江商汤科技开发有限公司 Information processing method and device, electronic equipment and storage medium
CN113766121B (en) * 2021-08-10 2023-08-08 国网河北省电力有限公司保定供电分公司 Device and method for maintaining image stability based on quadruped robot
CN117998212A (en) * 2022-10-27 2024-05-07 杭州零零科技有限公司 Image stabilizing system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009245542A (en) * 2008-03-31 2009-10-22 Sony Corp Information processing device and method, program, and recording/reproducing device
CN102780846A (en) * 2012-07-11 2012-11-14 清华大学 Electronic image stabilization method based on inertial navigation information
CN103139568A (en) * 2013-02-05 2013-06-05 上海交通大学 Video image stabilizing method based on sparseness and fidelity restraining
CN103402056A (en) * 2013-07-31 2013-11-20 北京阳光加信科技有限公司 Compensation processing method and system applied to image capture device
CN106027852A (en) * 2016-06-24 2016-10-12 西北工业大学 Video image stabilization method for micro/nano-satellite
CN106375669A (en) * 2016-09-30 2017-02-01 重庆零度智控智能科技有限公司 Image stabilization method and apparatus, and drone
CN108259736A (en) * 2016-12-29 2018-07-06 昊翔电能运动科技(昆山)有限公司 Holder stability augmentation system and holder increase steady method

Also Published As

Publication number Publication date
CN108900775A (en) 2018-11-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 806, block B, Jiuzhou electric appliance building, No. 007, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen Yidong Blue Technology Co.,Ltd.

Address before: 518000 room 209, building 17, maker Town, No. 1201 Liuxian Avenue, Taoyuan Street, Nanshan District, Shenzhen, Guangdong

Patentee before: SHENZHEN NAVA TECHNOLOGY Co.,Ltd.