CN109543694A - Visual simultaneous localization and mapping method based on a point-feature sparsification strategy - Google Patents
Visual simultaneous localization and mapping method based on a point-feature sparsification strategy
- Publication number
- CN109543694A (application CN201811138347.7A)
- Authority
- CN
- China
- Prior art keywords
- landmark
- feature
- constraint
- moment
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/206—Drawing of charts or graphs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Abstract
The invention discloses a visual simultaneous localization and mapping (SLAM) method based on a point-feature sparsification strategy. Feature extraction is first performed on the underwater images acquired by the sensor: point features are extracted from sonar images with the classical SIFT algorithm, and the SIFT features are then matched under several matching constraints, which improves matching accuracy. During EKF-SLAM, sparsity constraints are imposed on the landmarks to control their distribution in both position space and feature space, which effectively prevents ambiguous landmark matches and thereby improves the accuracy of data association. The introduced landmark-library sparsification strategy simultaneously limits the landmark distribution density in position space and feature space; it reasonably bounds the growth rate of the landmark set, reduces the computational load, and shortens the localization time, while preserving the diversity of landmark features and improving the accuracy of data association.
Description
Technical field
The present invention relates to the field of underwater positioning and navigation, and in particular to a visual simultaneous localization and mapping method based on a point-feature sparsification strategy.
Background technique
Simultaneous Localization and Mapping (SLAM) is currently a central research topic in the field of positioning and navigation. The SLAM problem can be stated as follows: a moving body with a known motion model starts from an unknown initial point in a two-dimensional plane for which no prior map exists, and moves through a space containing several environmental features; from the data acquired by its sensors, the moving body must determine its own two-dimensional coordinates and the three-dimensional coordinates of the environmental features.
At present, one of the most common solutions to the SLAM problem is EKF-SLAM (Extended Kalman Filter based Simultaneous Localization and Mapping). The existing EKF-SLAM algorithm is an optimal recursive loop estimator, but because the number of points in the space is large, its computational load becomes heavy; the real-time performance of EKF-SLAM is therefore poor, and it cannot meet current requirements for real-time positioning.
Summary of the invention
The purpose of the present invention is to shorten the localization time, reduce the computational complexity, and improve data-association accuracy in underwater environments where landmark disturbance factors are significant. To that end, a visual simultaneous localization and mapping method based on a point-feature sparsification strategy is provided. When an underwater vehicle operates in a small space, or in a complex sea area with many features, the computational load can become excessive; the method therefore introduces a landmark-library sparsity strategy that limits the landmark distribution in both position space and feature space, preserves the diversity of landmark features, and improves the accuracy of data association.
The technical scheme adopted by the invention is a visual simultaneous localization and mapping method based on a point-feature sparsification strategy, comprising the following steps:
Step 1: extract point features from the sonar images with the classical SIFT algorithm, and match the SIFT features under the epipolar constraint, disparity constraint, direction constraint, scale constraint, and uniqueness constraint;
Step 2: impose sparsity constraints on the landmarks, and perform localization and mapping with the EKF-SLAM algorithm.
Further, step 1 specifically includes the following steps:
Step 1-1: SIFT feature detection in scale space
First, a Gaussian pyramid is built with the Gaussian kernel G(x, y, σ). The scale space L(x, y, σ) of an image is generated by convolving a variable-scale Gaussian function G(x, y, σ) with the original image I(x, y):

L(x, y, σ) = G(x, y, σ) * I(x, y)

where (x, y) are the spatial coordinates (the pixel position in the image), G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) is the variable-scale Gaussian function, * denotes convolution, and σ is the scale-space factor.
Then, two adjacent Gaussian scale-space images are subtracted to obtain the difference-of-Gaussian (DoG) response D(x, y, σ):

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the constant factor between adjacent scales.
Step 1-2: SIFT feature-point vector generation
The SIFT algorithm takes a 16×16 window centred on each detected feature point and divides it evenly into 4×4 sub-regions. In each sub-region an 8-direction gradient orientation histogram is computed and the accumulated magnitude of each gradient direction is recorded. The histogram of each sub-region divides 0°–360° into 8 direction ranges of 45° each, which yields 4×4×8 = 128 values and hence a 128-dimensional feature vector; the 128-dimensional feature vector is then normalized.
Step 1-3: SIFT feature-vector distance
The distance d(x, y) between feature-point vectors is computed with the Euclidean distance:

d(x, y) = sqrt( Σ_{i=1}^{128} (x_i − y_i)² )

where 128 is the dimension of the generated feature vectors.
Step 1-4: SIFT feature matching
Several matching constraints are introduced to reduce the number of mismatched points:
Epipolar constraint: the vertical coordinate difference of the feature points is less than 1 pixel;
Disparity constraint: the horizontal coordinate difference of the feature points is less than 20 pixels;
Direction constraint: the difference in feature-point orientation is less than 20°;
Scale constraint: the scale difference between the left and right feature points is less than 1.5;
Uniqueness constraint: if one feature point has multiple feature points satisfying the constraints, the match of that feature point is considered invalid.
Image feature matching is performed under the above constraints, which improves matching accuracy and reduces the number of mismatched feature points.
Further, step 2 specifically includes the following steps:
Step 2-1: impose sparsity constraints on the landmarks
The EKF-SLAM algorithm places the robot pose and the map information in a single state vector and updates them simultaneously; the state vector is

X(k) = [Xr(k)^T, Xl(k)^T]^T

where Xr(k) = [x_{r,k}, y_{r,k}, θ_{r,k}]^T is the estimated robot pose in the global coordinate frame, and Xl(k) = [x_1, y_1, ..., x_n, y_n]^T holds the coordinates of the landmarks in the map.
For any landmark in the map, let A and B be its two spatial neighbourhoods, with A ⊂ B, and let C denote the region outside B. Sparsity is imposed on the landmarks in both position space and feature space: inside region A of a landmark, no other landmark is allowed; inside B but outside A, no landmark with the same appearance is allowed.
The landmarks already in the map are traversed, and a new landmark is inserted only when the insertion condition holds with respect to all of them, i.e. any newly found landmark must satisfy at least one of conditions (5) and (6), where d denotes the distance between an existing landmark in the map and the newly found one, and ε is a distance threshold constant.
Step 2-2: robot position prediction
From the robot pose at step k and the control input, predict the robot state and error covariance matrix P(k+1) at step k+1:

X(k+1|k) = f(X(k|k), u(k))
P(k+1|k) = F(k) P(k|k) F(k)^T + Q_k

where X(k+1|k) is the state estimate for time k+1, X(k|k) is the state estimate at time k, u(k) is the control input, P(k+1|k) is the covariance estimate for time k+1, F(k) is the partial derivative (Jacobian) of f with respect to X(k), and Q_k is the process-noise covariance matrix.
Step 2-3: observation. At time k+1, the robot is driven to the preset path point under the control of the motion model, perceives the environment with its on-board sensors, and acquires data.
Step 2-4: data association. After the observation is completed, matching search is performed with a hybrid adaptive data-association method.
Step 2-5: state update

X(k+1|k+1) = X(k+1|k) + K(k+1) [z(k+1) − z(k+1|k)]
P(k+1|k+1) = [I − K(k+1) J_h(k+1)] P(k+1|k)

where z(k+1) is the actual measurement at time k+1, z(k+1|k) is the predicted measurement at time k+1, K(k+1) is the Kalman gain at time k+1, and J_h(k+1) is the measurement Jacobian at time k+1.
At time k+1, once all successfully associated feature pairs have been used to correct the predicted state X(k+1) and P(k+1), the "prediction-correction" cycle of EKF-SLAM based on the current observation is complete.
Step 2-6: map update, producing an incremental map.
The beneficial effects of the present invention are as follows: the visual simultaneous localization and mapping method based on a point-feature sparsification strategy introduces a landmark-library sparsification strategy that simultaneously limits the landmark distribution density in position space and feature space. It reasonably bounds the growth rate of the landmark set, reduces the computational load, and shortens the localization time, while preserving the diversity of landmark features and improving the accuracy of data association.
Description of the drawings
Fig. 1: flow chart of the visual simultaneous localization and mapping method based on a point-feature sparsification strategy of the present invention;
Fig. 2: working principle of simultaneous localization and mapping;
Fig. 3: feature-point sparsity constraint.
Specific embodiments
In order to further explain the content, features, and effects of the present invention, the following embodiments are given and described in detail with reference to the accompanying drawings.
In the visual simultaneous localization and mapping method based on a point-feature sparsification strategy of the present invention, feature extraction is first performed on the underwater images acquired by the sensors: the classical SIFT (Scale-Invariant Feature Transform) algorithm is used to extract point features from the sonar images, and the SIFT features are then matched under several matching constraints, improving matching accuracy. During EKF-SLAM, sparsity constraints are imposed on the landmarks to control their distribution in position space and feature space, which effectively prevents ambiguous landmark matches and thereby improves the accuracy of data association; this improvement reduces the computational load while shortening the localization time. The details are as follows.
As shown in Fig. 1, the visual simultaneous localization and mapping method based on a point-feature sparsification strategy includes the following steps:
Step 1: extract point features from the sonar images with the classical SIFT algorithm, and match the SIFT features under the epipolar constraint, disparity constraint, direction constraint, scale constraint, and uniqueness constraint.
Step 1-1: SIFT feature detection in scale space
First, a Gaussian pyramid is built with the Gaussian kernel G(x, y, σ). The scale space L(x, y, σ) of an image is generated by convolving a variable-scale Gaussian function G(x, y, σ) with the original image I(x, y):

L(x, y, σ) = G(x, y, σ) * I(x, y)

where (x, y) are the spatial coordinates (the pixel position in the image), G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) is the variable-scale Gaussian function, * denotes convolution, and σ is the scale-space factor; the smaller σ is, the less the image is smoothed and the smaller the corresponding scale.
In order to detect key-point positions effectively in scale space, two adjacent Gaussian scale-space images are subtracted to obtain the DoG (Difference of Gaussians) response D(x, y, σ):

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the constant factor between adjacent scales.
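The scale-space construction above can be sketched in a few lines of Python (an illustrative sketch, not part of the patent: the base scale σ = 1.6, the factor k = √2, and the number of scales are conventional SIFT-style defaults chosen here for demonstration):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3*sigma and normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def gaussian_blur(image, sigma):
    """Separable convolution: L(x, y, sigma) = G(x, y, sigma) * I(x, y)."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_stack(image, sigma=1.6, k=2 ** 0.5, num_scales=3):
    """DoG responses D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    for a stack of adjacent Gaussian scale-space images."""
    image = image.astype(np.float64)
    blurred = [gaussian_blur(image, sigma * k ** i) for i in range(num_scales + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(num_scales)]
```

Extrema of the DoG stack across space and scale are the candidate SIFT key points.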
Step 1-2: SIFT feature-point vector generation
The SIFT algorithm takes a 16×16 window centred on each detected feature point and divides it evenly into 4×4 sub-regions. In each sub-region an 8-direction gradient orientation histogram is computed and the accumulated magnitude of each gradient direction is recorded. The histogram of each sub-region divides 0°–360° into 8 direction ranges of 45° each, which yields 4×4×8 = 128 values in total and hence a 128-dimensional feature vector; the 128-dimensional feature vector is then normalized.
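The 4×4×8 descriptor layout can be sketched as follows (an illustrative Python sketch; a full SIFT implementation also rotates the patch to the dominant orientation and applies Gaussian weighting and trilinear interpolation, which are omitted here):

```python
import numpy as np

def sift_descriptor(patch):
    """128-dimensional descriptor for a 16x16 patch centred on a key point:
    4x4 sub-regions, each with an 8-bin (45-degree) orientation histogram."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # orientation in [0, 360) degrees, quantized into 8 bins of 45 degrees
    orientation = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    bins = (orientation // 45).astype(int)

    hist = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            hist[i // 4, j // 4, bins[i, j]] += magnitude[i, j]
    vec = hist.ravel()                      # 4 * 4 * 8 = 128 values
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec  # normalized 128-d vector
```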
Step 1-3: SIFT feature-vector distance
To reduce computational complexity, the present invention computes the feature-point vector distance d(x, y) with the Euclidean distance:

d(x, y) = sqrt( Σ_{i=1}^{128} (x_i − y_i)² )

where 128 is the dimension of the generated feature vectors.
Step 1-4: SIFT feature matching
Several matching constraints are introduced to reduce the number of mismatched points:
Epipolar constraint: the vertical coordinate difference of the feature points is less than 1 pixel;
Disparity constraint: the horizontal coordinate difference of the feature points is less than 20 pixels;
Direction constraint: the difference in feature-point orientation is less than 20°;
Scale constraint: the scale difference between the left and right feature points is less than 1.5;
Uniqueness constraint: if one feature point has multiple feature points satisfying the constraints, the match of that feature point is considered invalid.
Image feature matching is performed under the above constraints, which improves matching accuracy and reduces the number of mismatched feature points.
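The five constraints can be applied as a simple filter over candidate pairs (an illustrative sketch; the field names x, y, angle, scale are assumptions for demonstration, not identifiers from the patent):

```python
def constraints_ok(left, right):
    """Check the epipolar, disparity, direction, and scale constraints
    from step 1-4 for one candidate left/right feature pair."""
    return (
        abs(left["y"] - right["y"]) < 1                # epipolar: vertical offset
        and abs(left["x"] - right["x"]) < 20           # disparity: horizontal offset
        and abs(left["angle"] - right["angle"]) < 20   # direction difference (deg)
        and abs(left["scale"] - right["scale"]) < 1.5  # scale difference
    )

def match_features(left_points, right_points):
    """Uniqueness constraint: a left point with several admissible right
    candidates is considered a failed match and discarded."""
    matches = {}
    for i, lp in enumerate(left_points):
        candidates = [j for j, rp in enumerate(right_points) if constraints_ok(lp, rp)]
        if len(candidates) == 1:
            matches[i] = candidates[0]
    return matches
```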
Step 2: impose sparsity constraints on the landmarks, and perform localization and mapping with the EKF-SLAM algorithm. The working principle of EKF-based simultaneous localization and mapping is shown in Fig. 2: the white marker is the actual underwater vehicle and the black marker is the estimated underwater vehicle; a white "o" denotes an actual landmark of the underwater vehicle and a black "o" a landmark estimated by the algorithm; X0 is the actual position of the underwater vehicle at time 0, X1 its actual position at time 1, u1 the estimation-and-update process, X1' the position of the underwater vehicle after the first estimate and update, X1'' its position after the second estimate and update, and mi a landmark in the map; the remaining symbols denote the observed landmarks.
Step 2-1: impose sparsity constraints on the landmarks (i.e., build the system landmark state)
The EKF-SLAM algorithm places the robot pose and the map information in a single state vector and updates them simultaneously; the state vector is

X(k) = [Xr(k)^T, Xl(k)^T]^T

where Xr(k) = [x_{r,k}, y_{r,k}, θ_{r,k}]^T is the estimated robot pose in the global coordinate frame, and Xl(k) = [x_1, y_1, ..., x_n, y_n]^T holds the coordinates of the landmarks in the map.
For any landmark in the map, a landmark sparsity restriction is introduced in order to control the number and distribution density of landmarks, as shown in Fig. 3: the light grey circular and light grey triangular marks represent the two kinds of landmarks already in the map library, the dark grey triangular mark represents the newly found m-th landmark Lvm, and the dark grey circular mark represents the newly found n-th landmark Lvn. Let A and B be the two spatial neighbourhoods of a landmark, with A ⊂ B, and let C be the region outside B. Sparsity is imposed in position space and feature space: inside region A of a landmark no other landmark is allowed, and inside B but outside A no landmark with the same appearance is allowed. In actual operation, the landmarks already in the map are traversed, and a new landmark is inserted only when the insertion condition holds with respect to all of them, i.e. any newly found landmark must satisfy at least one of conditions (5) and (6), where d denotes the distance between an existing landmark in the map and the newly found one, and ε is a distance threshold constant.
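A sketch of the insertion test (illustrative Python; the radii for neighbourhoods A and B and the descriptor tolerance are hypothetical stand-ins for the thresholds in conditions (5) and (6)):

```python
import numpy as np

def allow_insertion(new_pos, new_desc, landmarks, r_a=2.0, r_b=5.0, desc_tol=0.3):
    """Sparsity check before inserting a new landmark:
    - within neighbourhood A (distance < r_a), no other landmark at all;
    - within B outside A (r_a <= distance < r_b), no landmark whose
      descriptor is nearly identical (descriptor distance below desc_tol)."""
    new_pos = np.asarray(new_pos, dtype=float)
    new_desc = np.asarray(new_desc, dtype=float)
    for pos, desc in landmarks:
        d = np.linalg.norm(np.asarray(pos, dtype=float) - new_pos)
        if d < r_a:
            return False  # violates the position-space constraint (region A)
        if d < r_b and np.linalg.norm(np.asarray(desc, dtype=float) - new_desc) < desc_tol:
            return False  # violates the feature-space constraint (region B outside A)
    return True
```

Only landmarks passing this check are appended to the state vector, which bounds the growth rate of the landmark set.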
Step 2-2: robot position prediction
From the robot pose at step k and the control input, predict the robot state and error covariance matrix P(k+1) at step k+1:

X(k+1|k) = f(X(k|k), u(k))
P(k+1|k) = F(k) P(k|k) F(k)^T + Q_k

where X(k+1|k) is the state estimate for time k+1, X(k|k) is the state estimate at time k, u(k) is the control input, P(k+1|k) is the covariance estimate for time k+1, F(k) is the partial derivative (Jacobian) of f with respect to X(k), and Q_k is the process-noise covariance matrix.
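The prediction step is the standard EKF time update; a generic sketch with a caller-supplied motion model f and its Jacobian (illustrative, not the patent's specific vehicle model):

```python
import numpy as np

def ekf_predict(x, P, u, f, F_jac, Q):
    """EKF time update: X(k+1|k) = f(X(k|k), u(k)),
    P(k+1|k) = F P F^T + Q, with F the Jacobian of f at X(k)."""
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```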
Step 2-3: observation. At time k+1, the robot is driven to the preset path point under the control of the motion model, perceives the environment with its on-board sensors, and acquires data.
Step 2-4: data association. After the observation is completed, matching search is performed with a hybrid adaptive data-association method.
Step 2-5: state update

X(k+1|k+1) = X(k+1|k) + K(k+1) [z(k+1) − z(k+1|k)]
P(k+1|k+1) = [I − K(k+1) J_h(k+1)] P(k+1|k)

where z(k+1) is the actual measurement at time k+1, z(k+1|k) is the predicted measurement at time k+1, K(k+1) is the Kalman gain at time k+1, and J_h(k+1) is the measurement Jacobian at time k+1.
At time k+1, once all successfully associated feature pairs have been used to correct the predicted state X(k+1) and P(k+1), the "prediction-correction" cycle of EKF-SLAM based on the current observation is complete.
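The correction step can be sketched generically as follows (illustrative; R is the measurement-noise covariance, which the patent text does not name explicitly but which the Kalman gain requires):

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H_jac, R):
    """EKF measurement update:
    K = P H^T (H P H^T + R)^-1,
    X(k+1|k+1) = X(k+1|k) + K (z - h(X(k+1|k))),
    P(k+1|k+1) = (I - K H) P(k+1|k)."""
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain K(k+1)
    x_new = x_pred + K @ (z - h(x_pred))  # correct with the innovation
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```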
Step 2-6: map update, producing an incremental map.
The present invention is an improvement of visual simultaneous localization and mapping for environments with dense feature points, based on the control of point-feature sparsity: the feature points in the map library are limited in distribution density simultaneously in position space and feature space, which overcomes the huge computational load caused by the rapid growth of feature points, achieves a uniform distribution of feature points, and improves the accuracy of data association. The method of the present invention mainly solves the problem of shortening the localization time while meeting the positioning-accuracy requirement.
Although the preferred embodiments of the present invention have been described above with reference to the accompanying drawings, the invention is not limited to the specific embodiments described. The embodiments are merely illustrative, not restrictive; inspired by the present invention, those of ordinary skill in the art may devise many further forms without departing from the scope protected by the purpose and claims of the present invention, and all of these fall within the protection scope of the present invention.
Claims (3)
1. A visual simultaneous localization and mapping method based on a point-feature sparsification strategy, characterized by comprising the following steps:
Step 1: extracting point features from sonar images with the classical SIFT algorithm, and matching the SIFT features under the epipolar constraint, disparity constraint, direction constraint, scale constraint, and uniqueness constraint;
Step 2: imposing sparsity constraints on the landmarks, and performing localization and mapping with the EKF-SLAM algorithm.
2. The visual simultaneous localization and mapping method based on a point-feature sparsification strategy according to claim 1, characterized in that step 1 specifically includes the following steps:
Step 1-1: SIFT feature detection in scale space
First, a Gaussian pyramid is built with the Gaussian kernel G(x, y, σ); the scale space L(x, y, σ) of an image is generated by convolving a variable-scale Gaussian function G(x, y, σ) with the original image I(x, y):
L(x, y, σ) = G(x, y, σ) * I(x, y)
where (x, y) are the spatial coordinates (the pixel position in the image), G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) is the variable-scale Gaussian function, * denotes convolution, and σ is the scale-space factor;
then, two adjacent Gaussian scale-space images are subtracted to obtain the difference-of-Gaussian image D(x, y, σ):
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)
where k is the constant factor between adjacent scales;
Step 1-2: SIFT feature-point vector generation
The SIFT algorithm takes a 16×16 window centred on each detected feature point and divides it evenly into 4×4 sub-regions; in each sub-region an 8-direction gradient orientation histogram is computed and the accumulated magnitude of each gradient direction is recorded; the histogram of each sub-region divides 0°–360° into 8 direction ranges of 45° each, yielding 4×4×8 = 128 values and hence a 128-dimensional feature vector, which is then normalized;
Step 1-3: SIFT feature-vector distance
The feature-point vector distance d(x, y) is computed with the Euclidean distance:
d(x, y) = sqrt( Σ_{i=1}^{128} (x_i − y_i)² )
where 128 is the dimension of the generated feature vectors;
Step 1-4: SIFT feature matching
Several matching constraints are introduced to reduce the number of mismatched points, the matching constraints comprising:
an epipolar constraint: the vertical coordinate difference of the feature points is less than 1 pixel;
a disparity constraint: the horizontal coordinate difference of the feature points is less than 20 pixels;
a direction constraint: the difference in feature-point orientation is less than 20°;
a scale constraint: the scale difference between the left and right feature points is less than 1.5;
a uniqueness constraint: if one feature point has multiple feature points satisfying the constraints, the match of that feature point is considered invalid;
image feature matching is performed under the above constraints, which improves matching accuracy and reduces the number of mismatched feature points.
3. The visual simultaneous localization and mapping method based on a point-feature sparsification strategy according to claim 1, characterized in that step 2 specifically includes the following steps:
Step 2-1: imposing sparsity constraints on the landmarks
The EKF-SLAM algorithm places the robot pose and the map information in a single state vector and updates them simultaneously, the state vector being
X(k) = [Xr(k)^T, Xl(k)^T]^T
where Xr(k) = [x_{r,k}, y_{r,k}, θ_{r,k}]^T is the estimated robot pose in the global coordinate frame and Xl(k) = [x_1, y_1, ..., x_n, y_n]^T holds the coordinates of the landmarks in the map;
for any landmark in the map, let A and B be its two spatial neighbourhoods, with A ⊂ B, and let C denote the region outside B; sparsity is imposed on the landmarks in position space and feature space: inside region A of a landmark no other landmark is allowed, and inside B but outside A no landmark with the same appearance is allowed;
the landmarks in the map are traversed, and a new landmark is inserted only when the insertion condition holds with respect to all of them, i.e. any newly found landmark satisfies at least one of conditions (5) and (6), where d is the distance between an existing landmark in the map and the newly found one, and ε is a distance threshold constant;
Step 2-2: robot position prediction
From the robot pose at step k and the control input, the robot state and error covariance matrix P(k+1) at step k+1 are predicted:
X(k+1|k) = f(X(k|k), u(k))
P(k+1|k) = F(k) P(k|k) F(k)^T + Q_k
where X(k+1|k) is the state estimate for time k+1, X(k|k) is the state estimate at time k, u(k) is the control input, P(k+1|k) is the covariance estimate for time k+1, F(k) is the partial derivative of f with respect to X(k), and Q_k is the process-noise covariance matrix;
Step 2-3: observation: at time k+1, the robot is driven to the preset path point under the control of the motion model, perceives the environment with its on-board sensors, and acquires data;
Step 2-4: data association: after the observation is completed, matching search is performed with a hybrid adaptive data-association method;
Step 2-5: state update:
X(k+1|k+1) = X(k+1|k) + K(k+1) [z(k+1) − z(k+1|k)]
P(k+1|k+1) = [I − K(k+1) J_h(k+1)] P(k+1|k)
where z(k+1) is the actual measurement at time k+1, z(k+1|k) is the predicted measurement at time k+1, K(k+1) is the Kalman gain at time k+1, and J_h(k+1) is the measurement Jacobian at time k+1;
at time k+1, once all successfully associated feature pairs have been used to correct the predicted state X(k+1) and P(k+1), the "prediction-correction" cycle of EKF-SLAM based on the current observation is complete;
Step 2-6: map update, producing an incremental map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811138347.7A CN109543694A (en) | 2018-09-28 | 2018-09-28 | A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109543694A true CN109543694A (en) | 2019-03-29 |
Family
ID=65843645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811138347.7A Pending CN109543694A (en) | 2018-09-28 | 2018-09-28 | A kind of visual synchronization positioning and map constructing method based on the sparse strategy of point feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543694A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175654A (en) * | 2019-05-29 | 2019-08-27 | 广州小鹏汽车科技有限公司 | A kind of update method and system of track road sign |
CN112581610A (en) * | 2020-10-16 | 2021-03-30 | 武汉理工大学 | Robust optimization method and system for establishing map from multi-beam sonar data |
CN112665584A (en) * | 2020-12-30 | 2021-04-16 | 哈尔滨工程大学 | Underwater robot positioning and composition method based on multi-sensor fusion |
CN113052940A (en) * | 2021-03-14 | 2021-06-29 | 西北工业大学 | Space-time associated map real-time construction method based on sonar |
CN114228959A (en) * | 2021-12-29 | 2022-03-25 | 中国科学院沈阳自动化研究所 | Underwater robot polar region under-ice recovery method based on acoustic road sign and optical road sign combined auxiliary navigation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010013236A1 (en) * | 2008-07-29 | 2010-02-04 | Superfish Ltd | Increasing interest point coverage in an image |
KR20110116609A (en) * | 2010-04-19 | 2011-10-26 | 인하대학교 산학협력단 | High speed slam system and method based on graphic processing unit |
CN104574387A (en) * | 2014-12-29 | 2015-04-29 | 张家港江苏科技大学产业技术研究院 | Image processing method in underwater vision SLAM system |
CN105469405A (en) * | 2015-11-26 | 2016-04-06 | 清华大学 | Visual ranging-based simultaneous localization and map construction method |
CN106679648A (en) * | 2016-12-08 | 2017-05-17 | 东南大学 | Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm |
CN106910229A (en) * | 2015-12-23 | 2017-06-30 | 韩华泰科株式会社 | Image processing equipment and method |
CN106931962A (en) * | 2017-03-29 | 2017-07-07 | 武汉大学 | A kind of real-time binocular visual positioning method based on GPU SIFT |
CN107742276A (en) * | 2017-03-27 | 2018-02-27 | 苏州星宇测绘科技有限公司 | One kind is based on the quick processing system of the airborne integration of unmanned aerial vehicle remote sensing image and method |
-
2018
- 2018-09-28 CN CN201811138347.7A patent/CN109543694A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010013236A1 (en) * | 2008-07-29 | 2010-02-04 | Superfish Ltd | Increasing interest point coverage in an image |
KR20110116609A (en) * | 2010-04-19 | 2011-10-26 | 인하대학교 산학협력단 | High speed slam system and method based on graphic processing unit |
CN104574387A (en) * | 2014-12-29 | 2015-04-29 | 张家港江苏科技大学产业技术研究院 | Image processing method in underwater vision SLAM system |
CN105469405A (en) * | 2015-11-26 | 2016-04-06 | 清华大学 | Visual ranging-based simultaneous localization and map construction method |
CN106910229A (en) * | 2015-12-23 | 2017-06-30 | 韩华泰科株式会社 | Image processing equipment and method |
CN106679648A (en) * | 2016-12-08 | 2017-05-17 | 东南大学 | Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm |
CN107742276A (en) * | 2017-03-27 | 2018-02-27 | 苏州星宇测绘科技有限公司 | One kind is based on the quick processing system of the airborne integration of unmanned aerial vehicle remote sensing image and method |
CN106931962A (en) * | 2017-03-29 | 2017-07-07 | 武汉大学 | Real-time binocular visual localization method based on GPU-SIFT |
Non-Patent Citations (1)
Title |
---|
郭文县等 (Guo Wenxian et al.): "Research on Indoor Robot SLAM Based on a Feature Sparsification Strategy" (基于特征稀疏策略的室内机器人SLAM研究), 《计算机工程与应用》 (Computer Engineering and Applications) *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175654A (en) * | 2019-05-29 | 2019-08-27 | 广州小鹏汽车科技有限公司 | Method and system for updating trajectory landmarks |
CN110175654B (en) * | 2019-05-29 | 2021-06-01 | 广州小鹏自动驾驶科技有限公司 | Method and system for updating trajectory landmarks |
CN112581610A (en) * | 2020-10-16 | 2021-03-30 | 武汉理工大学 | Robust optimization method and system for building a map from multi-beam sonar data |
CN112581610B (en) * | 2020-10-16 | 2023-06-13 | 武汉理工大学 | Robust optimization method and system for building a map from multi-beam sonar data |
CN112665584A (en) * | 2020-12-30 | 2021-04-16 | 哈尔滨工程大学 | Underwater robot localization and mapping method based on multi-sensor fusion |
CN112665584B (en) * | 2020-12-30 | 2022-07-15 | 哈尔滨工程大学 | Underwater robot localization and mapping method based on multi-sensor fusion |
CN113052940A (en) * | 2021-03-14 | 2021-06-29 | 西北工业大学 | Sonar-based real-time construction method for a spatio-temporal correlation map |
CN113052940B (en) * | 2021-03-14 | 2024-03-15 | 西北工业大学 | Sonar-based real-time construction method for a spatio-temporal correlation map |
CN114228959A (en) * | 2021-12-29 | 2022-03-25 | 中国科学院沈阳自动化研究所 | Under-ice recovery method for underwater robots in polar regions based on combined acoustic and optical landmark aided navigation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method | |
CN109543694A (en) | Visual simultaneous localization and mapping method based on a point-feature sparsification strategy | |
US20220028163A1 (en) | Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images | |
EP3309751B1 (en) | Image processing device, method, and program | |
JP4677046B2 (en) | Fast human pose estimation using appearance and motion via multidimensional boost regression | |
CN107677274B (en) | Real-time computation method of navigation information for autonomous landing of unmanned aerial vehicles based on binocular vision | |
CN106529538A (en) | Method and device for positioning aircraft | |
CN104392228A (en) | Unmanned aerial vehicle image target class detection method based on conditional random field model | |
WO2012084703A1 (en) | Detection and tracking of moving objects | |
CN108615246A (en) | Method for improving the robustness of a visual odometry system and reducing its computational cost | |
Chen et al. | Transforming a 3-d lidar point cloud into a 2-d dense depth map through a parameter self-adaptive framework | |
WO2021021862A1 (en) | Mapping and localization system for autonomous vehicles | |
CN109214254B (en) | Method and device for determining displacement of robot | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
Civera et al. | Dimensionless monocular SLAM | |
CN108573280A (en) | Method for autonomous bridge passage by an unmanned surface vehicle | |
CN108369739A (en) | Object detection device and object detection method | |
JP2022080303A (en) | Lidar localization using optical flow | |
Sergiyenko et al. | Machine vision sensors | |
CN114358133A (en) | Loop-closure frame detection method based on semantics-assisted binocular vision SLAM | |
Wong et al. | Monocular localization within sparse voxel maps | |
Kumar et al. | Template matching application in geo-referencing of remote sensing temporal image | |
CN115830116A (en) | Robust visual odometry method | |
CN115170621A (en) | Target tracking method and system under dynamic background based on relevant filtering framework | |
Abratkiewicz et al. | The concept of applying a SIFT algorithm and orthophotomaps in SAR-based augmented integrity navigation systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20190329 |