CN103278170A - Cascade map creation method for a mobile robot based on salient scene point detection - Google Patents

Cascade map creation method for a mobile robot based on salient scene point detection

Info

Publication number
CN103278170A
Authority
CN
China
Prior art keywords
map
node
mobile robot
robot
topological
Prior art date
Application number
CN2013101835776A
Other languages
Chinese (zh)
Other versions
CN103278170B (en)
Inventor
钱堃
马旭东
戴先中
房芳
Original Assignee
东南大学
Priority date
Filing date
Publication date
Application filed by 东南大学
Priority to CN201310183577.6A
Publication of CN103278170A publication Critical patent/CN103278170A/en
Application granted granted Critical
Publication of CN103278170B publication Critical patent/CN103278170B/en

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 — Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 — Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Abstract

The invention relates to the technical field of mobile robot navigation and discloses a cascade map creation method for a mobile robot based on salient scene point detection. The method comprises the steps of: 1) detecting online, from the image data collected by the mobile robot's sensors, the natural-scene landmarks corresponding to salient scenes, and generating topological nodes of a global map from them; 2) updating the pose of the mobile robot and the local grid sub-map; and 3) building a global topological map structure with the salient scene points as topological nodes, and, on the basis of loop-closure detection of the robot trajectory, introducing a weighted scan matching method and a relaxation method to optimize the topological structure and ensure the global consistency of the topological map. The method is applicable to autonomous path planning and navigation for various mobile robots in large-scale indoor environments comprising several rooms and corridors.

Description

Cascade map creation method for a mobile robot based on salient scene point detection

Technical field

The present invention relates to the technical field of mobile robot navigation, and in particular to a cascade map creation method for a mobile robot based on salient scene point detection.

Background art

A deep understanding and exploitation of its workspace environment by an intelligent robot is an important prerequisite for autonomous intelligent behavior. Only after a mobile robot has obtained an environment map can it perform path planning, trajectory tracking and global localization, and thus reach its destination reliably. At present, the autonomous map creation methods commonly used by mobile robots fall into the following three classes:

1. The robot explores and navigates autonomously in the environment and creates an occupancy grid map from laser scans of the environment together with sensors such as the odometer (see "Grisetti G. Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters. IEEE Transactions on Robotics, 2007, 23(1): 34-46"). Such maps are easy to maintain and can be used directly for localization computations, but their metric accuracy depends on the odometer precision and on the uncertainty of the range sensor, and the amount of data to be stored and maintained is large.

2. The robot navigates autonomously in the environment according to certain rules, performs topological localization over a period of time with sensors such as sonar and the odometer, and connects the traversed routes into a topological structure, forming a topological map (see "B. Kuipers. Local Metrical and Global Topological Maps in the Hybrid Spatial Semantic Hierarchy. IEEE International Conference on Robotics and Automation, 2004, 4845-4851"). A topological map abstracts away the quantitative description of objects in spatial coordinates; its advantage is a compact and concise representation, but it cannot provide an accurate description of the geometric information of the environment.

3. The robot moves under the control of an operator and uses a laser sensor or a stereo camera on a pan-tilt unit to obtain depth images, from which a three-dimensional point cloud map of the environment is reconstructed (see "P. Henry. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. The International Journal of Robotics Research, 2012"). A 3D point cloud map can describe obstacle height information, but its construction is computationally very expensive, and the heavy computation burden makes it difficult to meet real-time requirements in large-scale environments.

Traditional grid-based simultaneous localization and mapping (SLAM) methods can solve the joint probability estimation problem of online grid map creation under the coupled uncertainties of the robot's own pose and the environment features. However, as the environment scale grows and the environment becomes more complex and less structured, such methods describe the environment too simplistically and their computational efficiency is low, which limits the robot's deeper understanding of the environment. Starting from the characteristics of natural environment scenes that humans can understand, and without resorting to artificial landmarks, it is therefore of great significance to build an efficient environment description model by fusing the robot's on-board vision, ranging and other multi-sensor information, and to solve the problems of simultaneous robot localization and online map creation.

When mixed information of a large-scale complex environment, such as landmarks, places, grids and the poses of environment sensors, needs to be described, a mixed-form environment model should be established, for example a topology/grid hybrid map or a grid/feature hybrid map (see "Z. Lin. Recognition-based Indoor Topological Navigation Using Robust Invariant Features. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, 3975-3980"). Under such environment descriptions, simultaneous localization and mapping (SLAM) methods generally follow the basic idea of hierarchical organization and are also referred to as sub-map methods. In general, such methods rely on some map partition principle, use range sensors to build several mutually independent sub-maps, and then use the information shared between sub-maps to splice and fuse them (see "J. Tardós. Robust mapping and localization in indoor environments using sonar data. The International Journal of Robotics Research, 2002, 21(4): 311-330"). The key difficulties of hierarchical SLAM include localization and map creation inside each grid sub-map, automatic segmentation between sub-maps, the topological association of sub-maps, and the global consistency of the global level; they determine the accuracy and reliability of the resulting map.

Within a grid sub-map, simultaneous localization and mapping (SLAM) generally adopts one of the following methods: the extended Kalman filter (EKF), the Rao-Blackwellized particle filter (RBPF), or the sparse extended information filter (SEIF). A well-known shortcoming of the EKF is that when the estimate deviates strongly from the true value, the first-order linearization of the nonlinear model introduces a large linearization error. The RBPF algorithm is more effective for map building in complex environments; however, since every particle carries a complete environment map, the computational complexity of using the RBPF even for simple grid map creation in a fairly large environment is very high, and it is difficult to apply. The sparse extended information filter (SEIF) also has shortcomings in practice: converting the information vector and information matrix back into the estimated mean and covariance of the state vector is computationally very expensive at each step, so it is likewise hard to apply in large-scale environments.

Automatic segmentation of sub-maps is one of the difficulties of cascade map building. Most existing sub-map methods let the robot segment sub-maps automatically by recognizing artificial landmarks. Some methods use a scene feature database trained manually in advance, so the robot cannot acquire the key landmarks autonomously (see "Schleicher D. Real-time hierarchical stereo Visual SLAM in large-scale environments. Robotics and Autonomous Systems, 2010, 58: 991-1002"). Other methods use geometric change information of natural scenes in the environment as landmarks to segment sub-maps automatically, for example the crossing points between Voronoi cells, but such methods produce a large number of landmarks and lose the conciseness that is the advantage of a topological structure.

Natural scene detection based on visual saliency provides one way to segment sub-maps automatically, and has also become an important means for a mobile robot to understand an unknown environment. It bypasses the image object detection problem by extracting salient scene descriptions as the landmarks of the topological map, but it also raises a new problem, namely how to select salient scenes so as to balance storage requirements against landmark matching. Inspired by the human visual attention mechanism, some researchers have proposed a variety of visual perception models for detecting regions of interest in images. The basic principle is that when an organism observes its surroundings, it selectively concentrates its attention on one or several salient objects, driven by behavioral goals or local scene cues; such objects can serve as landmarks that effectively distinguish and represent different environments. In recent years the Bayesian Surprise model of abnormal changes in scene visual features has appeared; it outperforms visual saliency and Shannon-entropy measures (see "A. Ranganathan. Bayesian Surprise and Landmark Detection. IEEE International Conference on Robotics and Automation, 2009"). The Bayesian Surprise model has achieved initial success in fields such as video anomaly monitoring and natural landmark detection, demonstrating its ability to localize salient scene changes. To date, however, no publication has discussed the cascade environment map model and its creation from the perspective of Bayesian Surprise saliency detection.

In addition, how to establish the topological association between sub-maps and guarantee the global consistency of the topological nodes is another difficulty of cascade map building. In a grid/topology cascade map, if the correlation between landmarks and the global level is ignored in order to reduce the computational and storage complexity of the cascade map, inconsistency is introduced into the global map, and the topological structure that is created cannot be closed into a loop.

A novelty search of the patent literature reveals the following prior patents. Xiong Rong et al. filed Chinese invention patent No. 200610053690.2, entitled "Method for simultaneous localization and map building of a mobile robot in an unknown environment". It discloses a method in which the data obtained by range sensors are used to build a local line-segment feature map and a local grid map, and the current robot pose estimate is used to transform the local maps, thereby updating a global line-segment feature map and a global grid map. The method is based on the extended Kalman filter and is not robust enough for more complex environments. Moreover, it simultaneously creates two kinds of maps, a geometric line-segment feature map and an occupancy grid map, rather than adopting a hierarchical cascade map strategy. In a large-scale environment, robot localization errors occur easily and cause obvious stitching errors in both maps; in addition, creating a complete grid map of a large-scale environment leads to an enormous computation load and storage footprint that is difficult to satisfy in practice.

Wen Feng et al. filed Chinese invention patent No. 201110376468.7, entitled "Method for realizing simultaneous localization and map building of a mobile robot". The method combines dead-reckoning odometry with landmark observations and adopts a "strong tracking filter" to improve the accuracy of robot localization and map building. However, the map created by that invention is a feature map rather than a grid or topological map, and therefore cannot be used directly for robot path planning and navigation. In addition, the method is only used in small-scale indoor environments with artificially placed landmark objects, which further limits its range of application.

The applicants of Chinese invention patent No. 200710019784.2, entitled "Mobile robot map building system and map creation method", disclose a mobile robot map building system and method based on a wireless sensor network. The method uses the data of the deployed wireless sensor network to create an environment map containing multiple kinds of information. This is a robot map creation method relying on external sensors; however, because the measurement accuracy of wireless sensor nodes is low, the robot can only be localized coarsely, and the created environment map does not meet the requirements of precise navigation either. In addition, the method requires wireless sensor nodes to be deployed throughout the monitored region, so it is not applicable in indoor environments where this condition is not met.

Summary of the invention

Technical problem: the present invention addresses two main problems in cascade map building, namely how to select salient scenes so as to balance storage requirements against landmark matching and thereby achieve automatic sub-map segmentation, and how to establish the topological association between sub-maps while guaranteeing the global consistency of the topological nodes. To this end, the present invention proposes a cascade map creation method for a mobile robot based on salient scene point detection.

Technical solution: a cascade map creation method for a mobile robot based on salient scene point detection, the method comprising:

performing online salient scene point detection on the image data collected by the mobile robot's sensors, and generating global topological nodes;

updating the mobile robot's pose and the local grid sub-map: from the laser sensor data, the dead-reckoning sensor, the pose estimate of the previous period and the local grid sub-map built so far, estimating the local pose of the mobile robot in the current period and updating the local coordinate frame of the grid sub-map;

creating the global topological map structure with the salient scene points as topological nodes, and, on the basis of loop-closure detection of the robot trajectory, introducing a weighted scan matching method and a relaxation method to optimize the topological structure and correct the consistency of the global topological map.

The salient scene point detection method comprises scene image feature extraction and scene saliency computation, with the following steps:

1) guiding the SURF feature sampling with a visual attention mechanism so that features are collected only in salient regions, rejecting the large amount of information that does not help to characterize the place;

2) clustering the set of SURF feature vectors of the image with the K-means clustering algorithm, and constructing a scene appearance description with a spatial bag-of-words model that incorporates spatial relations, so that the scene image is described as a visual word histogram fused with spatial relations;

3) using the visual word histogram to build a Multivariate Polya model of the place based on this feature, and judging, by computing an expected Bayesian Surprise threshold, whether the place the robot currently occupies is a salient scene point.

The local grid sub-map contains an accurate description of composite features such as the obstacle occupancy grid and the scene place features; the local grid sub-map uses a coordinate frame whose origin is the mobile robot's position in the current period and whose X axis is the mobile robot's heading in the current period.

Simultaneous localization and mapping within the local grid sub-map uses the Rao-Blackwellized particle filter algorithm; the first global topological node is created once initial localization is finished, and the grid map created so far is saved as the sub-map corresponding to that first global topological node.

The global topological map is a graph structure that takes the salient scene points as global topological nodes and connects adjacent topological nodes with edges; each node is associated with a local occupancy grid sub-map, and each edge corresponds to the transformation between the reference frames of the adjacent sub-maps.

The global topological map is built as follows:

1) the robot collects vision sensor data and performs scene saliency detection; when a new salient scene point is detected, a new topological node is created in the global topological map, the robot's final pose in the previous node is taken as the coordinate origin of the new sub-map, i.e. the base coordinate frame of the new node, and the features of the current scene point are added to the historical data;

2) the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge connecting them, so that relative positions can be propagated and computed between nodes;

3) every time a new topological node is created, it is matched against the historical data of the existing topological nodes to judge whether the mobile robot has returned to a place it has already visited; the similarity probability of the current scene place and a historical scene place is computed with a similarity-distance matching method applied to the SURF feature models of the scene images;

4) once closure of the robot's visited trajectory is detected, the created topological nodes are traversed in their order of connection, and the weighted scan matching method is used to compute the relative displacement between the base coordinate frames of the sub-maps of adjacent nodes.

The global consistency correction method regards the inter-node relations obtained from the robot's sensor observations as position constraints between nodes, and uses a relaxation method to solve for the topological structure that best satisfies these constraints, with the following steps:

1) for a node i, compute an estimated position and variance of node i from the position of each of its neighbor nodes j;

2) then, from the estimates of node i obtained from all its neighbors, compute the new coordinates of node i by weighted averaging;

3) repeat the above steps; the relaxation procedure terminates when the sum of the position changes of all nodes in the map between two successive iterations is smaller than a given threshold, or when the number of iterations exceeds a given limit.

Preferably, the sensors comprise a monocular camera and a laser range finder. The data obtained by the laser range finder are the distance and angle, relative to the mobile robot, of each laser return from the obstacles in the environment scanned in a plane at a height of 35 cm; one beam is obtained per 1° of resolution over the range 0°-180°, for a total of 181 laser beams.

By adopting the above technical solution, the present invention has the following beneficial effects:

1. For the fairly large and complex environments in which mobile robots work, a two-layer cascade map structure comprising local grid sub-maps and a global topological map is established. Compared with conventional map models, it effectively combines two advantages: the accurate description of environment obstacles by the grid sub-maps, and the concise statement, by the topological map, of the relative position relations between sub-maps. With both the obstacles and the robot's own pose unknown, the online creation of the two-layer cascade map comprising local grid sub-maps and a global topological map is realized, and the mutually coupled, interacting problems of robot pose uncertainty and map uncertainty during map building are solved.

2. In natural scenes without artificial landmarks, the salient scene points detected autonomously by the robot are used as the boundaries between sub-maps. A human visual attention model is introduced and a natural landmark detection method based on salient-scene Bayesian Surprise is adopted; it has low miss and false detection rates in large-scale complex indoor environments, thereby solving the key problem of automatic sub-map segmentation in cascade map building.

3. During the construction of the global topological map, the relative coordinate relations between nodes are computed dynamically, and, on the basis of loop-closure detection of the robot trajectory, the weighted scan matching method and the relaxation method are introduced to optimize the topological structure of the global map and guarantee its global consistency. The cascade map finally created by the robot occupies little storage space and few computational resources, so maps of fairly large, complex unknown environments can be created online, and the resulting cascade map is suitable for path planning and navigation by all kinds of robots.

Description of drawings

Fig. 1 is the flow chart of initial localization and creation of the first global topological node in the embodiment of the invention;

Fig. 2 is a schematic diagram of the hybrid cascade map model of the environment in the embodiment of the invention, comprising local grid sub-maps and the global topological map;

Fig. 3 is a schematic diagram of the coordinate transformation relation between two global topological nodes in the embodiment of the invention;

Fig. 4 is the flow chart of the salient scene point detection method of the embodiment of the invention, including image acquisition, SURF feature extraction, visual-attention-guided sBoW feature construction, and appearance-based Surprise computation and salient scene judgment;

Figs. 5a, 5b and 5c show the environment visual attention maps and SURF feature extraction of the salient scene point detection method of the embodiment for typical scenes in a real environment containing several areas: (a) corridor, (b) laboratory, (c) office; in each subfigure the small image on the left shows the visual attention intensity, and the circles in the small image on the right mark the scene SURF features under visual attention guidance;

Fig. 6 is a schematic diagram of the construction of a local grid sub-map and the coordinate transformation relation to the corresponding global topological node in the embodiment of the invention;

Fig. 7 is the creation flow chart of the complete environment cascade map of the embodiment of the invention;

Fig. 8 shows the two-layer grid/topology cascade map created and spliced by the robot of the present invention in an example environment. When a revisit of a landmark is detected in the global topology, the global consistency optimization of the topological map is performed; the global topological nodes in the figure are connected by line segments, and the curve is the robot localization trajectory, showing the robot's motion route during map building;

Fig. 9 shows some of the local grid sub-maps created in the example of Fig. 8.

Embodiment

The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are only intended to illustrate the present invention and not to limit its scope; after reading the present invention, modifications of its various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims of this application.

The method steps of the embodiment of the invention are:

according to the image data collected by the mobile robot's on-board monocular camera, detecting online the natural-scene landmarks corresponding to salient scenes and generating the topological nodes of the global map;

according to the ranging laser sensor data, the dead-reckoning odometer, the pose estimate of the previous period and the local grid sub-map built so far, estimating the local pose of the mobile robot in the current period and updating the local coordinate frame of the grid sub-map;

creating the global topological map structure with the salient scene points as topological nodes, and, on the basis of loop-closure detection of the robot trajectory, introducing the weighted scan matching method and the relaxation method to optimize the topological structure and guarantee the global consistency of the topological map.

The specific implementation is as follows:

Fig. 1 shows the initial localization and the creation of the first global topological node of this embodiment. The mobile robot starts from an arbitrary position in the unknown environment and moves in an arbitrary direction (S1), first traveling some distance inside the current room. Laser sensor data are collected (S2), and during this motion simultaneous localization and mapping is performed inside the local grid sub-map to achieve initial localization (S3); the sub-map obtained at this point will serve as the sub-map corresponding to the first node of the global topological map, and the robot's range of motion is still limited to the room in which it started. If the particle set has not yet converged into a sufficiently small region, initial localization is judged to be unfinished (S4), and the loop is repeated until it completes. Once initial localization is finished, the robot's current pose in the local coordinate frame is obtained; the robot then continues to move and may enter and leave the rooms, corridor and other areas, while carrying out the cascade map building based on salient scene point detection (S5).

In the implementation example, the data obtained by the robot's on-board ranging laser sensor are the distance and angle, relative to the mobile robot, of each laser return from the obstacles in the environment scanned in a plane at a height of 35 cm; one beam is obtained per 1° of resolution over the range 0°-180°, for a total of 181 laser beams. The robot's on-board vision sensor is a PTZ CCD camera that collects RGB images at a resolution of 320x240.
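As an illustration of how such a scan can be consumed downstream, the short sketch below converts the 181-beam range reading described above into Cartesian points in the robot frame. It is only a sketch under stated assumptions: the beam count and 1° spacing follow the text, but the frame convention, the out-of-range cutoff and the function name are illustrative, not taken from the patent.

```python
import numpy as np

def scan_to_points(ranges_m, max_range=8.0):
    """Convert a 181-beam scan (0..180 deg, 1 deg steps) to (x, y) points
    in the robot frame; beams at or beyond max_range are treated as misses."""
    ranges_m = np.asarray(ranges_m, dtype=float)
    assert ranges_m.shape == (181,), "expected one range per degree over 0..180"
    angles = np.deg2rad(np.arange(181))          # beam directions in radians
    valid = ranges_m < max_range                 # drop out-of-range returns
    x = ranges_m[valid] * np.cos(angles[valid])
    y = ranges_m[valid] * np.sin(angles[valid])
    return np.column_stack((x, y))
```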

Preferably, the environment cascade map structure is suited to describing large indoor floor environments, such as hospitals, offices and laboratories, that contain corridors and several rooms. In this embodiment it is a two-layer structure of local grid sub-maps and a global topological map, as shown in Fig. 2, where:

1) the local grid sub-map contains an accurate description of composite features such as the obstacle occupancy grid and the scene place features, and is suitable for accurate localization of the mobile robot with its on-board range sensor within the local sub-map frame. The local occupancy grid sub-map uses a coordinate frame whose origin is the mobile robot's position in the current period and whose X axis is the mobile robot's heading in the current period;

A sub-map M comprises the robot's current pose x_R^B, the probability grid P, and the position coordinates x_{F_1}^B, ..., x_{F_n}^B of the n place landmarks. All these variables are expressed in the robot's local base coordinate frame B, whose base coordinates can be taken as the robot's pose at the moment the sub-map was started. For example, the coordinate transformation between adjacent sub-maps can be denoted x_{B_2}^{B_1}, the relative coordinate transformation of node 2 with respect to node 1. Each sub-map is created by its own process; during its creation only the odometer and the other on-board sensor readings are used, and the sub-map reference is likewise the robot's pose at the moment the sub-map was started. Each sub-map is therefore maintained relatively independently and has not yet undergone global correction. The key places are denoted FP = {fp_1, ..., fp_m}, and the natural scene appearance of each place is characterized by its SURF feature descriptors δ.

M = (x_R^B, P, x_{F_1}^B, \cdots, x_{F_n}^B)^T    (1)

2) the global topological map is a graph structure that takes the salient scene points as global topological nodes and connects adjacent topological nodes with edges. Each node is associated with a local occupancy grid sub-map, and each edge corresponds to the transformation between the reference frames of the adjacent sub-maps.

Each topological node i is associated with a sub-map M_i, and its global coordinates are denoted [x_i, y_i, θ_i]. When the node is created, this value is initialized to the current odometer reading. The position variance of node i is denoted v_i and is initialized to the variance of the previous node plus 2% of the distance traveled since that node. As the robot advances, whenever a node is created or revisited it is connected to the previous node by an edge. The relative metric relation between topological nodes i and j is denoted by the measured distance and bearing (d_{ij}, θ_{ij}), as shown in Fig. 3, with variance v_{ij}, taken as 0.02 d_{ij}.
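To make the two-layer structure concrete, the following minimal sketch shows one possible in-memory layout of a sub-map and a topological node as just described (pose in the local base frame, probability grid, place landmarks, per-node variance and weighted edges). The class and field names are illustrative assumptions, not identifiers from the patent; the 0.02·d edge variance follows the text, while the edge quantities are derived here from the stored node poses purely for illustration (in the method they come from the robot's relative measurements).

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SubMap:
    base_pose: np.ndarray                 # robot pose when the sub-map was started (local frame B)
    grid: np.ndarray                      # occupancy probability grid P
    landmarks: list = field(default_factory=list)   # place landmark coordinates x_Fk^B
    surf_descriptors: np.ndarray = None   # SURF descriptors of the salient scene

@dataclass
class TopoNode:
    node_id: int
    pose_g: np.ndarray                    # global coordinates [x_i, y_i, theta_i]
    variance: float                       # position variance v_i
    submap: SubMap = None
    edges: dict = field(default_factory=dict)   # neighbour id -> (d_ij, theta_ij, v_ij)

def add_edge(a: TopoNode, b: TopoNode):
    """Connect two nodes; edge variance is taken as 2% of the measured distance."""
    d = float(np.linalg.norm(a.pose_g[:2] - b.pose_g[:2]))
    theta = float(np.arctan2(b.pose_g[1] - a.pose_g[1], b.pose_g[0] - a.pose_g[0]))
    v = 0.02 * d
    a.edges[b.node_id] = (d, theta, v)
    b.edges[a.node_id] = (d, float(np.arctan2(-np.sin(theta), -np.cos(theta))), v)
```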

Fig. 4 is the flow chart of the salient scene point detection method of this embodiment. Using the monocular vision sensor carried by the mobile robot, salient scene points in the environment are detected during the robot's motion. The method comprises scene image feature extraction and scene saliency computation, as follows:

1) a visual attention mechanism that mimics human vision is introduced to guide the SURF feature sampling so that features are collected only in salient regions, rejecting the large amount of information that does not help to identify the place and improving the image processing efficiency;

For the input scene image, a monochrome Gaussian pyramid of 5 scales (s_0, ..., s_4), a CIE LAB color-space Gaussian pyramid of 5 scales, and a difference-of-Gaussian pyramid of 4 scales (DoG_1, ..., DoG_4) are built by sampling. Feature maps are then extracted from each pyramid level for the three channels intensity (I), color (C) and orientation (O); after applying the normalization operator, the feature maps are summed with equal weights to obtain the visual attention map.

In this computation the on-center and off-center operators model, respectively, the center-on and center-off responses of the visual cells of the human retina. SURF features are sampled only in those pixel regions of the salient areas whose saliency probability exceeds a given threshold, so that feature extraction is restricted to the salient regions and noise points that carry no symbolic meaning for the key scene features, usually coming from backgrounds such as walls and floors, are not introduced.
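A minimal sketch of this attention-gated feature sampling is given below, assuming OpenCV with the contrib (non-free) SURF module is available and that a saliency map in [0, 1] has already been computed by some Itti-style attention model; the threshold value and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def surf_in_salient_regions(gray_image, saliency_map, thresh=0.5, hessian=400):
    """Detect SURF keypoints/descriptors only inside pixels whose saliency
    exceeds `thresh`, so background regions (walls, floor) are ignored."""
    mask = (saliency_map >= thresh).astype(np.uint8) * 255   # binary attention mask
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    keypoints, descriptors = surf.detectAndCompute(gray_image, mask)
    return keypoints, descriptors
```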

2) the set of SURF feature vectors of the image is clustered with the K-means clustering algorithm, and a spatial bag-of-words (sBoW) model that incorporates spatial relations is used to construct the scene appearance description, so that the scene image is described as a visual word histogram fused with spatial relations;

The set of SURF feature vectors of the image is clustered with the K-means clustering algorithm, and the k cluster centers are taken as the backbone visual words. Feature quantization uses the nearest-neighbor classification rule: each local image patch is approximated by the backbone visual word that represents it, as in equation (3), so that the SURF feature vector s_i is quantized into the visual word w_i.

w_i = \arg\min_j \| s_i - v_j \|^2    (3)

Because the number of cluster centers is large, a KD-tree can be built over the cluster centers to further accelerate quantization; this speeds up the nearest-neighbor search significantly. After feature quantization, the number of occurrences of each visual word in the image is counted and the visual word histogram is constructed, which reduces the complexity of the image description.
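The clustering and KD-tree-accelerated quantization step can be sketched as follows, here with SciPy's k-means and cKDTree standing in for whatever implementation the authors used; the vocabulary size and variable names are assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial import cKDTree

def build_vocabulary(descriptors, k=200):
    """Cluster SURF descriptors into k backbone visual words (equation (3)
    assigns the nearest centre); returns the centres and a KD-tree over them."""
    centres, _ = kmeans2(descriptors.astype(np.float64), k, minit='points')
    return centres, cKDTree(centres)

def quantize(descriptors, tree, k):
    """Map each descriptor to its nearest visual word and return the word histogram."""
    _, words = tree.query(descriptors)           # nearest-neighbour word index per feature
    hist = np.bincount(words, minlength=k)       # visual word occurrence counts n_w
    return words, hist
```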

The spatial relations of the visual words are introduced next. Let the visual word histogram vector of the image be X = (n_1, n_2, ..., n_W). The spatial distribution of each visual word is described by two features, its distance and its angle relative to the geometric center O = (x_0, y_0) of the feature set, and a histogram is built for each.

a) distance component: compute the Euclidean distances (L_1, L_2, ..., L_W) of the feature points from the geometric center, take the median as the unit length L, and quantize the other lengths, as ratios to L, into four intervals within 0 to L_max;

b) angle component: each feature point forms an angle θ with its nearest neighbor in the counter-clockwise direction. An arbitrary feature point is chosen as the starting point F_0, and the other points are numbered F_1, F_2, ..., F_{W-1} counter-clockwise from the inside outwards. The angles θ_i, i = 1, 2, ..., W-1, between consecutive points are computed and quantized into five intervals within 0° to θ_max.

This yields the scene visual appearance feature vector of equation (4), where the first W components {n_w} are the visual word frequencies counted over the visual vocabulary, whose sum equals the total number of quantized features, and the last Q components {p_q} are the spatial-relation statistics of the visual words.

a = [n_1, n_2, ..., n_W, p_1, p_2, ..., p_Q]    (4)
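The sketch below assembles the sBoW vector a of equation (4) from keypoint positions and their word labels: the word-count part n_1..n_W, a 4-bin distance histogram and a 5-bin angle histogram relative to the geometric center, following the two components described above. Bin edges and normalization are assumptions where the text leaves them open, and the angle part is simplified to the angle of each feature about the center rather than the nearest-neighbor angle described in the text.

```python
import numpy as np

def sbow_vector(points_xy, word_ids, vocab_size):
    """Build a = [n_1..n_W, p_1..p_Q]: visual-word counts plus spatial statistics
    (distance and angle of each feature relative to the geometric centre)."""
    pts = np.asarray(points_xy, dtype=float)
    counts = np.bincount(word_ids, minlength=vocab_size)        # n_1..n_W

    centre = pts.mean(axis=0)                                   # geometric centre O
    offsets = pts - centre
    dists = np.linalg.norm(offsets, axis=1)
    unit = np.median(dists) if np.median(dists) > 0 else 1.0    # unit length L = median distance
    dist_hist, _ = np.histogram(dists / unit, bins=4,
                                range=(0.0, max(float(dists.max()) / unit, 1e-9)))

    angles = np.arctan2(offsets[:, 1], offsets[:, 0]) % (2 * np.pi)
    ang_hist, _ = np.histogram(angles, bins=5, range=(0.0, 2 * np.pi))

    return np.concatenate((counts, dist_hist, ang_hist)).astype(float)
```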

3) the visual word histogram is used to build a Multivariate Polya (MP) model of the place described by this feature, and an expected Bayesian Surprise threshold is computed to judge whether the place the robot currently occupies is a salient scene point.

Let M be the scene model of the current place, with prior distribution p(M). The Bayesian Surprise of an observation z acquired online is defined as the KL divergence between the posterior p(M|z) and the prior p(M) of the model M, that is:

S(z) = \int_M p(M) \log \frac{p(M)}{p(M \mid z)} \, dM    (5)

If the Bayesian Surprise of the robot's current scene observation exceeds a certain Surprise threshold, the current place is considered to have sufficiently salient features. Here the Surprise threshold is approximated with a Monte Carlo method and is called the expected Surprise: N observations z_{1:N} are sampled from the current place model p(M), and the expected Surprise is computed as the average of the Surprise values of these samples:

E(z) = \frac{1}{N} \sum_{i=1}^{N} S(z_i)    (6)

Let A = {a} be the set of observations of a given landmark, where each vector a is as in equation (4). Let θ_w be the probability of occurrence of visual word w, so that the vector a follows a multinomial distribution with parameters n and θ, where θ = [θ_1, θ_2, ..., θ_W]. For a particular value θ_s = [θ_{s1}, ..., θ_{sW}], a obeys the multinomial distribution:

p(a \mid \theta_s) = \frac{n!}{n_1! \, n_2! \cdots n_W!} \, \theta_{s1}^{n_1} \theta_{s2}^{n_2} \cdots \theta_{sW}^{n_W}    (7)

A parameter vector α_s = [α_{s1}, α_{s2}, ..., α_{sW}] is introduced, expressing that, under θ_s, visual word w has been observed α_{sw} − 1 times with occurrence probability θ_{sw}. Then θ_s follows a Dirichlet distribution conditioned on α_s, that is:

p(\theta_s \mid \alpha_s) = \frac{\Gamma\!\left(\sum_{w=1}^{W} \alpha_{sw}\right)}{\prod_{w=1}^{W} \Gamma(\alpha_{sw})} \, \theta_{s1}^{\alpha_{s1}-1} \theta_{s2}^{\alpha_{s2}-1} \cdots \theta_{sW}^{\alpha_{sW}-1}    (8)

where Γ(·) is the Gamma function.

Writing α = Σ_w α_w, the place scene model p(α | A) can be computed with Bayes' rule as:

p(\alpha \mid A) \propto p(A \mid \alpha) \, p(\alpha)    (9)

Expanding the likelihood p(A | α) of the histogram observations, the above becomes:

p(\alpha \mid A) \propto p(\alpha) \int_{\theta} p(\theta \mid \alpha) \prod_{a \in A} p(a \mid \theta) \, d\theta    (10)

This is called the Multivariate Polya (MP) model of the place scene. Substituting equations (7) and (8) into the integral of equation (10) and taking the prior p(α) to be uniform, the place scene model is computed as:

p(\alpha \mid A) \propto \frac{n!}{\prod_{w=1}^{W} n_w!} \, \frac{\Gamma(\alpha)}{\Gamma(n+\alpha)} \prod_{w=1}^{W} \frac{\Gamma(n_w + \alpha_w)}{\Gamma(\alpha_w)}    (11)

The maximum likelihood estimate of the Dirichlet distribution parameters can be obtained by training on samples. Therefore, for several typical scenes in the environment (corridor, office, laboratory, etc.), a certain number of sample images are given and an iterative gradient-descent optimization is used to learn α, yielding the prior MP model of each scene. To compute the expected Surprise of equation (6), this MP model is first used to sample θ_s according to equation (8) and then to sample a according to equation (7); the result is an observation sample z, as shown in the dashed box of Fig. 4. Fig. 5 shows, for the salient scene point detection method used in the invention, the environment visual attention maps and SURF feature extraction of typical scenes in a real environment containing several areas: (a) corridor, (b) laboratory, (c) office. In each subfigure the small image on the left shows the visual attention intensity, and the circles in the small image on the right mark the scene SURF features under visual attention guidance; as can be seen, the method restricts the feature extraction range of the scene to the salient regions, avoiding noise points that carry no symbolic meaning for the key scene features (usually from backgrounds such as walls and floors).
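As a rough illustration of equations (6)-(11), the sketch below evaluates the Dirichlet-multinomial (MP) log-likelihood of equation (11) with log-gamma terms and draws the synthetic observations z used for the expected-Surprise estimate by sampling θ_s from the Dirichlet of equation (8) and a word-count vector from the multinomial of equation (7). How the per-sample Surprise S(z_i) itself is evaluated (the KL divergence of equation (5) between posterior and prior over α) is deliberately left as a callback, since the patent does not spell that computation out; all names are assumptions.

```python
import numpy as np
from scipy.special import gammaln

def mp_log_likelihood(counts, alpha):
    """log of equation (11): Dirichlet-multinomial likelihood of a word-count
    vector `counts` (n_1..n_W) under MP parameters alpha (alpha_1..alpha_W)."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = counts.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(a0) - gammaln(n + a0)
            + (gammaln(counts + alpha) - gammaln(alpha)).sum())

def sample_observations(alpha, n_words, n_samples, rng=None):
    """Draw synthetic observations z_1..z_N from the place model:
    theta_s ~ Dirichlet(alpha) (eq. 8), then counts ~ Multinomial(n, theta_s) (eq. 7)."""
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    for _ in range(n_samples):
        theta_s = rng.dirichlet(np.asarray(alpha, dtype=float))
        samples.append(rng.multinomial(n_words, theta_s))
    return np.array(samples)

def expected_surprise(alpha, surprise_fn, n_words=100, n_samples=50):
    """Equation (6): average the per-sample Surprise over Monte Carlo draws.
    `surprise_fn(z)` must implement S(z) of equation (5) for the chosen prior."""
    z = sample_observations(alpha, n_words, n_samples)
    return float(np.mean([surprise_fn(zi) for zi in z]))
```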

According to the robot's on-board ranging and odometry data, the cascade map of the environment observed so far is built; this comprises the construction of the local grid sub-maps and the construction of the global topological map. Simultaneous localization and mapping within a local grid sub-map uses the Rao-Blackwellized particle filter algorithm; after initial localization is finished, the first global topological node is created and the grid map created so far is saved as its corresponding sub-map. Fig. 7 shows the creation flow of the complete environment cascade map after initial localization and creation of the first global topological node; the overall cascade map building steps are:

1) the robot keeps moving over long distances in the environment, passing through the rooms and corridor areas (step S71); laser sensor data are collected (step S72), and from the dead-reckoning odometer, the mobile robot pose estimate of the previous period and the local grid sub-map built so far, the Rao-Blackwellized particle filter algorithm is again used to estimate the local pose of the mobile robot in the current period and to update the local coordinate frame of the grid sub-map (step S73);

2) the robot collects vision sensor data (step S74) and performs scene saliency detection (step S75); when a new salient scene point is detected, a new topological node is created in the global topological map (step S76), the robot's final pose in the previous node is taken as the coordinate origin of the new sub-map, i.e. the base coordinate frame of the new node, and the features of the current scene point are added to the historical data;

3) the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge connecting them, so that relative positions are propagated and computed between nodes (step S77);

The base coordinates of the first node are [0, 0, 0] and serve as the reference origin of the global coordinate frame. When a new node fp_{i+1} is created, the robot's pose in the previous node fp_i, x_{robot}^{fp_i}, becomes the coordinate origin of the new node fp_{i+1}. To propagate relative positions between nodes, the Coupling Summation formula is applied iteratively to estimate and update the robot's global position variance P_{robot}^{G} online. Let x_{robot}^{G} denote the robot pose in the global coordinate frame; since the base origin of the first node is chosen as the origin of the global frame, x_{robot}^{G} can equivalently be expressed with respect to that base. Let the robot's local pose relative to fp_i at the time place FP_i (i.e. fp_i) was created be x_{robot}^{fp_i}, with variance P_{robot}^{fp_i}. The Coupling Summation formula uses the relative coordinate relations of adjacent nodes to compute the relative coordinate relation between non-adjacent nodes. First, x_{fp_i}^{G} and x_{robot}^{fp_i} are used to compute x_{robot}^{G}:

x_{robot}^{G} = f(x_{fp_i}^{G}, x_{robot}^{fp_i}) =
\begin{pmatrix}
x_{fp_i}^{G} + x_{robot}^{fp_i}\cos\theta_{fp_i}^{G} - y_{robot}^{fp_i}\sin\theta_{fp_i}^{G} \\
y_{fp_i}^{G} + x_{robot}^{fp_i}\sin\theta_{fp_i}^{G} + y_{robot}^{fp_i}\cos\theta_{fp_i}^{G} \\
\theta_{fp_i}^{G} + \theta_{robot}^{fp_i}
\end{pmatrix}    (12)

P_{robot}^{G} = \nabla f_{x_{fp_i}^{G}} \, P_{fp_i}^{G} \, \nabla f_{x_{fp_i}^{G}}^{T} + \nabla f_{x_{robot}^{fp_i}} \, P_{robot}^{fp_i} \, \nabla f_{x_{robot}^{fp_i}}^{T}    (13)

Wherein:

\nabla f_{x_{fp_i}^{G}} = \frac{\partial x_{robot}^{G}}{\partial x_{fp_i}^{G}} =
\begin{pmatrix}
1 & 0 & -x_{robot}^{fp_i}\sin\theta_{fp_i}^{G} - y_{robot}^{fp_i}\cos\theta_{fp_i}^{G} \\
0 & 1 & \;\; x_{robot}^{fp_i}\cos\theta_{fp_i}^{G} - y_{robot}^{fp_i}\sin\theta_{fp_i}^{G} \\
0 & 0 & 1
\end{pmatrix}    (14)

\nabla f_{x_{robot}^{fp_i}} = \frac{\partial x_{robot}^{G}}{\partial x_{robot}^{fp_i}} =
\begin{pmatrix}
\cos\theta_{fp_i}^{G} & -\sin\theta_{fp_i}^{G} & 0 \\
\sin\theta_{fp_i}^{G} & \cos\theta_{fp_i}^{G} & 0 \\
0 & 0 & 1
\end{pmatrix}    (15)

where x_{robot}^{G} and x_{fp_i}^{G} are, respectively, the global absolute coordinates of the robot and of node fp_i. Then, to obtain the global variance P_{fp_i}^{G} of the current node fp_i, the Coupling Summation formula is applied in the same way, using P_{fp_{i-1}}^{G} and P_{fp_i}^{fp_{i-1}} to compute:

P_{fp_i}^{G} = \frac{\partial x_{fp_i}^{G}}{\partial x_{fp_{i-1}}^{G}} \, P_{fp_{i-1}}^{G} \left(\frac{\partial x_{fp_i}^{G}}{\partial x_{fp_{i-1}}^{G}}\right)^{T} + \frac{\partial x_{fp_i}^{G}}{\partial x_{fp_i}^{fp_{i-1}}} \, P_{fp_i}^{fp_{i-1}} \left(\frac{\partial x_{fp_i}^{G}}{\partial x_{fp_i}^{fp_{i-1}}}\right)^{T}    (16)

Equation (16) is applied iteratively along the chain of nodes up to the current one; at the first step the base coordinates of the first node are known to be [0, 0, 0], so the recursion can be started and the global variance of every later node obtained.
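A compact sketch of the pose compounding of equations (12)-(16) is given below: the compound of a node's global pose with a local pose, together with the first-order covariance propagation through the two Jacobians. Variable names are assumptions; poses are [x, y, theta] and covariances are 3x3 matrices, as in the text.

```python
import numpy as np

def compound_pose(pose_g, pose_local):
    """Equation (12): express a pose given in a node frame in the global frame."""
    xg, yg, tg = pose_g
    xl, yl, tl = pose_local
    return np.array([xg + xl * np.cos(tg) - yl * np.sin(tg),
                     yg + xl * np.sin(tg) + yl * np.cos(tg),
                     tg + tl])

def compound_covariance(pose_g, cov_g, pose_local, cov_local):
    """Equations (13)-(15): first-order propagation of the two covariances."""
    _, _, tg = pose_g
    xl, yl, _ = pose_local
    J1 = np.array([[1.0, 0.0, -xl * np.sin(tg) - yl * np.cos(tg)],
                   [0.0, 1.0,  xl * np.cos(tg) - yl * np.sin(tg)],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[np.cos(tg), -np.sin(tg), 0.0],
                   [np.sin(tg),  np.cos(tg), 0.0],
                   [0.0,         0.0,        1.0]])
    return J1 @ cov_g @ J1.T + J2 @ cov_local @ J2.T
```

Iterating compound_covariance node by node from the first node, whose pose is [0, 0, 0] with zero covariance, reproduces the recursion of equation (16).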

4) every time a new topological node is created, it is matched against the historical data of the existing topological nodes to judge whether the mobile robot has returned to a place it has already visited (step S78); the similarity probability of the current scene place and a historical scene place is computed with the similarity-distance matching method applied to the SURF feature models of the scene images;

The robot-view image SURF feature matching strategy considers the SURF visual feature vectors of two scenes fp_A and fp_B, whose descriptor sets are δ_A = {δ_A^1, ..., δ_A^{m_A}} and δ_B = {δ_B^1, ..., δ_B^{m_B}}, where m_A and m_B are the numbers of SURF features detected in the two scenes. The matching process first computes the Euclidean distances between δ_A and δ_B to obtain dist(fp_A, fp_B), then uses the RANSAC method to remove outliers and keep the inliers that agree with the model, from which the similarity probability distance is computed.
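One way to realize this matching step is sketched below with OpenCV, again assuming the contrib SURF module: descriptors of the two scenes are matched by Euclidean distance with a ratio test, a RANSAC homography fit removes the outliers, and the inlier ratio serves as a crude similarity score. The ratio-test constant, the RANSAC threshold and the choice of a homography model are assumptions, not choices stated in the patent.

```python
import cv2
import numpy as np

def scene_similarity(kp_a, des_a, kp_b, des_b, ratio=0.7, ransac_thresh=3.0):
    """Match SURF descriptors of scenes A and B and return the RANSAC inlier
    ratio as a similarity score in [0, 1]."""
    if des_a is None or des_b is None or len(kp_a) < 4 or len(kp_b) < 4:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)                     # Euclidean distance
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return 0.0
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if inlier_mask is None:
        return 0.0
    return float(inlier_mask.sum()) / len(good)
```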

5) once closure of the robot's visited trajectory is detected, the created topological nodes are traversed in their order of connection, and the weighted scan matching method is used to compute the relative displacement between the base coordinate frames of the sub-maps of adjacent nodes (step S79).

Before loop closure, a feature F has different local position coordinates in the base frame B_j of sub-map M_j and in the base frame B_i of sub-map M_i, and the two sub-maps are independent of each other. After loop closure, sub-map M_i, while keeping B_i as its base frame, additionally introduces an estimate of B_j with respect to B_i, and sub-map M_j likewise keeps an estimate of B_j with respect to B_{j-1}; this establishes a new connection between node j-1 and node i. The weighted scan matching (WSM) method is used to match the grid sub-maps corresponding to adjacent nodes i and i+1 and to compute d_{i,i+1}, θ_{i,i+1} and the relative orientation; for laser scan data the WSM method offers high computation speed and accuracy. After the transformations between the base frames of adjacent sub-maps have been computed, the global map consistency correction is carried out.
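The weighted scan matching step itself is not detailed in the patent; as a placeholder, the sketch below shows a single weighted rigid-alignment solve (a weighted Kabsch/Horn step) that, given tentative point correspondences between two laser scans and per-pair weights, returns the relative rotation and translation between two sub-map base frames. A full WSM implementation would iterate this with re-association and sensor-noise-based weights.

```python
import numpy as np

def weighted_rigid_transform(src, dst, weights):
    """Weighted least-squares 2-D rigid transform (R, t) minimizing
    sum_i w_i * || R @ src_i + t - dst_i ||^2 for paired points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)             # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    theta = np.arctan2(R[1, 0], R[0, 0])
    return R, t, theta
```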

Because of the computational errors in scan matching, once all the sub-maps have been created the global consistency of the upper-level topological nodes is hard to guarantee: a global map recovered only from the local relative position relations (d_{ij}, θ_{ij} and the relative orientation) between the nodes contains large errors. Therefore, after the robot is detected to have revisited a node, the global consistency correction of the topological map is performed at the upper level (step S710).

The global consistency correction is implemented as follows: the inter-node relations obtained from the robot's sensor observations are regarded as position constraints between nodes, and a relaxation method is used to solve for the topological structure that best satisfies these constraints. The edges between nodes can be regarded as "springs", and the goal of the relaxation algorithm is to minimize the total energy of all the "springs" in the whole map. For each node i, every iteration of the relaxation algorithm comprises the following steps:

1) for a node i, compute an estimated position and variance of node i from the position of each of its neighbor nodes j:

(x_i')_j = x_j + d_{ji}\cos(\theta_j + \theta_{ji})    (17)

(y_i')_j = y_j + d_{ji}\sin(\theta_j + \theta_{ji})    (18)

and the variance (v_i')_j = v_j + v_{ji};

2) then, from the estimates of node i obtained from all its neighbors, compute the new coordinates of node i by weighted averaging:

v_i = \frac{n_i}{\sum_j \frac{1}{(v_i')_j}}    (20)

where n_i is the number of neighbor nodes of node i.

x_i = \frac{1}{n_i} \sum_j (x_i')_j \, \frac{v_i}{(v_i')_j}    (21)

y_i = \frac{1}{n_i} \sum_j (y_i')_j \, \frac{v_i}{(v_i')_j}    (22)

\theta_i = \arctan\!\left( \frac{\sum_j \sin(\theta_i')_j / (v_i')_j}{\sum_j \cos(\theta_i')_j / (v_i')_j} \right)    (23)

The position of the first node is fixed at [0, 0, 0] at initialization, and the two steps above are iterated repeatedly; the relaxation procedure terminates when the sum of the position changes of all nodes in the map between two successive iterations is smaller than a given threshold, or when the number of iterations exceeds a given limit.
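A minimal sketch of this relaxation over the whole node set is given below, following equations (17)-(23); nodes and edges carry the per-node quantities defined above (positions, variances v_i, and measured distance/bearing d_ji, θ_ji with variance v_ji per edge), and the convergence test follows the description in the text. The data layout and names are assumptions, and the per-neighbor heading estimate is taken as θ_j + θ_ji because the corresponding equation (19) is not reproduced here.

```python
import numpy as np

def relax_once(poses, variances, edges):
    """One sweep: re-estimate every node (except node 0, fixed at the origin)
    from its neighbours per equations (17)-(23). `edges[i]` maps neighbour j
    to (d_ji, theta_ji, v_ji) expressed in neighbour j's frame."""
    new_poses, new_vars = poses.copy(), variances.copy()
    for i in range(1, len(poses)):
        if not edges.get(i):
            continue
        est, var = [], []
        for j, (d_ji, th_ji, v_ji) in edges[i].items():
            xj, yj, tj = poses[j]
            est.append((xj + d_ji * np.cos(tj + th_ji),            # (17)
                        yj + d_ji * np.sin(tj + th_ji),            # (18)
                        tj + th_ji))                               # heading estimate (assumed form)
            var.append(variances[j] + v_ji)
        est, var = np.array(est), np.array(var)
        n_i = len(var)
        v_i = n_i / np.sum(1.0 / var)                              # (20)
        new_poses[i, 0] = np.mean(est[:, 0] * v_i / var)           # (21)
        new_poses[i, 1] = np.mean(est[:, 1] * v_i / var)           # (22)
        new_poses[i, 2] = np.arctan2(np.sum(np.sin(est[:, 2]) / var),
                                     np.sum(np.cos(est[:, 2]) / var))  # (23)
        new_vars[i] = v_i
    return new_poses, new_vars

def relax(poses, variances, edges, tol=1e-3, max_iters=100):
    """Iterate until total positional change drops below tol or max_iters is hit."""
    for _ in range(max_iters):
        new_poses, variances = relax_once(poses, variances, edges)
        if np.abs(new_poses[:, :2] - poses[:, :2]).sum() < tol:
            return new_poses, variances
        poses = new_poses
    return poses, variances
```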

Claims (8)

1. A cascade map creation method for a mobile robot based on salient scene point detection, characterized in that the method comprises:
performing online salient scene point detection on the image data collected by the mobile robot's sensors, and generating global topological nodes;
updating the mobile robot's pose and the local grid sub-map: from the laser sensor data, the dead-reckoning sensor, the pose estimate of the previous period and the local grid sub-map built so far, estimating the local pose of the mobile robot in the current period and updating the local coordinate frame of the grid sub-map;
creating the global topological map structure with the salient scene points as topological nodes, and, on the basis of loop-closure detection of the robot trajectory, introducing a weighted scan matching method and a relaxation method to optimize the topological structure and correct the consistency of the global topological map.
2. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, characterized in that the salient scene point detection method comprises scene image feature extraction and scene saliency computation, with the following steps:
1) guiding the SURF feature sampling with a visual attention mechanism so that features are collected only in salient regions, rejecting the large amount of information that does not help to characterize the place;
2) clustering the set of SURF feature vectors of the image with the K-means clustering algorithm, and constructing a scene appearance description with a spatial bag-of-words model that incorporates spatial relations, so that the scene image is described as a visual word histogram fused with spatial relations;
3) using the visual word histogram to build a Multivariate Polya model of the place based on this feature, and judging, by computing an expected Bayesian Surprise threshold, whether the place the robot currently occupies is a salient scene point.
3. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, characterized in that the local grid sub-map contains an accurate description of composite features such as the obstacle occupancy grid and the scene place features; the local grid sub-map uses a coordinate frame whose origin is the mobile robot's position in the current period and whose X axis is the mobile robot's heading in the current period;
simultaneous localization and mapping within the local grid sub-map uses the Rao-Blackwellized particle filter algorithm; the first global topological node is created once initial localization is finished, and the grid map created so far is saved as the sub-map corresponding to said first global topological node.
4. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, wherein said global topological map is a graph structure that takes the salient scene points as global topological nodes and connects adjacent topological nodes with edges; each node is associated with a local occupancy grid sub-map, and each edge corresponds to the transformation between the reference frames of the adjacent sub-maps.
5. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, characterized in that the global topological map is built as follows:
1) the robot collects vision sensor data and performs scene saliency detection; when a new salient scene point is detected, a new topological node is created in the global topological map, the robot's final pose in the previous node is taken as the coordinate origin of the new sub-map, i.e. the base coordinate frame of the new node, and the features of the current scene point are added to the historical data;
2) the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge connecting them, so that relative positions are propagated and computed between nodes;
3) every time a new topological node is created, it is matched against the historical data of the existing topological nodes to judge whether the mobile robot has returned to a place it has already visited; the similarity probability of the current scene place and a historical scene place is computed with a similarity-distance matching method applied to the SURF feature models of the scene images;
4) once closure of the robot's visited trajectory is detected, the created topological nodes are traversed in their order of connection, and the weighted scan matching method is used to compute the relative displacement between the base coordinate frames of the sub-maps of adjacent nodes.
6. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, characterized in that the global consistency correction method regards the inter-node relations obtained from the robot's sensor observations as position constraints between nodes, and uses a relaxation method to solve for the topological structure that best satisfies these constraints, with the following steps:
1) for a node i, compute an estimated position and variance of said node from the position of each of its neighbor nodes j;
2) then, from the estimates of node i obtained from all its neighbors, compute the new coordinates of node i by weighted averaging;
3) repeat the above steps; the relaxation procedure terminates when the sum of the position changes of all nodes in the map between two successive iterations is smaller than a given threshold, or when the number of iterations exceeds a given limit.
7. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 1, characterized in that said sensors comprise a monocular camera and a laser range finder.
8. The cascade map creation method for a mobile robot based on salient scene point detection according to claim 7, characterized in that the data obtained by said laser range finder are the distance and angle, relative to the mobile robot, of each laser return from the obstacles in the environment scanned in a plane at a height of 35 cm; one beam is obtained per 1° of resolution over the range 0°-180°, for a total of 181 laser beams.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310183577.6A CN103278170B (en) 2013-05-16 2013-05-16 Cascade map creation method for a mobile robot based on salient scene point detection


Publications (2)

Publication Number Publication Date
CN103278170A 2013-09-04
CN103278170B 2016-01-06

Family

ID=49060758


Country Status (1)

Country Link
CN (1) CN103278170B (en)

CN107414624A (en) * 2017-08-28 2017-12-01 东营小宇研磨有限公司 Automate the concrete polished system of terrace robot
CN107436148A (en) * 2016-05-25 2017-12-05 深圳市朗驰欣创科技股份有限公司 A kind of robot navigation method and device based on more maps
CN107607107A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of Slam method and apparatus based on prior information
CN108267121A (en) * 2018-01-24 2018-07-10 锥能机器人(上海)有限公司 The vision navigation method and system of more equipment under a kind of variable scene
WO2018145235A1 (en) * 2017-02-07 2018-08-16 驭势(上海)汽车科技有限公司 Distributed storage system for use with high-precision maps and application thereof
CN108540938A (en) * 2018-04-16 2018-09-14 绍兴文理学院 The method for repairing loophole in wireless sensor network
CN109146976A (en) * 2018-08-23 2019-01-04 百度在线网络技术(北京)有限公司 Method and apparatus for positioning unmanned vehicle
CN109737980A (en) * 2018-12-29 2019-05-10 上海岚豹智能科技有限公司 A kind of air navigation aid and its corresponding robot
WO2020014951A1 (en) * 2018-07-20 2020-01-23 深圳市道通智能航空技术有限公司 Method and apparatus for building local obstacle map, and unmanned aerial vehicle
US10611028B1 (en) 2018-11-30 2020-04-07 NextVPU (Shanghai) Co., Ltd. Map building and positioning of robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070293985A1 (en) * 2006-06-20 2007-12-20 Samsung Electronics Co., Ltd. Method, apparatus, and medium for building grid map in mobile robot and method, apparatus, and medium for cell decomposition that uses grid map
CN101619985A (en) * 2009-08-06 2010-01-06 上海交通大学 Service robot autonomous navigation method based on deformable topological map
CN102402225A (en) * 2011-11-23 2012-04-04 中国科学院自动化研究所 Method for realizing localization and map building of mobile robot at the same time

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070293985A1 (en) * 2006-06-20 2007-12-20 Samsung Electronics Co., Ltd. Method, apparatus, and medium for building grid map in mobile robot and method, apparatus, and medium for cell decomposition that uses grid map
CN101619985A (en) * 2009-08-06 2010-01-06 上海交通大学 Service robot autonomous navigation method based on deformable topological map
CN102402225A (en) * 2011-11-23 2012-04-04 中国科学院自动化研究所 Method for realizing localization and map building of mobile robot at the same time

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
梁志伟: "Simultaneous Localization and Mapping for Mobile Robots Based on Distributed Perception", ROBOT (《机器人》), vol. 31, no. 1, 31 January 2009 (2009-01-31), pages 33-39 *
王璐: "Research on Visual Environment Modeling and Localization for Mobile Robots in Unknown Environments", China Doctoral Dissertations Full-text Database (《中国博士论文全文数据库》), 15 January 2008 (2008-01-15), page 60 *
钱堃: "Cascaded Map Building for Unknown Environments Based on Hierarchical SLAM", ROBOT (《机器人》), vol. 33, no. 6, 30 November 2011 (2011-11-30), pages 736-741 *

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103674015A (en) * 2013-12-13 2014-03-26 国家电网公司 Trackless positioning navigation method and device
CN103674015B (en) * 2013-12-13 2017-05-10 国家电网公司 Trackless positioning navigation method and device
CN103994765A (en) * 2014-02-27 2014-08-20 北京工业大学 Positioning method of inertial sensor
CN103994765B (en) * 2014-02-27 2017-01-11 北京工业大学 Positioning method of inertial sensor
CN106461402A (en) * 2014-04-30 2017-02-22 通腾全球信息公司 Method and system for determining a position relative to a digital map
US10240934B2 (en) 2014-04-30 2019-03-26 Tomtom Global Content B.V. Method and system for determining a position relative to a digital map
CN103983270A (en) * 2014-05-16 2014-08-13 中国科学技术大学 Graphic sonar data processing method
CN103983270B (en) * 2014-05-16 2016-09-28 中国科学技术大学 A kind of image conversion processing method of sonar data
CN103994768B (en) * 2014-05-23 2017-01-25 北京交通大学 Method and system for seeking for overall situation time optimal path under dynamic time varying environment
CN105405156A (en) * 2014-06-30 2016-03-16 联想(北京)有限公司 Information processing method and device, and electronic equipment
WO2016146024A1 (en) * 2015-03-13 2016-09-22 北京贝虎机器人技术有限公司 Object recognition method and device, and indoor map generation method and device
CN104848848A (en) * 2015-03-30 2015-08-19 北京云迹科技有限公司 Robot map drafting and positioning method based on wireless base station and laser sensor as well as system thereof
CN104932494A (en) * 2015-04-27 2015-09-23 广州大学 Probability type indoor barrier distribution map establishing mechanism
CN104932494B (en) * 2015-04-27 2018-04-13 广州大学 The build mechanism of distribution of obstacles figure in a kind of probabilistic type room
CN105205859A (en) * 2015-09-22 2015-12-30 王红军 Similarity measurement method of environmental characteristics based on three-dimensional raster map
CN105205859B (en) * 2015-09-22 2018-05-15 王红军 A kind of method for measuring similarity of the environmental characteristic based on 3 d grid map
CN105225604A (en) * 2015-10-30 2016-01-06 汕头大学 A kind of construction method of mixing map of Mobile Robotics Navigation
WO2017079918A1 (en) * 2015-11-11 2017-05-18 中国科学院深圳先进技术研究院 Indoor scene scanning reconstruction method and apparatus
CN105509755B (en) * 2015-11-27 2018-10-12 重庆邮电大学 A kind of mobile robot synchronous superposition method based on Gaussian Profile
CN105467994B (en) * 2015-11-27 2019-01-18 长春瑶光科技有限公司 The meal delivery robot indoor orientation method that vision is merged with ranging
CN105509755A (en) * 2015-11-27 2016-04-20 重庆邮电大学 Gaussian distribution based mobile robot simultaneous localization and mapping method
CN105467994A (en) * 2015-11-27 2016-04-06 长春诺惟拉智能科技有限责任公司 Vision and ranging fusion-based food delivery robot indoor positioning system and positioning method
CN105573318B (en) * 2015-12-15 2018-06-12 中国北方车辆研究所 environment construction method based on probability analysis
CN105573318A (en) * 2015-12-15 2016-05-11 中国北方车辆研究所 Environment construction method based on probability analysis
CN105716609B (en) * 2016-01-15 2018-06-15 浙江梧斯源通信科技股份有限公司 Vision positioning method in a kind of robot chamber
CN105716609A (en) * 2016-01-15 2016-06-29 浙江梧斯源通信科技股份有限公司 Indoor robot vision positioning method
WO2017143979A1 (en) * 2016-02-22 2017-08-31 中兴通讯股份有限公司 Image search method and device
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN105843223B (en) * 2016-03-23 2018-11-20 东南大学 A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
WO2017166089A1 (en) * 2016-03-30 2017-10-05 Intel Corporation Techniques for determining a current location of a mobile device
WO2017166594A1 (en) * 2016-03-31 2017-10-05 百度在线网络技术(北京)有限公司 Indoor map construction method, device, and storage method
CN105702151B (en) * 2016-03-31 2019-06-11 百度在线网络技术(北京)有限公司 A kind of indoor map construction method and device
CN105702151A (en) * 2016-03-31 2016-06-22 百度在线网络技术(北京)有限公司 Indoor map constructing method and device
CN105865449A (en) * 2016-04-01 2016-08-17 深圳杉川科技有限公司 Laser and vision-based hybrid location method for mobile robot
CN105955258A (en) * 2016-04-01 2016-09-21 沈阳工业大学 Robot global grid map construction method based on Kinect sensor information fusion
CN105865449B (en) * 2016-04-01 2020-05-05 深圳市杉川机器人有限公司 Hybrid positioning method of mobile robot based on laser and vision
CN105678346A (en) * 2016-04-01 2016-06-15 上海博康智能信息技术有限公司 Target matching retrieval method based on spatial clustering
CN105678346B (en) * 2016-04-01 2018-12-04 上海博康智能信息技术有限公司 A kind of object matching search method based on space clustering
CN105955258B (en) * 2016-04-01 2018-10-30 沈阳工业大学 Robot global grating map construction method based on the fusion of Kinect sensor information
CN107436148A (en) * 2016-05-25 2017-12-05 深圳市朗驰欣创科技股份有限公司 A kind of robot navigation method and device based on more maps
CN106403953B (en) * 2016-09-05 2019-07-16 江苏科技大学 A method of for underwater independent navigation and positioning
CN106403953A (en) * 2016-09-05 2017-02-15 江苏科技大学 Method for autonomous navigation and positioning underwater
CN106441151A (en) * 2016-09-30 2017-02-22 中国科学院光电技术研究所 Three-dimensional object European space reconstruction measurement system based on vision and active optics fusion
CN106651821A (en) * 2016-11-25 2017-05-10 北京邮电大学 Topological map fusion method based on secondary moment maintenance propagation algorithm and topological map fusion system thereof
CN107121142A (en) * 2016-12-30 2017-09-01 深圳市杉川机器人有限公司 The topological map creation method and air navigation aid of mobile robot
CN107121142B (en) * 2016-12-30 2019-03-19 深圳市杉川机器人有限公司 The topological map creation method and air navigation aid of mobile robot
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
WO2018145235A1 (en) * 2017-02-07 2018-08-16 驭势(上海)汽车科技有限公司 Distributed storage system for use with high-precision maps and application thereof
CN106931975A (en) * 2017-04-14 2017-07-07 北京航空航天大学 A kind of many strategy paths planning methods of mobile robot based on semantic map
CN106931975B (en) * 2017-04-14 2019-10-22 北京航空航天大学 Tactful paths planning method more than a kind of mobile robot based on semantic map
CN107194332A (en) * 2017-05-09 2017-09-22 重庆大学 A kind of Mental rotation mechanism implementation model for being translated and being rotated based on space-time
CN107179082A (en) * 2017-07-07 2017-09-19 上海阅面网络科技有限公司 Autonomous heuristic approach and air navigation aid based on topological map and measurement map fusion
CN107414624A (en) * 2017-08-28 2017-12-01 东营小宇研磨有限公司 Automate the concrete polished system of terrace robot
CN107607107A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of Slam method and apparatus based on prior information
CN108267121A (en) * 2018-01-24 2018-07-10 锥能机器人(上海)有限公司 The vision navigation method and system of more equipment under a kind of variable scene
WO2019144617A1 (en) * 2018-01-24 2019-08-01 锥能机器人(上海)有限公司 Multi-device visual navigation method and system in variable scene
CN108540938A (en) * 2018-04-16 2018-09-14 绍兴文理学院 The method for repairing loophole in wireless sensor network
WO2020014951A1 (en) * 2018-07-20 2020-01-23 深圳市道通智能航空技术有限公司 Method and apparatus for building local obstacle map, and unmanned aerial vehicle
CN109146976A (en) * 2018-08-23 2019-01-04 百度在线网络技术(北京)有限公司 Method and apparatus for positioning unmanned vehicle
US10611028B1 (en) 2018-11-30 2020-04-07 NextVPU (Shanghai) Co., Ltd. Map building and positioning of robot
CN109737980A (en) * 2018-12-29 2019-05-10 上海岚豹智能科技有限公司 A kind of air navigation aid and its corresponding robot

Also Published As

Publication number Publication date
CN103278170B (en) 2016-01-06

Similar Documents

Publication Publication Date Title
Bowman et al. Probabilistic data association for semantic slam
Gupta et al. Cognitive mapping and planning for visual navigation
US20170053538A1 (en) Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics
Zhou et al. StructSLAM: Visual SLAM with building structure lines
Wang et al. Road network extraction: A neural-dynamic framework based on deep learning and a finite state machine
CN104536445B (en) Mobile navigation method and system
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
Fraundorfer et al. Visual odometry: Part ii: Matching, robustness, optimization, and applications
JP2018522345A (en) Method and apparatus for real-time mapping and localization
Badino et al. Real-time topometric localization
Churchill et al. Practice makes perfect? managing and leveraging visual experiences for lifelong navigation
Kümmerle et al. Large scale graph-based SLAM using aerial images as prior information
Yang et al. Pop-up slam: Semantic monocular plane slam for low-texture environments
CN105263113B (en) A kind of WiFi location fingerprints map constructing method and its system based on crowdsourcing
JP2014222550A (en) Method for vslam optimization
Konolige et al. View-based maps
Kümmerle et al. On measuring the accuracy of SLAM algorithms
Thrun Simultaneous localization and mapping
Pathak et al. Fast registration based on noisy planes with unknown correspondences for 3-D mapping
Bachrach et al. Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments
Konolige et al. FrameSLAM: From bundle adjustment to real-time visual mapping
Triebel et al. Multi-level surface maps for outdoor terrain mapping and loop closing
Çelik et al. Monocular vision SLAM for indoor aerial vehicles
Suveg et al. Reconstruction of 3D building models from aerial images and maps
CN103679674B (en) Method and system for splicing images of unmanned aircrafts in real time

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160106

Termination date: 20190516