CN103278170B - Mobile robot cascaded map creation method based on salient scene point detection

Info

Publication number
CN103278170B (application CN201310183577.6A; also published as CN103278170A)
Authority
CN
China
Prior art keywords
map
node
robot
topological
mobile robot
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310183577.6A
Other languages
Chinese (zh)
Other versions
CN103278170A (en
Inventor
钱堃
马旭东
戴先中
房芳
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
2013-05-16
Publication date
2016-01-06
Application filed by Southeast University
Priority to CN201310183577.6A
Publication of CN103278170A
Application granted
Publication of CN103278170B
Legal status: Expired - Fee Related


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention relates to the technical field of mobile robot navigation. The invention discloses a mobile robot cascaded map creation method based on salient scene point detection, comprising the following steps: 1) according to the image data collected by the mobile robot's sensors, detect online the natural scene landmarks corresponding to salient scenes and generate the topological nodes of the global map; 2) update the mobile robot pose and the local grid sub-map; 3) build the global topological map structure with the salient scene points as topological nodes and, once robot trajectory loop closure is detected, introduce weighted scan matching and a relaxation method to optimize the topological structure and guarantee the global consistency of the topological map. The invention is applicable to autonomous path planning and navigation by all kinds of mobile robots in large-scale indoor environments containing multiple rooms, corridors and similar regions.

Description

Mobile robot cascaded map creation method based on salient scene point detection
Technical field
The present invention relates to the technical field of mobile robot navigation, and in particular to a mobile robot cascaded map creation method based on salient scene point detection.
Background technology
A deep understanding of its workspace environment, and the ability to exploit it, is an important prerequisite for an intelligent robot to achieve autonomous behavior. Only after a mobile robot has obtained an environment map can it perform path planning, trajectory tracking, global localization and so on, and thus reach its destination reliably. At present, the autonomous map creation methods commonly used by mobile robots fall mainly into the following three classes:
1. The robot explores and navigates the environment autonomously, scanning the surroundings with a laser sensor and combining it with sensors such as the odometer to create an occupancy grid map (see "Grisetti G. Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters. IEEE Transactions on Robotics, 2007, 23(1):34-46"). Such maps are easy to maintain and convenient for localization computations, but their metric accuracy depends on the odometer precision and on how well the uncertainty of the range sensor is managed, and the volume of data to store and maintain is large.
2. The robot navigates autonomously in the environment according to certain rules, performs topological abstraction over a period of time using sensors such as sonar and the odometer, and connects the travelled routes into a topological structure, forming a topological map (see "B. Kuipers. Local Metrical and Global Topological Maps in the Hybrid Spatial Semantic Hierarchy. IEEE International Conference on Robotics and Automation, 2004, 4845-4851"). A topological map emphasizes a qualitative description of places and their connectivity; its advantage is a compact and concise representation, but it cannot provide an accurate description of the environment's geometric information.
3. The robot moves under operator control and uses a pan-tilt laser sensor or a stereo camera to obtain depth images, from which a three-dimensional point cloud map of the environment is reconstructed (see "P. Henry. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. The International Journal of Robotics Research, 2012"). A 3D point cloud map can describe obstacle height information, but its construction is computationally very expensive, and the heavy computational burden in large-scale environments makes real-time requirements hard to meet.
Traditional grid-based simultaneous localization and mapping methods can model the environment under conditions where the robot's own pose and the environment features are both uncertain and coupled, and they solve the joint probability estimation problem of online grid map creation. However, as the environment scale, complexity and degree of unknownness grow, these methods suffer from an overly simple environment description and low computational efficiency, which constrains the robot's ability to understand the environment at a deeper level. Starting from the way humans understand natural environment scenes, it is of great significance to establish an efficient environment description model without relying on artificial landmarks, by fusing the robot's on-board vision, range finding and other multi-sensor information, and thereby to solve the problem of simultaneous robot localization and online map creation.
When a large-scale complex environment must be described with mixed information such as landmarks, places, grids and sensor poses, a hybrid environment model should be established, for example a topological/grid hybrid map or a grid/feature hybrid map (see "Z. Lin. Recognition-based Indoor Topological Navigation Using Robust Invariant Features. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, 3975-3980"). Under this kind of environment description, simultaneous localization and mapping (SLAM) methods generally follow a hierarchical approach, also called the sub-map approach. Such methods usually rely on some map partitioning principle, use range sensors to build a number of mutually independent sub-maps, and then use the information shared between sub-maps to stitch and fuse them (see "J. Tardós. Robust mapping and localization in indoor environments using sonar data. The International Journal of Robotics Research, 2002, 21(4):311-330"). The key difficulties of hierarchical SLAM include simultaneous localization and mapping within each grid sub-map, automatic segmentation of the sub-maps, topological association between sub-maps, and global consistency at the global level; these determine the accuracy and reliability of the mapping result.
Simultaneous localization and mapping within a grid sub-map generally adopts one of the following methods: the extended Kalman filter (EKF), the Rao-Blackwellized particle filter (RBPF), or the sparse extended information filter (SEIF). A well-known shortcoming of the EKF is that when the estimate deviates substantially from the true value, the first-order linear approximation introduces large linearization errors into the nonlinear model. The RBPF algorithm is more effective for mapping complex environments, but because each particle carries a complete environment map, the computational complexity of plain grid map creation with an RBPF is very high in larger environments, making it difficult to apply. The sparse extended information filter also has drawbacks: converting the information vector and information matrix back to the state-space mean and covariance at every step is computationally very expensive in practice, so it too is hard to apply in large-scale environments.
Automatic segmentation of sub-maps is one of the difficulties of cascaded map creation. Most existing sub-map methods rely on recognizing artificial landmarks to make the robot segment sub-maps automatically. Other methods use a scene feature library trained in advance by an expert along the route, but cannot achieve automatic acquisition of key landmarks by the robot (see "Schleicher D. Real-time hierarchical stereo Visual SLAM in large-scale environments. Robotics and Autonomous Systems, 2010, 58:991-1002"). Still other methods use geometric change information in the environment to identify natural scene landmarks and achieve automatic sub-map segmentation, for example the crossing points between Voronoi cells, but these produce a large number of landmarks and lose the conciseness of the topological structure.
Natural scene detection based on visual saliency offers a way to segment sub-maps automatically, and has become an important means for a mobile robot to understand an unknown environment. It side-steps the problem of object detection in images and extracts descriptions of salient scenes as the landmarks of the topological map; at the same time it raises a new problem, namely how to select salient scenes so as to balance storage demand against landmark matching. Inspired by the human visual attention mechanism, some researchers have proposed several perceptual models for detecting regions of interest in images. The basic principle is that when an organism observes its surroundings, it selectively concentrates attention on one or a few objects as salient, driven by behavioral goals or local scene cues; such objects can serve as landmarks that effectively distinguish and represent different environments. In recent years a Bayesian Surprise model of unusual changes in scene visual features has appeared, which performs better than visual saliency and Shannon entropy (see "A. Ranganathan. Bayesian Surprise and Landmark Detection. IEEE International Conference on Robotics and Automation, 2009"). The Bayesian Surprise model has found initial success in fields such as video anomaly monitoring and natural landmark detection, showing its ability to localize significant scene changes. However, so far no published work has discussed cascaded environment map models and their creation from the angle of Bayesian Surprise saliency detection.
In addition, how to establish the topological association between sub-maps and guarantee the global consistency of the topological nodes is another difficulty of cascaded map creation. In a grid/topological cascaded map, if the correlation between landmarks and the global level is ignored in order to reduce the computation and storage of the cascaded map, inconsistency is introduced into the global map, and the created topological structure cannot be closed into a loop.
A novelty search of the patent literature shows the following. Xiong Rong et al. applied for Chinese invention patent No. 200610053690.2, entitled "Method of simultaneous localization and mapping of a mobile robot in an unknown environment". It discloses a method in which the data obtained by range sensors are used to build a local line-segment feature map and a grid map, and the current robot pose estimate is used to transform the local map so as to update a global line-segment feature map and a global grid map. The method is based on the extended Kalman filter and is not robust enough for more complex environments. Moreover, it creates two maps at the same time, a geometric line-segment feature map and an occupancy grid map, rather than adopting a hierarchical cascaded map strategy. In a large-scale environment, robot localization errors occur easily and cause obvious stitching errors between the two maps; furthermore, building one complete grid map of a large-scale environment leads to enormous computation and storage requirements that are difficult to satisfy in practice.
Wen Feng et al. applied for Chinese invention patent No. 201110376468.7, entitled "A method for realizing simultaneous localization and mapping of a mobile robot". The method uses dead-reckoning odometry and landmark observation data, and adopts a "strong tracking filter" to improve the accuracy of robot localization and map creation. However, the map created by this method is a feature map rather than a grid or topological map, so it cannot be used directly for robot path planning and navigation. In addition, the method is only aimed at small-scale indoor environments with artificially placed landmarks, which further limits its range of application.
Liang et al. applied for Chinese invention patent No. 200710019784.2, entitled "A mobile robot map creation system and map creation method". That invention discloses a mobile robot map creation system and method based on a wireless sensor network: the data gathered by a deployed wireless sensor network are used to create an environment map containing multiple kinds of information. This is a robot map creation method that relies on external sensors; however, because the measurement accuracy of wireless sensor nodes is low, the robot can only be localized roughly, and the created environment map does not meet the accuracy needed for precise navigation. In addition, the method requires wireless sensor nodes to be deployed throughout the monitored region, and is not applicable to indoor environments that lack this infrastructure.
Summary of the invention
Technical problem: aiming at the two main problems of cascaded map creation, namely how to select salient scenes so as to balance storage demand against landmark matching and thereby achieve automatic sub-map segmentation, and how to establish the topological association between sub-maps and guarantee the global consistency of the topological nodes, the present invention proposes a mobile robot cascaded map creation method based on salient scene points.
Technical solution: a mobile robot cascaded map creation method based on salient scene point detection, the method comprising:
According to the image data collected by the mobile robot's sensors, perform online salient scene point detection and generate global topological nodes;
Update the mobile robot pose and the local grid sub-map: from the laser sensor data, the dead-reckoning sensor, the previous period's pose estimate of the mobile robot and the local grid sub-map built so far, estimate the local pose of the mobile robot for the current period and update the local coordinate frame of the grid sub-map;
Build the global topological map structure with the salient scene points as topological nodes; once robot trajectory loop closure is detected, introduce weighted scan matching and a relaxation method to optimize the topological structure and correct the consistency of the global topological map.
The salient scene point detection method comprises scene image feature extraction and scene saliency computation, with the following steps:
1) guide SURF feature sampling towards salient regions via a visual attention mechanism, rejecting the large amount of information that does not help characterize the place;
2) cluster the image's SURF feature vectors with the K-means clustering algorithm, and use a spatial bag-of-words model that incorporates spatial relations to construct the scene appearance description, so that the scene image is described as a visual word histogram fused with spatial relations;
3) use the visual word histogram to establish a place Multivariate Polya model based on this feature description, and judge whether the place the robot currently occupies is a salient scene point by computing an expected Bayesian Surprise threshold.
The local grid sub-map contains an accurate description of composite features such as the obstacle occupancy grid and the scene place features; the local grid sub-map uses a coordinate frame whose origin is the mobile robot's position at the current period and whose X-axis is the mobile robot's heading at the current period.
Within the local grid sub-map, simultaneous robot localization and mapping uses the Rao-Blackwellized particle filter algorithm; after initial localization is completed, the first global topological node is created, and the grid map created so far is saved as the sub-map associated with that first global topological node.
The global topological map is a graph structure that takes salient scene points as global topological nodes and connects adjacent topological nodes with edges; each node is associated with one local occupancy grid sub-map, and each edge corresponds to the transformation between the reference frames of the adjacent sub-maps.
The global topological map is built as follows:
1) the robot collects vision sensor data and performs scene saliency detection; when a new salient scene point is detected, a new topological node is created in the global topological map, the robot's final pose in the previous node is taken as the origin of the new sub-map, i.e. the base coordinates of the new node, and the features of the current scene point are added to the historical data;
2) the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge between them, so that relative positions can be propagated between nodes;
3) every newly created topological node is matched against the historical data of existing topological nodes to judge whether the mobile robot has returned to a previously visited place; the similarity probability between the current scene place and a historical scene place is computed with a similarity distance matching method on the scene image SURF feature models;
4) after robot trajectory loop closure is detected, the relative displacement between the base coordinates of the sub-maps of adjacent nodes is computed with the weighted scan matching method, following the order in which the topological nodes were connected.
The global consistency correction method regards the inter-node relations obtained from the robot's sensor observations as position constraints between nodes, and uses a relaxation method to solve for the topological structure that best satisfies these constraints, with the following steps:
1) for a given node i, use the position of each of its neighbour nodes j to compute an estimated position and variance of node i;
2) from the estimates of node i obtained from all its neighbours, compute the new coordinates of i by weighted averaging;
3) repeat the above steps until the sum over all nodes of the position errors between two successive iterations falls below a given threshold, or the number of iterations exceeds a given limit, at which point the relaxation algorithm terminates.
Preferably, the sensors comprise a monocular camera and a laser range finder. The data obtained by the laser range finder are the distances and angles, relative to the mobile robot, of the points where the laser scan hits obstacles in a plane at the scanning height of 35 cm; within the range 0° to 180°, one laser beam reading is obtained per 1° of resolution, for a total of 181 beams.
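For illustration, the following sketch (Python/NumPy) converts such a 181-beam scan into Cartesian points in the robot frame; the assumption that the 0° to 180° fan is centred on the robot heading is ours and not stated in the patent:

```python
import numpy as np

def beams_to_points(ranges_m, angle_min_deg=0.0, angle_res_deg=1.0):
    """Convert the 181 range readings (0..180 deg, 1 deg step) into
    (x, y) points in the robot frame, x along the robot heading."""
    ranges = np.asarray(ranges_m, dtype=float)
    angles = np.deg2rad(angle_min_deg + angle_res_deg * np.arange(ranges.size))
    # Assumption: beam 90 points straight ahead along the robot heading.
    angles -= np.pi / 2.0
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    return np.stack([xs, ys], axis=1)

# Example: a synthetic scan of 181 beams all reading 2.5 m
points = beams_to_points(np.full(181, 2.5))
```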
The present invention adopts the above technical solution and has the following beneficial effects:
1. For the fairly large and complex environments in which mobile robots work, a two-layer cascaded map structure consisting of local grid sub-maps and a global topological map is established. Compared with conventional map models, it effectively combines two advantages: the accurate description of environment obstacles given by the grid sub-maps, and the concise statement of the relative position relations given by the topological map. With both the obstacles and the robot's own pose unknown, the two-layer cascaded map consisting of local grid sub-maps and a global topological map is created online, solving the problem that during map creation the uncertainty of the robot pose and the uncertainty of the map are coupled and influence each other.
2. In natural scenes without artificial landmarks, the salient scene points detected autonomously by the robot are used as the boundaries between sub-maps. A human visual attention model is introduced and a natural landmark detection method based on salient-scene Bayesian Surprise is adopted, which achieves low miss and false-detection rates in large-scale complex indoor environments, thereby solving the key problem of automatic sub-map segmentation in cascaded environment map creation.
3. During global topological map creation, the relative coordinate relations between nodes are propagated dynamically, and once robot trajectory loop closure is detected, weighted scan matching and a relaxation method are introduced to optimize the topological structure of the global map and guarantee its global consistency. The final cascaded map created by the robot occupies little storage space and computational resource, so online map creation of fairly large unknown complex environments becomes feasible, and the resulting cascaded map is suitable for path planning and navigation by all kinds of robots.
Brief description of the drawings
Fig. 1 is the flow chart of initial localization and creation of the first global topological node in the embodiment of the present invention;
Fig. 2 is a structural diagram of the hybrid cascaded environment map model of the embodiment, comprising local grid sub-maps and a global topological map;
Fig. 3 is a schematic diagram of the coordinate transformation relation between two global topological nodes of the embodiment;
Fig. 4 is the flow chart of the salient scene point detection method of the embodiment, comprising image acquisition, SURF feature extraction, visual-attention-guided sBoW feature construction, and salient scene judgment based on appearance Surprise;
Figs. 5a, 5b and 5c show the visual attention maps and SURF feature extraction of typical scenes detected by the salient scene point detection method of the embodiment in a real environment containing (a) a corridor, (b) a laboratory and (c) an office; in each sub-figure, the small image on the left shows the intensity of visual attention, and the circles in the small image on the right mark the scene SURF features obtained under visual attention guidance;
Fig. 6 is a schematic diagram of the coordinate transformation relation between the local grid sub-map construction process and the corresponding global topological node in the embodiment;
Fig. 7 is the flow chart of integrated cascaded environment map creation in the embodiment;
Fig. 8 shows the two-layer grid/topological cascaded map created and stitched by the robot of the present invention in an example environment. When revisiting of a landmark is detected in the global topology, global consistency optimization of the topological map is carried out and the global topological nodes in the figure are connected by line segments; the curve in the figure is the robot localization result trajectory, showing the robot's motion route during map creation;
Fig. 9 shows some of the local grid sub-maps created in the example of Fig. 8.
Detailed description of the embodiments
The present invention is further illustrated below with specific embodiments. It should be understood that these embodiments are only for illustrating the present invention and not for limiting its scope; after reading the present invention, modifications of its various equivalent forms by those skilled in the art all fall within the scope defined by the claims appended to this application.
The steps of the method of the embodiment are:
According to the image data collected by the mobile robot's on-board monocular camera, detect online the natural scene landmarks corresponding to salient scenes and generate the topological nodes of the global map;
From the laser range sensor data, the dead-reckoning odometer sensor, the previous period's pose estimate of the mobile robot and the local grid sub-map built so far, estimate the local pose of the mobile robot for the current period and update the local coordinate frame of the grid sub-map;
Build the global topological map structure with the salient scene points as topological nodes; once robot trajectory loop closure is detected, introduce weighted scan matching and a relaxation method to optimize the topological structure and guarantee the global consistency of the topological map.
The specific implementation is as follows.
Fig. 1 shows the initial localization and the creation of the first global topological node in this embodiment. The mobile robot first starts from an arbitrary position in the unknown environment, heading in an arbitrary direction (S1), and initially moves some distance inside the current room while collecting laser sensor data (S2); during this motion, initial localization is performed using the simultaneous localization and mapping method within the local grid sub-map (S3), and the sub-map created at this stage will serve as the sub-map associated with the first node of the global topological map; at this time the robot's range of motion is limited to the room where it started. If the particle set has not yet converged within a certain range, initial localization is judged incomplete (S4) and the loop repeats until it completes; once initial localization is complete, the robot's current pose in the local coordinate frame is obtained, the robot continues to move, entering and leaving rooms, corridors and other regions, while cascaded map creation based on salient scene point detection is carried out (S5).
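The patent uses a Rao-Blackwellized particle filter for localization within the local grid sub-map but gives no code; the sketch below (Python/NumPy) shows one simplified filter step under stated assumptions: a fixed Gaussian motion-noise model and a caller-supplied scan-likelihood function `score_scan_against_grid`, which is hypothetical.

```python
import numpy as np

def rbpf_step(particles, weights, odom_delta, scan, grids,
              score_scan_against_grid, motion_noise=(0.02, 0.02, 0.01)):
    """One simplified RBPF update: sample motion, weight by scan likelihood, resample.
    particles: (N, 3) array of (x, y, theta) in the sub-map frame;
    grids: one occupancy grid per particle."""
    n = len(particles)
    dx, dy, dth = odom_delta
    noise = np.random.randn(n, 3) * motion_noise
    # Propagate each particle through the odometry motion model.
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + noise[:, 0]
    particles[:, 1] += s * dx + c * dy + noise[:, 1]
    particles[:, 2] += dth + noise[:, 2]
    # Weight each particle by how well the scan matches its own grid map.
    like = np.array([score_scan_against_grid(scan, g, p)
                     for g, p in zip(grids, particles)])
    weights = weights * like
    weights /= weights.sum()
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, grids = particles[idx], [grids[i] for i in idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights, grids
```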
In the implementation example, the data obtained by the robot's on-board laser range sensor are the distances and angles, relative to the mobile robot, of the points where the laser scan hits obstacles in a plane at the scanning height of 35 cm; within the range 0° to 180°, one laser beam reading is obtained per 1° of resolution, for a total of 181 beams. The robot's on-board vision sensor is a PTZ CCD camera that collects RGB images at a resolution of 320*240.
Preferably, in this embodiment the cascaded environment map structure is suited to describing indoor floor environments of relatively large area, such as hospitals, offices and laboratories, containing corridors, multiple rooms and other spatial regions; it has the two-layer structure of local grid sub-maps and a global topological map, as shown in Fig. 2. Specifically:
1) A local grid sub-map contains an accurate description of composite features such as the obstacle occupancy grid and the scene place features, and is suitable for accurate localization of the mobile robot with its on-board range sensor within the local sub-map frame. The local occupancy grid sub-map uses a coordinate frame whose origin is the mobile robot's position at the current period and whose X-axis is the mobile robot's heading at the current period;
A sub-map M contains the robot's current pose x_R^B, the probabilistic cells P, and the position coordinates x_{F_1}^B, ..., x_{F_n}^B of n place landmarks. All of these variables are established relative to the local base coordinate frame B of the sub-map, whose base coordinates can be taken as the robot's pose at the moment the sub-map was started. The coordinate transformation between adjacent sub-maps, for example, can be written x_{B_2}^{B_1}, representing the relative coordinate transformation between node 2 and node 1. Each sub-map is created by its own process, using only the odometer and other sensor readings of that process, and the sub-map frame is likewise relative to the robot pose at the sub-map's starting time. The sub-maps are therefore maintained relatively independently of each other and are not yet globally corrected. The key places are written FP = {fp_1, ..., fp_m}, and a natural scene is characterized by δ, the set of SURF feature descriptors of the scene.
M = (x_R^B, P, x_{F_1}^B, \cdots, x_{F_n}^B)^T    (1)
2) The global topological map is a graph structure that takes salient scene points as global topological nodes and connects adjacent topological nodes with edges. Each node is associated with one local occupancy grid sub-map, and each edge corresponds to the transformation between the reference frames of the adjacent sub-maps.
Each topological node i is associated with a sub-map M_i, and its global coordinates are written [x_i, y_i, θ_i]. When a node is created, this value is initialized to the current odometer reading. The position variance of node i is written v_i and is initialized to the previous node's variance plus 2% of the distance travelled since the previous node. As the robot advances, whenever a node is created or revisited, an edge is added between this node and the previous node. The relative metric relation between topological node i and node j is written (d_ij, θ_ij, φ_ij), as shown in Fig. 3; it has variance v_ij, taken as 0.02 d_ij.
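The patent describes these structures only in prose; the following sketch (Python dataclasses, with field names chosen here for illustration and not taken from the patent) shows one possible in-memory layout of the two-layer map:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SubMap:
    """Local occupancy grid sub-map M = (x_R^B, P, x_F1^B, ..., x_Fn^B)^T."""
    robot_pose: np.ndarray            # x_R^B = (x, y, theta) in the sub-map base frame B
    occupancy: np.ndarray             # probabilistic cells P (H x W grid of occupancy probabilities)
    landmarks: list = field(default_factory=list)   # x_Fk^B positions of place landmarks

@dataclass
class TopoNode:
    """Global topological node, associated with one sub-map."""
    node_id: int
    submap: SubMap
    global_pose: np.ndarray           # [x_i, y_i, theta_i], initialized from the odometer
    variance: float                   # v_i: previous node's variance + 2% of travelled distance
    surf_descriptors: np.ndarray      # scene SURF descriptors used for revisit matching

@dataclass
class TopoEdge:
    """Edge between adjacent nodes, carrying the base-frame transformation."""
    i: int
    j: int
    rel: tuple                        # (d_ij, theta_ij, phi_ij) relative metric relation
    variance: float                   # v_ij = 0.02 * d_ij
```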
Fig. 4 is the flow chart of the salient scene point detection method of this embodiment. Using the monocular vision sensor mounted on the mobile robot, salient scene points of the environment are detected while the robot moves. The method comprises scene image feature extraction and scene saliency computation, with the following steps:
1) A visual attention mechanism imitating human vision guides the SURF feature sampling towards salient regions, rejecting the large amount of information that does not help characterize the place and improving image processing efficiency;
For an input scene image, monochrome Gaussian pyramids of 5 scales (s_0, ..., s_4), CIELAB color-space Gaussian pyramids of 5 scales, and difference-of-Gaussian pyramids of 4 scales (DoG_1, ..., DoG_4) are built by sampling. Feature maps are then extracted from each pyramid level for the three channels intensity (I), color (C) and orientation (O); after a normalization operation, the channel maps are accumulated with equal weights to obtain the visual attention map.
Here on-center and off-center are operators simulating the centripetal and centrifugal behaviour of visual cells in the human retina. SURF feature sampling is carried out only in those pixel regions of the salient area whose saliency probability exceeds a certain threshold, so that feature extraction is restricted to the salient regions and noise points without symbolic meaning for the scene's key features, usually from backgrounds such as walls and the floor, are not introduced. A simplified sketch of this attention-guided sampling is given below.
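A minimal sketch of attention-guided SURF sampling (Python/OpenCV). It uses OpenCV's spectral-residual saliency as a stand-in for the Itti-style intensity/color/orientation attention map described above; the SURF and saliency modules require an opencv-contrib build, and the threshold is illustrative:

```python
import cv2
import numpy as np

def salient_surf_features(bgr_image, saliency_thresh=0.5, hessian=400):
    """Extract SURF features only inside visually salient regions."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(gray)
    if not ok:
        return [], None
    # Keep only pixels whose saliency exceeds the threshold.
    mask = (sal_map >= saliency_thresh).astype(np.uint8) * 255
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    keypoints, descriptors = surf.detectAndCompute(gray, mask)
    return keypoints, descriptors
```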
2) The SURF feature vector set of the image is clustered with the K-means clustering algorithm, and a spatial bag-of-words (sBoW) model that incorporates spatial relations is used to construct the scene appearance description, so that the scene image is described as a visual word histogram fused with spatial relations;
The SURF feature vectors of the images are clustered with K-means to obtain k cluster centres, which serve as the trunk visual words. Each local image patch is then approximated by a trunk visual word; the quantization uses the nearest-neighbour classification of equation (3), which quantizes the SURF feature vector s_i into the visual word w_i.
w_i = \arg\min_j \| s_i - v_j \|^2    (3)
Because the number of cluster centres is large, a KD-tree can be built over the cluster centres to accelerate the nearest-neighbour search and hence the quantization. After quantization, the number of occurrences of each visual word in each image is counted and a visual word histogram is generated, reducing the complexity of the image description.
A spatial relation description of the visual words is then introduced. Let the image's visual word histogram vector be X = (n_1, n_2, ..., n_W). The spatial distribution of each visual word is described by two features, its distance and its angle relative to the geometric centre O = (x_0, y_0) of the feature set, and a histogram is built for each.
A) Distance component: compute the Euclidean distances (L_1, L_2, ..., L_W) of each feature point to the geometric centre, take the median as the unit length L, and divide the other lengths, as ratios to L, into four intervals within 0 ~ L_max;
B) Angle component: form the angle θ between each feature point and its nearest neighbouring point in the counter-clockwise direction. An arbitrary feature point is chosen as the starting point F_0, and the other points are numbered F_1, F_2, ..., F_{W-1} counter-clockwise from the inside outwards. The angles θ_i, i = 1, 2, ..., W-1, are computed and quantized into five intervals within 0° ~ θ_max.
This yields the scene visual appearance feature vector of equation (4), where the leading W-dimensional vector {n_w} is the frequency statistic over the words of the visual vocabulary, satisfying \sum_w n_w = n, and the trailing Q-dimensional vector {p_q} is the spatial-relation frequency statistic of the visual words.
a = [n_1, n_2, \ldots, n_W, p_1, p_2, \ldots, p_Q]    (4)
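A minimal sketch of the visual-word quantization and sBoW vector construction (Python with scikit-learn and SciPy). The four distance intervals and five angle intervals follow the text above; other details such as the angle range and the handling of L_max are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import cKDTree

def build_vocabulary(all_surf_descriptors, k=200):
    """Cluster SURF descriptors into k trunk visual words (centres used in eq. (3))."""
    km = KMeans(n_clusters=k, n_init=10).fit(all_surf_descriptors)
    return km.cluster_centers_

def sbow_vector(descriptors, keypoints_xy, vocab):
    """Quantize one image's SURF descriptors and append the spatial histograms."""
    tree = cKDTree(vocab)                       # KD-tree over cluster centres
    _, words = tree.query(descriptors)          # nearest visual word per descriptor (eq. (3))
    n_hist = np.bincount(words, minlength=len(vocab)).astype(float)

    pts = np.asarray(keypoints_xy, dtype=float)
    center = pts.mean(axis=0)                   # geometric centre O of the feature set
    d = np.linalg.norm(pts - center, axis=1)
    L = max(np.median(d), 1e-9)                 # median distance as unit length
    dist_hist, _ = np.histogram(d / L, bins=4, range=(0.0, d.max() / L))

    # Angle of each point towards its nearest neighbouring point.
    nn_tree = cKDTree(pts)
    _, nn_idx = nn_tree.query(pts, k=2)
    vec = pts[nn_idx[:, 1]] - pts
    ang = np.mod(np.arctan2(vec[:, 1], vec[:, 0]), 2 * np.pi)
    ang_hist, _ = np.histogram(ang, bins=5, range=(0.0, 2 * np.pi))

    return np.concatenate([n_hist, dist_hist, ang_hist])   # a = [n_1..n_W, p_1..p_Q]
```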
3) foundation of vision word histogram is utilized based on the place MultivariatePolya(MP of this feature interpretation) model, and by calculation expectation BayesianSurprise threshold value, judge whether the place residing for current robot is remarkable scene point.
Let the current place scene model be M, with prior probability distribution p(M). Bayesian Surprise is defined as the KL distance between the posterior p(M|z) of the model M, once the observation z has been obtained online, and the prior distribution, that is:
S(z) = \int_M p(M) \log \frac{p(M)}{p(M \mid z)} \, dM    (5)
If the Bayesian Surprise of the robot's current scene observation exceeds a certain Surprise threshold, the current place is considered to have sufficiently salient features. Here the Surprise threshold is approximated with a Monte Carlo method and is called the expected Surprise: N observations z_{1:N} are sampled from the current place model p(M), and the expected Surprise is computed as the average of the Surprise values of these samples:
E(z) = \frac{1}{N} \sum_{i=1}^{N} S(z_i)    (6)
Given the observations A = {a} of a landmark, where the vector a is as in equation (4), let n_w be the number of occurrences of the w-th visual word and θ_w its occurrence probability; then the vector a follows a multinomial distribution with parameters n and θ, where θ = [θ_1, θ_2, ..., θ_W]. For one specific value θ_s = [θ_{s1}, ..., θ_{sW}], it obeys the multinomial distribution:
p(a \mid \theta_s) = \frac{n!}{n_1! \, n_2! \cdots n_W!} \, \theta_{s1}^{n_1} \theta_{s2}^{n_2} \cdots \theta_{sW}^{n_W}    (7)
Introduce the parameter α_s = [α_{s1}, α_{s2}, ..., α_{sW}], interpreted as follows: for θ_s, after α_{sw} − 1 prior observations of the w-th visual word, the probability of that word occurring is θ_{sw}. Then θ_s obeys a Dirichlet distribution conditioned on α_s, that is:
p(\theta_s \mid \alpha_s) = \frac{\Gamma\left(\sum_{w=1}^{W} \alpha_{sw}\right)}{\prod_{w=1}^{W} \Gamma(\alpha_{sw})} \, \theta_{s1}^{\alpha_{s1}-1} \theta_{s2}^{\alpha_{s2}-1} \cdots \theta_{sW}^{\alpha_{sW}-1}    (8)
where Γ(·) is the Gamma function.
Writing α = \sum_w α_w, the place scene model p(α|A) can be computed with Bayes' rule as:
p(\alpha \mid A) \propto p(A \mid \alpha) \, p(\alpha)    (9)
Expanding the likelihood p(A|α) of the observed histograms, the above is rewritten as:
p(\alpha \mid A) \propto p(\alpha) \int_\theta p(\theta \mid \alpha) \prod_{a \in A} p(a \mid \theta) \, d\theta    (10)
Equation (10) is called the Multivariate Polya (MP) model of the place scene. Substituting equations (7) and (8) into the integral of equation (10) and taking the prior p(α) to be uniform, the place scene model is computed as:
p(\alpha \mid A) \propto \frac{n!}{\prod_{w=1}^{W} n_w!} \, \frac{\Gamma(\alpha)}{\Gamma(n+\alpha)} \prod_{w=1}^{W} \frac{\Gamma(n_w + \alpha_w)}{\Gamma(\alpha_w)}    (11)
The maximum likelihood estimate of the Dirichlet distribution parameters can be obtained from training samples. Therefore, for some typical scenes in the environment (corridor, office, laboratory, etc.), a certain number of sample images are given and iterative gradient descent optimization is used to learn α, yielding the prior MP model of each scene. To compute the expected Surprise of equation (6), this MP model is used to first sample θ_s according to equation (8) and then sample a according to equation (7); the result is an observation sample z, as shown in the dashed box of Fig. 4. Fig. 5 shows, for the salient scene point detection method of the invention in a real environment containing a corridor (a), a laboratory (b) and an office (c), the visual attention maps and SURF feature extraction of typical scenes; in each sub-figure the small image on the left shows the intensity of visual attention, and the circles in the small image on the right are the scene SURF features obtained under visual attention guidance. It can be seen that the method restricts the feature extraction range of a scene to the salient regions, avoiding noise points without symbolic meaning for the scene's key features (usually from backgrounds such as walls and the floor).
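A minimal numerical sketch of the Surprise and expected-Surprise computation (Python/SciPy). It assumes the place model is summarized by its Dirichlet parameters α, so that the Surprise of equation (5) reduces to a closed-form KL distance between the prior and posterior Dirichlet distributions; this is one possible reading, not the authors' code:

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(alpha, beta):
    """KL( Dir(alpha) || Dir(beta) ), used as the Surprise S(z) of eq. (5)
    when the place model is a Dirichlet over visual word probabilities."""
    a0, b0 = alpha.sum(), beta.sum()
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + np.dot(alpha - beta, digamma(alpha) - digamma(a0)))

def surprise(alpha, counts):
    """Surprise of observing word counts `counts` under the prior Dir(alpha):
    by conjugacy the posterior is Dir(alpha + counts)."""
    return kl_dirichlet(alpha, alpha + counts)

def expected_surprise(alpha, n_words_per_image, n_samples=200, rng=None):
    """Monte Carlo estimate of eq. (6): sample theta ~ Dir(alpha), then a ~ Mult(n, theta)."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_samples):
        theta = rng.dirichlet(alpha)                       # sampling per eq. (8)
        a = rng.multinomial(n_words_per_image, theta)      # sampling per eq. (7)
        total += surprise(alpha, a)
    return total / n_samples

# Usage: a new observation `obs` is judged salient if
#   surprise(alpha, obs) > expected_surprise(alpha, obs.sum())
```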
From the robot's on-board range and odometer sensor data, the currently observed cascaded environment map is built; this comprises the construction of the local grid sub-maps and of the global topological map. Within a local grid sub-map, simultaneous robot localization and mapping uses the Rao-Blackwellized particle filter algorithm; after initial localization is completed, the first global topological node is created and the grid map created so far is saved as its corresponding sub-map. Fig. 7 shows the creation flow of the integrated cascaded environment map once initial localization is done and the first global topological node has been created; the overall cascaded map creation steps are:
1) The robot keeps moving over long distances in the environment, passing through the rooms and corridor areas (step S71); laser sensor data are collected (step S72), and from the dead-reckoning odometer sensor, the previous period's pose estimate of the mobile robot and the local grid sub-map built so far, the Rao-Blackwellized particle filter algorithm is again used to estimate the local pose of the mobile robot for the current period and to update the local coordinate frame of the grid sub-map (step S73);
2) The robot collects vision sensor data (step S74) and performs scene saliency detection (step S75); when a new salient scene point is detected, a new topological node is created in the global topological map (step S76), the robot's final pose in the previous node is taken as the origin of the new sub-map, i.e. the base coordinates of the new node, and the features of the current scene point are added to the historical data;
3) The Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge between them, so that relative positions can be propagated between nodes (step S77);
The base coordinates of the first node are [0, 0, 0] and serve as the reference origin of the global coordinate frame. When a new node fp_{i+1} is created, the robot's pose within the previous node fp_i is taken as the origin of the new node fp_{i+1}, i.e. x_{fp_{i+1}}^{fp_i} = x_{robot}^{fp_i}. To propagate relative positions between nodes, the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance P_{robot}^G online. Let x_{robot}^G denote the robot's pose in the global coordinate frame; since the base origin of the first node is usually chosen as the global origin, it can also be written x_{robot}^{fp_1}. Suppose that when the place FP_i (i.e. fp_i) corresponding to a new node is created, the robot's local pose relative to fp_i is x_{robot}^{fp_i}, with variances P_{fp_i}^G and P_{robot}^{fp_i} respectively. The Coupling Summation formula uses the relative coordinate relations of two adjacent nodes to compute the relative coordinate relation between non-adjacent nodes. First, x_{fp_i}^G and x_{robot}^{fp_i} are used to compute
x_{robot}^{G} = f(x_{fp_i}^{G}, x_{robot}^{fp_i}) = \begin{bmatrix} x_{fp_i}^{G} + x_{robot}^{fp_i}\cos\theta_{fp_i}^{G} - y_{robot}^{fp_i}\sin\theta_{fp_i}^{G} \\ y_{fp_i}^{G} + x_{robot}^{fp_i}\sin\theta_{fp_i}^{G} + y_{robot}^{fp_i}\cos\theta_{fp_i}^{G} \\ \theta_{fp_i}^{G} + \theta_{robot}^{fp_i} \end{bmatrix}    (12)
P_{robot}^{G} = \nabla f_{x_{fp_i}^{G}} \, P_{fp_i}^{G} \, \nabla f_{x_{fp_i}^{G}}^{T} + \nabla f_{x_{robot}^{fp_i}} \, P_{robot}^{fp_i} \, \nabla f_{x_{robot}^{fp_i}}^{T}    (13)
where:
\nabla f_{x_{fp_i}^{G}} = \frac{\partial x_{robot}^{G}}{\partial x_{fp_i}^{G}} = \begin{bmatrix} 1 & 0 & -x_{robot}^{fp_i}\sin\theta_{fp_i}^{G} - y_{robot}^{fp_i}\cos\theta_{fp_i}^{G} \\ 0 & 1 & x_{robot}^{fp_i}\cos\theta_{fp_i}^{G} - y_{robot}^{fp_i}\sin\theta_{fp_i}^{G} \\ 0 & 0 & 1 \end{bmatrix}    (14)
\nabla f_{x_{robot}^{fp_i}} = \frac{\partial x_{robot}^{G}}{\partial x_{robot}^{fp_i}} = \begin{bmatrix} \cos\theta_{fp_i}^{G} & -\sin\theta_{fp_i}^{G} & 0 \\ \sin\theta_{fp_i}^{G} & \cos\theta_{fp_i}^{G} & 0 \\ 0 & 0 & 1 \end{bmatrix}    (15)
Here x_{robot}^G and x_{fp_i}^G are the absolute global coordinates of the robot and of fp_i respectively. Then, to obtain the global variance P_{fp_i}^G of the current node fp_i, the Coupling Summation formula is applied in the same way, using P_{fp_{i-1}}^G and P_{fp_i}^{fp_{i-1}}:
P_{fp_i}^{G} = \frac{\partial x_{fp_i}^{G}}{\partial x_{fp_{i-1}}^{G}} P_{fp_{i-1}}^{G} \left(\frac{\partial x_{fp_i}^{G}}{\partial x_{fp_{i-1}}^{G}}\right)^{T} + \frac{\partial x_{fp_i}^{G}}{\partial x_{fp_i}^{fp_{i-1}}} P_{fp_i}^{fp_{i-1}} \left(\frac{\partial x_{fp_i}^{G}}{\partial x_{fp_i}^{fp_{i-1}}}\right)^{T}    (16)
Equation (16) is applied recursively back to the first node: computing P_{fp_i}^G requires P_{fp_{i-1}}^G and P_{fp_i}^{fp_{i-1}}, and since the base coordinates of the first node are known to be [0, 0, 0], the recursion yields P_{fp_i}^G for every node.
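A minimal sketch of the pose compounding and covariance propagation of equations (12) to (16) (Python/NumPy); treating each relative-pose covariance as expressed in the parent node's frame is an assumption:

```python
import numpy as np

def compound_pose(x_parent_G, x_child_rel):
    """Eq. (12): compose the parent's global pose with the child's relative pose."""
    xp, yp, tp = x_parent_G
    xr, yr, tr = x_child_rel
    return np.array([xp + xr * np.cos(tp) - yr * np.sin(tp),
                     yp + xr * np.sin(tp) + yr * np.cos(tp),
                     tp + tr])

def compound_cov(x_parent_G, x_child_rel, P_parent_G, P_child_rel):
    """Eqs. (13)-(16): propagate covariance through the compounding Jacobians."""
    _, _, tp = x_parent_G
    xr, yr, _ = x_child_rel
    J1 = np.array([[1.0, 0.0, -xr * np.sin(tp) - yr * np.cos(tp)],
                   [0.0, 1.0,  xr * np.cos(tp) - yr * np.sin(tp)],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[np.cos(tp), -np.sin(tp), 0.0],
                   [np.sin(tp),  np.cos(tp), 0.0],
                   [0.0,         0.0,        1.0]])
    return J1 @ P_parent_G @ J1.T + J2 @ P_child_rel @ J2.T

# Recursion of eq. (16): start from the first node at [0, 0, 0] with zero covariance,
# then chain each node's relative transform and covariance in creation order.
x_G, P_G = np.zeros(3), np.zeros((3, 3))
relative_chain = []   # fill with (x_{fp_i}^{fp_{i-1}}, P_{fp_i}^{fp_{i-1}}) pairs
for rel_pose, rel_cov in relative_chain:
    x_G, P_G = compound_pose(x_G, rel_pose), compound_cov(x_G, rel_pose, P_G, rel_cov)
```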
4) Every newly created topological node is matched against the historical data of existing topological nodes to judge whether the mobile robot has returned to a previously visited place (step S78); the similarity probability between the current scene place and a historical scene place is computed with a similarity distance matching method on the scene image SURF feature models;
The multi-view SURF feature matching strategy considers the SURF feature vectors of two scenes fp_A and fp_B, whose descriptor sets are δ_A = {δ_A^1, ..., δ_A^{m_A}} and δ_B = {δ_B^1, ..., δ_B^{m_B}}, with m_A and m_B SURF features detected in the two scenes respectively. The matching procedure first computes the Euclidean distances between δ_A and δ_B to obtain dist(fp_A, fp_B), and then applies the RANSAC method to remove outliers and retain the matches that agree with the model, from which a likelihood (similarity) measure is computed.
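A minimal sketch of this revisit check (Python/OpenCV). It stands in a Lowe ratio test plus homography-based RANSAC for the patent's similarity distance matching, and the inlier-ratio score is an illustrative choice rather than the patent's exact measure:

```python
import cv2
import numpy as np

def scene_similarity(desc_a, kps_a, desc_b, kps_b, ratio=0.75):
    """Match SURF descriptors of scenes fp_A and fp_B, reject outliers with RANSAC,
    and return an inlier-ratio similarity score in [0, 1]."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                        # Lowe ratio test
    if len(good) < 4:
        return 0.0
    src = np.float32([kps_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return 0.0
    return float(mask.sum()) / len(good)                # fraction of RANSAC inliers
```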
5) After robot trajectory loop closure is detected, the relative displacement between the base coordinates of the sub-maps of adjacent nodes is computed with the weighted scan matching method, following the order in which the topological nodes were connected (step S79).
Before loop closure, a feature F has different local coordinates in the base frame B_j of sub-map M_j and in the base frame B_i of sub-map M_i, and the two sub-maps are independent. After loop closure, sub-map M_i keeps its base frame B_i but additionally introduces an estimate of B_j relative to B_i, and sub-map M_j likewise keeps an estimate of B_j relative to B_{j-1}; this establishes a new connection between node j-1 and node i. The weighted scan matching (WSM) method is applied to the grid sub-maps of adjacent nodes i and i+1 to compute d_{i,i+1}, θ_{i,i+1} and φ_{i,i+1}; WSM offers high computation speed and accuracy for laser scan data. After the transformations between the base coordinates of adjacent sub-maps have been computed, the global map consistency correction is carried out.
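Since the published weighted scan matching algorithm is not reproduced in the patent, the sketch below uses a plain point-to-point ICP step as a simplified stand-in to estimate the relative displacement (d, θ, φ) between two scans (Python with NumPy and SciPy):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src_pts, dst_pts, iters=30):
    """Estimate the rigid transform aligning src_pts to dst_pts with point-to-point
    ICP (a simplified stand-in for weighted scan matching), returned as (d, theta, phi)."""
    src = np.asarray(src_pts, float).copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    tree = cKDTree(dst_pts)
    for _ in range(iters):
        _, idx = tree.query(src)                      # nearest-neighbour correspondences
        dst = np.asarray(dst_pts)[idx]
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                      # guard against reflections
            Vt[1, :] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t       # accumulate the total transform
    theta = np.arctan2(R_tot[1, 0], R_tot[0, 0])      # relative orientation
    d = np.linalg.norm(t_tot)                         # relative distance
    phi = np.arctan2(t_tot[1], t_tot[0])              # bearing of the displacement
    return d, theta, phi
```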
Because scan matching has computational errors, the global consistency of the upper-layer topological nodes is hard to guarantee once all sub-maps have been created: a global map recovered only from the local relative position relations (d_ij, θ_ij, φ_ij) between nodes contains fairly large errors. Therefore, after detecting that the robot has revisited a node, global consistency correction of the topological map is carried out at the upper layer (step S710).
The global consistency correction is implemented as follows: the inter-node relations obtained from the robot's sensor observations are regarded as position constraints between nodes, and a relaxation method is used to solve for the topological structure that best satisfies these constraints. The edges between nodes can be regarded as "springs", and the goal of the relaxation algorithm is to minimize the total energy of all "springs" in the map. For each node i, every iteration of the relaxation algorithm comprises the following steps:
1) For a given node i, use the position of each of its neighbour nodes j to compute an estimated position and variance of node i:
(x_i')_j = x_j + d_{ji} \cos(\theta_j + \phi_{ji})    (17)
(y_i')_j = y_j + d_{ji} \sin(\theta_j + \phi_{ji})    (18)
with variance (v_i')_j = v_j + v_{ji}    (19)
2) From the estimates of node i obtained from all its neighbours, compute the new coordinates of i by weighted averaging:
v_i = n_i \Big/ \sum_j \frac{1}{(v_i')_j}    (20)
where n_i is the number of neighbour nodes of node i,
x_i = \frac{1}{n_i} \sum_j (x_i')_j \, \frac{v_i}{(v_i')_j}    (21)
y_i = \frac{1}{n_i} \sum_j (y_i')_j \, \frac{v_i}{(v_i')_j}    (22)
\theta_i = \arctan\left( \sum_j \frac{\sin(\theta_i')_j}{(v_i')_j} \Big/ \sum_j \frac{\cos(\theta_i')_j}{(v_i')_j} \right)    (23)
At the start of the algorithm, the position of the first node is set to [0, 0, 0]. The two steps above are iterated continuously until the sum over all nodes of the position errors between two successive iterations falls below a given threshold, or the number of iterations exceeds a given limit, at which point the relaxation algorithm terminates.
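A minimal sketch of this relaxation loop over equations (17) to (23) (Python/NumPy). The neighbour bookkeeping and convergence test are illustrative, and the heading estimate (θ_i')_j is taken as θ_j plus a stored relative heading, an assumption since the patent does not reproduce that formula:

```python
import numpy as np

def relax(nodes, edges, max_iters=100, tol=1e-4):
    """nodes: dict id -> [x, y, theta, v]; edges: dict (j, i) -> (d_ji, phi_ji, dth_ji, v_ji).
    Iteratively re-estimates every node from its neighbours (eqs. (17)-(23))."""
    for _ in range(max_iters):
        total_change = 0.0
        for i in nodes:
            if i == 0:
                continue                          # first node fixed at [0, 0, 0]
            ests = []
            for (j, k), (d, phi, dth, v_ji) in edges.items():
                if k != i or j not in nodes:
                    continue
                xj, yj, thj, vj = nodes[j]
                ests.append((xj + d * np.cos(thj + phi),     # eq. (17)
                             yj + d * np.sin(thj + phi),     # eq. (18)
                             thj + dth,                      # heading estimate (assumption)
                             vj + v_ji))                     # eq. (19)
            if not ests:
                continue
            xs, ys, ths, vs = map(np.array, zip(*ests))
            n_i = len(ests)
            v_i = n_i / np.sum(1.0 / vs)                     # eq. (20)
            x_i = np.sum(xs * v_i / vs) / n_i                # eq. (21)
            y_i = np.sum(ys * v_i / vs) / n_i                # eq. (22)
            th_i = np.arctan2(np.sum(np.sin(ths) / vs),
                              np.sum(np.cos(ths) / vs))      # eq. (23)
            total_change += abs(x_i - nodes[i][0]) + abs(y_i - nodes[i][1])
            nodes[i] = [x_i, y_i, th_i, v_i]
        if total_change < tol:
            break
    return nodes
```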

Claims (5)

1. A mobile robot cascaded map creation method based on salient scene point detection, characterized in that the method comprises:
According to the image data collected by the mobile robot's vision sensor, performing online salient scene point detection and generating global topological nodes; the salient scene point detection method comprises scene image feature extraction and scene saliency computation, with the following steps:
1) guiding SURF feature sampling towards salient regions via a visual attention mechanism, rejecting the large amount of information that does not help characterize the place;
2) clustering the image's SURF feature vectors with the K-means clustering algorithm, and using a spatial bag-of-words model that incorporates spatial relations to construct the scene appearance description, so that the scene image is described as a visual word histogram fused with spatial relations;
3) using the visual word histogram to establish a place Multivariate Polya model based on this feature description, and judging whether the place the robot currently occupies is a salient scene point by computing an expected Bayesian Surprise threshold;
Updating the mobile robot pose and the local grid sub-map: from the laser sensor data, the dead-reckoning sensor, the previous period's pose estimate of the mobile robot and the local grid sub-map built so far, estimating the local pose of the mobile robot for the current period and updating the local coordinate frame of the grid sub-map;
Building the global topological map structure with the salient scene points as topological nodes; once robot trajectory loop closure is detected, introducing weighted scan matching and a relaxation method to optimize the topological structure and correct the consistency of the global topological map; the global topological map is a graph structure that takes salient scene points as global topological nodes and connects adjacent topological nodes with edges; each node is associated with one local occupancy grid sub-map, each edge corresponds to the transformation between the reference frames of the adjacent sub-maps, and the map is built as follows:
1) the robot collects vision sensor data and performs scene saliency detection; when a new salient scene point is detected, a new topological node is created in the global topological map, the robot's final pose in the previous node is taken as the origin of the new sub-map, i.e. the base coordinates of the new node, and the features of the current scene point are added to the historical data;
2) the Coupling Summation formula is used iteratively to estimate and update the robot's global position variance online, and the base-coordinate transformation between adjacent nodes is attached to the edge between them, so that relative positions can be propagated between nodes;
3) every newly created topological node is matched against the historical data of existing topological nodes to judge whether the mobile robot has returned to a previously visited place; the similarity probability between the current scene place and a historical scene place is computed with a similarity distance matching method on the scene image SURF feature models;
4) after robot trajectory loop closure is detected, the relative displacement between the base coordinates of the sub-maps of adjacent nodes is computed with the weighted scan matching method, following the order in which the topological nodes were connected.
2. The mobile robot cascaded map creation method based on salient scene point detection according to claim 1, characterized in that: the local grid sub-map contains an accurate description of the two composite features, namely the obstacle occupancy grid and the scene place features; the local grid sub-map uses a coordinate frame whose origin is the mobile robot's position at the current period and whose X-axis is the mobile robot's heading at the current period;
within the local grid sub-map, simultaneous robot localization and mapping uses the Rao-Blackwellized particle filter algorithm; after initial localization is completed, the first global topological node is created, and the grid map created so far is saved as the sub-map associated with the first global topological node.
3. The mobile robot cascaded map creation method based on salient scene point detection according to claim 1, characterized in that the global consistency correction method regards the inter-node relations obtained from the robot's sensor observations as position constraints between nodes, and uses a relaxation method to solve for the topological structure that best satisfies these constraints, with the following steps:
1) for a given node i, using the position of each of its neighbour nodes j to compute an estimated position and variance of node i;
2) from the estimates of node i obtained from all its neighbours, computing the new coordinates of i by weighted averaging;
3) repeating the above steps until the sum over all nodes of the position errors between two successive iterations falls below a given threshold, or the number of iterations exceeds a given limit, at which point the relaxation algorithm terminates.
4. The mobile robot cascaded map creation method based on salient scene point detection according to claim 1, characterized in that the vision sensor is a monocular camera and the laser sensor is a laser range finder.
5. The mobile robot cascaded map creation method based on salient scene point detection according to claim 4, characterized in that the data obtained by the laser range finder are the distances and angles, relative to the mobile robot, of the points where the laser scan hits obstacles in a plane at the scanning height of 35 cm; within the range 0°-180°, one laser beam reading is obtained per 1° of resolution, for a total of 181 beams.
CN201310183577.6A 2013-05-16 2013-05-16 Mobile robot cascaded map creation method based on salient scene point detection Expired - Fee Related CN103278170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310183577.6A CN103278170B (en) 2013-05-16 2013-05-16 Mobile robot cascaded map creation method based on salient scene point detection


Publications (2)

Publication Number Publication Date
CN103278170A CN103278170A (en) 2013-09-04
CN103278170B (en) 2016-01-06

Family

ID=49060758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310183577.6A Expired - Fee Related CN103278170B (en) 2013-05-16 2013-05-16 Mobile robot cascaded map creation method based on salient scene point detection

Country Status (1)

Country Link
CN (1) CN103278170B (en)

CN107917712B (en) * 2017-11-16 2020-07-28 苏州艾吉威机器人有限公司 Synchronous positioning and map construction method and device
CN109933056B (en) * 2017-12-18 2022-03-15 九阳股份有限公司 Robot navigation method based on SLAM and robot
CN108226895A (en) * 2017-12-27 2018-06-29 吉林大学 Static-obstacle thing identifying system and recognition methods based on laser radar
CN110069058A (en) * 2018-01-24 2019-07-30 南京机器人研究院有限公司 Navigation control method in a kind of robot chamber
CN108267121A (en) * 2018-01-24 2018-07-10 锥能机器人(上海)有限公司 The vision navigation method and system of more equipment under a kind of variable scene
CN108446273B (en) * 2018-03-15 2021-07-20 哈工大机器人(合肥)国际创新研究院 Kalman filtering word vector learning method based on Dield process
CN108540938B (en) * 2018-04-16 2020-06-16 绍兴文理学院 Method for repairing loophole in wireless sensor network
US10807236B2 (en) * 2018-04-30 2020-10-20 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for multimodal mapping and localization
CN110570465B (en) * 2018-06-05 2022-05-20 杭州海康机器人技术有限公司 Real-time positioning and map construction method and device and computer readable storage medium
KR102601141B1 (en) * 2018-06-22 2023-11-13 삼성전자주식회사 mobile robots and Localization method using fusion image sensor and multiple magnetic sensors
WO2020014951A1 (en) * 2018-07-20 2020-01-23 深圳市道通智能航空技术有限公司 Method and apparatus for building local obstacle map, and unmanned aerial vehicle
CN109272021B (en) * 2018-08-22 2022-03-04 广东工业大学 Intelligent mobile robot navigation method based on width learning
CN109146976B (en) * 2018-08-23 2020-06-02 百度在线网络技术(北京)有限公司 Method and device for locating unmanned vehicles
CN109682368B (en) * 2018-11-30 2021-07-06 上海肇观电子科技有限公司 Robot, map construction method, positioning method, electronic device and storage medium
US10611028B1 (en) 2018-11-30 2020-04-07 NextVPU (Shanghai) Co., Ltd. Map building and positioning of robot
CN109520505B (en) * 2018-12-03 2022-11-25 中国北方车辆研究所 Autonomous navigation topological map generation method
CN109541634B (en) 2018-12-28 2023-01-17 歌尔股份有限公司 Path planning method and device and mobile device
CN109737980A (en) * 2018-12-29 2019-05-10 上海岚豹智能科技有限公司 A kind of air navigation aid and its corresponding robot
CN109557928A (en) * 2019-01-17 2019-04-02 湖北亿咖通科技有限公司 Automatic driving vehicle paths planning method based on map vector and grating map
CN109725327B (en) * 2019-03-07 2020-08-04 山东大学 Method and system for building map by multiple machines
CN109934918B (en) * 2019-03-08 2023-03-28 北京精密机电控制设备研究所 Multi-robot collaborative map reconstruction method based on visual and tactile fusion mechanism
CN109959937B (en) * 2019-03-12 2021-07-27 广州高新兴机器人有限公司 Laser radar-based positioning method for corridor environment, storage medium and electronic equipment
CN110044358B (en) * 2019-04-29 2020-10-02 清华大学 Mobile robot positioning method based on field line characteristics
CN110455289B (en) * 2019-06-24 2020-09-11 特斯联(北京)科技有限公司 Intelligent tourist guide system and method based on face technology
CN110515382A (en) * 2019-08-28 2019-11-29 锐捷网络股份有限公司 A kind of smart machine and its localization method
CN110686677B (en) * 2019-10-10 2022-12-13 东北大学 Global positioning method based on geometric information
CN112711249B (en) * 2019-10-24 2023-01-03 科沃斯商用机器人有限公司 Robot positioning method and device, intelligent robot and storage medium
CN110889364A (en) * 2019-11-21 2020-03-17 大连理工大学 Method for constructing grid map by using infrared sensor and visible light sensor
EP3828587A1 (en) * 2019-11-29 2021-06-02 Aptiv Technologies Limited Method for determining the position of a vehicle
CN111999741B (en) * 2020-01-17 2023-03-14 青岛慧拓智能机器有限公司 Method and device for detecting roadside laser radar target
CN113916215A (en) * 2020-03-31 2022-01-11 上海擎朗智能科技有限公司 Method, device and equipment for updating positioning map and storage medium
CN111504322B (en) * 2020-04-21 2021-09-03 南京师范大学 Scenic spot tour micro-route planning method based on visible field
CN111814605B (en) * 2020-06-23 2024-01-19 浙江华睿科技股份有限公司 Main road identification method, main road identification device and storage device based on topological map
CN112146662B (en) * 2020-09-29 2022-06-10 炬星科技(深圳)有限公司 Method and device for guiding map building and computer readable storage medium
CN112254728A (en) * 2020-09-30 2021-01-22 无锡太机脑智能科技有限公司 Method for enhancing EKF-SLAM global optimization based on key road sign
CN112362044A (en) * 2020-11-03 2021-02-12 北京无限向溯科技有限公司 Indoor positioning method, device, equipment and system
CN112902953B (en) * 2021-01-26 2022-10-04 中国科学院国家空间科学中心 Autonomous pose measurement method based on SLAM technology
CN113029168B (en) * 2021-02-26 2023-04-07 杭州海康机器人股份有限公司 Map construction method and system based on ground texture information and mobile robot
CN113010724A (en) * 2021-04-29 2021-06-22 山东新一代信息产业技术研究院有限公司 Robot map selection method and system based on visual feature point matching
CN113156461B (en) * 2021-05-17 2021-11-05 紫清智行科技(北京)有限公司 Dynamic loading method and system for 2D laser radar SLAM map
CN113724384A (en) * 2021-07-30 2021-11-30 深圳市普渡科技有限公司 Robot topology map generation system, method, computer device and storage medium
CN113759928B (en) * 2021-09-18 2023-07-18 东北大学 Mobile robot high-precision positioning method for complex large-scale indoor scene
CN114440892B (en) * 2022-01-27 2023-11-03 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN114280583B (en) * 2022-03-02 2022-06-17 武汉理工大学 Laser radar positioning accuracy verification method and system without GPS signal
CN114413910B (en) * 2022-03-31 2022-07-12 中国科学院自动化研究所 Visual target navigation method and device
CN115512065B (en) * 2022-11-17 2023-05-05 之江实验室 Real-time map construction method and device based on blocking large-scale scene
CN116578088B (en) * 2023-05-04 2024-02-09 浙江大学 Outdoor autonomous mobile robot continuous track generation method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101619985A (en) * 2009-08-06 2010-01-06 上海交通大学 Service robot autonomous navigation method based on deformable topological map
CN102402225A (en) * 2011-11-23 2012-04-04 中国科学院自动化研究所 Method for realizing localization and map building of mobile robot at the same time

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100843085B1 (en) * 2006-06-20 2008-07-02 삼성전자주식회사 Method of building gridmap in mobile robot and method of cell decomposition using it

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101619985A (en) * 2009-08-06 2010-01-06 上海交通大学 Service robot autonomous navigation method based on deformable topological map
CN102402225A (en) * 2011-11-23 2012-04-04 中国科学院自动化研究所 Method for realizing localization and map building of mobile robot at the same time

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Simultaneous Localization and Mapping for Mobile Robots Based on Distributed Perception; Liang Zhiwei; ROBOT; 2009-01-31; Vol. 31, No. 1; pp. 33-39 *
Cascade Map Building Method for Unknown Environments Based on Hierarchical SLAM; Qian Kun; ROBOT; 2011-11-30; Vol. 33, No. 6; pp. 736-741 *
Research on Visual Environment Modeling and Localization of Mobile Robots in Unknown Environments; Wang Lu; China Doctoral Dissertations Full-text Database; 2008-01-15; p. 60 of the dissertation body *

Also Published As

Publication number Publication date
CN103278170A (en) 2013-09-04

Similar Documents

Publication Publication Date Title
CN103278170B (en) Based on mobile robot's cascade map creating method that remarkable scene point detects
Xiao et al. Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment
Yin et al. 3d lidar-based global localization using siamese neural network
CN104330090B (en) Robot distributed sign intelligent semantic map creating method
Chen et al. Learning an overlap-based observation model for 3D LiDAR localization
CN109597087A (en) A kind of 3D object detection method based on point cloud data
CN105856230A (en) ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN103247040A (en) Layered topological structure based map splicing method for multi-robot system
Mueller et al. GIS-based topological robot localization through LIDAR crossroad detection
Meng et al. Efficient and reliable LiDAR-based global localization of mobile robots using multiscale/resolution maps
Gong et al. A two-level framework for place recognition with 3D LiDAR based on spatial relation graph
Liu et al. A real-time stereo visual-inertial SLAM system based on point-and-line features
Zhou et al. An autonomous navigation approach for unmanned vehicle in outdoor unstructured terrain with dynamic and negative obstacles
Shi et al. LiDAR localization at 100 FPS: a map-aided and template descriptor-based global method
Li et al. An efficient point cloud place recognition approach based on transformer in dynamic environment
Wang et al. Lightweight object-level topological semantic mapping and long-term global localization based on graph matching
Zhou et al. Place recognition and navigation of outdoor mobile robots based on random Forest learning with a 3D LiDAR
Soleimani et al. A disaster invariant feature for localization
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
Cao et al. An end-to-end localizer for long-term topological localization in large-scale changing environments
Dong et al. Weighted triplet loss based on deep neural networks for loop closure detection in VSLAM
Zhang et al. Research on AGV map building and positioning based on SLAM technology
Wang et al. Monocular visual-inertial localization in a point cloud map using feature-to-distribution registration
Liu et al. Laser 3D tightly coupled mapping method based on visual information
Li et al. Object-Aware View Planning for Autonomous 3D Model Reconstruction of Buildings Using a Mobile Robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160106

Termination date: 20190516