CN116399328B - Map construction and positioning method of indoor mobile robot based on BIM - Google Patents

Map construction and positioning method of indoor mobile robot based on BIM

Info

Publication number
CN116399328B
CN116399328B
Authority
CN
China
Prior art keywords
map
information
particle
indoor space
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310409184.6A
Other languages
Chinese (zh)
Other versions
CN116399328A (en)
Inventor
张文胜
岳康
赵耘彻
李彦静
庞祯
纪强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Wutu Technology Co ltd
Shijiazhuang Sanuo Technology Co ltd
Shijiazhuang Tiedao University
Original Assignee
Hebei Wutu Technology Co ltd
Shijiazhuang Sanuo Technology Co ltd
Shijiazhuang Tiedao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Wutu Technology Co ltd, Shijiazhuang Sanuo Technology Co ltd, Shijiazhuang Tiedao University filed Critical Hebei Wutu Technology Co ltd
Priority to CN202310409184.6A priority Critical patent/CN116399328B/en
Publication of CN116399328A publication Critical patent/CN116399328A/en
Application granted granted Critical
Publication of CN116399328B publication Critical patent/CN116399328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3807Creation or updating of map data characterised by the type of data
    • G01C21/383Indoor data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to a map construction and positioning method of an indoor mobile robot based on BIM, which comprises the following steps: S1, acquiring structural information and semantic information of an indoor space based on BIM; S2, generating a grid map based on the structural information, constructing a structural layer map, and coupling semantic information to construct a semantic map; S3, operating an AMCL positioning module, fusing vision sensor data and laser radar data to correct the particle weight calculation, and judging whether the particle swarm has converged: if so, entering S4; if not, repeating S3; S4, running a scan matching module, optimizing the alignment of the positioning point cloud, and outputting the positioning result. By fusing the vision sensor data and the laser radar data in the positioning process of the AMCL module, the particle weight calculation is optimized so that the particle swarm converges faster, improving the positioning accuracy and robustness of the indoor mobile robot; point cloud scan matching is started after the particle swarm converges, improving the alignment between the point cloud and the environment and further improving the positioning accuracy.

Description

Map construction and positioning method of indoor mobile robot based on BIM
Technical Field
The application relates to the field of map construction and positioning of indoor mobile robots, in particular to a map construction and positioning method of an indoor mobile robot based on BIM.
Background
With the continuous development of science and technology, the application scenarios of mobile robots keep expanding. Indoor mobile robots must handle increasingly complex tasks and working environments, and, unlike traditional industrial robots, their autonomous navigation is more technically challenging. At present, the mainstream method in the field of mobile robot environment sensing, positioning and navigation is SLAM (Simultaneous Localization and Mapping). In practical applications, however, SLAM involves a large amount of data computation and a high time cost for acquiring prior information; when facing an indoor environment with a large area and a complex structure, the required computation and running time grow further, so SLAM cannot be well adapted to such environments. BIM technology provides a solution to the above problems.
BIM technology, as a digital expression tool for the physical and functional characteristics of buildings, has good advantages in digital information integration and sharing. In recent years, as BIM technology is widely applied in practical projects, more and more construction projects realize engineering BIM construction, building a virtual-environment foundation and providing high-precision three-dimensional virtual map data for indoor navigation. Therefore, the environment information of the indoor space can be obtained automatically through programming, and the mobile robot can obtain the prior information required for map building and positioning without additional work; meanwhile, the semantic information carried by the BIM can also help the mobile robot understand the indoor environment, making the navigation process of the indoor mobile robot more intelligent.
As the rich building information carried by BIM technology is increasingly studied for constructing indoor maps, some problems have also been revealed: maps constructed with BIM technology tend to take a single form and can hardly meet the requirements of different application scenarios, and existing methods that realize indoor positioning based on BIM suffer from single-source data, low robustness and the like, so the positioning accuracy of the indoor mobile robot still shows large deviations.
Disclosure of Invention
In order to improve the positioning accuracy of the indoor mobile robot, the application provides a map construction and positioning method of the indoor mobile robot based on BIM. By fusing vision sensor data and laser radar data in the positioning process of the AMCL positioning module, the particle weight calculation is optimized so that the particle swarm converges faster, improving the positioning accuracy and robustness of the indoor mobile robot. After the particle swarm has converged, point cloud scan matching is started, which further improves the alignment between the point cloud and the environment and thus the positioning accuracy.
The application provides a map construction and positioning method of an indoor mobile robot based on BIM, which adopts the following technical scheme:
a map construction and positioning method of an indoor mobile robot based on BIM comprises the following steps:
S1, acquiring structural information and semantic information of an indoor space based on a BIM model;
s2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
S3, operating an AMCL positioning module based on a structural layer map and a semantic map of an indoor space structure, fusing vision sensor data and laser radar data, correcting a particle weight calculation process, judging whether a particle swarm is converged, entering S4 if the particle swarm is converged, and repeating S3 if the particle swarm is not converged;
and S4, running a scanning matching module, optimizing the alignment degree of the positioning point cloud, and outputting a positioning result.
Preferably, the step S1 includes the steps of:
s11, acquiring all primitives of a navigation floor based on a BIM model, and creating a filter to acquire a BIM family instance of a required floor;
S12, extracting structural information of the indoor space based on the BIM family instances and storing it, wherein the structural information is instance appearance information, comprising the external outline of each instance and the corresponding BIM coordinate system coordinates;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing it, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of instance types with obvious structural features, such as doors and windows, and the space region description information comprises the room number and the coordinate position and description of each functional region.
Preferably, the step S2 includes the steps of:
S21, constructing a gridded two-dimensional planar network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, and assigning coordinates to the unit grids, with the lower-left corner set as the origin (0, 0);
S22, mapping the instance appearance information into the planar network according to the structural information of the indoor space in the BIM model, wherein the mapping relation formula is as follows:
(i,j)=(ceil(x/r),ceil(y/r))
wherein i and j represent the position of the grid cell, x and y represent the actual position of the instance appearance information relative to the origin of coordinates, and r is the length represented by a unit grid of the grid map;
S23, assigning 1 to the unit grids with instance appearance information and 0 to the unit grids without instance appearance information, digitizing the two-dimensional planar network to obtain a digitized map;
S24, coloring the two-dimensional planar network according to the digitized map, constructing a two-dimensional grid map reflecting the indoor space structure, adding the corresponding attribute descriptions, and constructing the structural layer map;
and S25, coupling the semantic information with a two-dimensional grid map reflecting the indoor space structure condition, and constructing a semantic map for mobile robot recognition.
Preferably, the step S3 includes the steps of:
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, the mobile robot randomly distributing particles χ0 in the positioning space, wherein the particle swarm keeps a sample number of M;
S32, sampling structures with obvious features in the indoor space through the visual sensor of the mobile robot, using the YOLOv target detection algorithm, to obtain sampling data of the feature structures and their position coordinates (x, y) in the robot base link coordinate system;
S33, roughly estimating the pose of the mobile robot from the data acquired in step S32, and updating the spatially random particles χ1;
S34, combining IMU data and the state transition equation, obtaining the pose transformation from time t−1 to time t through a dead-reckoning formula:
X_t = X_{t−1} + (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ)
wherein X_{t−1} = (x_{t−1}, y_{t−1}, θ_{t−1}) is the pose of the mobile robot at time t−1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ) is the pose change of the mobile robot from time t−1 to time t;
S35, calculating the angle values Θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the ranging values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle values Θ̂_t of the identified node types within the observation angle of each particle;
S36, calculating the particle weight from the radar ranging data, the particle weight being p(Z_t | X_t, m), wherein Z_t is the ranging value of the laser radar of the mobile robot in step S35, X_t is the pose of the robot at time t obtained in step S34, and m is the map; the laser radar weight follows the likelihood field model of a range finder, matching the ranging values Z_t = {z_1, z_2, ..., z_k} of each particle against the obstacles in the surrounding area, so that the larger the distance, the smaller the weight;
calculating the particle weight from the vision sensor data, the particle weight being p(Θ_t | X_t, m), wherein Θ_t is the angle values, obtained in S35, of all nodes in the image acquired by the vision sensor relative to the positive direction of the robot, and X_t is the pose of the robot at time t obtained in S34;
the vision particle weight is calculated as follows:
first, the difference θ_dist between the node angles Θ_t = {θ_1, θ_2, ..., θ_n} measured by the robot and the node angles Θ̂_t calculated for the particle is computed;
the particle weight is then prob(θ_dist, σ), wherein the function prob(θ_dist, σ) calculates the probability of a Gaussian distribution centered at 0 with standard deviation σ; the calculation compares the measured values Θ_t = {θ_1, θ_2, ..., θ_n} with the values calculated for the particle, and the larger the difference, the smaller the particle weight;
the weight value of each particle is then obtained by fusing the laser radar weight and the vision weight, w = η·p(Z_t | X_t, m)·p(Θ_t | X_t, m), wherein η is a tuning parameter;
S37, normalizing the particle swarm weights, the normalization formula being w̃^{(i)} = w^{(i)} / Σ_{j=1}^{M} w^{(j)};
S38, calculating the long-term likelihood average w_slow and the short-term likelihood average w_fast that maintain the measurement probability, the calculation formulas being respectively:
w_slow = w_slow + α_slow(w_avg − w_slow);
w_fast = w_fast + α_fast(w_avg − w_fast);
wherein the parameters α_slow and α_fast are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast;
the empirical measurement likelihood w_avg is calculated as the average of the particle weights, w_avg = (1/M)·Σ_{i=1}^{M} w^{(i)};
the short-term likelihood average w_fast and the long-term likelihood average w_slow are then compared;
if w_fast is greater than or equal to w_slow, no random particles are added and the particle swarm is resampled according to the normalized weights;
if w_fast is less than w_slow, random particles are added to the particle swarm before returning to step S34, random samples being added according to p_rand = max{0.0, 1.0 − w_fast/w_slow};
after resampling is finished, the process returns to S34;
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered to have converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not to have converged and the process returns to step S38.
Preferably, the step S4 includes the steps of:
S41, starting the scan matching algorithm and defining a scan matching space Δξ′ near the current pose;
S42, limiting the orientation and position of the scanning pose by Φ and Δ respectively: within ±Φ an angular scan is performed at each angular step, and within ±Δ a position scan is performed at each position step; Φ and Δ together form the scan matching space Δξ′;
the scanning poses to be evaluated in each scan space are obtained, 2·n_Φ·n_Δ in total, wherein n_Φ and n_Δ are the set precision of the scan matching; the higher the precision, the larger these values, but the longer the scan takes;
S43, calculating the point cloud matching degree p_ξ of each scanning pose: the laser points are projected into the map under the scanning pose and an occupancy indicator is accumulated over all points, where the indicator takes 0 when the position corresponding to point i in the map is not occupied and 1 when it is occupied;
S44, among all scanning poses, the scanning pose ξ* corresponding to the maximum matching degree p_{ξ*} = max{p_1, p_2, ..., p_γ, ..., p_N} is taken as the final pose, output, and used to correct the pose of the mobile robot in the odom coordinate system.
In summary, the present application includes at least one of the following beneficial technical effects:
1. The application automatically extracts the prior information required by the mobile robot from the BIM model, constructs the two-dimensional grid map of the indoor space, further couples the semantic information, and divides the robot's safe motion area in the vertical and horizontal spaces. This avoids the additional work otherwise required to build the robot's navigation environment, and the semantic map containing semantic information helps the robot understand the environment and supports faster development of interactive navigation functions.
2. The application fuses the vision sensor and laser radar data in the AMCL-based positioning process, optimizing the particle weight calculation so that the particle swarm converges faster and the positioning accuracy and robustness are improved. After the particle swarm has converged, point cloud scan matching is started, further improving the alignment between the point cloud and the environment and hence the positioning accuracy.
Drawings
Fig. 1 is a schematic diagram of the structural layer map, semantic map, and safe navigation layer map constructed in step S2 in an embodiment of the present application.
Fig. 2 is a flowchart of a map construction and positioning method of a BIM-based indoor mobile robot in an embodiment of the present application.
Fig. 3 is a schematic diagram of the division of the safe motion area of the mobile robot in an embodiment of the present application.
Fig. 4 is a schematic diagram of pose transformation from time t-1 to time t in step S34 according to an embodiment of the present application.
Fig. 5 is a schematic diagram of the scan matching space Δξ′ formed by limiting the scanning pose with Φ and Δ in step S42 in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings.
After reading this specification, those skilled in the art may make modifications to the embodiments that involve no creative contribution, which remain protected by patent law within the scope of the claims of the present application.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Embodiments of the application are described in further detail below with reference to the drawings.
The embodiment of the application discloses a map construction and positioning method of an indoor mobile robot based on BIM.
The map construction and positioning method of the indoor mobile robot based on BIM comprises the following steps:
S1, acquiring structural information and semantic information of an indoor space based on a BIM model;
specifically, step S1 includes the steps of:
S11, acquiring all primitives of the navigation floor based on the BIM model, then creating a filter capable of filtering the required floor, and filtering all primitives of the navigation floor with this filter to acquire the BIM family instances of the required floor;
S12, extracting structural information of the indoor space based on the BIM family instances and storing it, wherein the structural information is instance appearance information, comprising the external outline of each instance and the corresponding BIM coordinate system coordinates;
Specifically, the instance appearance information is mainly the geometric information of the building, such as geometric structural features with solid shapes (for example the types and sizes of doors and windows) and the BIM coordinate system coordinates corresponding to these features; the structural information is stored as a Json format file;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing it, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of instance types with obvious structural features, such as doors and windows, and the space region description information comprises the room number and the coordinate position and description of each functional region;
Specifically, the types of structural instances with obvious features, such as doors and windows, are defined as semantic information, so that the mobile robot can identify and position them; the semantic information is likewise stored as a Json format file.
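By way of example only, the semantic information of S13 could be stored in a Json file of roughly the following shape; the field names are hypothetical, since the foregoing only specifies that instance IDs, geometric centers, room numbers and functional-region positions and descriptions are recorded:

```python
import json

# Hypothetical layout of the Json file holding the semantic information of S13.
# Field names are illustrative; only instance IDs, geometric centers, room
# numbers and functional-region positions/descriptions are required above.
semantic_info = {
    "instances": [
        {"id": "Door_0541", "type": "IfcDoor", "center": [12.40, 3.15]},
        {"id": "Window_0212", "type": "IfcWindow", "center": [8.75, 0.30]},
    ],
    "regions": [
        {"room_number": "101", "position": [10.0, 4.0], "description": "office"},
        {"room_number": "102", "position": [15.5, 4.0], "description": "meeting room"},
    ],
}

with open("semantic_info.json", "w", encoding="utf-8") as f:
    json.dump(semantic_info, f, ensure_ascii=False, indent=2)
```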
S2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
specifically, step S2 includes the steps of:
S21, firstly, constructing a gridded two-dimensional planar network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, assigning coordinates to the unit grids, and setting the lower-left corner as the origin (0, 0);
S22, mapping the instance appearance information into the two-dimensional planar network coordinatized in step S21, according to the structural information of the indoor space in the BIM model, wherein the mapping relation formula is as follows:
(i,j)=(ceil(x/r),ceil(y/r))
wherein i and j represent the position of the grid cell, x and y represent the actual position of the instance appearance information relative to the origin of coordinates, and r is the length represented by a unit grid of the grid map;
S23, assigning 1 to the unit grids with instance appearance information and 0 to the unit grids without instance appearance information, digitizing the two-dimensional planar network to obtain a digitized map;
Here, a unit grid with instance appearance information is one in which geometric features contained in the structural information (such as doors and windows) are present, and a unit grid without instance appearance information is one in which no such geometric features are present;
S24, coloring the two-dimensional planar network according to the digitized map, constructing a two-dimensional grid map reflecting the indoor space structure, adding the corresponding attribute descriptions, and constructing the structural layer map;
S25, coupling the semantic information with the two-dimensional grid map reflecting the indoor space structure, constructing a semantic map for mobile robot recognition; a minimal sketch of this map construction is given below.
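By way of example only, steps S21–S24 could be implemented along the lines of the following Python sketch; it assumes the instance outlines have already been sampled into points in BIM coordinates (metres), applies the mapping (i, j) = (ceil(x/r), ceil(y/r)), and produces the 0/1 digitized map. Function and variable names are illustrative.

```python
import math
import numpy as np

def build_structural_grid(outline_points, width_m, height_m, r):
    """Rasterize instance outline points (x, y) in metres into a 0/1 grid map.

    r is the side length of a unit grid (here the minimum IfcWall width d);
    the lower-left corner of the indoor space is taken as the origin (0, 0).
    """
    rows = math.ceil(height_m / r)
    cols = math.ceil(width_m / r)
    grid = np.zeros((rows, cols), dtype=np.uint8)  # 0 = no instance appearance info
    for x, y in outline_points:
        # Mapping relation (i, j) = (ceil(x/r), ceil(y/r)); clamp to the map and
        # shift to 0-based indices for the array.
        i = min(max(math.ceil(x / r), 1), cols) - 1
        j = min(max(math.ceil(y / r), 1), rows) - 1
        grid[j, i] = 1  # 1 = cell contains instance appearance information
    return grid

# Example: a 20 m x 10 m floor, 0.2 m cells, two sampled wall points.
demo_grid = build_structural_grid([(1.05, 0.95), (1.25, 0.95)], 20.0, 10.0, 0.2)
print(demo_grid.sum(), "occupied cells")
```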
Preferably, step S2 further includes step S26:
S26, generating an initial safe navigation layer map on the basis of the two-dimensional grid map obtained in step S24, judging whether the safe area on the initial safe navigation layer map meets the motion safety requirements of the mobile robot in combination with the robot's own parameters, and then updating the initial safe navigation layer map according to those parameters to generate a new safe navigation layer map;
Specifically, the mobile robot compares its working height (the upper height of IfcSlab plus the robot height r_height) with the bottom height of each overhanging obstacle (such as the lower edge of a door); an area higher than the mobile robot is regarded as height free space. The width r_width of the mobile robot is then compared with the width of structures such as doors; a space wider than the mobile robot is regarded as width free space. Where height free space and width free space are satisfied simultaneously, the area is a safe area the mobile robot can pass, dividing the robot's safe motion area in the vertical and horizontal spaces. The safe motion area r_security of the mobile robot is further analyzed in combination with the robot's kinematics, dangerous areas are divided near obstacles (the division principle is shown in Fig. 3), and the grid values are updated and colored on the two-dimensional grid map, constructing a new safe navigation layer map that helps the mobile robot move freely and safely in the indoor space; a sketch of this free-space test follows.
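The height/width free-space test of S26 can be summarized by the following sketch; the parameter names (r_height, r_width, r_security) follow the description above, but the obstacle representation and the data layout are assumptions.

```python
def is_safe_cell(obstacle, working_height, r_width, r_security, dist_to_obstacle):
    """Illustrative S26 test: a cell belongs to the robot's safe motion area when
    it is simultaneously height-free, width-free and outside the danger margin."""
    # Height free space: the bottom of the overhanging structure (e.g. a door's
    # lower edge) must be higher than the robot's working height.
    height_free = obstacle["bottom_height"] > working_height
    # Width free space: the passable width (e.g. a door opening) must exceed
    # the robot's width r_width.
    width_free = obstacle["passable_width"] > r_width
    # Danger area: cells closer to an obstacle than the safety margin are excluded.
    outside_danger = dist_to_obstacle > r_security
    return height_free and width_free and outside_danger

door = {"bottom_height": 2.0, "passable_width": 0.9}
print(is_safe_cell(door, working_height=0.6, r_width=0.5, r_security=0.3,
                   dist_to_obstacle=0.8))  # True -> cell is colored as safe
```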
S3, operating the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, fusing the vision sensor data and the laser radar data, correcting the particle weight calculation, and judging whether the particle swarm has converged; if it has converged, the process enters S4, and if not, S3 is repeated until the particle swarm converges;
Specifically, step S3 includes the following steps:
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, the mobile robot randomly distributing particles χ0 in the positioning space, wherein the particle swarm keeps a sample number of M;
S32, sampling the structural information of the indoor space through the visual sensor of the mobile robot; specifically, the mobile robot samples structures of the indoor space with obvious features using the YOLOv target detection algorithm, obtaining sampling data of the feature structures and recording their position coordinates (x, y) in the robot base link coordinate system;
S33, roughly estimating the pose of the mobile robot from the data acquired in step S32, and updating the spatially random particles χ1;
S34, combining IMU data and the state transition equation, obtaining the pose transformation from time t−1 to time t through a dead-reckoning formula:
X_t = X_{t−1} + (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ)
wherein X_{t−1} = (x_{t−1}, y_{t−1}, θ_{t−1}) is the pose of the mobile robot at time t−1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ) is the pose change of the mobile robot from time t−1 to time t;
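A small sketch of the S34 update is shown below; it applies the pose-change vector given above to a particle's previous pose. Names and values are illustrative, and in a particle filter each particle would normally also receive its own motion noise.

```python
import math

def propagate_pose(pose_prev, v, delta_theta, dt):
    """S34 sketch: apply the pose-change vector
    (v*dt*cos(theta_prev - d_theta), v*dt*sin(theta_prev - d_theta), d_theta)
    to a particle's previous pose (x, y, theta)."""
    x, y, theta = pose_prev
    dx = v * dt * math.cos(theta - delta_theta)
    dy = v * dt * math.sin(theta - delta_theta)
    return (x + dx, y + dy, theta + delta_theta)

# Example: robot moving at 0.4 m/s for 0.1 s while turning by 0.05 rad.
print(propagate_pose((0.0, 0.0, 0.0), v=0.4, delta_theta=0.05, dt=0.1))
```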
S35, calculating the angle values Θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the ranging values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle values Θ̂_t of the identified node types within the observation angle of each particle;
S36, calculating the particle weight from the radar ranging data, the particle weight being p(Z_t | X_t, m),
wherein Z_t is the ranging value of the laser radar of the mobile robot in step S35, X_t is the pose of the robot at time t obtained in step S34, and m is the map. The laser radar weight follows the likelihood field model of a range finder, matching the ranging values Z_t = {z_1, z_2, ..., z_k} of each particle against the obstacles in the surrounding area, so that the larger the distance, the smaller the weight;
calculating the particle weight from the vision sensor data, the particle weight being p(Θ_t | X_t, m),
wherein Θ_t is the angle values, obtained in S35, of all nodes in the image acquired by the vision sensor relative to the positive direction of the robot, and X_t is the pose of the robot at time t obtained in S34;
the vision particle weight is calculated as follows:
first, the difference θ_dist between the node angles Θ_t = {θ_1, θ_2, ..., θ_n} measured by the robot and the node angles Θ̂_t calculated for the particle from the semantic map is computed;
the particle weight is then prob(θ_dist, σ), wherein the function prob(θ_dist, σ) calculates the probability of a Gaussian distribution centered at 0 with standard deviation σ. This calculation compares the measured values Θ_t = {θ_1, θ_2, ..., θ_n} with the values calculated for the particle; the larger the difference, the smaller the particle weight.
The weight value of each particle is then obtained by fusing the laser radar weight and the vision weight, w = η·p(Z_t | X_t, m)·p(Θ_t | X_t, m), where η is a tuning parameter.
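The fused weight of S36 can be sketched as follows. The likelihood-field term is reduced here to a per-beam Gaussian on the difference between expected and measured ranges, θ_dist is taken as the sum of absolute angle differences, and the combination w = η·w_lidar·w_vision is one plausible reading of the fusion described above; all of these are assumptions rather than the exact granted formulas.

```python
import math

def gaussian_prob(dist, sigma):
    """prob(dist, sigma): density of a zero-mean Gaussian evaluated at dist."""
    return math.exp(-0.5 * (dist / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def lidar_weight(expected_ranges, measured_ranges, sigma_z):
    """Simplified stand-in for p(Z_t | X_t, m): the further the measured beams are
    from the ranges expected at the particle pose, the smaller the weight."""
    w = 1.0
    for z_exp, z_meas in zip(expected_ranges, measured_ranges):
        w *= gaussian_prob(z_meas - z_exp, sigma_z)
    return w

def vision_weight(measured_angles, particle_angles, sigma_theta):
    """Stand-in for p(Theta_t | X_t, m): compare node angles from the camera with
    node angles computed for the particle from the semantic map."""
    theta_dist = sum(abs(t - p) for t, p in zip(measured_angles, particle_angles))
    return gaussian_prob(theta_dist, sigma_theta)

def particle_weight(expected_ranges, measured_ranges, measured_angles,
                    particle_angles, eta=1.0, sigma_z=0.2, sigma_theta=0.1):
    # Assumed fusion: tuning parameter eta times the two sensor likelihoods.
    return (eta
            * lidar_weight(expected_ranges, measured_ranges, sigma_z)
            * vision_weight(measured_angles, particle_angles, sigma_theta))

print(particle_weight([2.0, 3.1], [2.05, 3.0], [0.30], [0.28]))
```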
S37, normalizing the particle swarm weights, the normalization formula being w̃^{(i)} = w^{(i)} / Σ_{j=1}^{M} w^{(j)};
S38, resampling the particle swarm;
First, the long-term likelihood average w_slow and the short-term likelihood average w_fast that maintain the measurement probability are calculated, the formulas being respectively:
w_slow = w_slow + α_slow(w_avg − w_slow);
w_fast = w_fast + α_fast(w_avg − w_fast);
wherein the parameters α_slow and α_fast are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast.
The empirical measurement likelihood w_avg is calculated as the average of the particle weights, w_avg = (1/M)·Σ_{i=1}^{M} w^{(i)}.
The short-term likelihood average w_fast and the long-term likelihood average w_slow are then compared;
If w_fast is greater than or equal to w_slow, no random particles are added and the particle swarm is resampled according to the normalized weights;
If w_fast is less than w_slow, random particles are added to the particle swarm before returning to step S34, random samples being added according to p_rand = max{0.0, 1.0 − w_fast/w_slow}, whose pseudo-code is:
with probability max{0.0, 1.0 − w_fast/w_slow} do
add random pose to χ_t
After resampling is finished, the process returns to S34.
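The S37–S38 bookkeeping follows the augmented MCL recipe and can be sketched as below; the resampling itself is shown as simple weighted redrawing, which the description above does not spell out, so treat it as an assumption.

```python
import random

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def augmented_resample(particles, weights, state, random_pose,
                       alpha_slow=0.001, alpha_fast=0.1):
    """S37-S38 sketch. `weights` are the unnormalized particle weights, `state`
    holds the running averages w_slow and w_fast, and `random_pose()` draws a
    uniformly random pose from the free space."""
    w_avg = sum(weights) / len(weights)            # empirical measurement likelihood
    state["w_slow"] += alpha_slow * (w_avg - state["w_slow"])
    state["w_fast"] += alpha_fast * (w_avg - state["w_fast"])
    probs = normalize(weights)                     # S37 normalization
    p_rand = max(0.0, 1.0 - state["w_fast"] / state["w_slow"])
    new_particles = []
    for _ in particles:
        if random.random() < p_rand:
            new_particles.append(random_pose())    # inject a random particle
        else:
            new_particles.append(random.choices(particles, weights=probs, k=1)[0])
    return new_particles

def uniform_pose():
    return (random.uniform(0, 5), random.uniform(0, 5), random.uniform(-3.14, 3.14))

state = {"w_slow": 1e-3, "w_fast": 1e-3}
swarm = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.1, 0.0)]
print(augmented_resample(swarm, [0.7, 0.2, 0.1], state, uniform_pose))
```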
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered to have converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not to have converged and the process returns to step S38.
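One possible reading of the S39 convergence test takes D(X) as the variance of the particle positions; the exact statistic is not fixed above, so the following is an assumption.

```python
import numpy as np

def has_converged(particles, epsilon=0.05):
    """S39 sketch: treat D(X) as the sum of the variances of the particle
    x and y coordinates and compare it with the preset limit epsilon."""
    xy = np.array([(p[0], p[1]) for p in particles])
    d_x = float(xy.var(axis=0).sum())
    return d_x < epsilon

print(has_converged([(1.0, 2.0, 0.1), (1.02, 2.01, 0.12), (0.98, 1.99, 0.09)]))
```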
And S4, running a scanning matching module, optimizing the alignment degree of the positioning point cloud, and outputting an optimized positioning result.
Specifically, step S4 includes the steps of:
S41, starting the scan matching algorithm and defining a scan matching space Δξ′ near the current pose;
S42, limiting the orientation and position of the scanning pose by Φ and Δ respectively: within ±Φ an angular scan is performed at each angular step, and within ±Δ a position scan is performed at each position step; Φ and Δ together form the scan matching space Δξ′;
the scanning poses to be evaluated in each scan space are obtained, 2·n_Φ·n_Δ in total, wherein n_Φ and n_Δ are the set precision of the scan matching; the higher the precision, the larger these values, but the longer the scan takes;
S43, calculating the point cloud matching degree p_ξ of each scanning pose: the laser points are projected into the map under the scanning pose and an occupancy indicator is accumulated over all points, where the indicator takes 0 when the position corresponding to point i in the map is not occupied and 1 when it is occupied;
S44, among all scanning poses, the scanning pose ξ* corresponding to the maximum matching degree p_{ξ*} = max{p_1, p_2, ..., p_γ, ..., p_N} is taken as the final pose and output, correcting the pose of the mobile robot in the odom coordinate system and finally obtaining the positioning result of the robot in the indoor space.
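Finally, S41–S44 can be sketched as an exhaustive search over the window defined by Φ and Δ: each candidate pose projects the laser points into the occupancy grid, the matching degree counts points falling on occupied cells, and the best-scoring pose is returned. Grid resolution, step counts and the plain-sum scoring are illustrative assumptions.

```python
import math
import numpy as np

def match_score(grid, points, pose, r):
    """Point cloud matching degree p_xi: count laser points whose map cell is
    occupied (occupied cell contributes 1, free cell contributes 0)."""
    x0, y0, th = pose
    score = 0
    for px, py in points:
        gx = x0 + px * math.cos(th) - py * math.sin(th)
        gy = y0 + px * math.sin(th) + py * math.cos(th)
        i, j = int(gy / r), int(gx / r)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            score += int(grid[i, j])
    return score

def scan_match(grid, points, pose0, r, phi=0.1, delta=0.2, n_phi=5, n_delta=5):
    """Search the scan matching space around pose0: angles within +-phi and
    positions within +-delta, at the precision set by n_phi and n_delta."""
    best_pose, best_score = pose0, -1
    for dth in np.linspace(-phi, phi, 2 * n_phi + 1):
        for dx in np.linspace(-delta, delta, 2 * n_delta + 1):
            for dy in np.linspace(-delta, delta, 2 * n_delta + 1):
                cand = (pose0[0] + dx, pose0[1] + dy, pose0[2] + dth)
                s = match_score(grid, points, cand, r)
                if s > best_score:
                    best_pose, best_score = cand, s
    return best_pose, best_score

grid = np.zeros((50, 50), dtype=np.uint8)
grid[:, 25] = 1                          # a wall at x = 25 * r
points = [(1.0, 0.0)]                    # one beam hitting straight ahead
print(scan_match(grid, points, (4.0, 5.0, 0.0), r=0.2))
```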
The foregoing is only a partial embodiment of the present application, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (1)

1. The map construction and positioning method of the indoor mobile robot based on the BIM is characterized by comprising the following steps of:
S1, acquiring structural information and semantic information of an indoor space based on a BIM model;
s11, acquiring all primitives of a navigation floor based on a BIM model, and creating a filter to acquire a BIM family instance of a required floor;
S12, extracting structural information of the indoor space based on the BIM family instances and storing it, wherein the structural information is instance appearance information, comprising the external outline of each instance and the corresponding BIM coordinate system coordinates;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing it, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of instance types with obvious structural features, such as doors and windows, and the space region description information comprises the room number and the coordinate position and description of each functional region;
s2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
S21, constructing a gridded two-dimensional planar network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, and assigning coordinates to the unit grids, with the lower-left corner set as the origin (0, 0);
S22, mapping the instance appearance information into the planar network according to the structural information of the indoor space in the BIM model, wherein the mapping relation formula is (i, j) = (ceil(x/r), ceil(y/r)); i and j represent the position of the grid cell, x and y represent the actual position of the instance appearance information relative to the origin of coordinates, and r is the length represented by a unit grid of the grid map;
s23, assigning 1 to the unit grids with the instance appearance information, assigning 0 to the unit grids without the instance appearance information, and digitizing the two-dimensional plane network to obtain a digitized map;
S24, coloring the two-dimensional planar network according to the digitized map, constructing a two-dimensional grid map reflecting the indoor space structure, adding the corresponding attribute descriptions, and constructing the structural layer map;
s25, coupling the semantic information with a two-dimensional grid map reflecting the indoor space structure condition, and constructing a semantic map for mobile robot recognition;
S3, operating an AMCL positioning module based on a structural layer map and a semantic map of an indoor space structure, fusing vision sensor data and laser radar data, correcting a particle weight calculation process, judging whether a particle swarm is converged, entering S4 if the particle swarm is converged, and repeating S3 if the particle swarm is not converged;
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, the mobile robot randomly distributing particles χ0 in the positioning space, wherein the particle swarm keeps a sample number of M;
S32, sampling structures with obvious features in the indoor space through the visual sensor of the mobile robot, using the YOLOv target detection algorithm, to obtain sampling data of the feature structures and their position coordinates (x, y) in the robot base link coordinate system;
S33, roughly estimating the pose of the mobile robot from the data acquired in step S32, and updating the spatially random particles χ1;
S34, combining IMU data and the state transition equation, obtaining the pose transformation from time t−1 to time t through a dead-reckoning formula:
X_t = X_{t−1} + (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ)
wherein X_{t−1} = (x_{t−1}, y_{t−1}, θ_{t−1}) is the pose of the mobile robot at time t−1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t−1}−Δθ), v·Δt·sin(θ_{t−1}−Δθ), Δθ) is the pose change of the mobile robot from time t−1 to time t;
S35, calculating the angle values Θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the ranging values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle values Θ̂_t of the identified node types within the observation angle of each particle;
S36, calculating the particle weight from the radar ranging data, the particle weight being p(Z_t | X_t, m); Z_t is the ranging value of the laser radar of the mobile robot in step S35, X_t is the pose of the robot at time t obtained in step S34, and m is the map; the laser radar weight follows the likelihood field model of a range finder, matching the ranging values Z_t = {z_1, z_2, ..., z_k} of each particle against the obstacles in the surrounding area, so that the larger the distance, the smaller the weight;
calculating the particle weight from the vision sensor data, the particle weight being p(Θ_t | X_t, m),
wherein Θ_t is the angle values, obtained in S35, of all nodes in the image acquired by the vision sensor relative to the positive direction of the robot, and X_t is the pose of the robot at time t obtained in S34;
the vision particle weight is calculated as follows:
first, the difference θ_dist between the node angles Θ_t = {θ_1, θ_2, ..., θ_n} measured by the robot and the node angles Θ̂_t calculated for the particle is computed;
the particle weight is then prob(θ_dist, σ), wherein the function prob(θ_dist, σ) calculates the probability of a Gaussian distribution centered at 0 with standard deviation σ; the calculation compares the measured values Θ_t with the values calculated for the particle, and the larger the difference, the smaller the particle weight;
the weight value of each particle is then obtained by fusing the laser radar weight and the vision weight, w = η·p(Z_t | X_t, m)·p(Θ_t | X_t, m), wherein η is a tuning parameter;
S37, normalizing the particle swarm weights, the normalization formula being w̃^{(i)} = w^{(i)} / Σ_{j=1}^{M} w^{(j)};
S38, calculating the long-term likelihood average w_slow and the short-term likelihood average w_fast that maintain the measurement probability, the calculation formulas being respectively: w_slow = w_slow + α_slow(w_avg − w_slow); w_fast = w_fast + α_fast(w_avg − w_fast); wherein the parameters α_slow and α_fast are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast; the empirical measurement likelihood w_avg is calculated as the average of the particle weights, w_avg = (1/M)·Σ_{i=1}^{M} w^{(i)};
comparing the short-term likelihood average w_fast with the long-term likelihood average w_slow;
if w_fast is greater than or equal to w_slow, no random particles are added and the particle swarm is resampled according to the normalized weights;
if w_fast is less than w_slow, random particles are added to the particle swarm before returning to step S34, random samples being added according to p_rand = max{0.0, 1.0 − w_fast/w_slow};
after resampling is finished, the process returns to S34;
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered to have converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not to have converged and the process returns to step S38;
S4, running the scan matching module, optimizing the alignment of the positioning point cloud, and outputting the positioning result;
S41, starting the scan matching algorithm and defining a scan matching space Δξ′ near the current pose;
S42, limiting the orientation and position of the scanning pose by Φ and Δ respectively: within ±Φ an angular scan is performed at each angular step, and within ±Δ a position scan is performed at each position step; Φ and Δ together form the scan matching space Δξ′;
S43, calculating the point cloud matching degree p_ξ of each scanning pose: the laser points are projected into the map under the scanning pose and an occupancy indicator is accumulated over all points, where the indicator takes 0 when the position corresponding to the point in the map is not occupied and 1 when it is occupied;
S44, among all scanning poses, the scanning pose ξ* corresponding to the maximum matching degree is taken as the final pose and output, correcting the pose of the mobile robot in the odom coordinate system.
CN202310409184.6A 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM Active CN116399328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310409184.6A CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310409184.6A CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Publications (2)

Publication Number Publication Date
CN116399328A CN116399328A (en) 2023-07-07
CN116399328B (en) 2024-06-21

Family

ID=87013982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310409184.6A Active CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Country Status (1)

Country Link
CN (1) CN116399328B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109682382A (en) * 2019-02-28 2019-04-26 电子科技大学 Global fusion and positioning method based on adaptive Monte Carlo and characteristic matching
CN111754566A (en) * 2020-05-12 2020-10-09 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Robot scene positioning method and construction operation method
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170100204A (en) * 2016-02-25 2017-09-04 한국전자통신연구원 Apparatus and method for target tracking using 3d bim
CN106709481A (en) * 2017-03-03 2017-05-24 深圳市唯特视科技有限公司 Indoor scene understanding method based on 2D-3D semantic data set
CN111539994B (en) * 2020-04-28 2023-04-18 武汉科技大学 Particle filter repositioning method based on semantic likelihood estimation
CN112082553A (en) * 2020-07-24 2020-12-15 广州易来特自动驾驶科技有限公司 Indoor positioning method and positioning device based on WIFI and laser radar and robot
CN112785643A (en) * 2021-02-02 2021-05-11 武汉科技大学 Indoor wall corner two-dimensional semantic map construction method based on robot platform

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109682382A (en) * 2019-02-28 2019-04-26 电子科技大学 Global fusion and positioning method based on adaptive Monte Carlo and characteristic matching
CN111754566A (en) * 2020-05-12 2020-10-09 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Robot scene positioning method and construction operation method
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot

Also Published As

Publication number Publication date
CN116399328A (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Mansouri et al. Cooperative coverage path planning for visual inspection
Nieto et al. Recursive scan-matching SLAM
CN103268729B (en) Based on mobile robot's tandem type map creating method of composite character
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
CN107315171B (en) Radar networking target state and system error joint estimation algorithm
Mu et al. Research on SLAM algorithm of mobile robot based on the fusion of 2D LiDAR and depth camera
CN114820932B (en) Panoramic three-dimensional scene understanding method based on graph neural network and relation optimization
CN113108773A (en) Grid map construction method integrating laser and visual sensor
Wulf et al. Ground truth evaluation of large urban 6D SLAM
Zhou et al. A lidar odometry for outdoor mobile robots using ndt based scan matching in gps-denied environments
Garrote et al. 3D point cloud downsampling for 2D indoor scene modelling in mobile robotics
Demim et al. Simultaneous localisation and mapping for autonomous underwater vehicle using a combined smooth variable structure filter and extended kalman filter
Zhao et al. Review of slam techniques for autonomous underwater vehicles
Lin et al. Graph-based SLAM in indoor environment using corner feature from laser sensor
Chen et al. Submap-based indoor navigation system for the fetch robot
Miller et al. Rao-blackwellized particle filtering for mapping dynamic environments
Zhang et al. Design of dual-LiDAR high precision natural navigation system
Torres et al. Occupancy grid map to pose graph-based map: Robust bim-based 2d-lidar localization for lifelong indoor navigation in changing and dynamic environments
Liu et al. A localizability estimation method for mobile robots based on 3d point cloud feature
CN116399328B (en) Map construction and positioning method of indoor mobile robot based on BIM
CN116958418A (en) High-precision three-dimensional mapping method for indoor multi-layer building fused with BEV features
CN115512054B (en) Method for constructing three-dimensional space-time continuous point cloud map
Zhou et al. Multi-sensor fusion robust localization for indoor mobile robots based on a set-membership estimator
Sim et al. Scalable real-time vision-based SLAM for planetary rovers
Bouraine et al. NDT-PSO, a new NDT based SLAM approach using particle swarm optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant