CN116399328A - Map construction and positioning method of indoor mobile robot based on BIM - Google Patents

Map construction and positioning method of indoor mobile robot based on BIM

Info

Publication number
CN116399328A
CN116399328A (application number CN202310409184.6A; granted publication CN116399328B)
Authority
CN
China
Prior art keywords
map
mobile robot
information
particle
indoor space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310409184.6A
Other languages
Chinese (zh)
Other versions
CN116399328B (en)
Inventor
张文胜
岳康
赵耘彻
李彦静
庞祯
纪强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Wutu Technology Co ltd
Shijiazhuang Sanuo Technology Co ltd
Shijiazhuang Tiedao University
Original Assignee
Hebei Wutu Technology Co ltd
Shijiazhuang Sanuo Technology Co ltd
Shijiazhuang Tiedao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Wutu Technology Co ltd, Shijiazhuang Sanuo Technology Co ltd, Shijiazhuang Tiedao University filed Critical Hebei Wutu Technology Co ltd
Priority to CN202310409184.6A priority Critical patent/CN116399328B/en
Publication of CN116399328A publication Critical patent/CN116399328A/en
Application granted granted Critical
Publication of CN116399328B publication Critical patent/CN116399328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3807Creation or updating of map data characterised by the type of data
    • G01C21/383Indoor data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to a map construction and positioning method of an indoor mobile robot based on BIM, which comprises the following steps: S1, acquiring structural information and semantic information of an indoor space based on BIM; S2, generating a grid map based on the structural information, constructing a structural layer map, and constructing a semantic map by coupling semantic information; S3, operating an AMCL positioning module, fusing vision sensor data and laser radar data, correcting the particle weight calculation, judging whether the particle swarm has converged, entering S4 if it has converged, and repeating S3 if it has not; and S4, running a scan matching module, optimizing the alignment of the positioning point cloud, and outputting a positioning result. By fusing vision sensor data and laser radar data in the positioning process of the AMCL positioning module, the particle weight calculation is optimized so that the particle swarm converges faster, which improves the positioning accuracy and robustness of the indoor mobile robot; after the particle swarm converges, point cloud scan matching is started, which improves the alignment between the point cloud and the environment and further improves the positioning accuracy.

Description

Map construction and positioning method of indoor mobile robot based on BIM
Technical Field
The application relates to the field of map construction and positioning of indoor mobile robots, in particular to a map construction and positioning method of an indoor mobile robot based on BIM.
Background
With the continuous development of science and technology, the application scenarios of mobile robots keep expanding. Indoor mobile robots must handle increasingly complex tasks and working environments, and unlike traditional industrial robots, their autonomous navigation is technically more challenging. At present, the mainstream method in the field of mobile robot environment perception, positioning and navigation is SLAM (Simultaneous Localization and Mapping). However, in practical applications the SLAM method involves a large amount of data computation and a high time cost for acquiring prior information; when facing an indoor environment with a large area and a complex structure, it requires even more computation and running time, so it cannot adapt well to such environments. BIM technology provides a solution to these problems.
BIM technology, as a digital expression tool for the physical and functional characteristics of buildings, has strong advantages in digital information integration and sharing. In recent years, as BIM technology has been widely applied in practical projects, more and more construction projects have established engineering BIM models, laying the foundation for building virtual environments and providing high-precision three-dimensional virtual map data for indoor navigation. Therefore, the environment information of the indoor space can be obtained automatically through programming: the mobile robot can obtain the prior information required for map building and positioning without additional work, and the semantic information carried by the BIM can also help the mobile robot understand the indoor environment, making the navigation process of the indoor mobile robot more intelligent.
As the rich building information carried by BIM is used by more and more researchers to construct indoor maps, some problems have been exposed: maps constructed from BIM are relatively single in form and can hardly meet the requirements of different application scenarios, and existing methods for indoor positioning based on BIM suffer from problems such as single data sources and low robustness, so the positioning accuracy of indoor mobile robots still shows considerable deviation.
Disclosure of Invention
In order to improve the positioning accuracy of the indoor mobile robot, the application provides a map construction and positioning method of an indoor mobile robot based on BIM, which optimizes the calculation method of the particle weights by fusing vision sensor data and laser radar data in the positioning process of the AMCL positioning module, so that the particle swarm converges faster, improving the positioning accuracy and robustness of the indoor mobile robot. After the particle swarm has essentially converged, point cloud scan matching is started, which further improves the alignment between the point cloud and the environment and further improves the positioning accuracy.
The application provides a map construction and positioning method of an indoor mobile robot based on BIM, which adopts the following technical scheme:
a map construction and positioning method of an indoor mobile robot based on BIM comprises the following steps:
s1, acquiring structural information and semantic information of an indoor space based on a BIM model;
s2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
s3, operating an AMCL positioning module based on a structural layer map and a semantic map of an indoor space structure, fusing vision sensor data and laser radar data, correcting a particle weight calculation process, judging whether a particle swarm is converged, entering S4 if the particle swarm is converged, and repeating S3 if the particle swarm is not converged;
and S4, running a scanning matching module, optimizing the alignment degree of the positioning point cloud, and outputting a positioning result.
Preferably, the step S1 includes the steps of:
s11, acquiring all primitives of a navigation floor based on a BIM model, and creating a filter to acquire a BIM family instance of a required floor;
s12, extracting structural information of an indoor space based on BIM family examples, and storing the extracted structural information of the indoor space, wherein the structural information is example appearance information and comprises example external outlines and corresponding BIM coordinate system coordinates;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing the extracted semantic information, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of door and window structure instance types with obvious characteristics in the structure information, and the space region description information comprises the room number and the coordinate position and description of the functional region of the described space region.
Preferably, the step S2 includes the steps of:
S21, constructing a gridded two-dimensional plane network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, assigning coordinates to the unit grids, and setting the lower-left corner coordinate as the origin (0, 0);
s22, mapping the instance appearance information into a planar network according to the structure information of the indoor space in the BIM model, wherein the mapping relation formula is as follows:
(i,j)=(ceil(x/r),ceil(y/r))
wherein i and j represent the position of the grid, x and y represent the actual position of the instance appearance information relative to the coordinate origin, and r is the length represented by one unit grid of the grid map;
s23, assigning 1 to the unit grids with the instance appearance information, assigning 0 to the unit grids without the instance appearance information, and digitizing the two-dimensional plane network to obtain a digitized map;
s24, coloring the two-dimensional graph network according to the numerical map, constructing a two-dimensional grid map reflecting the indoor space structure condition, adding corresponding attribute description, and constructing a structural layer map;
and S25, coupling the semantic information with a two-dimensional grid map reflecting the indoor space structure condition, and constructing a semantic map for mobile robot recognition.
Preferably, the step S3 includes the steps of:
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, and letting the mobile robot randomly distribute an initial particle set χ_0 in the positioning space, the particle swarm keeping a sampling number of M;
s32, sampling a structure with obvious characteristics in the indoor space by a visual sensor of the mobile robot and adopting a YOLOv5 target detection algorithm to obtain sampling data of the characteristic structure and obtain position coordinates (x, y) of the characteristic structure under a robot base_link coordinate system;
S33, roughly estimating the pose of the mobile robot according to the data acquired in step S32, and updating the spatial random particle set χ_1;
S34, combining the IMU data and the state transition equation, the pose transformation from time t-1 to time t is obtained through dead reckoning, wherein the calculation formula of the pose transformation from time t-1 to time t is:

X_t = X_{t-1} + (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ)

wherein X_{t-1} = (x_{t-1}, y_{t-1}, θ_{t-1}) is the pose of the mobile robot at time t-1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ) is the pose change of the mobile robot from time t-1 to time t;
S35, calculating the angle values θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the distance measurement values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle value θ̂_t^{[i]} of each node of the identified type under the observation angle of each particle;
S36, calculating particle weight according to radar ranging data, wherein a particle weight formula is as follows:
Figure BDA0004182722880000052
wherein Z is the ranging value of the laser radar of the mobile robot in the step S35, X is the pose of the robot at the time t obtained in the step S34, m is a map, and the calculation process of the laser radar samples a likelihood domain model algorithm of the range finder, and the ranging value Z is carried out on each particle t ={z 1 ,z 2 ,…,z k Matching with surrounding range obstacles, wherein the larger the distance is, the smaller the weight is;
calculating particle weights according to the vision sensor data, wherein the particle weight formula is as follows:
Figure BDA0004182722880000053
wherein θ is the angle value of all nodes in the image acquired by the visual sensor data of the mobile robot in S35 relative to the positive direction of the robot, and X is the pose of the robot at the time t obtained in S34;
the calculation process of the particle weight is as follows:
first, calculating the measured node angle theta of the robot t ={θ 12 ,...,θ n Node angle is calculated with particles
Figure BDA0004182722880000054
Difference theta of dist
The calculation formula of the particle weight is as follows:
Figure BDA0004182722880000055
wherein the function prob (θ dist σ) calculates the probability of a gaussian distribution with 0 as the center standard deviation σ, the calculation being performed on the measured value Θ t ={θ 12 ,...,θ n Making a difference value with the calculated value of the particle, wherein the larger the difference value is, the smaller the weight of the particle is;
the weight value of each particle is then obtained by fusing the two terms:

w^{[i]} = η · w_lidar^{[i]} · w_vision^{[i]}

wherein η is a tuning parameter;
S37, normalizing the particle swarm, the normalization formula being:

w̄^{[i]} = w^{[i]} / Σ_{j=1}^{M} w^{[j]};
S38, calculating the short-term likelihood average w_slow and the long-term likelihood average w_fast that track the measurement probability, the calculation formulas being respectively:

w_slow = w_slow + α_slow(w_avg − w_slow);

w_fast = w_fast + α_fast(w_avg − w_fast);

wherein the parameters α_fast and α_slow are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast;

calculating the empirical measurement likelihood w_avg, the calculation formula being:

w_avg = (1/M) Σ_{i=1}^{M} w^{[i]};
comparing the magnitudes of the short-term likelihood average w_slow and the long-term likelihood average w_fast;

if w_fast is greater than or equal to w_slow, no random particles are added, and the resampling process draws an index i ∈ {1, ..., M} with probability proportional to w̄^{[i]} and adds the corresponding particle x_t^{[i]} to χ_t;

if w_fast is less than w_slow, random particles are added to the particle population according to p_rand = max{0.0, 1.0 − w_fast/w_slow}, increasing the random sampling;

after resampling ends, return to S34;
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not converged and the process returns to step S38.
Preferably, the step S4 includes the steps of:
S41, starting the scan matching algorithm and defining a scan matching space Δξ' near the current pose;
S42, limiting the direction and position of the scanning pose ξ̂ by Φ and Δ respectively, wherein Φ means that an angular scan is performed within the range defined by Φ around ξ̂, and Δ means that a position scan is performed within the range defined by Δ around ξ̂; Φ and Δ together form the scan matching space Δξ';

the number of scanning poses ξ' that needs to be calculated in each scanning space is 2n_Φ·n_Δ, wherein n_Φ and n_Δ are the set scan matching precision: the higher the precision, the larger these values, but the longer the scanning time;
S43, calculating the point cloud matching degree of each scanning pose ξ', the calculation formula being:

p_{ξ'} = Σ_i hit_i

wherein hit_i takes the value 0 when the position corresponding to point i in the map is not occupied, and takes the value 1 when the corresponding position is occupied;
S44, among all scanning poses ξ', the scanning pose ξ* corresponding to the maximum matching degree P_{ξ*} = max{p_1, p_2, ..., p_γ, ..., p_N} is selected and output as the final pose, and the pose of the mobile robot in the odom coordinate system is corrected accordingly.
In summary, the present application includes at least one of the following beneficial technical effects:
1. According to the present application, the prior information required by the mobile robot is automatically extracted from the BIM model, a two-dimensional grid map of the indoor space is constructed, semantic information is further coupled, and the safe motion area of the mobile robot is divided in the vertical and horizontal space, which avoids the extra work otherwise required to construct the robot navigation environment; the semantic map containing semantic information can help the robot understand the environment and facilitates the development of interactive navigation functions.
2. According to the present application, the vision sensor data and the laser radar data are fused in the AMCL-based positioning process, which optimizes the calculation of the particle weights so that the particle swarm converges faster, improving the positioning accuracy and robustness. After the particle swarm has essentially converged, point cloud scan matching is started, which further improves the alignment between the point cloud and the environment and further improves the positioning accuracy.
Drawings
Fig. 1 is a schematic diagram of the structural layer map, the semantic map, and the safe navigation layer map constructed in step S2 in the embodiment of the present application.
Fig. 2 is a flowchart of a map building and positioning method of a BIM-based indoor mobile robot in an embodiment of the present application.
Fig. 3 is a schematic diagram of the safe motion area division of the mobile robot in the embodiment of the present application.
Fig. 4 is a schematic diagram of pose transformation from time t-1 to time t in step S34 in the embodiment of the present application.
Fig. 5 is a schematic diagram of the scan matching space Δξ' formed by limiting the scanning pose ξ̂ with Φ and Δ in step S42 of the embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
After reading this specification, those skilled in the art may make modifications to the embodiments that involve no creative contribution, but such modifications are protected by patent law only within the scope of the claims of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application discloses a map construction and positioning method of an indoor mobile robot based on BIM.
The map construction and positioning method of the indoor mobile robot based on BIM comprises the following steps:
s1, acquiring structural information and semantic information of an indoor space based on a BIM model;
specifically, step S1 includes the steps of:
S11, acquiring all primitives of the navigation floor based on the BIM model, then creating a filter capable of filtering the required floor, and then filtering all primitives of the navigation floor with the filter to acquire the BIM family instances of the required floor;
s12, extracting structural information of an indoor space based on BIM family examples, and storing the extracted structural information of the indoor space, wherein the structural information is example appearance information, and comprises example external outlines and corresponding BIM coordinate system coordinates;
Specifically, the instance appearance information is mainly the geometric information of the building, such as geometric structural features with solid shapes (for example, the models and sizes of doors and windows) and the BIM coordinate system coordinates corresponding to these features; the structure information is stored as a Json format file;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing the extracted semantic information, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of door and window structure instance types with obvious characteristics in the structure information, and the space region description information comprises the room number and the coordinate position and description of the functional region of the described space region;
Specifically, structure instance types with obvious characteristics, such as doors and windows, are defined as semantic information, so that the mobile robot can identify and locate these characteristic doors and windows; the semantic information is likewise stored as a Json format file.
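As an illustrative aid to steps S11-S13, the following minimal Python sketch shows one way the per-floor structure information and semantic information could be extracted from a BIM model and stored as Json files. The use of the open-source ifcopenshell library, the file names, the storey name "Level 1", and the Json layout are assumptions made for the example and are not prescribed by the present method.

```python
# Minimal sketch (not the patent's actual implementation): extract per-floor
# structure and semantic information from an IFC model with ifcopenshell.
# Library choice, element filtering and Json layout are assumptions.
import json

import ifcopenshell
import ifcopenshell.util.placement as placement

model = ifcopenshell.open("building.ifc")  # hypothetical file name


def on_storey(element, storey_name):
    """Return True if the element is contained in the named building storey."""
    for rel in model.by_type("IfcRelContainedInSpatialStructure"):
        if (rel.RelatingStructure.is_a("IfcBuildingStorey")
                and rel.RelatingStructure.Name == storey_name
                and element in rel.RelatedElements):
            return True
    return False


structure, semantics = [], []
for ifc_type in ("IfcWall", "IfcDoor", "IfcWindow"):
    for elem in model.by_type(ifc_type):
        if not on_storey(elem, "Level 1"):          # "Level 1" is a placeholder
            continue
        # 4x4 placement matrix; the last column holds the insertion point in BIM coordinates
        m = placement.get_local_placement(elem.ObjectPlacement)
        x, y = float(m[0][3]), float(m[1][3])
        structure.append({"type": ifc_type, "id": elem.GlobalId, "origin": [x, y]})
        if ifc_type in ("IfcDoor", "IfcWindow"):    # feature-rich instances feed the semantic layer
            semantics.append({"id": elem.GlobalId, "class": ifc_type, "center": [x, y]})

with open("structure.json", "w") as f:
    json.dump(structure, f, indent=2)
with open("semantics.json", "w") as f:
    json.dump(semantics, f, indent=2)
```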
S2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
specifically, step S2 includes the steps of:
S21, firstly constructing a gridded two-dimensional plane network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, assigning coordinates to the unit grids, and then setting the lower-left corner coordinate as the origin (0, 0);
S22, mapping the instance appearance information into the coordinated two-dimensional plane network of step S21 according to the structure information of the indoor space in the BIM model, the mapping relation formula being:
(i,j)=(ceil(x/r),ceil(y/r))
wherein i and j represent the position of the grid, x and y represent the actual position of the instance appearance information relative to the coordinate origin, and r is the length represented by one unit grid of the grid map;
s23, assigning 1 to the unit grids with the instance appearance information, assigning 0 to the unit grids without the instance appearance information, and digitizing the two-dimensional plane network to obtain a digitized map;
wherein a unit grid with instance appearance information is one in which geometric features contained in the structure information, such as doors and windows, are present, and a unit grid without instance appearance information is one in which no such geometric features are present;
s24, coloring the two-dimensional graph network according to the numerical map, constructing a two-dimensional grid map reflecting the indoor space structure condition, adding corresponding attribute description, and constructing a structural layer map;
and S25, coupling the semantic information with a two-dimensional grid map reflecting the indoor space structure condition, and constructing a semantic map for mobile robot recognition.
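To make steps S21-S23 concrete, the short Python sketch below rasterises instance outlines into a 0/1 grid using the stated mapping (i, j) = (ceil(x/r), ceil(y/r)). The grid resolution, the map extent, the dense sampling of the outline segments, and the example wall contour are illustrative assumptions rather than values fixed by the method.

```python
# Sketch of steps S21-S23: rasterise instance outlines into a 0/1 grid map
# using (i, j) = (ceil(x / r), ceil(y / r)). Resolution, extent and the example
# outline are assumptions.
import math

import numpy as np

r = 0.05                                   # length represented by one unit grid, in metres
width_m, height_m = 20.0, 12.0             # assumed floor extent
grid = np.zeros((math.ceil(height_m / r) + 1, math.ceil(width_m / r) + 1), dtype=np.uint8)


def to_cell(x, y):
    """Map a point (metres, relative to the lower-left origin) to a grid cell (i, j)."""
    return math.ceil(x / r), math.ceil(y / r)


def mark_outline(points):
    """Mark every cell touched by a densely sampled instance outline as occupied (1)."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        n = max(2, int(math.hypot(x1 - x0, y1 - y0) / r) + 1)
        for k in range(n):
            t = k / (n - 1)
            i, j = to_cell(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            grid[j, i] = 1                 # 1 = occupied, 0 = free


# Example: the outer contour of one wall segment (coordinates are made up)
mark_outline([(2.0, 3.0), (8.0, 3.0), (8.0, 3.2), (2.0, 3.2), (2.0, 3.0)])
print(grid.sum(), "occupied cells")
```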
Preferably, step S2 further includes step S26:
S26, generating an initial safe navigation layer map on the basis of the two-dimensional grid map obtained in step S24; the mobile robot then judges, in combination with its own parameters, whether the safe areas on the initial safe navigation layer map meet its motion safety requirements, and updates the initial safe navigation layer map according to these parameters to generate a new safe navigation layer map;
Specifically, the mobile robot compares its working height (the upper height of the IfcSlab plus the robot height R_Height) with the bottom height of each obstacle (such as the lower edge of a door), and any area higher than the mobile robot is regarded as height-free space. Then the width R_Width of the mobile robot is compared with the width of structures such as doors, and any space wider than the mobile robot is regarded as width-free space. When the height-free and width-free conditions are satisfied at the same time, the area is a safe area that the mobile robot can pass through, and the safe motion areas of the robot in the vertical and horizontal space are divided accordingly. Combined with the kinematics of the robot, the safe motion area R_Security of the mobile robot is analyzed and dangerous areas are divided near obstacles; the division principle is shown in Fig. 3. The grid values are updated and colored on the two-dimensional grid map, constructing a new safe navigation layer map that helps the mobile robot move safely, so that the mobile robot can move freely and safely in the indoor space. A small sketch of this traversability test follows.
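The following minimal Python sketch illustrates the height-free/width-free test described above. The robot dimensions, the safety margin, and the Opening data structure are assumptions introduced only for illustration.

```python
# Sketch of the safe-area test in step S26: an opening is traversable only when
# it is both taller and wider than the robot. Numeric values are assumptions.
from dataclasses import dataclass


@dataclass
class Opening:
    bottom_height: float   # height of the obstacle's lower edge above the slab (m)
    width: float           # clear width of the opening (m)


def is_safe(opening, slab_top=0.0, r_height=0.4, r_width=0.35, margin=0.05):
    """Return True if the opening is both a height-free and a width-free space."""
    height_free = opening.bottom_height > slab_top + r_height + margin
    width_free = opening.width > r_width + 2 * margin
    return height_free and width_free


doorway = Opening(bottom_height=2.0, width=0.9)
low_duct = Opening(bottom_height=0.3, width=1.5)
print(is_safe(doorway))    # True  -> part of the safe motion area
print(is_safe(low_duct))   # False -> marked as a dangerous area near the obstacle
```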
S3, operating the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, fusing the vision sensor data and the laser radar data, correcting the particle weight calculation process, and judging whether the particle swarm has converged; if it has converged, the process enters S4, and if it has not converged, S3 is repeated until the particle swarm converges, after which step S4 is entered;
Specifically, step S3 includes the following steps:
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, and letting the mobile robot randomly distribute an initial particle set χ_0 in the positioning space, the particle swarm keeping a sampling number of M;
S32, sampling the structure information of the indoor space through the vision sensor of the mobile robot; specifically, the mobile robot samples structures with obvious characteristics in the indoor space using the YOLOv5 target detection algorithm, obtains the sampling data of the characteristic structures, and records the position coordinates (x, y) of the characteristic structures in the robot base_link coordinate system;
S33, roughly estimating the pose of the mobile robot according to the data acquired in step S32, and updating the spatial random particle set χ_1;
S34, combining the IMU data and the state transition equation, the pose transformation from time t-1 to time t is obtained through dead reckoning, wherein the calculation formula of the pose transformation from time t-1 to time t is:

X_t = X_{t-1} + (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ)

wherein X_{t-1} = (x_{t-1}, y_{t-1}, θ_{t-1}) is the pose of the mobile robot at time t-1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ) is the pose change of the mobile robot from time t-1 to time t;
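As an illustration of this prediction step, the Python sketch below propagates a particle set with the pose-change formula above. The added Gaussian noise terms and their standard deviations are assumptions commonly used in particle filters, not values stated by the method.

```python
# Sketch of the S34 prediction step: each particle moves by
# (v*dt*cos(theta+dtheta), v*dt*sin(theta+dtheta), dtheta), plus a little
# Gaussian noise (the noise model is an assumption).
import numpy as np

rng = np.random.default_rng(0)


def predict(particles, v, dtheta, dt, sigma_xy=0.02, sigma_theta=0.01):
    """particles: (M, 3) array of [x, y, theta]; v from odometry, dtheta from the IMU."""
    theta = particles[:, 2] + dtheta
    particles[:, 0] += v * dt * np.cos(theta) + rng.normal(0, sigma_xy, len(particles))
    particles[:, 1] += v * dt * np.sin(theta) + rng.normal(0, sigma_xy, len(particles))
    particles[:, 2] = (theta + rng.normal(0, sigma_theta, len(particles)) + np.pi) % (2 * np.pi) - np.pi
    return particles


particles = rng.uniform([-1.0, -1.0, -np.pi], [1.0, 1.0, np.pi], size=(500, 3))  # chi_0
particles = predict(particles, v=0.3, dtheta=0.05, dt=0.1)
```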
S35, calculating the angle values θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the distance measurement values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle value θ̂_t^{[i]} of each node of the identified type under the observation angle of each particle;
S36, calculating particle weight according to radar ranging data, wherein a particle weight formula is as follows:
Figure BDA0004182722880000124
wherein Z is the ranging value of the laser radar of the mobile robot in the step S35, X is the pose of the robot at the time t obtained in the step S34, and m is a map. The calculation process of the laser radar samples a likelihood domain model algorithm of a range finder, and a range value Z is obtained by carrying out range finding on each particle t ={z 1 ,z 2 ,...,z k Matching with surrounding range obstacles, wherein the larger the distance is, the smaller the weight is;
calculating the particle weight according to the vision sensor data, the particle weight formula being:

w_vision^{[i]} = p(θ_t | X_t^{[i]}, m)

wherein θ_t is the angle value, relative to the positive direction of the robot, of all nodes in the image acquired by the vision sensor of the mobile robot in S35, and X_t^{[i]} is the pose of the particle at time t obtained in S34;
the calculation process of this particle weight is as follows:

first, calculate the difference θ_dist between the node angles θ_t = {θ_1, θ_2, ..., θ_n} measured by the robot and the node angles θ̂_t^{[i]} computed for the particle:

θ_dist = θ_j − θ̂_j^{[i]}, for each matched node j;
the particle weight is then calculated as:

w_vision^{[i]} = ∏ prob(θ_dist, σ)

wherein the function prob(θ_dist, σ) calculates the probability of θ_dist under a Gaussian distribution centered at 0 with standard deviation σ; the calculation takes the difference between the measured values θ_t = {θ_1, θ_2, ..., θ_n} and the values computed for the particle, and the larger the difference, the smaller the particle weight.
The weight value of each particle is then obtained by fusing the two terms:

w^{[i]} = η · w_lidar^{[i]} · w_vision^{[i]}

where η is a tuning parameter.
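A compact Python sketch of this fused weight update follows. It is only an illustration: the likelihood-field term is reduced to per-beam endpoint clearances, node matching is simplified to the nearest predicted angle, and the σ values and the multiplicative combination with η are assumptions.

```python
# Sketch of the S36 weight update: a likelihood-field style lidar term and a
# zero-mean Gaussian term on the angle difference between measured and predicted
# semantic nodes, combined with the tuning parameter eta. Sigma values and the
# nearest-angle node matching are assumptions.
import numpy as np


def prob(dist, sigma):
    """Probability of `dist` under a zero-mean Gaussian with standard deviation sigma."""
    return np.exp(-0.5 * (dist / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))


def lidar_weight(endpoint_clearance, sigma_hit=0.2):
    """endpoint_clearance: distance from each projected beam endpoint to the nearest
    occupied map cell; the larger the clearance, the smaller the weight."""
    return np.prod(prob(endpoint_clearance, sigma_hit))


def vision_weight(theta_meas, theta_pred, sigma=0.1):
    """theta_meas: node angles seen by the camera (rad, robot frame);
    theta_pred: node angles computed for this particle from the semantic map."""
    w = 1.0
    for th in theta_meas:
        diff = np.arctan2(np.sin(theta_pred - th), np.cos(theta_pred - th))
        theta_dist = np.min(np.abs(diff))          # nearest predicted node
        w *= prob(theta_dist, sigma)
    return w


def particle_weight(endpoint_clearance, theta_meas, theta_pred, eta=1.0):
    return eta * lidar_weight(endpoint_clearance) * vision_weight(theta_meas, theta_pred)


w = particle_weight(np.array([0.05, 0.10, 0.02]),   # per-beam clearances (m)
                    np.array([0.3, -0.8]),           # measured node angles
                    np.array([0.28, -0.85, 1.4]))    # predicted node angles
```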
S37, normalizing the particle swarm, the normalization formula being:

w̄^{[i]} = w^{[i]} / Σ_{j=1}^{M} w^{[j]}
S38, resampling the particle swarm;

first, calculate the short-term likelihood average w_slow and the long-term likelihood average w_fast that track the measurement probability, the calculation formulas being respectively:

w_slow = w_slow + α_slow(w_avg − w_slow);

w_fast = w_fast + α_fast(w_avg − w_fast);

wherein the parameters α_fast and α_slow are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast;

calculating the empirical measurement likelihood w_avg, the calculation formula being:

w_avg = (1/M) Σ_{i=1}^{M} w^{[i]};
comparing the magnitudes of the short-term likelihood average w_slow and the long-term likelihood average w_fast;

if w_fast is greater than or equal to w_slow, no random particles are added, and the resampling process draws an index i ∈ {1, ..., M} with probability proportional to w̄^{[i]} and adds the corresponding particle x_t^{[i]} to χ_t;

if w_fast is less than w_slow, random particles are added to the particle population according to p_rand = max{0.0, 1.0 − w_fast/w_slow}, the pseudo code being expressed as:

with probability max{0.0, 1.0 − w_fast/w_slow} do
    add random pose to χ_t
after the end, the process returns to S34.
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not converged and the process returns to step S38.
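For illustration, the Python sketch below combines the normalisation of S37, the w_slow/w_fast exponential filters and random-particle injection of S38, and a simple variance-based convergence check in the spirit of S39. The α values, the sampling bounds, and the variance threshold are assumptions.

```python
# Sketch of S37-S39: normalise weights, track short- and long-term likelihood
# averages, inject random particles with probability max(0, 1 - w_fast/w_slow),
# and check convergence via the particle variance. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)


def resample(particles, weights, state, alpha_slow=0.001, alpha_fast=0.1,
             bounds=((-5.0, 5.0), (-5.0, 5.0))):
    M = len(particles)
    w_avg = weights.mean()                      # empirical measurement likelihood
    weights = weights / weights.sum()           # S37 normalisation
    state["w_slow"] += alpha_slow * (w_avg - state["w_slow"])
    state["w_fast"] += alpha_fast * (w_avg - state["w_fast"])
    p_rand = max(0.0, 1.0 - state["w_fast"] / state["w_slow"])

    new = np.empty_like(particles)
    for i in range(M):
        if rng.random() < p_rand:               # add a random pose to chi_t
            new[i] = [rng.uniform(*bounds[0]), rng.uniform(*bounds[1]),
                      rng.uniform(-np.pi, np.pi)]
        else:                                   # draw an index with probability ~ weight
            new[i] = particles[rng.choice(M, p=weights)]
    return new


state = {"w_slow": 1e-3, "w_fast": 1e-3}
particles = rng.uniform([-5.0, -5.0, -np.pi], [5.0, 5.0, np.pi], size=(500, 3))
weights = rng.random(500)
particles = resample(particles, weights, state)
converged = particles[:, :2].var(axis=0).sum() < 0.05   # S39-style variance check
```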
And S4, running a scanning matching module, optimizing the alignment degree of the positioning point cloud, and outputting an optimized positioning result.
Specifically, step S4 includes the steps of:
S41, starting the scan matching algorithm and defining a scan matching space Δξ' near the current pose;
S42, limiting the direction and position of the scanning pose ξ̂ by Φ and Δ respectively, wherein Φ means that an angular scan is performed within the range defined by Φ around ξ̂, and Δ means that a position scan is performed within the range defined by Δ around ξ̂; Φ and Δ together form the scan matching space Δξ';

the number of scanning poses ξ' that needs to be calculated in each scanning space is 2n_Φ·n_Δ, wherein n_Φ and n_Δ are the set scan matching precision: the higher the precision, the larger these values, but the longer the scanning time;
S43, calculating the point cloud matching degree of each scanning pose ξ', the calculation formula being:

p_{ξ'} = Σ_i hit_i

wherein hit_i takes the value 0 when the position corresponding to point i in the map is not occupied, and takes the value 1 when the corresponding position is occupied;
S44, among all scanning poses ξ', the scanning pose ξ* corresponding to the maximum matching degree P_{ξ*} = max{p_1, p_2, ..., p_γ, ..., p_N} is selected and output as the final pose, the pose of the mobile robot in the odom coordinate system is corrected, and the positioning result of the robot in the indoor space is finally obtained.
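The Python sketch below illustrates this scan-matching stage: candidate poses are enumerated in a small window around the AMCL estimate, each candidate is scored by the fraction of projected lidar endpoints that land on occupied grid cells, and the best-scoring pose is returned. The window sizes, the step counts, and the toy map and scan are assumptions made only for the example.

```python
# Sketch of steps S41-S44: enumerate candidate poses in a small window around the
# AMCL estimate, score each by the fraction of lidar endpoints that fall on
# occupied grid cells, and keep the best. Window sizes and the toy data are assumptions.
import numpy as np


def score(pose, scan_xy, grid, r):
    """Fraction of scan points whose map cell is occupied when projected from `pose`."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    wx = x + c * scan_xy[:, 0] - s * scan_xy[:, 1]
    wy = y + s * scan_xy[:, 0] + c * scan_xy[:, 1]
    i = np.clip((wx / r).astype(int), 0, grid.shape[1] - 1)
    j = np.clip((wy / r).astype(int), 0, grid.shape[0] - 1)
    return grid[j, i].mean()


def scan_match(pose0, scan_xy, grid, r, phi=0.05, delta=0.10, n_phi=5, n_delta=5):
    best_pose, best_p = pose0, -1.0
    for dth in np.linspace(-phi, phi, 2 * n_phi + 1):
        for dx in np.linspace(-delta, delta, 2 * n_delta + 1):
            for dy in np.linspace(-delta, delta, 2 * n_delta + 1):
                cand = (pose0[0] + dx, pose0[1] + dy, pose0[2] + dth)
                p = score(cand, scan_xy, grid, r)
                if p > best_p:
                    best_pose, best_p = cand, p
    return best_pose, best_p   # best_pose is used to correct the pose in the odom frame


# Toy example: a 10 m x 10 m map with one occupied wall row, and a synthetic scan.
r = 0.05
grid = np.zeros((200, 200), dtype=np.uint8)
grid[100, :] = 1                                            # a wall at y = 5 m
scan_xy = np.column_stack([np.linspace(-2, 2, 50), np.full(50, 3.0)])  # robot frame
pose, p = scan_match((5.0, 2.02, 0.0), scan_xy, grid, r)
```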
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (5)

1. The map construction and positioning method of the indoor mobile robot based on the BIM is characterized by comprising the following steps of:
s1, acquiring structural information and semantic information of an indoor space based on a BIM model;
s2, generating a grid map reflecting the structural condition of the indoor space based on the structural information of the indoor space, constructing a structural layer map of the indoor space, and coupling semantic information of the indoor space to construct a semantic map of the indoor space;
s3, operating an AMCL positioning module based on a structural layer map and a semantic map of an indoor space structure, fusing vision sensor data and laser radar data, correcting a particle weight calculation process, judging whether a particle swarm is converged, entering S4 if the particle swarm is converged, and repeating S3 if the particle swarm is not converged;
and S4, running a scanning matching module, optimizing the alignment degree of the positioning point cloud, and outputting a positioning result.
2. The map construction and positioning method of a BIM-based indoor mobile robot according to claim 1, wherein the S1 includes the steps of:
s11, acquiring all primitives of a navigation floor based on a BIM model, and creating a filter to acquire a BIM family instance of a required floor;
s12, extracting structural information of an indoor space based on BIM family examples, and storing the extracted structural information of the indoor space, wherein the structural information is example appearance information and comprises example external outlines and corresponding BIM coordinate system coordinates;
S13, extracting semantic information of the indoor space based on the BIM family instances and storing the extracted semantic information, wherein the semantic information of the indoor space comprises instance information and space region description information; the instance information comprises the ID and geometric center coordinates of door and window structure instance types with obvious characteristics in the structure information, and the space region description information comprises the room number and the coordinate position and description of the functional region of the described space region.
3. The map construction and positioning method of a BIM-based indoor mobile robot according to claim 1, wherein the S2 includes the steps of:
S21, constructing a gridded two-dimensional plane network of the indoor space, setting the size of a unit grid to the minimum width d of IfcWall, assigning coordinates to the unit grids, and setting the lower-left corner coordinate as the origin (0, 0);
s22, mapping the instance appearance information into a planar network according to the structure information of the indoor space in the BIM model, wherein the mapping relation formula is as follows:
(i,j)=(ceil(x/r),ceil(y/r))
wherein i and j represent the position of the grid, x and y represent the actual position of the instance appearance information relative to the coordinate origin, and r is the length represented by one unit grid of the grid map;
s23, assigning 1 to the unit grids with the instance appearance information, assigning 0 to the unit grids without the instance appearance information, and digitizing the two-dimensional plane network to obtain a digitized map;
s24, coloring the two-dimensional graph network according to the numerical map, constructing a two-dimensional grid map reflecting the indoor space structure condition, adding corresponding attribute description, and constructing a structural layer map;
and S25, coupling the semantic information with a two-dimensional grid map reflecting the indoor space structure condition, and constructing a semantic map for mobile robot recognition.
4. The map construction and positioning method of a BIM-based indoor mobile robot according to claim 1, wherein the S3 includes the steps of:
S31, starting the AMCL positioning module based on the structural layer map and the semantic map of the indoor space, and letting the mobile robot randomly distribute an initial particle set χ_0 in the positioning space, the particle swarm keeping a sampling number of M;
s32, sampling a structure with obvious characteristics in an indoor space through a visual sensor of a mobile robot and by adopting a YOLOv5 target detection algorithm to obtain sampling data of the characteristic structure and obtain position coordinates (x, y) of the characteristic structure under a robot base_link coordinate system;
S33, roughly estimating the pose of the mobile robot according to the data acquired in step S32, and updating the spatial random particle set χ_1;
S34, combining the IMU data and the state transition equation, the pose transformation from time t-1 to time t is obtained through dead reckoning, wherein the calculation formula of the pose transformation from time t-1 to time t is:

X_t = X_{t-1} + (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ)

wherein X_{t-1} = (x_{t-1}, y_{t-1}, θ_{t-1}) is the pose of the mobile robot at time t-1, X_t = (x_t, y_t, θ_t) is the pose of the mobile robot at time t, and (v·Δt·cos(θ_{t-1}+Δθ), v·Δt·sin(θ_{t-1}+Δθ), Δθ) is the pose change of the mobile robot from time t-1 to time t;
S35, calculating the angle values θ_t = {θ_1, θ_2, ..., θ_n} of all nodes in the currently acquired image relative to the positive direction of the robot, and the distance measurement values Z_t = {z_1, z_2, ..., z_k} of the laser radar; meanwhile, according to the node position information provided by the semantic map obtained in step S2, calculating the angle value θ̂_t^{[i]} of each node of the identified type under the observation angle of each particle;
S36, calculating particle weight according to radar ranging data, wherein a particle weight formula is as follows:
Figure FDA0004182722870000034
z is a ranging value of a laser radar of the mobile robot in the step S35, X is a pose of the robot at the time t obtained in the step S34, and m is a map; the calculation process of the laser radar samples a likelihood domain model algorithm of a range finder, and a range value Z is obtained by carrying out range finding on each particle t ={z 1 ,z 2 ,…,z k Matching with surrounding range obstacles, wherein the larger the distance is, the smaller the weight is;
calculating the particle weight according to the vision sensor data, the particle weight formula being:

w_vision^{[i]} = p(θ_t | X_t^{[i]}, m)

wherein θ_t is the angle value, relative to the positive direction of the robot, of all nodes in the image acquired by the vision sensor of the mobile robot in S35, and X_t^{[i]} is the pose of the particle at time t obtained in S34;
the calculation process of this particle weight is as follows:

first, calculate the difference θ_dist between the node angles θ_t = {θ_1, θ_2, ..., θ_n} measured by the robot and the node angles θ̂_t^{[i]} computed for the particle;
The calculation formula of the particle weight is as follows:
Figure FDA0004182722870000043
wherein the function prob (θ dist σ) calculates the probability of a gaussian distribution with 0 as the center standard deviation σ, the calculation being performed on the measured value Θ t ={θ 12 ,...,θ n Making a difference value with the calculated value of the particle, wherein the larger the difference value is, the smaller the weight of the particle is;
the weight value of each particle is then obtained by fusing the two terms:

w^{[i]} = η · w_lidar^{[i]} · w_vision^{[i]}

wherein η is a tuning parameter;
S37, normalizing the particle swarm, the normalization formula being:

w̄^{[i]} = w^{[i]} / Σ_{j=1}^{M} w^{[j]};
S38, calculating the short-term likelihood average w_slow and the long-term likelihood average w_fast that track the measurement probability, the calculation formulas being respectively:

w_slow = w_slow + α_slow(w_avg − w_slow);

w_fast = w_fast + α_fast(w_avg − w_fast);

wherein the parameters α_fast and α_slow are the decay rates of the exponential filters estimating the long-term and short-term averages respectively, and 0 ≤ α_slow ≤ α_fast;

calculating the empirical measurement likelihood w_avg, the calculation formula being:

w_avg = (1/M) Σ_{i=1}^{M} w^{[i]};
comparing the magnitudes of the short-term likelihood average w_slow and the long-term likelihood average w_fast;

if w_fast is greater than or equal to w_slow, no random particles are added, and the resampling process is:

draw i ∈ {1, ..., N} with probability proportional to w̄_t^{[i]}
add x_t^{[i]} to χ_t

if w_fast is less than w_slow, random particles are added to the particle population according to p_rand = max{0.0, 1.0 − w_fast/w_slow}, increasing the random sampling;

after the end, return to S34;
S39, calculating the particle swarm variance D(X); if D(X) is smaller than a preset limit ε, the particle swarm is considered converged and the process enters S4; if D(X) is greater than ε, the particle swarm is considered not converged and the process returns to step S38.
5. The map construction and positioning method of a BIM-based indoor mobile robot according to claim 4, wherein the S4 includes the steps of:
S41, starting the scan matching algorithm and defining a scan matching space Δξ' near the current pose;
S42, limiting the direction and position of the scanning pose ξ̂ by Φ and Δ respectively, wherein Φ means that an angular scan is performed within the range defined by Φ around ξ̂, and Δ means that a position scan is performed within the range defined by Δ around ξ̂; Φ and Δ together form the scan matching space Δξ';
S43, calculating the point cloud matching degree of each scanning pose ξ', the calculation formula being:

p_{ξ'} = Σ_i hit_i

wherein hit_i takes the value 0 when the position corresponding to point i in the map is not occupied, and takes the value 1 when the corresponding position is occupied;
S44, among all scanning poses ξ', the scanning pose ξ* corresponding to the maximum matching degree P_{ξ*} = max{p_1, p_2, ..., p_γ, ..., p_N} is selected and output as the final pose, and the pose of the mobile robot in the odom coordinate system is corrected accordingly.
CN202310409184.6A 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM Active CN116399328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310409184.6A CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310409184.6A CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Publications (2)

Publication Number Publication Date
CN116399328A true CN116399328A (en) 2023-07-07
CN116399328B CN116399328B (en) 2024-06-21

Family

ID=87013982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310409184.6A Active CN116399328B (en) 2023-04-17 2023-04-17 Map construction and positioning method of indoor mobile robot based on BIM

Country Status (1)

Country Link
CN (1) CN116399328B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170100204A (en) * 2016-02-25 2017-09-04 한국전자통신연구원 Apparatus and method for target tracking using 3d bim
CN106709481A (en) * 2017-03-03 2017-05-24 深圳市唯特视科技有限公司 Indoor scene understanding method based on 2D-3D semantic data set
CN109682382A (en) * 2019-02-28 2019-04-26 电子科技大学 Global fusion and positioning method based on adaptive Monte Carlo and characteristic matching
CN111539994A (en) * 2020-04-28 2020-08-14 武汉科技大学 Particle filter repositioning method based on semantic likelihood estimation
CN111754566A (en) * 2020-05-12 2020-10-09 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Robot scene positioning method and construction operation method
CN112082553A (en) * 2020-07-24 2020-12-15 广州易来特自动驾驶科技有限公司 Indoor positioning method and positioning device based on WIFI and laser radar and robot
US20220244740A1 (en) * 2021-02-02 2022-08-04 Wuhan University Of Science And Technology Method of constructing indoor two-dimensional semantic map with wall corner as critical feature based on robot platform
CN113483747A (en) * 2021-06-25 2021-10-08 武汉科技大学 Improved AMCL (advanced metering library) positioning method based on semantic map with corner information and robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Matthew Peavy et al.: "Integration of real-time semantic building map updating with adaptive Monte Carlo localization (AMCL) for robust indoor mobile robot localization", Applied Sciences, vol. 13, no. 2, 9 January 2023 (2023-01-09) *
Wang Weidong et al.: "Simulation of CRTS slab ballastless track laying schedule based on Monte Carlo and BIM", Journal of Central South University, vol. 50, no. 7, 31 July 2019 (2019-07-31)

Also Published As

Publication number Publication date
CN116399328B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
Mansouri et al. Cooperative coverage path planning for visual inspection
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
Nieto et al. Recursive scan-matching SLAM
CN103268729B (en) Based on mobile robot's tandem type map creating method of composite character
CN112525202A (en) SLAM positioning and navigation method and system based on multi-sensor fusion
Lee et al. Probabilistic map merging for multi-robot RBPF-SLAM with unknown initial poses
CN114820932B (en) Panoramic three-dimensional scene understanding method based on graph neural network and relation optimization
CN113108773A (en) Grid map construction method integrating laser and visual sensor
Wulf et al. Ground truth evaluation of large urban 6D SLAM
CN111476286A (en) Map construction method for mobile robot
Zhou et al. A lidar odometry for outdoor mobile robots using ndt based scan matching in gps-denied environments
Magree et al. Combined laser and vision-aided inertial navigation for an indoor unmanned aerial vehicle
Garrote et al. 3D point cloud downsampling for 2D indoor scene modelling in mobile robotics
Demim et al. Simultaneous localisation and mapping for autonomous underwater vehicle using a combined smooth variable structure filter and extended kalman filter
Torres et al. Occupancy grid map to pose graph-based map: Robust bim-based 2d-lidar localization for lifelong indoor navigation in changing and dynamic environments
Zhang et al. Lidar-inertial 3d slam with plane constraint for multi-story building
Liu et al. A localizability estimation method for mobile robots based on 3d point cloud feature
CN116399328B (en) Map construction and positioning method of indoor mobile robot based on BIM
Rivadeneyra et al. Probabilistic multi-level maps from LIDAR data
Wulf et al. Using 3D data for Monte Carlo localization in complex indoor environments
Lee et al. Development of advanced grid map building model based on sonar geometric reliability for indoor mobile robot localization
Hernández-García et al. 3d city models: Mapping approach using lidar technology
Redondo et al. A compact representation of the environment and its frontiers for autonomous vehicle navigation
Ryu Autonomous robotic strategies for urban search and rescue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant