CN106959697B - Automatic indoor map construction system for long straight corridor environment - Google Patents


Info

Publication number
CN106959697B
Authority
CN
China
Prior art keywords
robot
point
support frame
grid
automatic telescopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710342694.0A
Other languages
Chinese (zh)
Other versions
CN106959697A (en)
Inventor
杨亮
田丰溥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China Zhongshan Institute
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN201710342694.0A priority Critical patent/CN106959697B/en
Publication of CN106959697A publication Critical patent/CN106959697A/en
Application granted granted Critical
Publication of CN106959697B publication Critical patent/CN106959697B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 — Control of position or course in two dimensions
    • G05D 1/021 — specially adapted to land vehicles
    • G05D 1/0231 — using optical position detecting means
    • G05D 1/0234 — using optical markers or beacons
    • G05D 1/0236 — using optical markers or beacons in combination with a laser
    • G05D 1/0246 — using a video camera in combination with image processing means
    • G05D 1/0276 — using signals provided by a source external to the vehicle
    • G05D 1/028 — using a RF signal
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06T 3/4038 — Geometric image transformations; scaling; image mosaicing, e.g. composing plane images from plane sub-images
    • G06V 10/40 — Image or video recognition or understanding; extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an automatic indoor map construction system for long straight corridor environments, comprising a robot body, a vision acquisition device, an automatic telescopic bracket, a laser range finder, and a controller. The vision acquisition device is mounted on the robot body through the automatic telescopic bracket; the robot body consists of driving wheels, supporting wheels, a bottom plate, a second-layer clamping plate, and a top-layer clamping plate. The controller uses depth information collected by the laser range finder to judge whether the robot has entered a long straight corridor environment, and then collects surrounding image and depth information through the vision acquisition device. This improves mapping precision and effectively alleviates the map-length shortening and bending that false loop-closure detection causes in traditional mapping algorithms in long straight corridor environments.

Description

Automatic indoor map construction system for long straight corridor environment
Technical Field
The invention relates to an automatic indoor map construction system for long straight corridor environments.
Background
Automatic indoor map construction has long been a research hotspot in autonomous mobile robot navigation. Map construction generally relies on data from the robot's chassis encoders, a visual odometer, a laser range finder, and similar sensors to estimate actual distances and dimensions. In a long corridor, however, the two side walls are highly similar, so mapping is hampered by sparse feature points and high environmental self-similarity; existing methods therefore cannot accurately complete map construction in such scenes, and the estimated corridor length in particular carries large errors.
Popular two-dimensional laser mapping algorithms such as Gmapping and Hector SLAM carry relatively little information in their 2D laser data. In a long straight corridor, the robot's straight-line motion is therefore easily misjudged as encoder-odometer drift, causing the algorithm to relocalize the robot; the resulting corridor map comes out shorter than the real distance, and overlapping can occur.
Improving mapping accuracy in long straight corridor scenes has therefore become an urgent problem.
Disclosure of Invention
The technical problem addressed by the invention is to provide an automatic indoor map construction system for long straight corridor environments. Compared with traditional map construction systems, it effectively improves mapping precision in long straight corridors and reduces the map-length shortening and bending caused by loop-closure detection errors in the mapping algorithm.
To solve this problem, the adopted technical scheme comprises a robot body (1), a vision acquisition device (2), an automatic telescopic bracket (3), a laser range finder (4), and a controller (5). The vision acquisition device (2) is mounted on the robot body (1) through the automatic telescopic bracket (3). The robot body (1) comprises driving wheels (11), supporting wheels (12), a bottom plate (13), a second-layer clamping plate (14), and a top-layer clamping plate (15); the controller (5) is mounted on the bottom plate (13), the laser range finder (4) is mounted on the second-layer clamping plate (14), and a direct-current motor (131) and an encoder (132) are mounted on the bottom plate (13).
The vision acquisition device (2) consists of an RGBD camera (22), a bottom support plate (23), a U-shaped support frame (24), a top support plate (25), a single-axis servo (26), and a dual-axis servo (27). The bottom support plate (23) can drive the U-shaped support frame (24) through the single-axis servo (26) to rotate 360 degrees in the yaw direction, and the U-shaped support frame (24) can drive the top support plate (25) through the dual-axis servo (27) to rotate ±45 degrees in the pitch direction. The top support plate (25) is fixed on the dual-axis servo (27) through mounting holes, and the bottom support plate (23) can be fixed on the lifting plate (32) through the toothed guide rail (31).
The automatic telescopic bracket (3) is mounted on the robot body (1) and carries the toothed guide rail (31), the lifting plate (32), and the annular support frame (342). The annular support frame (342) is fixed on the robot's top-layer clamping plate (15) through mounting holes; the motor support frame (341) is welded to the annular support frame (342); the stepping motor (332) is fixed on the motor support frame (341) with a set screw (343) and, by meshing through the gear (331), realizes the lifting of the vision acquisition device (2); the toothed guide rail (31) is mounted vertically, and the motor support frame (341) is connected with the annular support frame (342) through a rolling bearing (311). The two stepping motors (332), the single-axis servo (26), the dual-axis servo (27), and the direct-current motor (131) are connected to the outputs of the controller (5); the laser range finder (4), the RGBD camera (22), and the encoder (132) are connected to its inputs. The controller (5) can simultaneously control the automatic telescopic bracket (3) and the direct-current motor (131), and can acquire data from the laser range finder (4), the RGBD camera (22), and the encoder (132) in real time.
The controller (5) collects depth information of the surrounding environment through the laser range finder (4): one 360-degree rotation of the range finder yields 360 depth readings (P1, P2, ..., Pn) and 360 angle readings (A1, A2, ..., An). The controller (5) establishes a right-handed Cartesian coordinate system with the robot's current heading as the positive y axis and performs a two-dimensional reconstruction of the current depth and angle data. Connecting adjacent depth points yields n-1 line segments; the angles between segments are computed, and if the angle between two segments is smaller than 5 degrees they are judged to belong to the same segment group. The position coordinates of all collected points are then fitted by least squares to obtain the segment groups, and the angles between the fitted segments are computed. If the minimum distance between two segments is larger than the maximum width of the robot body and the difference of their inclination angles is smaller than 5 degrees, the robot is judged to have entered a long straight corridor environment.
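The reconstruction and grouping just described can be illustrated with a short Python sketch: it converts one sweep of (P, A) readings into Cartesian points and starts a new segment group wherever adjacent connecting segments differ in direction by 5 degrees or more. This is a minimal sketch of the described logic, not code from the patent, and all names in it are illustrative.

```python
import math

def reconstruct_scan(depths, angles_deg):
    """Polar-to-Cartesian reconstruction: the robot's current heading is
    the positive y axis of a right-handed frame, as described above."""
    return [(p * math.sin(math.radians(a)), p * math.cos(math.radians(a)))
            for p, a in zip(depths, angles_deg)]

def group_points(points, angle_tol_deg=5.0):
    """Connect adjacent points into n-1 segments; start a new group whenever
    consecutive segment directions differ by the tolerance or more."""
    groups, current = [], [points[0]]
    prev_dir = None
    for a, b in zip(points, points[1:]):
        direction = math.atan2(b[1] - a[1], b[0] - a[0])
        if prev_dir is not None:
            # Wrap the direction difference into (-180, 180] degrees.
            diff = (math.degrees(direction - prev_dir) + 180.0) % 360.0 - 180.0
            if abs(diff) >= angle_tol_deg:
                groups.append(current)
                current = [a]
        current.append(b)
        prev_dir = direction
    groups.append(current)
    return groups
```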
After entering a long straight corridor environment, the controller (5) adjusts the automatic telescopic bracket (3) and drives the single-axis servo (26), collects surrounding image and depth information through the vision acquisition device (2), and builds the overall map as a grid map whose cells each cover 0.04 m² (0.2 m × 0.2 m).
The mapping method of the robot in the long straight corridor environment comprises the following steps. Step 1: after judging that it has entered the straight corridor environment, the robot marks the coordinate point of its current position as D1 and numbers all grid points it passes in increasing order (D1, D2, ..., Dn) so that visited grid points can be detected; odometer information is collected and the corridor map is initialized.
Step 2: through the automatic telescopic bracket (3), the vision acquisition device (2) is moved successively to the highest point, the middle point, and the lowest point of the bracket.
Step 3: at each height, with the robot's advancing direction as the positive direction, the single-axis servo (26) rotates the RGBD camera (22) 45 degrees to the left and to the right to acquire images and depth information, yielding six groups of environment information at different heights and orientations.
Step 4: feature descriptors are extracted from the acquired data with the SURF algorithm, triangulated feature point matching is performed, and, combined with the rotation angle of the RGBD camera (22), the six groups of data are stitched into a single point cloud frame (a matching sketch is given after this step list).
Step 5: according to feedback from the motor encoder (132), the robot is moved from the current grid point D1 to the adjacent grid point D2 in its advancing direction; the grid offset is recorded as T1, and grid offsets are numbered in increasing order (T1, T2, ..., Tn). Steps 2-4 are repeated to obtain a second point cloud frame.
Step 6: the two point cloud frames obtained in steps 4 and 5 are matched by their feature descriptors and coarsely stitched; non-horizontal feature matches are screened out, fine stitching is performed with the ICP algorithm, and the average distance of the remaining matches is computed geometrically to give the distance the robot moved between the two frames, numbered in increasing order (M1, M2, ..., Mn).
Step 7: after mapping is complete, the standard deviations S1 and S2 of all grid offsets (T1, T2, ..., Tn) and of all computed distances (M1, M2, ..., Mn) are calculated respectively; the two are compared, the group with the smaller standard deviation is selected as the mapping odometer data, and the constructed map is corrected accordingly. The two sketches below illustrate the step 4 matching and this selection rule.
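For the step 4 matching, a Python sketch using OpenCV could look as follows. SURF only ships with the opencv-contrib build, which is an assumption here, and the 0.7 ratio-test threshold is a conventional default rather than a parameter given by the patent.

```python
import cv2

def surf_match(img_a, img_b, hessian=400):
    """Extract SURF keypoints/descriptors from two grayscale views and
    match them with a brute-force matcher plus Lowe's ratio test.
    Requires opencv-contrib-python (SURF is absent from default builds)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp_a, des_a = surf.detectAndCompute(img_a, None)
    kp_b, des_b = surf.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Keep a match only when it is clearly better than the runner-up.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.7 * n.distance]
    return kp_a, kp_b, good
```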
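The step 7 selection rule reduces to comparing two standard deviations; a minimal sketch, assuming the offsets and distances are collected as plain number lists, follows. The population standard deviation matches the 1/n formula given later in the detailed description.

```python
import statistics

def choose_odometry(grid_offsets, matched_distances):
    """Step 7: S1 over the encoder grid offsets (T1 ... Tn), S2 over the
    point-cloud matching distances (M1 ... Mn); the series with the smaller
    standard deviation is kept as the mapping odometer data."""
    s1 = statistics.pstdev(grid_offsets)       # S1
    s2 = statistics.pstdev(matched_distances)  # S2
    return grid_offsets if s1 <= s2 else matched_distances
```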
Drawings
Fig. 1 is a schematic diagram of the system as a whole.
Fig. 2 is a view of the vision acquisition device.
Fig. 3 is a detailed view of the motor bracket and toothed rail connection shown in fig. 1.
Fig. 4 shows the chassis motion system of the robot in fig. 1.
Fig. 5 is a schematic view of a robot moving in a grid map.
Detailed Description
An embodiment of the present invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the automatic indoor map construction system for long straight corridor environments in this embodiment comprises a robot body (1), a vision acquisition device (2), an automatic telescopic bracket (3), a laser range finder (4), and a controller (5). The automatic telescopic brackets (3) are arranged on both sides of the top-layer clamping plate (15) of the robot body (1); the toothed guide rails (31) are mounted vertically, and the lifting plate (32) is fixed by meshing with the toothed guide rails (31) through grooves at its two ends. Through the controller (5), the robot can drive the gear (331) with the stepping motor (332) and, via the meshing transmission with the toothed guide rail (31), move the lifting plate (32) vertically. The laser range finder (4) is mounted on the robot's second-layer clamping plate (14) and the controller (5) on the bottom plate (13); the annular support frame (342) is fixed on the top-layer clamping plate (15), and the motor support frame (341) is welded to the front side of the annular support frame (342).
In this embodiment, the laser range finder (4) collects distance information over a 360-degree range in a two-dimensional plane, with a scanning frequency of 20 Hz, a detection range of 0.15-6 m, and a precision of ±50 mm. The controller (5) is a high-performance embedded control board. The vision acquisition device (2) consists of an RGBD camera and a two-degree-of-freedom pan-tilt head; the camera is connected to the controller (5) through a USB interface, and the stepping motor (332) is connected to the output of the controller (5).
As shown in fig. 2, the vision acquisition device (2) comprises a bottom support plate (23), a U-shaped support frame (24), a top support plate (25), a single-axis servo (26), and a dual-axis servo (27). The controller (5) is connected to the single-axis servo (26) and the dual-axis servo (27) through DuPont wires. Under the control algorithm, the single-axis servo (26) can drive the U-shaped support frame (24) through 360 degrees, rotating the RGBD camera (22) in the yaw direction; likewise, the controller (5) can drive the dual-axis servo (27) in pitch so that the RGBD camera (22) rotates in the pitch direction.
As shown in fig. 3, the automatic telescopic bracket (3) comprises a toothed guide rail (31), a lifting plate (32), a rolling bearing (311), a gear (331), a stepping motor (332), a motor support frame (341), an annular support frame (342), and set screws (343). The annular support frame (342) is fixed on the robot's top-layer clamping plate (15) through mounting holes; the motor support frame (341) is welded to the annular support frame (342); the stepping motor (332) is fixed on the motor support frame (341) with set screws (343) and realizes the lifting of the vision acquisition device (2) by meshing through the gear (331); the toothed guide rail (31) is mounted vertically and connected with the annular support frame (342) through the rolling bearing (311).
As shown in fig. 4, the chassis motion system of the robot comprises two driving wheels (11), two supporting wheels (12), two direct-current motors (131), and two encoders (132). The controller (5) calculates the robot's movement distance from the collected encoder data:
$$\Delta x = \frac{\Delta W_l + \Delta W_r}{2}\,\cos\theta \qquad (1)$$

wherein: Δx is the distance the robot travels along the x axis; r is the radius of the driving wheels (11), used to convert encoder counts into wheel travel; Δt is the sampling period; ΔW_l and ΔW_r are the distances traveled by the left and right wheels within one sampling period; θ is the robot's heading angle.
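A minimal sketch of the odometry update of equation (1) follows, assuming encoder ticks are first converted into per-wheel travel; the tick-conversion helper and its parameters are illustrative assumptions, not values from the patent.

```python
import math

def odometry_step(ticks_l, ticks_r, ticks_per_rev, wheel_radius, theta):
    """One sampling period of equation (1): encoder ticks -> per-wheel travel
    (this is where the wheel radius r enters), then the mean travel projected
    onto the x axis through the heading angle theta."""
    travel_l = 2.0 * math.pi * wheel_radius * ticks_l / ticks_per_rev  # delta W_l
    travel_r = 2.0 * math.pi * wheel_radius * ticks_r / ticks_per_rev  # delta W_r
    return 0.5 * (travel_l + travel_r) * math.cos(theta)               # delta x
```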
In this embodiment, the laser range finder output is divided into angle information and depth information. The controller (5) computes the unique position of each laser point relative to the robot in a two-dimensional coordinate system whose origin is the robot's current position, and fits the depth data of the laser range finder (4) into line segment groups. The fitting method is as follows:
$$\omega_i = \arctan\frac{Y_{i+1}-Y_i}{X_{i+1}-X_i} \qquad (2)$$

wherein: X_i, Y_i, X_{i+1}, Y_{i+1} are the two-dimensional coordinates of adjacent laser points in the coordinate system with the robot's current position as origin, and ω_i is the inclination angle of the segment from laser point i to laser point i+1;
if adjacent ω_i differ by less than 5 degrees, the corresponding points are assigned to the same line segment, yielding a group of segments; and if the distance between two segments is larger than the maximum width of the robot body and the difference of their inclination angles is smaller than 5 degrees, the robot is judged to have entered a long straight corridor environment.
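The fitting and corridor test could be sketched as follows in Python. The explicit least-squares line fit and the use of the two perpendicular distances to approximate the wall separation (assuming the robot stands between the walls) are implementation details the patent leaves open, assumed here for illustration.

```python
import math

def fit_line(points):
    """Least-squares fit y = k*x + b over one grouped point set; returns the
    line's inclination angle and its perpendicular distance from the origin
    (the robot's current position)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    # Distance from origin to the line k*x - y + b = 0.
    return math.atan(k), abs(b) / math.hypot(k, 1.0)

def entered_corridor(wall_a, wall_b, robot_width, tol_deg=5.0):
    """Two fitted wall lines form a long straight corridor when they are
    nearly parallel and farther apart than the robot's maximum width."""
    ang_a, d_a = fit_line(wall_a)
    ang_b, d_b = fit_line(wall_b)
    parallel = abs(math.degrees(ang_a - ang_b)) < tol_deg
    return parallel and (d_a + d_b) > robot_width
```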
The method specifically comprises the following steps:
step 1: the robot stores the map as a grid map in which each grid cell is square; the grid resolution in this embodiment is 0.2 m, so each cell covers an area of 0.04 m². On the existing map the robot marks the coordinate point where it entered the straight corridor as D1 and marks newly passed grid points in increasing order (D1, D2, ..., Dn); grid points that have already been visited are skipped;
step 2: by sending a series of commands, the controller (5) drives the automatic telescopic bracket (3) and the lifting plate (32) so that the RGBD camera (22) reaches different positions relative to the coordinate system of the robot body (1): the highest point, the middle point, and the lowest point of the bracket;
step 3: at each of the three heights, with the robot's advancing direction as the central axis, the controller (5) rotates the camera mount 45 degrees to the left and to the right, acquiring 6 groups of point cloud data at different heights and orientations, numbered in sequence in preparation for stitching into a point cloud frame;
step 4: feature descriptors are extracted from the 6 groups of point cloud data obtained in step 3 with the SURF algorithm; combining the lifting distance of the automatic telescopic bracket (3) and the rotation angle of the single-axis servo (26) gives the rotation angles and translation distances between the groups, which are used for point cloud stitching. The stitching formula is:
$$T = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ O_{1\times 3} & 1 \end{bmatrix}$$

wherein: T is the transformation matrix used for point cloud position conversion; R_{3×3} is a 3×3 orthogonal rotation matrix describing the camera's rotation, i.e. the rotation angle of the single-axis servo (26); t_{3×1} is a 3×1 displacement vector describing the camera's translation, i.e. the lifting distance of the automatic telescopic bracket (3); O_{1×3} is the 1×3 row vector that, together with the entry 1, completes the homogeneous form;
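A NumPy sketch of building and applying the transformation matrix T follows, assuming the single-axis servo rotates about the vertical axis and the bracket translates along it; the function names are illustrative.

```python
import numpy as np

def make_transform(yaw_rad, lift_m):
    """The 4x4 homogeneous matrix T of the stitching formula: R_3x3 from the
    servo's yaw rotation, t_3x1 from the bracket's vertical lift, and the
    bottom row O_1x3 with the entry 1."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]     # rotation about the vertical axis
    T[:3, 3] = [0.0, 0.0, lift_m]     # camera lift along the vertical axis
    return T

def transform_points(T, points_xyz):
    """Apply T to an (N, 3) point cloud via homogeneous coordinates."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ T.T)[:, :3]
```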
step 5: as shown in fig. 5, the controller (5) identifies the grid point closest to the robot, labels it D2, and uses the value fed back by the motor encoder (132) as the grid offset T1; grid offsets are numbered in increasing order (T1, T2, ..., Tn), where T1 is the distance between the center of grid D1 and the center of grid D2;
step 6: feature points are extracted from the point cloud frames of two adjacent positions with the SURF algorithm, producing feature point data sets C1 and C2. Since the robot moves horizontally, matches whose feature points have a vertical offset exceeding 3 cm are deleted from C1 and C2; the filtered sets are finely stitched with the ICP algorithm to obtain refined sets C1 and C2. The offsets of the matched elements of C1 and C2 are collected into a set V, and the mean of V gives the distance M1 between the two adjacent grid positions D1 and D2; the distances are numbered in increasing order (M1, M2, ..., Mn);
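The step 6 filtering and distance estimate might be sketched as follows, assuming matched feature points are already expressed as paired 3D coordinates (the ICP fine stitching is omitted); taking the z axis as vertical is an assumption of this sketch.

```python
import numpy as np

def corridor_travel(matched_a, matched_b, max_vertical_m=0.03):
    """Paired feature positions from two adjacent point cloud frames, each of
    shape (N, 3), row i of both arrays being one match. Matches whose vertical
    offset exceeds 3 cm are dropped (the robot moves horizontally); the mean
    horizontal offset of the survivors (set V) gives the distance M."""
    a = np.asarray(matched_a, dtype=float)
    b = np.asarray(matched_b, dtype=float)
    keep = np.abs(a[:, 2] - b[:, 2]) <= max_vertical_m
    offsets = np.linalg.norm(a[keep, :2] - b[keep, :2], axis=1)  # set V
    return float(offsets.mean())                                 # M for D1 -> D2
```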
step 7: steps 1 to 6 are repeated to construct the map automatically; after mapping is complete, the standard deviations S1 and S2 of all grid offsets (T1, T2, ..., Tn) and of all movement distances (M1, M2, ..., Mn) are computed respectively with the following formula:
$$S = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_i - \mu\right)^2}$$

wherein: S is the standard deviation; n is the total number of samples; X_i denotes the elements of the grid offset set (T_i) or of the movement distance set (M_i); μ is the mean of the X_i.
The smaller of S1 and S2 indicates the more stable data set; the group with the smaller standard deviation is adopted as the robot's standard motion data input, and the final map is generated.

Claims (1)

1. An automatic indoor map construction system for long straight corridor environments, comprising a robot body (1), a vision acquisition device (2), an automatic telescopic bracket (3), a laser range finder (4), and a controller (5), characterized in that: the vision acquisition device (2) is mounted on the robot body (1) through the automatic telescopic bracket (3); the robot body (1) consists of driving wheels (11), supporting wheels (12), a bottom clamping plate (13), a second-layer clamping plate (14), and a top-layer clamping plate (15); the controller (5) is mounted on the bottom clamping plate (13), the laser range finder (4) on the second-layer clamping plate (14), and a direct-current motor (131) and an encoder (132) are mounted on the bottom clamping plate (13); the vision acquisition device (2) consists of an RGBD camera (22), a bottom support plate (23), a U-shaped support frame (24), a top support plate (25), a single-axis servo (26), and a dual-axis servo (27); the bottom support plate (23) can drive the U-shaped support frame (24) through the single-axis servo (26) to rotate 360 degrees in the yaw direction; the U-shaped support frame (24) can drive the top support plate (25) through the dual-axis servo (27) to rotate ±45 degrees in the pitch direction; the top support plate (25) is fixed on the dual-axis servo (27) through mounting holes, and the bottom support plate (23) can be fixed on the lifting plate (32) through the toothed guide rail (31);
the automatic telescopic bracket (3) is mounted on the robot body (1) and carries the toothed guide rail (31), the lifting plate (32), and the annular support frame (342); the annular support frame (342) is fixed on the robot's top-layer clamping plate (15) through mounting holes; the motor support frame (341) is welded to the annular support frame (342); the stepping motor (332) is fixed on the motor support frame (341) with a set screw (343) and realizes the lifting of the vision acquisition device (2) by meshing through the gear (331); the toothed guide rail (31) is mounted vertically, and the motor support frame (341) is connected with the annular support frame (342) through a rolling bearing (311);
after entering a long straight corridor environment, the controller (5) adjusts the automatic telescopic bracket (3) and drives the single-axis servo (26); the vision acquisition device (2) acquires image and depth information of the surrounding environment, and the overall map is constructed as a grid map with a single-cell area of 0.04 m²; the specific steps are as follows:
step 1: when the robot judges that it has entered the long straight corridor environment, it marks the coordinate point of its current position as D1 and marks all grid points it passes in increasing order (D1, D2, ..., Dn), detecting whether each grid point has already been visited;
step 2: the vision acquisition device (2) is moved by the automatic telescopic bracket (3) to the highest point, the middle point, and the lowest point of the bracket;
step 3: at each height, with the robot's advancing direction as the positive direction, the single-axis servo (26) rotates the RGBD camera (22) 45 degrees to the left and to the right to acquire images and depth information, yielding six groups of environment information at different heights and orientations;
step 4: feature descriptors are extracted from the data acquired in steps 2-3 with the SURF algorithm, triangulated feature point matching is performed, and, combined with the rotation angle of the RGBD camera (22), the six groups of data are stitched into a point cloud frame;
step 5: according to feedback from the encoder (132), the robot moves from the current grid point D1 to the adjacent grid point D2 in its advancing direction; the grid offset is recorded as T1, grid offsets are numbered in increasing order (T1, T2, ..., Tn), and steps 2-4 are repeated to obtain a second point cloud frame;
step 6: the two point cloud frames obtained in steps 4 and 5 are matched by their feature descriptors and coarsely stitched; non-horizontal feature matches are screened out, fine stitching is performed with the ICP (Iterative Closest Point) algorithm, the matching distances of the remaining horizontal matches are averaged to give the robot's movement distance M1, and the distances are numbered in increasing order (M1, M2, ..., Mn);
step 7: after mapping is complete, the standard deviations S1 and S2 of all grid offsets (T1, T2, ..., Tn) and of all point-cloud matching movement distances (M1, M2, ..., Mn) are computed respectively; the two are compared, the group with the smaller standard deviation is selected as the mapping odometer data, and the constructed map is corrected accordingly.
CN201710342694.0A 2017-05-16 2017-05-16 Automatic indoor map construction system for long straight corridor environment Active CN106959697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710342694.0A CN106959697B (en) 2017-05-16 2017-05-16 Automatic indoor map construction system for long straight corridor environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710342694.0A CN106959697B (en) 2017-05-16 2017-05-16 Automatic indoor map construction system for long straight corridor environment

Publications (2)

Publication Number Publication Date
CN106959697A CN106959697A (en) 2017-07-18
CN106959697B (en) 2023-05-23

Family

ID=59482438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710342694.0A Active CN106959697B (en) 2017-05-16 2017-05-16 Automatic indoor map construction system for long straight corridor environment

Country Status (1)

Country Link
CN (1) CN106959697B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364340A (en) * 2017-12-03 2018-08-03 广东康云多维视觉智能科技有限公司 The method and system of synchronous spacescan
CN108303096B (en) * 2018-02-12 2020-04-10 杭州蓝芯科技有限公司 Vision-assisted laser positioning system and method
CN108693875B (en) * 2018-03-13 2023-09-29 浙江工业大学 Non-contact guiding device suitable for corrugated track and guiding method thereof
CN109109768B (en) * 2018-07-12 2020-06-05 浙江大学 SLAM travelling car device with camera
CN110146080B (en) * 2019-04-24 2024-01-19 佛山科学技术学院 SLAM loop detection method and device based on mobile robot
CN110053014A (en) * 2019-05-07 2019-07-26 河北工业大学 A kind of indoor intelligent mobile platform of view-based access control model SLAM
CN110143396A (en) * 2019-06-27 2019-08-20 广东利元亨智能装备股份有限公司 Intelligent cruise vehicle
CN111126172B (en) * 2019-12-04 2022-11-18 江西洪都航空工业集团有限责任公司 Grassland autonomous mapping method based on vision
CN111578959B (en) * 2020-05-19 2023-12-29 鲲鹏通讯(昆山)有限公司 Unknown environment autonomous positioning method based on improved vector SLAM algorithm
CN111854651A (en) * 2020-07-20 2020-10-30 武汉科技大学 Indoor building area real-time measuring method based on SLAM
CN116518969B (en) * 2023-04-25 2023-10-20 南京艾小宝智能科技有限公司 Visual positioning system and method under indoor scene

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508257A (en) * 2011-09-27 2012-06-20 北京航空航天大学 Vehicle-mounted mobile mapping device
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection
CN203189977U (en) * 2013-04-27 2013-09-11 焦浩 Novel intelligent cradle head with multiple degrees of freedom
CN104615138A (en) * 2015-01-14 2015-05-13 上海物景智能科技有限公司 Dynamic indoor region coverage division method and device for mobile robot
CN204695113U (en) * 2015-05-13 2015-10-07 青岛东旭机器人视觉系统科技有限公司 Rail high speed dollying The Cloud Terrace
CN105320137A (en) * 2015-11-16 2016-02-10 江苏物联网研究发展中心 A home service-oriented indoor wheeled robot
CN205363870U (en) * 2015-12-15 2016-07-06 天津水诺企业信息咨询有限公司 Remote control plug wire robot
CN207115187U (en) * 2017-05-16 2018-03-16 电子科技大学中山学院 Automatic indoor map construction system oriented to rectangular corridor environment


Also Published As

Publication number Publication date
CN106959697A (en) 2017-07-18

Similar Documents

Publication Publication Date Title
CN106959697B (en) Automatic indoor map construction system for long straight corridor environment
CN106323294B (en) Positioning method and positioning device for substation inspection robot
CN109238240B (en) Unmanned aerial vehicle oblique photography method considering terrain and photography system thereof
US9470658B2 (en) Self-contained holonomic tracking method and apparatus for non-destructive inspection
US8189867B2 (en) Learning method for article storage facility
TWI403690B (en) Self-localizing device and method thereof
CN106887037B (en) indoor three-dimensional reconstruction method based on GPU and depth camera
US20190368865A1 (en) Method for deriving varied-resolution 3d information from 2d images
CN112330794B (en) Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method
CN208818162U (en) Positioning robot
CN112927133B (en) Image space projection splicing method based on integrated calibration parameters
CN111189415B (en) Multifunctional three-dimensional measurement reconstruction system and method based on line structured light
US20180335521A1 (en) Floor Surveying System
CN110597265A (en) Recharging method and device for sweeping robot
CN103003666A (en) Parallel online-offline reconstruction for three-dimensional space measurement
JP2002156229A (en) Mobile displacement measuring method and device for structure
EP3734391A1 (en) Simultaneous localization and mapping
CN107941167B (en) Space scanning system based on unmanned aerial vehicle carrier and structured light scanning technology and working method thereof
WO2023207856A1 (en) Robot and linear travel control method therefor, and data processing device
CN207115187U (en) Automatic indoor map construction system oriented to rectangular corridor environment
CN103809603A (en) Cradle head control method and device
CN112082506B (en) Annular high-precision measuring device and method based on speckle structured light
JP2009198382A (en) Environment map acquiring device
CN110231008B (en) Contact net height guiding and pulling-out value measuring device and method based on twice imaging
JPS63124114A (en) Recognizing device for environment of traveling object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant