CN114674308A - Vision-assisted laser corridor positioning method and device based on safety exit sign - Google Patents

Vision-assisted laser corridor positioning method and device based on safety exit sign

Info

Publication number
CN114674308A
Authority
CN
China
Prior art keywords
coordinates
robot
global
positioning
exit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210579591.7A
Other languages
Chinese (zh)
Other versions
CN114674308B (en)
Inventor
Zheng Tao
Zhang Zhiwen
Song Wei
Zhang Jianming
Zhu Shiqiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202210579591.7A
Publication of CN114674308A
Application granted
Publication of CN114674308B
Legal status: Active

Classifications

    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/3841 Creation or updating of map data from two or more sources, e.g. probe vehicles
    • G01C21/3859 Differential updating of map data
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/89 Lidar systems specially adapted for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a vision-assisted laser corridor positioning method and device based on safety exit signs. The robot scans and builds a map in lidar positioning mode, recognizes safety exit signs by visual scanning, and marks them in the map by serial number. During navigation, positioning fuses vision and lidar: when the visual positioning mode detects a safety exit sign near the robot, the corresponding serial number is matched in the static map; in corridor environments with repeated geometric information, exit-sign auxiliary points obtained by visual detection correct the lidar's few-feature-point matching error in real time. The invention uses the exit sign as a visual aid, compensating for the positioning drift of pure laser positioning in typical corridor environments with few features and repeated geometric information; it requires no external auxiliary tags and is simple to implement.

Description

Vision-assisted laser corridor positioning method and device based on safety exit sign
Technical Field
The invention relates to the field of robot positioning, and in particular to a vision-assisted laser corridor positioning method and device based on safety exit signs.
Background
Positioning is a crucial problem in robot SLAM and navigation: it answers the question of where the robot is, and it is one of the prerequisites for robot intelligence. Only with accurate positioning and a correct understanding of its surroundings can a robot complete its assigned tasks well. For indoor navigation robots, common positioning sensors include lidar, depth cameras, IMUs and wheel odometers; the first two determine the robot's position by sensing the environment, while the latter two monitor the robot's own motion data and accumulate relative displacement over time to infer the final pose.
In addition, owing to cost constraints, the 2D planar lidar is the most commonly used radar; it usually scans only the contour features on a fixed-height plane of the environment, so its information content is limited and leaves considerable room for improvement. With the rise of computer vision, many methods fusing vision and lidar data have been proposed, including binocular positioning and depth acquisition with depth cameras; these obtain relatively rich environmental information, but the extra information inevitably adds computational burden and environmental noise to the whole system, leaving room for simplification. A search shows that researchers and engineers have studied how to introduce visual positioning to enhance and improve laser positioning; the prior patent closest to the present invention is the vision-assisted laser positioning system and method of CN 108303096 A, which discloses a vision-assisted laser positioning method: the robot uses laser positioning during normal operation while probing the surroundings through visual positioning; when the visual positioning mode detects many dynamic obstacles or a corridor environment, the robot switches from laser positioning to visual positioning; in visual positioning mode it outputs a positioning value and probes the surroundings, switching back to laser positioning once the environment again meets the requirements of the laser mode. That method effectively avoids the noise and interference introduced by feeding visual positioning information into the particle filter computation; however, the patented technique has the following defects:
1) In a corridor environment it switches to visual positioning and directly discards the features recognized by the lidar, so the environment in the camera's blind zones cannot be distinguished, increasing potential danger during navigation; moreover, abandoning the plain radar data reduces the diversity and confidence of the positioning data, which affects the positioning result to some extent.
2) Frequent switching between the positioning modes for ordinary and special environments, with no interchange between the two data streams and no use of each other's strengths, reduces positioning efficiency to some extent.
Disclosure of Invention
To overcome the shortcomings of the prior art, to simplify redundant visual data features, and to remedy the inaccurate positioning of pure laser in corridors, the invention adopts the following technical solution:
a visual auxiliary laser gallery positioning method based on a safety exit sign comprises the following steps:
step S1: the robot scans and constructs a map in a laser radar positioning mode, identifies the safety exit indication board of the wall through visual scanning, and marks the safety exit indication board in the map in a serial number mode;
step S2: in the navigation process, fusion positioning is carried out through vision and a laser radar, when a safety exit indication board is detected to exist near a robot in a vision positioning mode, corresponding serial numbers are matched in a static map, auxiliary points of the safety exit indication board are added through vision detection in a typical corridor environment with repeated geometric information, and the matching error of few characteristic points of the laser radar is corrected in real time;
further, in step S1, when a map is constructed, a laser radar is used to obtain two-dimensional plane information, and the two-dimensional plane information is compared with contour information in an existing map to find an optimal position of the robot at present, so that the matching degree is the highest, and meanwhile, the robot is also positioned, and the constructed map is used as a static map for navigation. Algorithms used by robots to construct maps include, but are not limited to, Gnaping, Hector, Cartogrer, and the like.
Further, in step S1, when the map is constructed, the depth camera detects the safety exit sign, the sign's position in the depth camera coordinate system is converted to a position in the global coordinate system and stored in an additional data file, the sign is assigned a label according to the order of recognition, and the sign's position information is updated as the map is built.
Further, the position information in the global coordinate system in step S1 is obtained as follows: the safety exit sign is detected, its corner points are extracted, and the depth of each corner is recovered from the structured-light code using the structured-light ranging principle of the depth camera, yielding the corner coordinates in the depth camera coordinate system, $P_i^C$, where $i$ indexes the corner points; the coordinates $P_i^C$ are then transformed into coordinates in the global coordinate system, $P_i^G$, and the mean of the corner coordinates is taken to obtain the global coordinate of the sign's center point, $P_c^G$.
Further, the storage and updating of the exit sign coordinates in step S1 proceeds as follows: starting from the first recognition of an exit sign, serial numbers $n$ are assigned in sequence to the global coordinates of the sign's corner points and center point, one serial number per group of coordinates. When a newly acquired center-point global coordinate $P_{c,\mathrm{new}}^G$ is farther than the distance threshold from every center-point global coordinate in the additional data file, the new serial number and its coordinates are appended to the file; otherwise the new coordinates replace the corresponding old ones and the serial number is unchanged.
Further, in step S2, detection of a corridor environment with repeated geometric information relies on the range values returned by the lidar: when, within an angle threshold in front of and behind the robot, the fraction of beams whose return distance is infinite exceeds a ratio threshold, the robot is judged to be in a corridor environment, and visual detection based on the exit sign is started to assist laser positioning.
Further, in step S2, the fused vision-lidar positioning identifies a safety exit sign present in the environment from the acquired visual information, computes from it the error between the robot's current real-time relative position and its theoretical relative position in the map, feeds that error back into the lidar-based positioning points, and thereby updates and corrects the robot's global positioning coordinates.
Further, in step S2, the error between the robot's current real-time relative position and its theoretical relative position in the map is obtained as follows: the depth camera measures the real coordinates of the exit sign's corner points in the robot coordinate system, $P_i^B$. First, the error-bearing local coordinates are transformed to global coordinates, giving temporary global corner coordinates whose mean yields a center-point global coordinate; next, the nearest theoretical global coordinate and its corresponding corner coordinates are retrieved from the additional data file, and the transform from the error-bearing global frame back to the local frame gives the theoretical corner coordinates in the robot coordinate system, $P_i^{B'}$; finally, corner pairs $P_i^B$ and $P_i^{B'}$ are selected and the error between the robot's actual and theoretical poses is computed.
Further, the step S2 includes the following steps:
Step S2.1: for the fused vision-lidar positioning, the safety exit sign is detected, its corner points are extracted, and the depth of each corner is recovered from the structured-light code using the structured-light ranging principle of the depth camera, yielding the corner coordinates in the depth camera coordinate system, $P_i^C$.
Step S2.2: with the known fixed transform $T_C^B$ between the depth camera and the robot, the corner coordinates $P_i^B$ of the exit sign in the robot base coordinate system $B$ are obtained:

$$P_i^B = T_C^B \, P_i^C \qquad (1)$$

The transform between the robot's current coordinate system $B$ and the global coordinate system $G$ is denoted $T$, representing the robot's theoretical pose on the global map; $T$ can be represented by a $4 \times 4$ homogeneous matrix. The theoretical pose $T$ first converts the real coordinates $P_i^B$ into temporary global coordinates $\hat{P}_i^G$, whose mean gives the center-point global coordinate $\hat{P}_c^G$.
Step S2.3: a K-nearest-neighbor search of the additional data file finds the theoretical center-point global coordinate $P_c^G$ closest to $\hat{P}_c^G$, together with the theoretical global corner coordinates, written in homogeneous form as $\tilde{P}_i^G$. The theoretical corner coordinates in the theoretical base coordinate system $B'$ are defined as $P_i^{B'}$ and written in homogeneous form as $\tilde{P}_i^{B'}$. The coordinate transform of a rigid motion composed of translation and rotation gives:

$$\tilde{P}_i^{B'} = T^{-1} \, \tilde{P}_i^G \qquad (2)$$

which yields the coordinates $P_i^{B'}$ of each sign corner in the theoretical base coordinate system $B'$. Let $\Delta T$ be the homogeneous transform from the robot's theoretical base coordinate system $B'$ to its current actual base coordinate system $B$, and write $P_i^B$ in homogeneous form as $\tilde{P}_i^B$; then:

$$\tilde{P}_i^B = \Delta T \, \tilde{P}_i^{B'} \qquad (3)$$

where $\Delta T$ is expressed as:

$$\Delta T = \begin{bmatrix} \cos\Delta\theta & -\sin\Delta\theta & 0 & \Delta x \\ \sin\Delta\theta & \cos\Delta\theta & 0 & \Delta y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

In equation (4), the unknowns $\Delta x$, $\Delta y$, $\Delta\theta$ are, respectively, the $x$ and $y$ offsets and the rotation angle about the $z$ axis from $B'$ to $B$.
Step S2.4: several corner points of the exit sign are selected and substituted into equation (3) to solve for the unknowns $\Delta x$, $\Delta y$, $\Delta\theta$ of the transform $\Delta T$, i.e., the error between the robot's actual and theoretical poses; this error is fed back into the lidar-based robot particle distribution probability positioning points, so that the robot's global positioning coordinates are updated and corrected.
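As a minimal sketch of this last correction step, assuming the lidar localizer keeps an (N, 3) array of particle pose hypotheses (a representation chosen here for illustration, not prescribed by the invention), the solved error could be applied as:

```python
import numpy as np

def correct_particles(particles, dx, dy, dtheta):
    """Shift every particle of the lidar localizer by the solved error
    (dx, dy: translation in the base frame; dtheta: rotation about z),
    so the global positioning coordinates are updated and corrected.

    particles: (N, 3) array of (x, y, theta) pose hypotheses.
    """
    corrected = particles.copy()
    # Rotate the base-frame offset (dx, dy) into the global frame of
    # each particle before adding it, then accumulate the heading error.
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    corrected[:, 0] += c * dx - s * dy
    corrected[:, 1] += s * dx + c * dy
    corrected[:, 2] += dtheta
    return corrected
```

In practice the same correction could be applied to a single pose estimate; the particle form is shown because the method names particle distribution probability positioning points.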
The invention further provides a vision-assisted laser corridor positioning device based on safety exit signs, comprising one or more processors configured to implement the vision-assisted laser corridor positioning method based on safety exit signs.
The invention has the advantages and beneficial effects that:
according to the visual auxiliary laser corridor positioning method and device based on the exit sign, the depth information is returned by the depth camera for target detection of the exit sign, so that the positioning effect of the pure laser radar in a long straight corridor with few characteristics or similar characteristics can be effectively improved; the invention does not need any external auxiliary label, is simple and easy to realize, has positioning effect which is not easy to be influenced by light due to the particularity of the safe escape sign, and has universality for positioning most indoor corridors.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a flow chart of the detection process of the laser radar SLAM and the camera in the invention.
Fig. 3 is a flow chart of a fused radar and camera assisted positioning process in the present invention.
Fig. 4 is a relation diagram of the theoretical coordinate system and the actual coordinate system of the robot in the global coordinate system.
Fig. 5 is a structural view of the apparatus of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, the vision-assisted laser corridor positioning method based on safety exit signs comprises the following steps:
Step S1: the robot scans and builds a map in lidar positioning mode, recognizes a wall-mounted safety exit sign by visual scanning, and marks it in the map; specifically, it is marked in the map by serial number;
furthermore, when a map is constructed, only the laser radar is used for obtaining two-dimensional plane information, and through comparison of the information and contour information in the existing map, the optimal estimation of the current robot position can be found to enable the matching degree to be the highest, so that the robot is positioned, and the constructed map is used as a static map for navigation. Algorithms commonly used by robots to construct maps include, but are not limited to, Gmaping, Hector, Cartographer, and the like.
Furthermore, when the map is constructed, the depth camera detects the safety exit sign, the sign's position in the depth camera coordinate system is converted to a position in the global coordinate system and stored in the additional data file, the sign is assigned a label according to the order of recognition, and the sign's position information is updated as the map is built.
Further, the position information in the global coordinate system is obtained as follows: the safety exit sign is detected, its corner points are extracted, and the depth of each corner is recovered from the structured-light code using the structured-light ranging principle of the depth camera, yielding the corner coordinates in the depth camera coordinate system, $P_i^C$, where $i$ indexes the corner points; the coordinates $P_i^C$ are transformed into coordinates in the global coordinate system, $P_i^G$, and their mean gives the global coordinate of the center point, $P_c^G$.
Specifically, while the map is being built, the mounted Kinect v2 depth camera detects targets such as the safety exit sign, using YOLOv3 for target detection; once a target is recognized, the image within the detection region is processed further, the four corner points of the safety exit sign are extracted, and the depth of each corner is obtained from the structured-light code using the depth camera's structured-light ranging principle, i.e. the corner coordinates in the camera coordinate system, $P_i^C$, where $i = 1, 2, 3, 4$.
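A minimal sketch of the corner back-projection step, assuming a pinhole model with placeholder intrinsics rather than calibrated Kinect v2 parameters:

```python
import numpy as np

# Placeholder pinhole intrinsics (fx, fy: focal lengths in pixels;
# cx, cy: principal point) -- assumed values, not a real calibration.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def corner_to_camera_frame(u, v, depth):
    """Back-project a detected corner pixel (u, v) with structured-light
    depth `depth` (metres) into the depth camera coordinate system P_i^C."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])
```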
Specifically, the global coordinate position of the exit sign is computed as follows: the currently obtained coordinates of the four corner points are transformed into the robot coordinate system, then into coordinates in the global coordinate system, $P_i^G$; their mean gives the global coordinate of the center point, $P_c^G$.
Further, as shown in fig. 2, the exit sign coordinates are stored and updated as follows: starting from the first recognition of an exit sign, serial numbers $n$ are assigned in sequence to the global coordinates of the sign's four corner points and center point, one serial number per group of coordinates. When a newly acquired center-point global coordinate $P_{c,\mathrm{new}}^G$ is farther than the distance threshold from every center-point global coordinate in the additional data file Data, a new serial number and its coordinates are appended to Data; otherwise the new coordinates replace the corresponding old ones and the serial number is unchanged.
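A sketch of this storage-and-update rule, with an in-memory dict standing in for the additional data file Data and an assumed 1 m distance threshold:

```python
import numpy as np

DIST_THRESHOLD = 1.0  # metres; an assumed value, tuned per deployment

def update_sign_store(store, corners_global, center_global):
    """store maps serial number n -> {"center": ..., "corners": ...} and
    stands in for the additional data file Data. A center far from every
    stored center registers a new serial number; otherwise the matching
    entry is overwritten in place and its serial number kept."""
    for n, entry in store.items():
        if np.linalg.norm(entry["center"] - center_global) <= DIST_THRESHOLD:
            store[n] = {"center": center_global, "corners": corners_global}
            return n
    n = len(store) + 1  # next serial number in recognition order
    store[n] = {"center": center_global, "corners": corners_global}
    return n
```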
Step S2: during navigation, positioning fuses vision and lidar; when the visual positioning mode detects a safety exit sign near the robot, the corresponding serial number is matched in the static map; in corridor environments with repeated geometric information, exit-sign auxiliary points are added by visual detection to correct the lidar's few-feature-point matching error in real time.
Further, the method for detecting a typical corridor environment with repeated geometric information: detection relies on the range values returned by the lidar; when, within an angle threshold $\theta_{th}$ in front of and behind the robot, the fraction of beams whose return distance is infinite exceeds the ratio threshold, the robot is judged to be currently in a corridor environment, and visual detection based on the exit sign is started to assist laser positioning.
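The corridor test might be sketched as follows; the ±30° window and 60% ratio are assumed values for illustration, not thresholds fixed by the invention:

```python
import numpy as np

ANGLE_THRESHOLD = np.deg2rad(30.0)  # half-width of front/rear windows (assumed)
RATIO_THRESHOLD = 0.6               # fraction of "infinite" returns (assumed)

def in_corridor(scan_ranges, scan_angles, range_max):
    """Judge a repeated-geometry corridor: within the angular windows
    directly ahead of and behind the robot, count the share of beams whose
    return distance is effectively infinite (at or beyond the lidar max)."""
    front = np.abs(scan_angles) < ANGLE_THRESHOLD
    rear = np.abs(np.abs(scan_angles) - np.pi) < ANGLE_THRESHOLD
    window = front | rear
    if not np.any(window):
        return False
    no_return = scan_ranges[window] >= range_max
    return no_return.mean() > RATIO_THRESHOLD
```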
Furthermore, the fused vision-lidar positioning identifies a safety exit sign present in the environment from the acquired visual information, computes from it the error between the robot's current real-time relative position and its theoretical relative position in the map, feeds that error back into the lidar-based positioning points, and thereby updates and corrects the robot's global positioning coordinates.
Furthermore, the error between the robot's current real-time relative position and its theoretical relative position in the map is obtained as follows: the depth camera measures the real coordinates of the exit sign's corner points in the robot coordinate system, $P_i^B$. First, the error-bearing local coordinates are transformed to global coordinates, giving temporary global corner coordinates whose mean yields the center-point global coordinate; next, the nearest theoretical global coordinate and the corresponding corner coordinates are retrieved from the additional data file, and the transform from the current error-bearing global frame back to the local frame gives the theoretical corner coordinates in the robot coordinate system, $P_i^{B'}$; finally, corner pairs $P_i^B$ and $P_i^{B'}$ are selected and the error between the robot's actual and theoretical poses is computed.
Further, as shown in fig. 3, step S2 includes the following steps:
step S2.1: the visual and laser radar fusion positioning method comprises the steps of detecting a safety exit sign, extracting a corner point of the safety exit sign, acquiring depth information of the corner point according to structured light coding by utilizing a structured light distance principle of a depth camera, and obtaining a coordinate value of the corner point under a depth camera coordinate system
Figure 269476DEST_PATH_IMAGE002
Specifically, vision and laser radar fusion positioning (vision is auxiliary, radar is main), visual information passes through a safety exit sign board existing in an identification environment, a Yolo v3 is used for target detection, after a target is identified, an image in an identification area is further processed, four corner points of the safety exit sign board are extracted, the depth information of the corner points is obtained through the structured light coding by utilizing the structured light distance principle of a depth camera, namely the coordinate values of the corner points under a camera coordinate system
Figure 985759DEST_PATH_IMAGE002
The method for updating and correcting the global positioning coordinate of the robot by calculating the error between the real-time relative position and the theoretical relative position of the robot in the map and reversely inputting the error to the positioning point based on the laser radar comprises the following steps:
step S2.2: fixed coordinate system transformation matrix known through depth camera and robot
Figure 322062DEST_PATH_IMAGE012
Obtaining the angular point of the safety exit indicator board in the robot coordinate system
Figure 781994DEST_PATH_IMAGE014
Coordinates of lower
Figure 152932DEST_PATH_IMAGE016
Figure 833925DEST_PATH_IMAGE018
(1)
Current coordinate system of robot
Figure 532890DEST_PATH_IMAGE014
And a global coordinate system
Figure 655567DEST_PATH_IMAGE020
Is transformed by
Figure 756378DEST_PATH_IMAGE022
Representing the theoretical pose of the robot on the global map, and passing the theoretical pose
Figure 939098DEST_PATH_IMAGE022
With true coordinates
Figure 187677DEST_PATH_IMAGE016
Conversion to temporary global coordinates
Figure 989411DEST_PATH_IMAGE024
Calculating their average value to obtain the global coordinate of the center point
Figure 803783DEST_PATH_IMAGE026
In particular, the method comprises the following steps of,
Figure 298349DEST_PATH_IMAGE022
may be represented by a 4 x 4 homogeneous matrix.
Step S2.3: in the extra Data document Data, the search is performed by K neighbor search method
Figure 96541DEST_PATH_IMAGE026
Nearest theoretical global coordinates
Figure 498703DEST_PATH_IMAGE028
And theoretical global corner coordinates
Figure 308528DEST_PATH_IMAGE024
Taken as the homogeneous coordinates of the global corner
Figure 98629DEST_PATH_IMAGE030
The coordinates of the theoretical global corner point under the theoretical base coordinate system are defined as
Figure 990974DEST_PATH_IMAGE010
And is recorded as local angular point homogeneous coordinate
Figure 259145DEST_PATH_IMAGE032
The coordinate transformation relationship of rigid body motion consisting of translation and rotation is obtained as follows:
Figure 189054DEST_PATH_IMAGE034
(2)
then obtaining the each angular point of the indication board in the theoretical base coordinate system
Figure 822161DEST_PATH_IMAGE036
Coordinates of lower
Figure 329366DEST_PATH_IMAGE010
Theoretical basis coordinate system of robot
Figure 276593DEST_PATH_IMAGE036
To the current actual base coordinate system of the robot
Figure 185643DEST_PATH_IMAGE014
Is a homogeneous conversion matrix of
Figure 927334DEST_PATH_IMAGE038
Will also
Figure 187414DEST_PATH_IMAGE016
Record corresponding homogeneous coordinates
Figure 469491DEST_PATH_IMAGE040
Obtaining:
Figure 373993DEST_PATH_IMAGE042
(3)
wherein
Figure 411219DEST_PATH_IMAGE038
Expressed as:
Figure 768383DEST_PATH_IMAGE044
(4)
in the formula (4), unknown amount
Figure 182046DEST_PATH_IMAGE046
Respectively represent
Figure 734863DEST_PATH_IMAGE036
To
Figure 83936DEST_PATH_IMAGE014
IsxyAmount of directional deviation and windingzThe angle of rotation of the shaft.
Step S2.4: as shown in FIG. 4, a plurality of corner points of the exit signs are selected to be substituted in formula (3) to obtain the unknown quantity
Figure 53029DEST_PATH_IMAGE046
Is/are as follows
Figure 880171DEST_PATH_IMAGE038
The value of the transformation matrix, namely the error between the actual pose and the theoretical pose of the robot, is converted into a transformation matrix
Figure 618319DEST_PATH_IMAGE046
And reversely inputting the data into positioning points based on the laser radar so as to update and correct the global positioning coordinates of the robot.
Specifically, the lidar positioning points are the robot particle distribution probability positioning points of the lidar localizer.
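A condensed sketch of steps S2.2-S2.4, combining the K-nearest-neighbor lookup with a closed-form planar solve for $\Delta x$, $\Delta y$, $\Delta\theta$; the data-store layout follows the sketches above, and SciPy's cKDTree is used for the neighbor search:

```python
import numpy as np
from scipy.spatial import cKDTree

def solve_pose_error(corners_base, center_global_tmp, store, T_base_to_global):
    """Solve the planar error (dx, dy, dtheta) between the robot's actual
    and theoretical poses from matched sign corners, per equations (2)-(4).

    corners_base: (N, 3) measured corner coordinates P_i^B in the base frame.
    center_global_tmp: temporary global center computed from corners_base.
    """
    # S2.3: K-nearest-neighbor search for the closest stored sign center.
    serials = list(store.keys())
    tree = cKDTree([store[n]["center"] for n in serials])
    _, idx = tree.query(center_global_tmp, k=1)
    corners_global = store[serials[idx]]["corners"]

    # Equation (2): theoretical corners back into the theoretical base frame B'.
    T_inv = np.linalg.inv(T_base_to_global)
    theo = np.array([(T_inv @ np.append(p, 1.0))[:3] for p in corners_global])

    # Equations (3)-(4): fit the planar rigid motion mapping the theoretical
    # corners onto the measured ones; z drops out because the error is planar.
    a = theo[:, :2] - theo[:, :2].mean(axis=0)
    b = corners_base[:, :2] - corners_base[:, :2].mean(axis=0)
    # Closed-form optimal 2D rotation from the cross/dot aggregates.
    dtheta = np.arctan2((a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]).sum(),
                        (a * b).sum())
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    t = corners_base[:, :2].mean(axis=0) - R @ theo[:, :2].mean(axis=0)
    return t[0], t[1], dtheta  # dx, dy, dtheta
```

The least-squares rotation estimate is used here because, with four corners per sign, the system in equation (3) is overdetermined and a direct fit is more robust to corner-extraction noise.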
According to the Standard for Setting Fire Safety Evacuation Signs, evacuation guidance signs must be installed along evacuation walkways or main evacuation routes at intervals of no more than 10 m; owing to this particularity of the exit sign, the method adapts well to most indoor long-corridor environments.
Corresponding to the foregoing embodiment of the vision-assisted laser corridor positioning method based on safety exit signs, the invention also provides an embodiment of a vision-assisted laser corridor positioning device based on safety exit signs.
Referring to fig. 5, the vision-assisted laser corridor positioning device based on safety exit signs provided by the embodiment of the invention comprises one or more processors configured to implement the vision-assisted laser corridor positioning method based on safety exit signs of the above embodiment.
The embodiment of the vision-assisted laser corridor positioning device based on safety exit signs may be applied to any device capable of data processing, such as a computer or similar apparatus. The device embodiment may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, as a logical device it is formed by the processor of the hosting data-processing device reading the corresponding computer program instructions from non-volatile memory into memory and running them. In hardware terms, fig. 5 shows a hardware structure diagram of a data-processing device hosting the vision-assisted laser corridor positioning device of the invention; besides the processor, memory, network interface and non-volatile memory shown in fig. 5, the hosting device may include other hardware according to its actual function, which is not described again here.
The specific details of the implementation process of the functions and actions of each unit in the above device are the implementation processes of the corresponding steps in the above method, and are not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention also provides a computer readable storage medium, which stores a program, and when the program is executed by a processor, the method for positioning the laser corridor based on the visual assistance of the exit sign in the embodiment is realized.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be any external storage device of a device with data processing capabilities, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A vision-assisted laser corridor positioning method based on safety exit signs, characterized by comprising the following steps:
Step S1: the robot scans and builds a map in lidar positioning mode, recognizes safety exit signs by visual scanning, and marks them in the map by serial number;
Step S2: during navigation, positioning fuses vision and lidar; when the visual positioning mode detects a safety exit sign near the robot, the corresponding serial number is matched in the static map; in corridor environments with repeated geometric information, exit-sign auxiliary points are added by visual detection to correct the lidar's few-feature-point matching error in real time.
2. The vision-assisted laser corridor positioning method based on safety exit signs of claim 1, characterized in that: in step S1, when the map is constructed, the lidar acquires two-dimensional planar information, which is compared against the contour information of the existing map to find the robot's current optimal position, and the constructed map serves as the static map for navigation.
3. The vision-assisted laser corridor positioning method based on safety exit signs of claim 1, characterized in that: in step S1, when the map is constructed, the depth camera detects the safety exit sign, the sign's position in the depth camera coordinate system is converted to a position in the global coordinate system and stored in an additional data file, the sign is assigned a label according to the order of recognition, and the sign's position information is updated as the map is built.
4. The vision-assisted laser corridor positioning method based on safety exit signs of claim 3, characterized in that: the position information in the global coordinate system in step S1 is obtained as follows: the safety exit sign is detected, its corner points are extracted, and the depth of each corner is recovered from the structured-light code using the structured-light ranging principle of the depth camera, yielding the corner coordinates in the depth camera coordinate system, $P_i^C$, where $i$ indexes the corner points; the coordinates $P_i^C$ are transformed into coordinates in the global coordinate system, $P_i^G$, and the mean of the corner coordinates gives the global coordinate of the center point, $P_c^G$.
5. The vision-assisted laser corridor positioning method based on safety exit signs of claim 4, characterized in that: the storage and updating of the exit sign coordinates in step S1 proceeds as follows: starting from the first recognition of an exit sign, serial numbers $n$ are assigned in sequence to the global coordinates of the sign's corner points and center point, one serial number per group of coordinates; when a newly acquired center-point global coordinate $P_{c,\mathrm{new}}^G$ is farther than the distance threshold from every center-point global coordinate in the additional data file, a new serial number and its coordinates are appended to the file, otherwise the new coordinates replace the corresponding old ones and the serial number is unchanged.
6. The vision-assisted laser corridor positioning method based on safety exit signs of claim 1, characterized in that: in step S2, detection of a corridor environment with repeated geometric information relies on the range values returned by the lidar; when, within an angle threshold in front of and behind the robot, the fraction of beams whose return distance is infinite exceeds the ratio threshold, the robot is judged to be currently in a corridor environment, and visual detection based on the exit sign is started to assist laser positioning.
7. The vision-assisted laser corridor positioning method based on safety exit signs of claim 1, characterized in that: in step S2, the fused vision-lidar positioning identifies a safety exit sign present in the environment from the acquired visual information, computes from it the error between the robot's current real-time relative position and its theoretical relative position in the map, feeds that error back into the lidar-based positioning points, and thereby updates and corrects the robot's global positioning coordinates.
8. The vision-assisted laser corridor positioning method based on safety exit signs of claim 7, characterized in that: in step S2, the error between the robot's current real-time relative position and its theoretical relative position in the map is obtained as follows: the depth camera measures the real coordinates of the exit sign's corner points in the robot coordinate system, $P_i^B$; first, the error-bearing local coordinates are transformed to global coordinates, giving temporary global corner coordinates whose mean yields the center-point global coordinate; next, the nearest theoretical global coordinate and the corresponding corner coordinates are retrieved from the additional data file, and the transform from the error-bearing global frame back to the local frame gives the theoretical corner coordinates in the robot coordinate system, $P_i^{B'}$; finally, corner pairs $P_i^B$ and $P_i^{B'}$ are selected and the error between the robot's actual and theoretical poses is computed.
9. The vision-assisted laser corridor positioning method based on safety exit signs of claim 8, characterized in that step S2 includes the following steps:
Step S2.1: for the fused vision-lidar positioning, the safety exit sign is detected, its corner points are extracted, and the depth of each corner is recovered from the structured-light code using the structured-light ranging principle of the depth camera, yielding the corner coordinates in the depth camera coordinate system, $P_i^C$;
Step S2.2: with the known fixed transform $T_C^B$ between the depth camera and the robot, the corner coordinates $P_i^B$ of the exit sign in the robot base coordinate system $B$ are obtained:

$$P_i^B = T_C^B \, P_i^C \qquad (1)$$

the transform between the robot's current coordinate system $B$ and the global coordinate system $G$ is denoted $T$, representing the robot's theoretical pose on the global map; the theoretical pose $T$ converts the real coordinates $P_i^B$ into temporary global coordinates $\hat{P}_i^G$, whose mean gives the center-point global coordinate $\hat{P}_c^G$;
Step S2.3: in the additional data file, a K-nearest-neighbor search finds the theoretical center-point global coordinate $P_c^G$ closest to $\hat{P}_c^G$, together with the theoretical global corner coordinates, written in homogeneous form as $\tilde{P}_i^G$; the theoretical corner coordinates in the theoretical base coordinate system $B'$ are defined as $P_i^{B'}$ and written in homogeneous form as $\tilde{P}_i^{B'}$; the coordinate transform of a rigid motion composed of translation and rotation gives:

$$\tilde{P}_i^{B'} = T^{-1} \, \tilde{P}_i^G \qquad (2)$$

which yields the coordinates $P_i^{B'}$ of each sign corner in the theoretical base coordinate system $B'$; let $\Delta T$ be the homogeneous transform from the robot's theoretical base coordinate system $B'$ to its current actual base coordinate system $B$, and write $P_i^B$ in homogeneous form as $\tilde{P}_i^B$, obtaining:

$$\tilde{P}_i^B = \Delta T \, \tilde{P}_i^{B'} \qquad (3)$$

where $\Delta T$ is expressed as:

$$\Delta T = \begin{bmatrix} \cos\Delta\theta & -\sin\Delta\theta & 0 & \Delta x \\ \sin\Delta\theta & \cos\Delta\theta & 0 & \Delta y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

in equation (4), the unknowns $\Delta x$, $\Delta y$, $\Delta\theta$ are, respectively, the $x$ and $y$ offsets and the rotation angle about the $z$ axis from $B'$ to $B$;
Step S2.4: several corner points of the exit sign are selected and substituted into equation (3) to solve for the unknowns $\Delta x$, $\Delta y$, $\Delta\theta$ of the transform $\Delta T$, i.e., the error between the robot's actual and theoretical poses; this error is fed back into the lidar-based positioning points, so that the robot's global positioning coordinates are updated and corrected.
10. A vision-assisted laser corridor positioning device based on safety exit signs, comprising one or more processors configured to implement the method of any one of claims 1-9.
CN202210579591.7A 2022-05-26 2022-05-26 Vision-assisted laser corridor positioning method and device based on safety exit sign Active CN114674308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210579591.7A CN114674308B (en) 2022-05-26 2022-05-26 Vision-assisted laser corridor positioning method and device based on safety exit sign

Publications (2)

Publication Number Publication Date
CN114674308A 2022-06-28
CN114674308B 2022-09-16

Family

ID=82080989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210579591.7A Active CN114674308B (en) Vision-assisted laser corridor positioning method and device based on safety exit sign

Country Status (1)

Country Link
CN (1) CN114674308B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780735A * 2016-12-29 2017-05-31 Shenzhen Institutes of Advanced Technology Semantic map construction method and device, and robot
CN108303096A * 2018-02-12 2018-07-20 Hangzhou Lanxin Technology Co., Ltd. Vision-assisted laser positioning system and method
WO2020226187A1 * 2019-05-03 2020-11-12 LG Electronics Inc. Robot generating map on basis of multi-sensor and artificial intelligence and traveling by using map
US20200356582A1 * 2019-05-09 2020-11-12 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Method for updating a map and mobile robot
US20200376676A1 * 2019-05-30 2020-12-03 LG Electronics Inc. Method of localization using multi sensor and robot implementing same
CN111045017A * 2019-12-20 2020-04-21 Chengdu University of Technology Method for constructing a transformer substation map for an inspection robot by fusing laser and vision
CN112180937A * 2020-10-14 2021-01-05 China Academy of Safety Science and Technology Subway carriage disinfection robot and automatic navigation method thereof
CN112785702A * 2020-12-31 2021-05-11 South China University of Technology SLAM method based on tight coupling of 2D lidar and binocular camera
CN113885046A * 2021-09-26 2022-01-04 Tianjin University Lidar positioning system and method for intelligent connected vehicles in low-texture garages

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Dongbo et al., "Robot particle filter localization method fusing heterogeneous sensor information," Journal of Electronic Measurement and Instrumentation

Also Published As

Publication number Publication date
CN114674308B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
US20230194306A1 (en) Multi-sensor fusion-based slam method and system
WO2021073656A1 (en) Method for automatically labeling image data and device
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
CN112070770B (en) High-precision three-dimensional map and two-dimensional grid map synchronous construction method
CN110807350A (en) System and method for visual SLAM for scan matching
Cui et al. Efficient large-scale structure from motion by fusing auxiliary imaging information
CN113313763B (en) Monocular camera pose optimization method and device based on neural network
CN115936029B (en) SLAM positioning method and device based on two-dimensional code
CN111854758A (en) Indoor navigation map conversion method and system based on building CAD (computer-aided design) drawing
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN114088081B (en) Map construction method for accurate positioning based on multistage joint optimization
CN112484746A (en) Monocular vision-assisted laser radar odometer method based on ground plane
CN114674308B (en) Vision-assisted laser corridor positioning method and device based on safety exit indicator
CN114862953A (en) Mobile robot repositioning method and device based on visual features and 3D laser
CN113554705A (en) Robust positioning method for laser radar in changing scene
Gu et al. Research on SLAM of indoor mobile robot assisted by AR code landmark
CN113763468A (en) Positioning method, device, system and storage medium
CN112258391A (en) Fragmented map splicing method based on road traffic marking
CN117170501B (en) Visual tracking method based on point-line fusion characteristics
Du et al. GNSS-Assisted LiDAR Odometry and Mapping for Urban Environment
CN116698017B (en) Object-level environment modeling method and system for indoor large-scale complex scene
CN114119805B (en) Semantic mapping SLAM method for point-line-plane fusion
Ming et al. Solid-State LiDAR SLAM System with Indoor Degradation Scene Compensation
Chen et al. Real-time visual-inertial SLAM based on RGB-D image and point-line feature with loop closure constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant