CN114674308B - Vision-assisted laser corridor positioning method and device based on safety exit indicator - Google Patents
- Publication number
- CN114674308B CN114674308B CN202210579591.7A CN202210579591A CN114674308B CN 114674308 B CN114674308 B CN 114674308B CN 202210579591 A CN202210579591 A CN 202210579591A CN 114674308 B CN114674308 B CN 114674308B
- Authority
- CN
- China
- Prior art keywords
- robot
- coordinates
- global
- positioning
- exit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G01—MEASURING; TESTING
- G01C—Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G01C21/3841—Creation or updating of map data from two or more sources, e.g. probe vehicles
- G01C21/3859—Differential updating of map data
- G01S—Radio direction-finding; radio navigation; determining distance or velocity by use of radio waves; locating or presence-detecting by use of the reflection or reradiation of radio waves; analogous arrangements using other waves
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/89—Lidar systems specially adapted for mapping or imaging
Abstract
The invention discloses a vision-assisted laser corridor positioning method and device based on safety exit indicators. The robot scans and builds a map in lidar positioning mode, identifies safety exit indicators by visual scanning, and marks them in the map by serial number. During navigation, vision and lidar are fused for positioning: when the visual subsystem detects a safety exit indicator near the robot, the corresponding serial number is matched in the static map, and for corridor environments with repeated geometric information, auxiliary points from the detected indicators correct, in real time, the matching error the lidar suffers when feature points are scarce. The invention adopts the exit indicator as the visual aid, compensating for the positioning drift of pure-laser methods in typical corridor environments with few features and repetitive geometry; it requires no external auxiliary tags and is simple to implement.
Description
Technical Field
The invention relates to the field of robot positioning, in particular to a vision-assisted laser corridor positioning method and device based on safety exit indicators.
Background
Positioning is a crucial problem in robot SLAM and navigation: it answers the question of where the robot is, and is one of the prerequisites for robot intelligence. Only when the robot is accurately positioned and correctly understands its surroundings can it reliably complete its assigned tasks. For indoor navigation robots, common positioning sensors include the lidar, the depth camera, the IMU, and the wheel odometer. The first two determine the robot's position by sensing the environment; the latter two monitor the robot's own motion and accumulate relative displacement over time to infer the final pose.
In a corridor environment, the similarity of the environmental information leaves lidar-based feature-point matching short of distinctive features, so positioning drift and even positioning failure occur easily. With the rise of computer vision, many methods fusing vision and lidar data have been proposed, including binocular positioning and depth-camera depth acquisition, which yield relatively rich environmental information; but the extra information inevitably burdens the whole system with computation and environmental noise, leaving room for simplification. A search shows that researchers and engineers have studied how to introduce visual positioning to enhance and improve laser positioning. The prior patent closest to the present invention is "Vision-assisted laser positioning system and method" (CN108303096A): the robot uses laser positioning in normal operation while monitoring the surroundings visually; when the visual mode detects many dynamic obstacles or a corridor environment, the robot switches from laser positioning to visual positioning; in visual mode it outputs visual position estimates while monitoring the environment, and switches back to laser positioning once the environment again meets the requirements of the laser mode. That method effectively avoids the noise and interference of injecting visual positioning information into the particle-filter computation; however, it has the following defects:
1) In a corridor environment it switches to the visual positioning mode and simply discards the features recognized by the lidar; the environment in the camera's blind zones can no longer be distinguished, increasing the potential danger during navigation. Abandoning the radar data also reduces the diversity and confidence of the positioning data, degrading the positioning result to some extent.
2) Frequent switching between the positioning modes for ordinary and special environments, with no data exchange between them and no use of each mode's strengths by the other, reduces positioning efficiency to some extent.
Disclosure of Invention
To remedy these defects of the prior art, simplify the redundant visual data features, and overcome the inaccuracy of pure laser positioning in corridors, the invention adopts the following technical scheme:
A vision-assisted laser corridor positioning method based on safety exit indicators comprises the following steps:
Step S1: the robot scans and builds a map in lidar positioning mode, identifies wall-mounted safety exit indicators by visual scanning, and marks them in the map by serial number.
Step S2: during navigation, vision and lidar are fused for positioning; when the visual subsystem detects a safety exit indicator near the robot, the corresponding serial number is matched in the static map, and for typical corridor environments with repeated geometric information, auxiliary points from the detected indicators are added to correct, in real time, the lidar's few-feature matching error.
Further, in step S1, when the map is constructed, the lidar acquires two-dimensional plane information, which is compared with the contour information already in the map to find the current position estimate of the robot with the highest matching degree; this both localizes the robot and yields the map that serves as the static map for navigation. Algorithms used by robots to construct maps include, but are not limited to, Gmapping, Hector, and Cartographer.
Further, in step S1, while the map is being constructed, the depth camera detects exit indicators; each indicator's position in the depth-camera coordinate system is converted to the global coordinate system and stored in an additional data file, the indicators are given corresponding labels in recognition order, and their positions are updated as mapping proceeds.
Further, the position information in the global coordinate system in step S1 is obtained as follows: detect the exit indicator, extract its corner points, and use the structured-light ranging principle of the depth camera to obtain each corner's depth from the structured-light code, giving the coordinates P_i^C of corner i in the depth-camera coordinate system {C}, where i indexes the corner points; transform each P_i^C into global coordinates P_i^W, and average the corner coordinates to obtain the global coordinate of the center point, p̄^W.
Further, the storage and update of exit indicator coordinates in step S1 works as follows: from the first indicator recognized, serial numbers n are assigned in order to the global coordinates of each indicator's corner points and center point, one serial number per group of coordinates. When a newly obtained center-point global coordinate is farther than the distance threshold from every center-point global coordinate in the additional data file, a new serial number and its coordinates are appended to the file; otherwise the new coordinates replace the corresponding old ones and the serial number is unchanged.
Further, in step S2, a corridor environment with repeated geometric information is detected from the distance information returned by the lidar: when, within an angular threshold ahead of and behind the robot, the ratio of returns at infinite distance exceeds the ratio threshold, the robot is judged to be in a corridor environment, and visual detection of exit indicators is started to assist laser positioning.
Further, in step S2, vision-lidar fusion positioning works as follows: safety exit indicators present in the environment are recognized from the acquired visual information; the error between the robot's current real-time relative position and its theoretical relative position in the map is computed from the indicator; and the error is fed back into the lidar-based positioning point to update and correct the robot's global positioning coordinates.
Further, in step S2, the error between the robot's current real-time relative position and its theoretical relative position in the map is computed as follows. The depth camera measures the real coordinates P_i^B of the indicator's corner points in the robot coordinate system. First, these error-bearing local coordinates are transformed to global coordinates, giving temporary global corner coordinates, whose average is the temporary center-point global coordinate. Next, the nearest theoretical center-point global coordinate and its corresponding corner coordinates are looked up in the additional data file, and transforming those global coordinates back to local coordinates through the (error-bearing) pose gives the theoretical corner coordinates G_i^B in the robot coordinate system. Finally, corner pairs P_i^B and G_i^B are selected and used to compute the error between the robot's actual pose and its theoretical pose.
Further, step S2 comprises the following steps:
Step S2.1: for vision-lidar fusion positioning, detect the exit indicator, extract its corner points, and use the structured-light ranging principle of the depth camera to obtain each corner's depth from the structured-light code, giving the corner coordinates P_i^C in the depth-camera coordinate system.
Step S2.2: with the known fixed transform matrix T_C^B between the depth camera and the robot, obtain the coordinates of the indicator's corner points in the robot base coordinate system {B}:

P_i^B = T_C^B · P_i^C (1)

The transform T_B^W between the robot's current base coordinate system {B} and the global coordinate system {W} represents the theoretical pose of the robot on the global map; T_B^W can be represented by a 4×4 homogeneous matrix. Through the theoretical pose T_B^W, the real coordinates P_i^B are converted to temporary global coordinates:

P̃_i^W = T_B^W · P_i^B (2)

and their average gives the center-point global coordinate p̄^W.
Step S2.3: in the additional data file, a K-nearest-neighbour search finds the theoretical center-point global coordinate ḡ^W nearest to p̄^W, together with the corresponding theoretical global corner coordinates G_i^W, taken in homogeneous form. Denote by G_i^B the coordinates of these theoretical global corner points in the theoretical base coordinate system, likewise written as homogeneous local corner coordinates; the coordinate transform of rigid-body motion, composed of translation and rotation, gives:

G_i^B = (T_B^W)^(-1) · G_i^W (3)
This yields the coordinates G_i^B of each corner of the indicator in the theoretical base coordinate system. Let ΔT denote the homogeneous transform from the robot's theoretical base coordinate system {B} to its current actual base coordinate system {B'}; writing the measured corners P_i^B in homogeneous form as well, we obtain:

P_i^B = ΔT · G_i^B, with ΔT = [ cos Δθ, −sin Δθ, 0, Δx ; sin Δθ, cos Δθ, 0, Δy ; 0, 0, 1, 0 ; 0, 0, 0, 1 ] (4)

In formula (4), the unknowns Δx, Δy and Δθ represent, respectively, the deviation from {B} to {B'} along the x and y directions and the rotation angle about the z axis.
Step S2.4: substitute several corner points of the exit indicator into formula (4) to solve for the unknowns Δx, Δy and Δθ, i.e. the value of the transform ΔT, which is the error between the robot's actual pose and its theoretical pose; feed ΔT back into the lidar-based robot particle-distribution-probability positioning points, thereby updating and correcting the robot's global positioning coordinates.
The invention further provides a vision-assisted laser corridor positioning device based on safety exit indicators, comprising one or more processors configured to implement the vision-assisted laser corridor positioning method based on safety exit indicators.
The invention has the advantages and beneficial effects that:
In the vision-assisted laser corridor positioning method and device based on exit indicators, the depth camera returns depth information for the target detection of the exit indicator, which effectively improves the positioning of a pure lidar in long straight corridors with few or similar features. The invention needs no external auxiliary tags and is simple to implement; owing to the particular nature of safety evacuation signs, the positioning result is not easily affected by lighting, and the method is general for most indoor corridor positioning.
Drawings
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 is a flow chart of the detection process of the laser radar SLAM and the camera in the invention.
Fig. 3 is a flow chart of a fused radar and camera assisted positioning process in the present invention.
Fig. 4 is a relation diagram of the theoretical coordinate system and the actual coordinate system of the robot in the global coordinate system.
Fig. 5 is a structural view of the apparatus of the present invention.
Detailed Description
The following describes embodiments of the invention in detail with reference to the accompanying drawings. It should be understood that the detailed description and specific examples are given by way of illustration and explanation only, not limitation.
As shown in Fig. 1, the vision-assisted laser corridor positioning method based on exit indicators comprises the following steps:
Step S1: the robot scans and builds a map in lidar positioning mode, identifies wall-mounted safety exit indicators by visual scanning, and marks them in the map; specifically, they are marked in the map by serial number.
Furthermore, when the map is constructed, only the lidar is used to obtain two-dimensional plane information; by comparing this information with the contour information already in the map, the estimate of the current robot position with the highest matching degree is found, which localizes the robot, and the constructed map serves as the static map for navigation. Algorithms commonly used by robots to construct maps include, but are not limited to, Gmapping, Hector, and Cartographer.
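As a toy illustration of the "matching degree" idea (not the patent's mapping algorithm — Gmapping, Hector and Cartographer are far more elaborate), the sketch below scores candidate poses by the fraction of scan points that land on occupied cells of a binary grid; the grid, scan, and candidate poses are all invented for the example:

```python
import numpy as np

def match_score(grid, points):
    """Fraction of scan points landing on occupied cells of a binary
    occupancy grid -- a toy stand-in for the 'matching degree' that
    scan matchers maximise."""
    h, w = grid.shape
    hits = 0
    for x, y in points:
        cx, cy = int(round(x)), int(round(y))
        if 0 <= cx < w and 0 <= cy < h and grid[cy, cx] == 1:
            hits += 1
    return hits / len(points)

def best_pose(grid, scan, candidates):
    """Evaluate candidate (dx, dy, theta) poses and keep the best one."""
    best, best_s = None, -1.0
    for dx, dy, th in candidates:
        c, s = np.cos(th), np.sin(th)
        pts = [(c * x - s * y + dx, s * x + c * y + dy) for x, y in scan]
        sc = match_score(grid, pts)
        if sc > best_s:
            best, best_s = (dx, dy, th), sc
    return best, best_s
```

A real scan matcher would search pose space systematically (or solve it in closed form) rather than enumerate a handful of candidates, but the objective is the same.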
Furthermore, while the map is being constructed, the depth camera detects the safety exit indicators; each indicator's position in the depth-camera coordinate system is converted to the global coordinate system and stored in an additional data file, the indicators are given corresponding labels in recognition order, and their positions are updated as mapping proceeds.
Further, the position information in the global coordinate system is obtained as follows: detect the exit indicator, extract its corner points, and use the structured-light ranging principle of the depth camera to obtain each corner's depth from the structured-light code, giving the coordinates P_i^C of corner i in the depth-camera coordinate system, where i indexes the corner points; transform each P_i^C into global coordinates P_i^W, and average the corner coordinates to obtain the center-point global coordinate p̄^W.
Specifically, while the map is built, the mounted Kinect V2 depth camera detects targets such as the safety exit indicator, using YOLOv3 for target detection; after a target is recognized, the image inside the recognition region is further processed to extract the four corner points of the indicator, and the structured-light ranging principle of the depth camera yields each corner's depth from the structured-light code, i.e. the corner coordinates P_i^C in the camera coordinate system, where i = 1, …, 4. The global coordinate position of the exit indicator is then computed as follows: transform the four corner coordinates into the robot coordinate system and then into global coordinates P_i^W, and average them to obtain the center-point global coordinate p̄^W.
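The depth-to-camera-coordinates step can be sketched with a simple pinhole back-projection; the intrinsics FX, FY, CX, CY below are placeholder values for illustration, not calibration data from the patent:

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative only, roughly Kinect-V2-scale).
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def backproject(u, v, depth):
    """Pixel (u, v) with structured-light depth (metres) -> 3-D point
    in the depth-camera frame, via the pinhole model."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])

def sign_center_camera(corners_uv, depths):
    """Mean of the back-projected corner points of one sign."""
    pts = [backproject(u, v, d) for (u, v), d in zip(corners_uv, depths)]
    return np.mean(pts, axis=0)
```

A corner at the principal point with depth 2 m back-projects to (0, 0, 2) in the camera frame; off-centre pixels pick up proportional x and y offsets.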
Further, the exit indicator coordinates are stored and updated as follows: from the first indicator recognized, serial numbers n are assigned in order to the global coordinates of each indicator's corner points and center point, one serial number per group of coordinates. When a newly obtained center-point global coordinate is farther than the distance threshold from every center-point global coordinate in the additional data file, a new serial number and its coordinates are appended to the file; otherwise the new coordinates replace the corresponding old ones and the serial number is unchanged.
Specifically, as shown in Fig. 2, the coordinates of the exit indicators are stored and updated as follows: from the first indicator recognized, serial numbers n are assigned in order to the global coordinates of each indicator's four corner points and center point, one serial number per group. When a newly obtained center-point global coordinate is farther than a set threshold from every center-point coordinate in the additional data file Data, a new serial number and its coordinates are appended to Data; otherwise the new coordinates replace the old ones and the serial number is unchanged.
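The add-or-replace bookkeeping can be sketched as below; the `SignStore` class, its field names, and the 1 m threshold are illustrative choices, not from the patent:

```python
import numpy as np

DIST_THRESHOLD = 1.0  # metres; illustrative value, not specified in the patent

class SignStore:
    """Serial-number -> (center, corners) store implementing the rule
    above: a detection farther than DIST_THRESHOLD from every stored
    centre gets a new serial number; otherwise it refreshes the matching
    entry in place, serial number unchanged."""
    def __init__(self):
        self.data = {}      # n -> {"center": ..., "corners": ...}
        self.next_id = 1

    def update(self, center, corners):
        center = np.asarray(center, float)
        for n, rec in self.data.items():
            if np.linalg.norm(rec["center"] - center) <= DIST_THRESHOLD:
                self.data[n] = {"center": center, "corners": corners}
                return n                     # existing sign: replace coords
        n = self.next_id                     # new sign: append entry
        self.next_id += 1
        self.data[n] = {"center": center, "corners": corners}
        return n
```

Re-observing a stored sign from a slightly different pose refreshes its coordinates rather than duplicating the entry.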
Step S2: in the navigation process, fuse the location through vision and laser radar, when detecting near the robot through the vision positioning mode and have the emergency exit sign, match the serial number that corresponds in static map, to the gallery environment that has repeated geometric information, add emergency exit sign auxiliary point through visual detection, correct few characteristic point matching error of laser radar in real time.
Further, the typical corridor environment with repeated geometric information is detected from the distance information returned by the lidar: when, within an angular threshold ahead of and behind the robot, the ratio of returns at infinite distance exceeds the ratio threshold, the robot is judged to be in a corridor environment, and visual detection of exit indicators is started to assist laser positioning.
Furthermore, vision-lidar fusion positioning works as follows: safety exit indicators present in the environment are recognized from the acquired visual information; the error between the robot's current real-time relative position and its theoretical relative position in the map is computed from the indicator; and the error is fed back into the lidar-based positioning point to update and correct the robot's global positioning coordinates.
Furthermore, the error between the robot's current real-time relative position and its theoretical relative position in the map is computed as follows. The depth camera measures the real coordinates P_i^B of the indicator's corner points in the robot coordinate system. First, these error-bearing local coordinates are transformed to global coordinates, giving temporary global corner coordinates, whose average is the temporary center-point global coordinate. Next, the nearest theoretical center-point global coordinate and its corresponding corner coordinates are looked up in the additional data file, and transforming those global coordinates back to local coordinates through the (error-bearing) pose gives the theoretical corner coordinates G_i^B in the robot coordinate system. Finally, corner pairs P_i^B and G_i^B are selected and used to compute the error between the robot's actual pose and its theoretical pose.
Further, as shown in Fig. 3, step S2 comprises the following steps:
Step S2.1: for vision-lidar fusion positioning, detect the exit indicator, extract its corner points, and use the structured-light ranging principle of the depth camera to obtain each corner's depth from the structured-light code, giving the corner coordinates P_i^C in the depth-camera coordinate system.
Specifically, in vision-lidar fusion positioning (vision auxiliary, radar primary), the visual pipeline recognizes safety exit indicators present in the environment: YOLOv3 performs target detection; after a target is recognized, the image inside the recognition region is further processed to extract the four corner points of the indicator, and the structured-light ranging principle of the depth camera yields each corner's depth from the structured-light code, i.e. the corner coordinates P_i^C in the camera coordinate system.
The error between the robot's real-time relative position and its theoretical relative position in the map is computed and fed back into the lidar-based positioning points to update and correct the robot's global positioning coordinates, as follows:
Step S2.2: with the known fixed transform matrix T_C^B between the depth camera and the robot, obtain the coordinates of the indicator's corner points in the robot base coordinate system {B}:

P_i^B = T_C^B · P_i^C (1)

The transform T_B^W between the robot's current base coordinate system {B} and the global coordinate system {W} represents the theoretical pose of the robot on the global map. Through the theoretical pose T_B^W, the real coordinates P_i^B are converted to temporary global coordinates:

P̃_i^W = T_B^W · P_i^B (2)

and their average gives the center-point global coordinate p̄^W.

In particular, T_B^W may be represented by a 4×4 homogeneous matrix.
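The transform chain of step S2.2 can be sketched with plain homogeneous matrices; `hom`, `to_global`, and all poses below are illustrative:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a
    3-vector translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_global(T_cam_to_base, T_base_to_world, corners_cam):
    """Chain the fixed camera->base transform with the (theoretical)
    base->world pose to map camera-frame corners to temporary global
    coordinates; also return their mean (the centre point)."""
    T = T_base_to_world @ T_cam_to_base
    pts = [(T @ np.append(p, 1.0))[:3] for p in corners_cam]
    return pts, np.mean(pts, axis=0)
```

With an identity camera-to-base transform and a base pose translated 1 m along x, every camera-frame corner simply shifts by that metre in the world frame.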
Step S2.3: in the additional data file Data, a K-nearest-neighbour search finds the theoretical center-point global coordinate ḡ^W nearest to p̄^W, together with the corresponding theoretical global corner coordinates G_i^W, taken in homogeneous form. Denote by G_i^B the coordinates of these theoretical global corner points in the theoretical base coordinate system, likewise written as homogeneous local corner coordinates; the coordinate transform of rigid-body motion, composed of translation and rotation, gives:

G_i^B = (T_B^W)^(-1) · G_i^W (3)
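A K = 1 version of this nearest-neighbour lookup, written as a linear scan for clarity (a KD-tree would serve for large stores); the store layout is an assumption of the sketch:

```python
import numpy as np

def nearest_sign(center_w, store):
    """K-nearest-neighbour lookup with K = 1: return the serial number
    of the stored sign whose theoretical centre is closest to the
    freshly computed temporary centre, together with its stored corner
    coordinates. `store` maps serial number -> (center, corners)."""
    best_n, best_d = None, np.inf
    for n, (c, corners) in store.items():
        d = np.linalg.norm(np.asarray(c, float) - center_w)
        if d < best_d:
            best_n, best_d = n, d
    return best_n, store[best_n][1]
```

Because adjacent exit signs are metres apart while the pose drift being corrected is much smaller, the nearest stored centre reliably identifies which sign was observed.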
This yields the coordinates G_i^B of each corner of the indicator in the theoretical base coordinate system. Let ΔT denote the homogeneous transform from the robot's theoretical base coordinate system {B} to its current actual base coordinate system {B'}; writing the measured corners P_i^B in homogeneous form as well, we obtain:

P_i^B = ΔT · G_i^B, with ΔT = [ cos Δθ, −sin Δθ, 0, Δx ; sin Δθ, cos Δθ, 0, Δy ; 0, 0, 1, 0 ; 0, 0, 0, 1 ] (4)

In formula (4), the unknowns Δx, Δy and Δθ represent, respectively, the deviation from {B} to {B'} along the x and y directions and the rotation angle about the z axis.
Step S2.4: as shown in Fig. 4, substitute several corner points of the exit indicator into formula (4) to solve for the unknowns Δx, Δy and Δθ, i.e. the value of the transform ΔT, which is the error between the robot's actual pose and its theoretical pose; feed ΔT back into the lidar-based positioning points, thereby updating and correcting the robot's global positioning coordinates.
Specifically, the lidar positioning points are the particle-distribution-probability positioning points of the lidar-based localization.
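One standard way to realise the "substitute several corners and solve" step is a planar Kabsch/Procrustes fit over the corner pairs; this is a sketch of that idea under the patent's planar assumption (deviation only in x, y and rotation about z), not necessarily its exact solver:

```python
import numpy as np

def solve_planar_error(P_actual, Q_theory):
    """Least-squares planar rigid transform (dx, dy, dtheta) mapping the
    theoretical base-frame corners Q onto the measured ones P, via the
    2-D Kabsch/Procrustes closed form."""
    P = np.asarray(P_actual, float)[:, :2]   # keep only x, y
    Q = np.asarray(Q_theory, float)[:, :2]
    Pm, Qm = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - Qm).T @ (P - Pm)                # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflection
        Vt[1] *= -1
        R = Vt.T @ U.T
    t = Pm - R @ Qm
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])
```

Given exact correspondences the closed form recovers the planted (Δx, Δy, Δθ); with noisy corners it returns the least-squares estimate, which is what gets fed back to the lidar positioning points.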
According to the "Standard for setting fire safety evacuation signs", evacuation guide signs must be installed along evacuation walkways and main evacuation routes at intervals not exceeding 10 m; therefore, owing to the particular nature of the exit indicator, the method adapts well to most indoor long-corridor environments.
Corresponding to the embodiment of the visual auxiliary laser gallery positioning method based on the exit sign, the invention also provides an embodiment of a visual auxiliary laser gallery positioning device based on the exit sign.
Referring to fig. 5, the visual-aided laser corridor positioning device based on the exit signs provided by the embodiment of the invention comprises one or more processors, and is used for implementing the visual-aided laser corridor positioning method based on the exit signs in the embodiment.
The embodiments of the vision-assisted laser corridor positioning device based on exit indicators may be applied to any device with data processing capability, such as a computer or similar apparatus. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device, as a logical unit, is formed by the processor of the host reading the corresponding computer program instructions from non-volatile memory into memory and running them. In hardware terms, Fig. 5 shows the hardware structure of a host device for the vision-assisted laser corridor positioning device based on safety exit indicators: besides the processor, memory, network interface, and non-volatile memory shown in Fig. 5, the host may also include other hardware according to its actual functions, which is not described again here.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the visual assistance laser corridor positioning method based on a safety exit sign in the above embodiments.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be any external storage device of a device with data processing capabilities, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing-capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (7)
1. A vision-assisted laser corridor positioning method based on exit signs, characterized by comprising the following steps:
step S1: the robot scans and constructs a map in a laser radar positioning mode, identifies the safety exit indication board through visual scanning, and marks the safety exit indication board in the map in a serial number mode;
step S2: in the navigation process, fusion positioning is carried out by vision and lidar; when the visual positioning detects that an exit sign exists near the robot, the corresponding serial number is matched in the static map, and for corridor environments with repetitive geometric information, auxiliary points from the visually detected exit signs are added, so that the matching error caused by the scarcity of lidar feature points is corrected in real time;
the fusion positioning of vision and lidar is realized by identifying the exit signs present in the environment from the acquired visual information, calculating from them the error between the robot's current real-time relative position and its theoretical relative position in the map, feeding this error back into the lidar-based positioning, and updating and correcting the robot's global positioning coordinates;
the error between the current real-time relative position and the theoretical relative position of the robot in the map is obtained as follows: the depth camera identifies the exit sign, and the real coordinates \({}^{b'}P_i\) of its corner points in the robot coordinate system are calculated; the erroneous local coordinates are first converted to global coordinates to obtain temporary global coordinates of the corner points, whose average gives the center-point global coordinate; the nearest theoretical global coordinate and the corresponding corner coordinates are then looked up in the extra data document, and converting these erroneous global coordinates back to local coordinates yields the theoretical corner coordinates \({}^{b}P_i\) in the robot coordinate system; finally the corner points \({}^{b'}P_i\) and \({}^{b}P_i\) are selected and used to calculate the error between the actual pose and the theoretical pose of the robot;
the method specifically comprises the following steps:
step S2.1: the fusion positioning of vision and lidar begins by detecting the exit sign, extracting its corner points, and, using the structured-light ranging principle of the depth camera, obtaining the depth of each corner point from the structured-light coding, thereby obtaining the corner coordinate values \({}^{c}P_i\) in the depth-camera coordinate system;
Step S2.2: with the fixed coordinate-system transformation matrix \({}^{b'}_{\,c}T\) between the depth camera and the robot known, the coordinates of the exit-sign corner points in the robot's current coordinate system \(b'\) are obtained as

\[ {}^{b'}P_i = {}^{b'}_{\,c}T\,{}^{c}P_i \tag{1} \]
The transformation \({}^{w}_{\,b'}T\) between the robot's current coordinate system \(b'\) and the global coordinate system \(w\) expresses the theoretical pose of the robot on the global map; the true corner coordinates \({}^{b'}P_i\) are converted to temporary global coordinates

\[ {}^{w}\tilde{P}_i = {}^{w}_{\,b'}T\,{}^{b'}P_i \tag{2} \]

and their average is calculated to obtain the center-point global coordinate \({}^{w}\tilde{P}_c\);
Step S2.3: a K-nearest-neighbor search in the extra data document finds the theoretical global center coordinate \({}^{w}P_c\) nearest to \({}^{w}\tilde{P}_c\) and the theoretical global corner coordinates \({}^{w}P_i\), taken in homogeneous form as the global corner coordinates \({}^{w}\bar{P}_i\); the coordinates of the theoretical global corner points in the theoretical base coordinate system \(b\) are defined as \({}^{b}P_i\) and recorded as the local homogeneous corner coordinates \({}^{b}\bar{P}_i\); the coordinate transformation relationship of the rigid-body motion composed of translation and rotation is then

\[ {}^{b}\bar{P}_i = \left({}^{w}_{\,b}T\right)^{-1}\,{}^{w}\bar{P}_i \]

which yields the coordinates \({}^{b}P_i\) of each sign corner point in the theoretical base coordinate system; the homogeneous transformation matrix from the robot's theoretical base coordinate system \(b\) to the robot's current coordinate system \(b'\) is \({}^{b'}_{\,b}T\), and recording \({}^{b'}P_i\) as the corresponding homogeneous coordinates \({}^{b'}\bar{P}_i\) gives

\[ {}^{b'}\bar{P}_i = {}^{b'}_{\,b}T\,{}^{b}\bar{P}_i \tag{3} \]

\[ {}^{b'}_{\,b}T = \begin{bmatrix} \cos\Delta\theta & -\sin\Delta\theta & 0 & \Delta x \\ \sin\Delta\theta & \cos\Delta\theta & 0 & \Delta y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4} \]
in formula (4), the unknowns \(\Delta x\), \(\Delta y\) and \(\Delta\theta\) respectively represent the deviation of \(b\) to \(b'\) in the \(x\) and \(y\) directions and the rotation angle about the \(z\) axis;
step S2.4: a plurality of corner points of the exit sign are selected and substituted into formula (3) to solve for the unknowns \((\Delta x, \Delta y, \Delta\theta)\) after the matrix transformation, i.e. the error between the robot's actual pose and its theoretical pose; this error is fed back into the lidar-based positioning so that the robot's global positioning coordinates are updated and corrected.
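As an illustration of step S2.4, the planar error \((\Delta x, \Delta y, \Delta\theta)\) can be recovered from a handful of corner correspondences. The sketch below is a hypothetical Python implementation using a closed-form 2-D least-squares (Kabsch-style) fit rather than the patent's direct substitution into formula (3); the function name and interface are illustrative assumptions.

```python
import numpy as np

def estimate_planar_error(p_theory, p_real):
    """Estimate the planar rigid-body error (dx, dy, dtheta) that maps
    corner points expressed in the theoretical base frame (p_theory)
    onto the same corners measured in the current robot frame (p_real).
    Both arguments are (N, 2) arrays of corresponding corner coordinates."""
    p_theory = np.asarray(p_theory, dtype=float)
    p_real = np.asarray(p_real, dtype=float)
    ct, cr = p_theory.mean(axis=0), p_real.mean(axis=0)
    # Covariance of the centred point sets; its asymmetry encodes the rotation.
    H = (p_theory - ct).T @ (p_real - cr)
    dtheta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    dx, dy = cr - R @ ct
    return float(dx), float(dy), float(dtheta)
```

With three or more non-collinear corners the fit is over-determined, which damps the depth noise of any single corner measurement.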
2. The method of claim 1, wherein the laser corridor positioning method based on visual assistance of exit signs comprises: in the step S1, when a map is constructed, a laser radar is used to obtain two-dimensional plane information, the two-dimensional plane information is compared with contour information in the existing map to find the optimal position of the current robot, and the constructed map is used as a static map for navigation.
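The contour matching of claim 2 can be pictured as scoring candidate poses by how many lidar endpoints land on occupied cells of the static map. The following Python sketch is a deliberately brute-force stand-in for a real scan matcher; the function name, the set-of-cells grid representation, and the candidate list are all assumptions for illustration.

```python
import numpy as np

def best_pose(scan_xy, occupied, candidates, cell=0.05):
    """Score candidate 2-D poses by counting how many lidar endpoints
    fall on occupied cells of a static map, returning the best pose.
    scan_xy: (N, 2) endpoints in the robot frame; occupied: set of
    (ix, iy) occupied grid indices; candidates: iterable of (x, y, theta)."""
    best, best_score = None, -1
    for x, y, th in candidates:
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        pts = scan_xy @ R.T + np.array([x, y])   # scan endpoints in the map frame
        idx = np.floor(pts / cell).astype(int)   # grid cell of each endpoint
        score = sum((ix, iy) in occupied for ix, iy in map(tuple, idx))
        if score > best_score:
            best, best_score = (x, y, th), score
    return best, best_score
```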
3. The method of claim 1, wherein the laser corridor positioning method based on exit signs visual assistance comprises: in step S1, when a map is constructed, the depth camera is used to detect the exit sign, and the position information of the exit sign in the depth camera coordinate system is converted into position information in the global coordinate system, the position information in the global coordinate system is stored in an additional data file, corresponding labels of the exit sign are given according to the recognition sequence, and the position information of the exit sign is updated in the process of constructing the map.
4. The method of claim 3, wherein the laser corridor positioning method based on visual assistance of exit signs comprises: the position information in the global coordinate system in step S1 is obtained by detecting the exit sign, extracting its corner points, obtaining the depth information of the corner points from the structured-light coding using the structured-light ranging principle of the depth camera, and obtaining the corner coordinate values \({}^{c}P_i\) in the depth-camera coordinate system, where \(i\) denotes the corner index; the coordinate values \({}^{c}P_i\) are transformed into the coordinate values \({}^{w}P_i\) in the global coordinate system, and the average of the corner points is calculated to obtain the center-point global coordinate \({}^{w}P_c\).
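The chain of claim 4 (camera-frame corners to the global frame, then an average) is a pair of homogeneous-transform multiplications followed by a mean. A minimal Python sketch, with hypothetical function and transform names, could look like:

```python
import numpy as np

def sign_center_global(corners_cam, T_robot_cam, T_world_robot):
    """Convert exit-sign corner coordinates from the depth-camera frame
    to the global frame via two 4x4 homogeneous transforms, then average
    them into the sign's center-point global coordinate.
    corners_cam: (N, 3) corner coordinates in the camera frame."""
    corners_cam = np.asarray(corners_cam, dtype=float)
    ones = np.ones((len(corners_cam), 1))
    hom = np.hstack([corners_cam, ones])                  # homogeneous (N, 4)
    world = (T_world_robot @ T_robot_cam @ hom.T).T[:, :3]
    return world, world.mean(axis=0)                      # corners and center
```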
5. The method of claim 4, wherein the laser corridor positioning method based on visual assistance of exit signs comprises: the storage and update of the exit-sign coordinates in step S1 are performed by numbering the global coordinates of the corner points and center point of each exit sign sequentially from \(1\) to \(n\) in order of first recognition, one serial number corresponding to one group of coordinates; when the distances between a newly acquired center-point global coordinate \({}^{w}\tilde{P}_c\) and all center-point global coordinates in the extra data document exceed the distance threshold, a new serial number and the corresponding coordinates are added to the extra data document; otherwise the new coordinates replace the corresponding old ones and the serial number is kept unchanged.
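The bookkeeping of claim 5 reduces to a nearest-neighbor test against a distance threshold. A possible Python sketch follows; the list-of-tuples stand-in for the "extra data document", the function name, and the default threshold are illustrative assumptions.

```python
import numpy as np

def update_sign_database(db, new_center, new_corners, threshold=1.0):
    """Maintain the landmark store of exit signs: if the newly observed
    center is farther than `threshold` from every stored center, append
    it under a fresh serial number; otherwise overwrite the nearest
    existing entry, keeping its serial number.
    db: list of (center, corners) tuples; the index is the serial number."""
    new_center = np.asarray(new_center, dtype=float)
    if db:
        dists = [np.linalg.norm(new_center - np.asarray(c)) for c, _ in db]
        k = int(np.argmin(dists))
        if dists[k] <= threshold:
            db[k] = (new_center, new_corners)   # refresh, same serial number
            return k
    db.append((new_center, new_corners))        # new serial number
    return len(db) - 1
```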
6. The method of claim 1, wherein the laser corridor positioning method based on visual assistance of exit signs comprises: in step S2, the detection of a corridor environment with repetitive geometric information is performed from the distance information returned by the lidar; when, within the angle thresholds ahead of and behind the robot, the proportion of returns at infinite distance exceeds the ratio threshold, the robot is determined to be currently in a corridor environment, and visual detection based on the exit signs is started to assist the laser positioning.
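Claim 6's corridor test can be prototyped by thresholding the fraction of maximum-range lidar returns inside front and rear angular windows. The sketch below assumes a planar scan given as parallel `ranges`/`angles` arrays; the function name and all default thresholds are illustrative, not values fixed by the patent.

```python
import numpy as np

def in_corridor(ranges, angles, angle_win=np.deg2rad(15),
                range_max=25.0, ratio_thresh=0.6):
    """Decide whether the robot is in a geometrically repetitive corridor:
    within +/- angle_win of the robot's front (0 rad) and rear (pi rad)
    directions, compute the fraction of returns at or beyond the sensor's
    maximum range and compare it with ratio_thresh."""
    angles = np.mod(np.asarray(angles), 2 * np.pi)
    front = np.minimum(angles, 2 * np.pi - angles) <= angle_win
    rear = np.abs(angles - np.pi) <= angle_win
    sel = np.asarray(ranges)[front | rear]
    if sel.size == 0:
        return False
    return bool(np.mean(sel >= range_max) >= ratio_thresh)
```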
7. A fire exit sign-based visual aid laser corridor positioning device comprising one or more processors configured to implement the fire exit sign-based visual aid laser corridor positioning method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210579591.7A CN114674308B (en) | 2022-05-26 | 2022-05-26 | Vision-assisted laser corridor positioning method and device based on safety exit indicator |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114674308A CN114674308A (en) | 2022-06-28 |
CN114674308B true CN114674308B (en) | 2022-09-16 |
Family
ID=82080989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210579591.7A Active CN114674308B (en) | 2022-05-26 | 2022-05-26 | Vision-assisted laser corridor positioning method and device based on safety exit indicator |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114674308B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN112785702A (en) * | 2020-12-31 | 2021-05-11 | 华南理工大学 | SLAM method based on tight coupling of 2D laser radar and binocular camera |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780735B (en) * | 2016-12-29 | 2020-01-24 | 深圳先进技术研究院 | Semantic map construction method and device and robot |
CN108303096B (en) * | 2018-02-12 | 2020-04-10 | 杭州蓝芯科技有限公司 | Vision-assisted laser positioning system and method |
US11960297B2 (en) * | 2019-05-03 | 2024-04-16 | Lg Electronics Inc. | Robot generating map based on multi sensors and artificial intelligence and moving based on map |
WO2020223974A1 (en) * | 2019-05-09 | 2020-11-12 | 珊口(深圳)智能科技有限公司 | Method for updating map and mobile robot |
KR102220564B1 (en) * | 2019-05-30 | 2021-02-25 | 엘지전자 주식회사 | A method for estimating a location using multi-sensors and a robot implementing the same |
CN113126621A (en) * | 2020-10-14 | 2021-07-16 | 中国安全生产科学研究院 | Automatic navigation method of subway carriage disinfection robot |
CN113885046A (en) * | 2021-09-26 | 2022-01-04 | 天津大学 | Intelligent internet automobile laser radar positioning system and method for low-texture garage |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||