CN112596064B - Laser and vision integrated global positioning method for indoor robot - Google Patents

Laser and vision integrated global positioning method for indoor robot

Info

Publication number
CN112596064B
CN112596064B (application number CN202011373978.4A)
Authority
CN
China
Prior art keywords
laser
global positioning
visual
positioning module
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011373978.4A
Other languages
Chinese (zh)
Other versions
CN112596064A (en)
Inventor
邸慧军
罗云翔
硕南
徐志
张展华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute Of Software Technology Institute Of Software Chinese Academy Of Sciences
Original Assignee
Nanjing Institute Of Software Technology Institute Of Software Chinese Academy Of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute Of Software Technology Institute Of Software Chinese Academy Of Sciences filed Critical Nanjing Institute Of Software Technology Institute Of Software Chinese Academy Of Sciences
Priority to CN202011373978.4A priority Critical patent/CN112596064B/en
Publication of CN112596064A publication Critical patent/CN112596064A/en
Application granted granted Critical
Publication of CN112596064B publication Critical patent/CN112596064B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an integrated laser-and-vision global positioning method for an indoor robot, which comprises the following steps: receiving the robot pose information provided by a laser positioning module while the visual global positioning module and the laser global positioning module simultaneously acquire visual images and laser data, and establishing a global positioning environment library; then retrieving and matching the visual image and laser information of the robot's current position against the global positioning environment library to determine the robot pose. The invention supports positioning in complex environments such as dense crowds, structurally similar scenes and texture-less scenes, and effectively combines lidar and visual information, greatly improving the robustness of global environment positioning. The global positioning time is short and the computational load is small, which overcomes the high hardware requirements of common visual SLAM algorithms, dispenses with a costly industrial PC, saves hardware cost and reduces battery consumption.

Description

Laser and vision integrated global positioning method for indoor robot
Technical Field
The invention relates to the technical field of robot global positioning, and in particular to an integrated global positioning method for indoor robots that fuses laser and vision.
Background
Global environment positioning is one of the key technologies of intelligent robots. On the one hand, it allows a robot to determine its own position after being started at any time and place, which effectively improves the robot's applicability and practicality; on the other hand, it provides relocation information when the robot's localization is lost, ensuring the long-term stability and reliability of robot environment positioning.
Global environment positioning for indoor robots faces many challenges. On the one hand, indoor environments contain many scenes with similar structures (such as the elevator doors of different floors), no structure (such as corridors) or no texture (such as white walls), so a global positioning method based on a single type of sensor (pure vision or pure lidar) cannot handle all situations. For example, a purely visual global positioning method depends on the richness of scene texture and cannot handle texture-less areas, while a purely lidar-based method depends on scene structure and cannot handle structure-less areas or areas with similar structures. On the other hand, indoor environments are often crowded and changeable, so current vision-only or lidar-only global positioning methods lack robustness.
Therefore, a robot global environment positioning method combining vision and laser is needed that can cope with the many challenges of indoor environments, consumes few system resources, runs smoothly on a low-power ARM development board, and can be used on various indoor robots. Researchers have already proposed global positioning methods and devices that fuse vision and laser: for example, patent CN106092104A proposes a relocation method and device for an indoor robot, patent CN108256574A proposes a robot positioning method and device, and patent CN110533722A proposes a fast robot relocation method and system based on a visual dictionary.
However, current global environment positioning methods have the following drawbacks: 1) Current methods build a visual positioning map and a laser positioning map separately, and then unify the results of the two global positioning methods by aligning the two maps. The consistency of the two maps is, however, difficult to guarantee: monocular visual positioning may be inaccurate or simply wrong, so the map it builds is hard to keep consistent with the laser map. Such inconsistency degrades both the alignment quality of the two maps and the accuracy and robustness of the fused positioning result. In addition, building the maps separately requires a large amount of computation, making it difficult to run smoothly on a low-power ARM development board. 2) Current methods perform keyframe retrieval and global positioning with vision alone and only use laser data to evaluate the visual positioning result, or combine visual positioning with a simplistic laser global positioning method, such as using the overall statistics of one frame of laser data. A vision-only global positioning process depends on the richness of scene texture and cannot handle texture-less areas, while a simplistic laser global positioning method can hardly cope with changeable scenes. 3) Current methods handle the multiple positioning results crudely, directly selecting the best-evaluated one as the positioning output. This simple selection strategy makes the output depend too heavily on the evaluation process; under changing conditions the evaluation cannot be flawless, so an unreliable positioning result may be selected as output.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides an integrated laser-and-vision global positioning method for indoor robots. It supports positioning in complex environments such as dense crowds, structurally similar scenes and texture-less scenes, and effectively combines lidar and visual information, greatly improving the robustness of global environment positioning. The global positioning time is short and the computational load is small, which overcomes the high hardware requirements of common visual SLAM algorithms, dispenses with a costly industrial PC, saves hardware cost and reduces battery consumption.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a global positioning method of an integrated indoor robot integrating laser and vision comprises the following steps:
S1, when the robot reaches a new environment or the environment has changed, receive the robot pose information provided by the laser positioning module, simultaneously acquire visual images and laser data with the visual global positioning module and the laser global positioning module, and establish a global positioning environment library comprising a visual global positioning library and a laser global positioning library; the visual global positioning module receives the poses of all visual frames sent by the laser positioning module together with the keyframe information sent by the laser global positioning module, and builds a visual positioning map in the laser map coordinate system based on the laser mapping result;
S2, during initial positioning, relocation or loop-closure detection, retrieve and match the visual image and laser information of the robot's current position against the global positioning environment library to determine the robot pose.
To further optimize the above technical scheme, the following specific measures are adopted:
further, the key frame satisfies at least any one of the following conditions:
the method comprises the steps of (1) enabling a rotation angle of a robot to be larger than a preset angle threshold, (2) enabling a moving distance of the robot to be larger than a preset distance threshold, (3) adopting matching information of feature points of a current frame image and a previous key frame image to evaluate the content overlapping degree between the current frame image and the previous key frame image, wherein when the content overlapping degree between the current frame image and the previous key frame image is smaller than a first preset overlapping degree threshold, (4) adopting an ICP algorithm to calculate the laser data registration degree between the current frame laser and the previous key frame laser, and evaluating that the content overlapping degree between the obtained current frame laser and the previous key frame laser is smaller than a second preset overlapping degree threshold.
Further, the process of establishing the global positioning environment library comprises the following steps:
S11, align the system clocks of the laser positioning module, the visual global positioning module and the laser global positioning module;
S12, after receiving the start-mapping command sent by the user interface module, sequentially perform the following operations:
S121, acquire visual images of the current scene with the visual global positioning module, preprocess the acquired images online, and extract and store their image feature information;
S122, drive the laser positioning module to build a laser map of the current scene;
S123, collect laser data of the current scene with the laser global positioning module, preprocess the collected laser data online, detect keyframes in the laser data, and extract and store the laser feature information of each keyframe;
S13, receive the end-mapping command sent by the user interface module;
S14, the visual global positioning module calls the interface of the laser positioning module to obtain the poses of all image frames, receives the keyframe information sent by the laser global positioning module, performs inter-frame feature-point matching and tracking on the stored image features, extracts additional keyframes, optimizes the 3D coordinates of the feature points, and builds and stores the visual bag-of-words library of the keyframes, yielding the visual global positioning library;
S15, the laser global positioning module calls the interface of the laser positioning module to obtain the pose and sub-map information of all keyframes, establishes the relation between keyframes and sub-maps, and builds and stores the laser bag-of-words library of the keyframes, yielding the laser global positioning library.
Further, the process of performing feature point inter-frame matching and tracking on the stored image features comprises the following steps:
During inter-frame matching, inter-frame motion prediction is used to obtain, for each feature point of the previous frame, its predicted position in the current frame; the feature point is then compared with the feature points around that predicted position, and the one whose description vector is closest is chosen as the match.
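A sketch of this motion-predicted inter-frame matching is given below; the search radius, the array layout and the motion-prediction callable are assumptions made for illustration.

```python
import numpy as np

def match_interframe(prev_pts, prev_desc, cur_pts, cur_desc,
                     motion, search_radius=20.0):
    """Match previous-frame feature points into the current frame.

    prev_pts, cur_pts: (N,2) and (M,2) pixel coordinates.
    prev_desc, cur_desc: (N,D) and (M,D) description vectors.
    motion: callable mapping a previous-frame pixel to its predicted
            current-frame pixel (inter-frame motion prediction).
    search_radius: hypothetical pixel radius around the predicted position.
    Returns a list of (prev_index, cur_index) matches.
    """
    matches = []
    for i, (p, d) in enumerate(zip(prev_pts, prev_desc)):
        pred = motion(p)                              # predicted position in current frame
        dists = np.linalg.norm(cur_pts - pred, axis=1)
        cand = np.where(dists < search_radius)[0]     # only features near the prediction
        if cand.size == 0:
            continue
        # pick the candidate whose description vector is closest
        j = cand[np.argmin(np.linalg.norm(cur_desc[cand] - d, axis=1))]
        matches.append((i, j))
    return matches
```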
Further, the process of optimizing the 3D coordinates of the feature points includes the following steps:
S141, obtain the pose information provided by the laser positioning module and evaluate the differences between poses;
S142, for each feature point, select any two keyframes whose pose difference is larger than a preset difference threshold and directly compute the 3D coordinates of the feature point from them;
S143, optimize the 3D coordinates of the feature point in the global map coordinate system using its pixel coordinates in several keyframes; the optimization objective is to find the optimal 3D coordinates such that the error between the projection of the point onto each keyframe image (according to that keyframe's pose) and the point's pixel coordinates is as small as possible. A sketch of S142/S143 follows.
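Below is a sketch of S142/S143 under a standard pinhole-camera assumption: the feature point is first triangulated linearly from two keyframes, then refined by minimizing its reprojection error over all keyframes that observe it, with the keyframe poses held fixed (they come from the laser positioning module). The camera matrix K and the (R, t) world-to-camera pose convention are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate(K, pose_a, pose_b, uv_a, uv_b):
    """Linear (DLT) triangulation of one feature from two keyframes.
    pose_* = (R, t) mapping world points into the camera frame; uv_* = pixel coords."""
    P_a = K @ np.hstack([pose_a[0], pose_a[1].reshape(3, 1)])
    P_b = K @ np.hstack([pose_b[0], pose_b[1].reshape(3, 1)])
    A = np.vstack([uv_a[0] * P_a[2] - P_a[0],
                   uv_a[1] * P_a[2] - P_a[1],
                   uv_b[0] * P_b[2] - P_b[0],
                   uv_b[1] * P_b[2] - P_b[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def refine_point(K, poses, uvs, X0):
    """Refine one 3D point by minimizing its reprojection error over all observing keyframes."""
    def residuals(X):
        res = []
        for (R, t), uv in zip(poses, uvs):
            pc = R @ X + t                     # point in the keyframe camera frame
            proj = (K @ pc)[:2] / pc[2]        # projected pixel coordinates
            res.extend(proj - uv)
        return np.asarray(res)
    return least_squares(residuals, X0, method='lm').x
```

Because the keyframe poses are fixed, each point can be refined independently, which is consistent with the parallelizable mapping process described later in the description.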
Further, the process of retrieving and matching the visual image and laser information of the robot's current position against the global positioning environment library and determining the robot pose comprises the following steps:
S21, align the system clocks of the laser positioning module, the visual global positioning module and the laser global positioning module;
S22, the visual global positioning module loads the visual global positioning library, the laser global positioning module loads the laser global positioning library, and the laser global positioning module subscribes to the odom tf;
S23, the visual global positioning module extracts image features from the current frame image, computes its bag-of-words vector, and retrieves keyframes from the visual global positioning library;
S24, the laser global positioning module extracts laser features from the laser data of the current frame, computes its bag-of-words vector, and retrieves keyframes from the laser global positioning library;
S25, combine the keyframe retrieval results of the visual and laser global positioning modules to construct a candidate keyframe set;
S26, perform image feature matching and laser feature matching between each keyframe in the candidate set and the current frame, remove invalid keyframes, and construct a preferred keyframe set;
S27, for each keyframe in the preferred keyframe set, obtain the matching result between its image features and those of the current frame, estimate the visual pose from this matching result, and evaluate the quality of the visual pose estimate;
S28, match each sub-map in the laser sub-map set corresponding to the preferred keyframe set against the laser data of the current frame, estimate the laser pose, and evaluate the quality of the laser pose estimate;
S29, combine the pose and quality estimates of the visual and laser global positioning modules, then determine and publish the final global positioning result.
Further, for each keyframe in the preferred keyframe set, the matching result with the current frame image features is obtained as follows:
When matching the current frame against a keyframe, the feature-point clustering result is used: each feature point of the current frame is compared only with the keyframe feature points belonging to the same cluster, and the one whose description vector is closest is chosen as the match.
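A sketch of this cluster-restricted matching is shown below; it assumes each description vector has already been assigned a cluster (word) id by the dictionary used for the bag-of-words vectors.

```python
import numpy as np

def match_to_keyframe(cur_desc, cur_word, kf_desc, kf_word):
    """Match current-frame features against a keyframe using word/cluster ids.

    cur_desc, kf_desc: (N,D) / (M,D) description vectors.
    cur_word, kf_word: (N,) / (M,) cluster ids from the feature-description dictionary.
    Returns a list of (cur_index, kf_index) matches.
    """
    matches = []
    for i, (d, w) in enumerate(zip(cur_desc, cur_word)):
        cand = np.where(kf_word == w)[0]     # only keyframe features in the same cluster
        if cand.size == 0:
            continue
        j = cand[np.argmin(np.linalg.norm(kf_desc[cand] - d, axis=1))]
        matches.append((i, j))
    return matches
```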
Further, matching each sub-map in the laser sub-map set corresponding to the preferred keyframe set against the laser data of the current frame and estimating the laser pose comprises the following steps:
S281, evaluate the curvature at each laser data point as the difference between the original laser data and its smoothed version;
S282, detect curvature-point features in the laser data, and keep locally stable, sparse feature points through non-maximum suppression;
S283, use the block-wise distribution of the laser data points around each feature point as its description vector, match feature points using these description vectors, and compute the pose relationship between the two laser frames from the feature matches with the RANSAC algorithm; a sketch of the curvature detection in S281-S282 is given below.
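A minimal sketch of the curvature evaluation and non-maximum suppression of S281-S282; the moving-average smoothing, window sizes and threshold are assumptions, not parameters specified by the patent.

```python
import numpy as np

def laser_feature_points(ranges, smooth_win=7, nms_win=5, curv_thresh=0.05):
    """Detect curvature-style feature points in one laser scan.

    ranges: (N,) range readings of a single scan.
    The curvature proxy is the difference between the raw ranges and a
    moving-average smoothed copy; window sizes and the threshold are assumptions.
    """
    kernel = np.ones(smooth_win) / smooth_win
    smoothed = np.convolve(ranges, kernel, mode='same')
    curvature = np.abs(ranges - smoothed)          # difference from the smoothed data

    keep = []
    half = nms_win // 2
    for i in range(half, len(ranges) - half):
        window = curvature[i - half:i + half + 1]
        # non-maximum suppression: keep only local maxima above the threshold
        if curvature[i] >= window.max() and curvature[i] > curv_thresh:
            keep.append(i)
    return np.asarray(keep), curvature
```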
Further, computing the bag-of-words vector comprises the following steps:
cluster the feature-point description vectors of multiple frames of images or laser data to obtain a feature description dictionary;
for a given frame of image or laser data, compute the dictionary word index of every feature-point description vector;
count the occurrence frequency of all words in the image or laser data, and use these word frequencies as the bag-of-words vector of that image or laser frame (a sketch follows).
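A sketch of this bag-of-words computation; k-means (here via scikit-learn) is one possible choice of clusterer, and the dictionary size is a hypothetical value.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(all_descriptors, num_words=256):
    """Cluster feature description vectors from many frames into a word dictionary.
    num_words is a hypothetical dictionary size; k-means is one possible clusterer."""
    return KMeans(n_clusters=num_words, n_init=10).fit(all_descriptors)

def bow_vector(dictionary, frame_descriptors):
    """Bag-of-words vector of one image or laser frame: normalized word frequencies."""
    words = dictionary.predict(frame_descriptors)          # word id of each feature
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                     # term frequencies
```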
Further, combining the pose and quality estimates of the visual and laser global positioning modules to determine and publish the final global positioning result comprises the following steps:
S291, use the reprojection error as the quality indicator of visual pose estimation;
S292, use the registration degree between the current laser frame and the keyframe laser, together with the sub-map matching degree, as the quality indicators of laser pose estimation;
S293, automatically eliminate unreliable pose estimates through pose-graph optimization weighted by the overall quality evaluation, fuse the multiple visual and laser pose estimates, and output the final global positioning result.
The beneficial effects of the invention are as follows:
(1) An integrated laser-and-vision mapping method is provided, in which the laser mapping result guides the construction of the visual positioning map in the laser map coordinate system. Since the visual positioning map is built in the same coordinate system as the laser map, an integrated hybrid positioning map is obtained automatically without any alignment between positioning maps, and the guidance of the laser mapping result improves the mapping accuracy and stability of monocular vision. Under this guidance, the visual mapping process omits the time-consuming joint optimization of poses and map-point coordinates: only the map-point coordinates need to be optimized, and the optimizations of different map points are independent of each other, so the mapping process becomes very efficient and can be parallelized.
(2) A new laser global positioning method is provided: feature points are extracted from the laser data, and keyframe retrieval and matching-based positioning are carried out using the word-frequency features of these laser feature points. Positioning on laser feature points tolerates partial inconsistency between the input laser data and the keyframe data, so the method can cope with a variety of scenes.
(3) When outputting the final global positioning result, instead of simply selecting the best-evaluated positioning result as current methods do, the invention provides an integrated fusion of the multiple laser and visual positioning hypotheses: unreliable hypotheses are identified automatically through mutual verification between the visual and laser hypotheses, improving the robustness and reliability of global positioning, while fusing the multiple visual and laser results improves its stability and accuracy.
Drawings
FIG. 1 is a flow chart of the integrated laser-and-vision global positioning method for indoor robots.
Fig. 2 is a schematic diagram of the software module configuration of the present invention.
FIG. 3 is an overall schematic of the global positioning library-building stage.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the invention provides an integrated laser-and-vision global positioning method for an indoor robot, which comprises the following steps:
S1, when the robot reaches a new environment or the environment has changed, receive the robot pose information provided by the laser positioning module, simultaneously acquire visual images and laser data with the visual global positioning module and the laser global positioning module, and establish a global positioning environment library comprising a visual global positioning library and a laser global positioning library; the visual global positioning module receives the poses of all visual frames sent by the laser positioning module together with the keyframe information sent by the laser global positioning module, and builds a visual positioning map in the laser map coordinate system based on the laser mapping result.
S2, during initial positioning, relocation or loop-closure detection, retrieve and match the visual image and laser information of the robot's current position against the global positioning environment library to determine the robot pose.
The invention adopts an upward-facing camera combined with a surround-view lidar, so that both the geometric structure around the robot and the visual texture of the indoor ceiling and the area in front of the robot can be exploited, which improves the robustness of the robot's global environment positioning.
Fig. 2 is a schematic diagram of the software module configuration of the invention. The complete robot environment positioning system comprises three modules: a laser positioning module, a visual global positioning module and a laser global positioning module. The laser positioning module is responsible for the overall environment positioning of the robot; it receives the robot's global positioning information from the visual and laser global positioning modules, evaluates it as a whole, uses it to initialize the positioning system when the robot is started at an arbitrary time and place, and uses it for relocation when the robot's localization is lost, thereby ensuring the practicality, long-term stability and reliability of robot environment positioning. During library building, the visual and laser global positioning modules receive the robot pose information provided by the laser positioning module while simultaneously acquiring visual images and laser data to build the global positioning library. During global positioning, the visual global positioning module uses visual image information and the laser global positioning module uses laser information to retrieve and match against the global positioning library, compute and output the robot pose, and provide the laser positioning module with the initialization pose, loop-closure detection, relocation and similar information.
To achieve an efficient global positioning process, the invention organizes the map content around keyframes. A keyframe can be regarded as a local sub-map of the whole map. By building a keyframe retrieval library, a candidate set of similar keyframes can be retrieved quickly with the image and laser data of a given frame; the comparatively slow fine content matching and structural verification are then performed only on these candidates to determine the final global positioning result. Quickly retrieving the keyframe candidate set narrows the search range and effectively reduces the number of slow fine matching and verification steps, yielding an efficient global positioning process.
The global positioning method is divided into two stages: a library-building stage and a positioning stage. As shown in fig. 3, the goal of the library-building stage is to build, when the robot reaches a new environment or the environment changes, the map information library and the fast retrieval library of image and laser feature points, keyframes and so on that global positioning requires. The goal of the positioning stage is to retrieve and match the image and laser information of the robot's current position against the global positioning environment library and determine the robot pose; this is used to initialize the laser positioning module after the robot is started at an arbitrary time and place, to provide relocation information when the robot's localization is lost, and to provide loop-closure detection information to the laser positioning module.
As shown in fig. 1, in the positioning stage of the global positioning module, after feature points are extracted from the image and laser data of a given frame, the bag-of-words description vector of the current frame is computed and a candidate set of similar keyframes is retrieved from the keyframe library; image and laser feature matching and structural verification are then carried out for each candidate keyframe, the most reliable keyframe is selected, the association between the given frame and the positioning map is output, and the pose of the given frame in the positioning map is estimated.
When matching features between the current frame and a candidate keyframe, the bag-of-words description is used to obtain, for each feature of the current frame, the set of keyframe features to be compared; these candidates are then compared one by one to find the best matching feature. Based on the feature matching result between the current frame and the candidate keyframe, the pose of the current frame is estimated and structurally verified using the global coordinates of the keyframe feature points, ensuring that a sufficient number of matches between the current frame and the candidate keyframe satisfy the rigid constraint. Finally, among the candidate keyframes that satisfy the rigid constraint, the one with the highest matching degree is selected and the pose of the given frame in the positioning map is output.
1. The workflow of the global environment positioning method is specifically described below.
1. The workflow of the library-building stage is as follows:
(1.1) System clock alignment between multiple modules.
(1.2) The user interface module sends a start-mapping command.
(1.3) After receiving the command, the visual global positioning module starts online preprocessing: it receives images, extracts features and stores the image features.
(1.4) The laser positioning module starts mapping.
(1.5) After receiving the command, the laser global positioning module starts online processing: it receives laser data, detects keyframes, extracts features and stores the laser features.
(1.6) The user interface module sends an end-mapping command.
(1.7) The visual global positioning module calls the interface of the laser positioning module to obtain the poses of all frames, receives the keyframe information from the laser global positioning module, performs inter-frame feature-point matching and tracking on the stored image features, extracts additional keyframes, optimizes the 3D coordinates of the feature points, builds the visual bag-of-words library of the keyframes, and finally saves the visual global positioning library.
(1.8) The laser global positioning module calls the interface of the laser positioning module to obtain the pose and sub-map information of all keyframes, establishes the relation between keyframes and sub-maps, builds the keyframe bag-of-words library, and saves the laser global positioning library.
2. The workflow of the positioning phase is as follows:
(2.1) System clock alignment between multiple modules.
(2.2) The visual and laser global positioning modules load their global positioning libraries and subscribe to the odom tf.
(2.3) The visual global positioning module extracts features from the current frame image, computes its bag-of-words vector, and retrieves keyframes from the visual positioning library.
(2.4) The laser global positioning module extracts features from the current frame laser data, computes its bag-of-words vector, and retrieves keyframes from the laser positioning library.
(2.5) The keyframe retrieval results of the two modules are combined to construct a candidate keyframe set.
(2.6) Image feature matching and laser feature matching are performed between each keyframe in the candidate set and the current frame, invalid keyframes are removed, and a preferred keyframe set is constructed.
(2.7) For each keyframe in the preferred keyframe set, the pose is estimated from the matches with the current frame image features, and the visual pose estimation quality is evaluated.
(2.8) Each sub-map in the laser sub-map set corresponding to the preferred keyframe set is matched against the laser data of the current frame, the pose is estimated, and the laser pose estimation quality is evaluated.
(2.9) The pose estimation and quality evaluation results of the two modules are combined to determine the final global positioning result, which is published as a map2odom transformation.
2. Use flow of global positioning result
The positioning result of the global positioning module serves three purposes for the laser positioning module: initialization after power-on, relocation information when localization is lost, and loop-closure detection information. The specific use of the global positioning result in the laser positioning module is as follows.
1. Workflow at initialization:
the laser positioning module sends an initialization request and starts random walk.
The global positioning module receives the initialization command, performs global positioning, and sends map2odom transformation after successful positioning.
And the laser positioning module receives an initialization result, calculates the current pose of the robot by using map2odom transformation and current odom information, and initializes a positioning system.
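The pose composition used in the last step can be written compactly for planar poses; the sketch below assumes 2D SE(2) transforms, and the helper names are hypothetical.

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2D transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# map2odom comes from the global positioning module; odom2base comes from the
# odometry (the odom tf). The robot pose in the map frame is their composition.
def robot_pose_in_map(map2odom, odom2base):
    map2base = map2odom @ odom2base
    yaw = np.arctan2(map2base[1, 0], map2base[0, 0])
    return map2base[0, 2], map2base[1, 2], yaw
```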
2. Workflow when locating loss:
the laser positioning module detects the loss of positioning, sends a repositioning request after the loss of positioning is found, and starts random walk.
The global positioning module receives the repositioning command, performs global positioning, and sends map2odom transformation after successful positioning.
And the laser positioning module receives the repositioning result, calculates the current pose of the robot by using map2odom transformation and current odom information, and resets the positioning system.
3. Workflow at closed loop:
the laser positioning module sends a closed loop detection request when the laser positioning module finds that closed loop is possible to occur.
The global positioning module receives the closed loop detection command, continuously performs global positioning, and sends map2odom conversion after successful positioning.
And the laser positioning module receives the global positioning result, calculates the current pose of the robot by using map2odom transformation and current odom information, judges whether a closed-loop result is obtained, and sends a closed-loop detection termination command if the closed-loop detection result is obtained.
The global positioning module receives a closed loop termination command and stops global positioning.
3. Detailed techniques used in the global environment positioning method
1. Image feature point detection and matching
Corner features are detected in the image with a fast corner detection algorithm that exploits the gray-level variation pattern around image feature points. Non-feature regions are filtered out quickly using only a few pixels around each candidate; feature detection is then performed on the remaining regions with the complete surrounding pixel information, and locally stable, sparse feature points are kept through non-maximum suppression. The image gray-level gradient is computed, and the gradient direction and magnitude of the pixels around each feature point are accumulated to build its description vector. Feature-point matching is performed with these description vectors. Two matching problems are considered: inter-frame matching in the library-building stage, and matching between a frame and a keyframe in the global positioning stage. During inter-frame matching, inter-frame motion prediction is used to obtain, for each feature point of the previous frame, its predicted position in the current frame; the feature point is then compared with the feature points around that predicted position, and the one whose description vector is closest is chosen as the match. When matching a frame against a keyframe, the feature-point clustering result is used: each feature point of the current frame is compared only with the keyframe feature points belonging to the same cluster, and the one whose description vector is closest is chosen as the match.
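The quick rejection of non-feature regions described above is in the spirit of FAST-style corner tests; the following sketch is only an illustration, and the probe offsets, counts and threshold are assumptions rather than values from the patent.

```python
import numpy as np

def quick_reject(img, y, x, t=20):
    """Cheap pre-test for a corner candidate at (y, x).

    Only the four pixels at distance 3 in the cardinal directions are examined;
    a corner candidate requires at least three of them to differ strongly from
    the center. The caller ensures (y, x) lies at least 3 pixels from the border.
    """
    c = int(img[y, x])
    probes = [int(img[y - 3, x]), int(img[y + 3, x]),
              int(img[y, x - 3]), int(img[y, x + 3])]
    brighter = sum(p >= c + t for p in probes)
    darker = sum(p <= c - t for p in probes)
    return max(brighter, darker) >= 3    # survives the quick test; do the full check later
```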
2. Laser feature point detection and matching
Curvature-point features in the laser data are detected by evaluating the curvature at each laser data point (estimated as the difference between the original laser data and its smoothed version). Locally stable, sparse feature points are kept through non-maximum suppression. The block-wise distribution of the laser data points around each feature point is used as its description vector. Feature-point matching is performed with these description vectors, and the pose relationship between the two laser frames is computed from the feature matches with the RANSAC algorithm.
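A sketch of the RANSAC step that estimates the rigid pose between two laser scans from matched feature points; the minimal sample size, iteration count and inlier threshold are assumptions.

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (SVD/Umeyama)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_pose(src, dst, iters=100, inlier_thresh=0.1):
    """Estimate the pose between two laser scans from matched feature points.
    src, dst: (N,2) matched point pairs; thresholds and iteration count are assumptions."""
    best, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)   # minimal sample for a 2D rigid pose
        R, t = rigid_2d(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = (err < inlier_thresh).sum()
        if inliers > best_inliers:
            best_inliers = inliers
            best = (R, t)
    return best, best_inliers
```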
3. Key frame detection
Keyframe detection uses several criteria, and the current frame is taken as a keyframe as soon as any one of them is satisfied: 1) the robot has rotated far enough; 2) the robot has moved far enough; 3) the content overlap between the current frame image and the previous keyframe image, estimated from the feature-point matches between them, is below a threshold; 4) the content overlap between the current laser frame and the previous keyframe laser, estimated from the laser registration degree (via the ICP algorithm), is below a threshold.
4. 3D coordinate optimization of image feature points
For each feature point, the pose information provided by the laser positioning module is used to directly compute the point's 3D coordinates from two keyframes whose poses differ sufficiently. The 3D coordinates are then refined in the global map coordinate system using the point's pixel coordinates in several keyframes. The optimization objective is to find the optimal 3D coordinates such that the error between the projection of the point onto each keyframe image (according to that keyframe's pose) and its pixel coordinates, i.e. the reprojection error, is as small as possible. The 3D coordinates of the feature points are optimized with the LM algorithm.
5. Word bag vector calculation
To compute the bag-of-words vector of a frame of image or laser data, the feature-point description vectors of many frames are first clustered to obtain a feature description dictionary. For a given frame, the dictionary word index of every feature-point description vector is computed, the occurrence frequency of all words in that frame is counted, and these word frequencies are used as the bag-of-words vector of the image or laser frame for keyframe database construction and retrieval.
6. Key frame library and search
Efficient retrieval libraries are built separately for image keyframes and laser keyframes. A keyframe is treated like a document, and its bag-of-words vector like the frequencies of the different words in that document, so standard information-retrieval algorithms and techniques can be used to index and retrieve keyframes. The bag-of-words vectors of all keyframes are analyzed with the TF-IDF scheme from information retrieval, the inverse document frequencies of the words are computed, and a TF-IDF retrieval library of the keyframes is built. Keyframe retrieval is then carried out with the retrieval algorithm corresponding to the TF-IDF model.
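A compact sketch of such a TF-IDF keyframe index and retrieval, using cosine similarity over TF-IDF-weighted bag-of-words vectors; the similarity measure and top-N cut-off are assumptions.

```python
import numpy as np

def build_tfidf_index(kf_bows):
    """kf_bows: (K, W) matrix of keyframe bag-of-words (term-frequency) vectors.
    Returns the IDF weights and the TF-IDF-weighted, L2-normalized keyframe matrix."""
    K = kf_bows.shape[0]
    df = (kf_bows > 0).sum(axis=0)                    # document frequency of each word
    idf = np.log(K / np.maximum(df, 1))               # inverse document frequency
    mat = kf_bows * idf
    mat /= np.maximum(np.linalg.norm(mat, axis=1, keepdims=True), 1e-12)
    return idf, mat

def retrieve(query_bow, idf, kf_mat, top_n=5):
    """Return indices of the top_n most similar keyframes (cosine similarity)."""
    q = query_bow * idf
    q /= max(np.linalg.norm(q), 1e-12)
    scores = kf_mat @ q
    return np.argsort(scores)[::-1][:top_n]
```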
7. Visual pose estimation and quality estimation
After the feature matching result between the current frame image and a keyframe image is obtained, the pose of the current frame is estimated using the 3D coordinates of the keyframe feature points optimized in the library-building stage. The goal of pose estimation is to find the optimal current-frame pose such that the error between the projection of the feature points' 3D coordinates into the current frame image (using that pose) and their measured pixel coordinates, i.e. the reprojection error, is as small as possible. The visual pose estimate is computed with the LM algorithm, and the reprojection error serves as the quality indicator of the visual pose estimate.
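As one possible realization of this LM-based pose solve, the sketch below uses OpenCV's iterative PnP solver (which performs a Levenberg-Marquardt refinement) and returns the mean reprojection error as the quality indicator; the undistorted-image assumption and the wrapper function are illustrative, not part of the patent.

```python
import numpy as np
import cv2

def visual_pose_and_quality(pts3d, pts2d, K):
    """Estimate the current-frame pose from 3D keyframe points and their 2D matches.

    pts3d: (N,3) map coordinates of matched keyframe feature points (from library building).
    pts2d: (N,2) matched pixel coordinates in the current frame.
    Returns the pose (as rvec/tvec) and the mean reprojection error as a quality score.
    """
    dist = np.zeros(5)                                    # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                  pts2d.astype(np.float64), K, dist)
    proj, _ = cv2.projectPoints(pts3d.astype(np.float64), rvec, tvec, K, dist)
    reproj_err = np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1).mean()
    return ok, rvec, tvec, reproj_err
```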
8. Laser pose estimation and quality estimation
After the feature matching result between the current laser frame and a keyframe laser is obtained, the RANSAC algorithm is used to compute a preliminary pose relationship between the two laser frames from the feature matches. Using this preliminary pose as the initial value, fine registration between the current laser frame and the keyframe laser is performed with the ICP algorithm to obtain the registration pose of the current frame. The current laser frame is then further matched against the sub-map to obtain the laser pose estimate of the current frame. The registration degree between the current laser frame and the keyframe laser, together with the sub-map matching degree, serve as the quality indicators of the laser pose estimate.
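A sketch of the ICP fine-registration step, refining the RANSAC initial pose and reporting a registration-degree score; the point-to-point error metric, iteration count and inlier gate are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R0, t0, iters=20):
    """Point-to-point ICP refinement of the RANSAC initial pose.

    src, dst: (N,2)/(M,2) laser points of the current frame and keyframe; R0, t0: initial guess.
    Returns the refined pose and the fraction of well-registered points as a
    registration-degree quality score (the 0.1 m gate is an assumption)."""
    tree = cKDTree(dst)
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)                 # nearest neighbours in the keyframe scan
        # re-estimate the rigid transform from the current correspondences (SVD)
        cs, cd = src.mean(axis=0), dst[idx].mean(axis=0)
        H = (src - cs).T @ (dst[idx] - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
    d, _ = tree.query(src @ R.T + t)
    registration_degree = float((d < 0.1).mean())
    return R, t, registration_degree
```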
9. Global positioning result comprehensive evaluation
The different positioning results from visual and laser pose estimation are considered together: unreliable pose estimates are eliminated automatically through pose-graph optimization weighted by the overall quality evaluation, the multiple visual and laser pose estimates are fused, and the final global positioning result is output. Through this pose-graph optimization, unreliable estimates are identified automatically by mutual verification among the visual and laser pose estimates, which improves the robustness and reliability of global positioning, while fusing the multiple estimates improves its stability and accuracy.
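The sketch below is a deliberately simplified stand-in for the quality-weighted pose-graph fusion: the pose hypotheses are cross-checked against a quality-weighted consensus (the mutual-verification idea) and the surviving ones are fused. A full implementation would optimize an actual pose graph (for example with g2o or Ceres); the consensus threshold here is an assumption.

```python
import numpy as np

def fuse_hypotheses(poses, qualities, consensus_thresh=0.5):
    """Fuse visual and laser global-pose hypotheses with simple mutual verification.

    poses: list of (x, y, yaw) hypotheses from visual and laser pose estimation.
    qualities: matching list of quality weights (higher = more reliable).
    """
    P = np.asarray(poses, dtype=float)
    w = np.asarray(qualities, dtype=float)
    w = w / w.sum()
    consensus = np.average(P[:, :2], axis=0, weights=w)        # weighted position consensus
    dev = np.linalg.norm(P[:, :2] - consensus, axis=1)
    keep = dev < consensus_thresh                              # drop hypotheses far from consensus
    if not keep.any():
        return None
    wk = w[keep] / w[keep].sum()
    x, y = np.average(P[keep, :2], axis=0, weights=wk)
    yaw = np.arctan2(np.average(np.sin(P[keep, 2]), weights=wk),
                     np.average(np.cos(P[keep, 2]), weights=wk))
    return x, y, yaw
```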
On the basis of the technical scheme, the invention achieves the following technical effects:
(1) Low system resource usage and high safety and reliability. The system supports the openEuler domestic server operating system, occupies few system resources, and suits scenarios with high safety and reliability requirements such as banks, customs and public places. (2) Strong functionality: positioning is achieved in complex environments such as crowded scenes, structurally similar scenes and texture-less scenes. (3) Good performance: the global positioning time is short, with relocation completed within 1 second, 2-10 times faster than existing algorithms. (4) Low cost: the optimized algorithm runs on a customized minimal RK3399 development board, which overcomes the high hardware requirements of common visual SLAM algorithms, dispenses with a costly industrial PC, saves hardware cost and reduces battery consumption.
The above is only a preferred embodiment of the present invention, and the protection scope of the invention is not limited to the above examples; all technical solutions falling within the concept of the invention belong to its protection scope. It should be noted that modifications and adaptations that do not depart from the principles of the invention are also intended to fall within its scope as set forth in the following claims.

Claims (7)

1. An integrated laser-and-vision global positioning method for an indoor robot, characterized by comprising the following steps:
S1, when the robot reaches a new environment or the environment has changed, receiving the robot pose information provided by the laser positioning module, simultaneously acquiring visual images and laser data with the visual global positioning module and the laser global positioning module, and establishing a global positioning environment library comprising a visual global positioning library and a laser global positioning library; the visual global positioning module receives the poses of all visual frames sent by the laser positioning module together with the keyframe information sent by the laser global positioning module, and builds a visual positioning map in the laser map coordinate system based on the laser mapping result;
S2, during initial positioning, relocation or loop-closure detection, retrieving and matching the visual image and laser information of the robot's current position against the global positioning environment library to determine the robot pose;
wherein a frame is taken as a keyframe if it satisfies at least one of the following conditions:
(1) the robot's rotation angle is larger than a preset angle threshold; (2) the robot's movement distance is larger than a preset distance threshold; (3) the content overlap between the current frame image and the previous keyframe image, evaluated from the feature-point matches between the two images, is smaller than a first preset overlap threshold; (4) the content overlap between the current laser frame and the previous keyframe laser, evaluated from the laser registration degree computed with the ICP algorithm, is smaller than a second preset overlap threshold;
the process of establishing the global positioning environment library comprises the following steps:
S11, aligning the system clocks of the laser positioning module, the visual global positioning module and the laser global positioning module;
S12, after receiving the start-mapping command sent by the user interface module, sequentially performing the following operations:
S121, acquiring visual images of the current scene with the visual global positioning module, preprocessing the acquired images online, and extracting and storing their image feature information;
S122, driving the laser positioning module to build a laser map of the current scene;
S123, collecting laser data of the current scene with the laser global positioning module, preprocessing the collected laser data online, detecting keyframes in the laser data, and extracting and storing the laser feature information of each keyframe;
S13, receiving the end-mapping command sent by the user interface module;
S14, the visual global positioning module calls the interface of the laser positioning module to obtain the poses of all image frames, receives the keyframe information sent by the laser global positioning module, performs inter-frame feature-point matching and tracking on the stored image features, extracts additional keyframes, optimizes the 3D coordinates of the feature points, and builds and stores the visual bag-of-words library of the keyframes, yielding the visual global positioning library;
S15, the laser global positioning module calls the interface of the laser positioning module to obtain the pose and sub-map information of all keyframes, establishes the relation between keyframes and sub-maps, and builds and stores the laser bag-of-words library of the keyframes, yielding the laser global positioning library;
the process of retrieving and matching the visual image and laser information of the robot's current position against the global positioning environment library and determining the robot pose comprises the following steps:
S21, aligning the system clocks of the laser positioning module, the visual global positioning module and the laser global positioning module;
S22, the visual global positioning module loads the visual global positioning library, the laser global positioning module loads the laser global positioning library, and the laser global positioning module subscribes to the odom tf;
S23, the visual global positioning module extracts image features from the current frame image, computes its bag-of-words vector, and retrieves keyframes from the visual global positioning library;
S24, the laser global positioning module extracts laser features from the laser data of the current frame, computes its bag-of-words vector, and retrieves keyframes from the laser global positioning library;
S25, combining the keyframe retrieval results of the visual and laser global positioning modules to construct a candidate keyframe set;
S26, performing image feature matching and laser feature matching between each keyframe in the candidate set and the current frame, removing invalid keyframes, and constructing a preferred keyframe set;
S27, for each keyframe in the preferred keyframe set, obtaining the matching result between its image features and those of the current frame, estimating the visual pose from this matching result, and evaluating the quality of the visual pose estimate;
S28, matching each sub-map in the laser sub-map set corresponding to the preferred keyframe set against the laser data of the current frame, estimating the laser pose, and evaluating the quality of the laser pose estimate;
S29, combining the pose and quality estimates of the visual and laser global positioning modules, then determining and publishing the final global positioning result.
2. The integrated laser-and-vision global positioning method for an indoor robot according to claim 1, wherein the process of performing inter-frame feature-point matching and tracking on the stored image features comprises the following steps:
during inter-frame matching, using inter-frame motion prediction to obtain, for each feature point of the previous frame, its predicted position in the current frame, then comparing the feature point with the feature points around that predicted position, and choosing the one whose description vector is closest as the match.
3. The integrated laser-and-vision global positioning method for an indoor robot according to claim 1, wherein the process of optimizing the 3D coordinates of the feature points comprises the following steps:
S141, obtaining the pose information provided by the laser positioning module and evaluating the differences between poses;
S142, for each feature point, selecting any two keyframes whose pose difference is larger than a preset difference threshold and directly computing the 3D coordinates of the feature point from them;
S143, optimizing the 3D coordinates of the feature point in the global map coordinate system using its pixel coordinates in several keyframes; the optimization objective is to find the optimal 3D coordinates such that the error between the projection of the point onto each keyframe image (according to that keyframe's pose) and the point's pixel coordinates is as small as possible.
4. The integrated laser-and-vision global positioning method for an indoor robot according to claim 1, wherein the process of obtaining, for each keyframe in the preferred keyframe set, the matching result with the current frame image features comprises the following steps:
when matching the current frame against a keyframe, using the feature-point clustering result: each feature point of the current frame is compared only with the keyframe feature points belonging to the same cluster, and the one whose description vector is closest is chosen as the match.
5. The integrated laser-and-vision global positioning method for an indoor robot according to claim 1, wherein the process of matching each sub-map in the laser sub-map set corresponding to the preferred keyframe set against the laser data of the current frame and estimating the laser pose comprises the following steps:
S281, evaluating the curvature at each laser data point as the difference between the original laser data and its smoothed version;
S282, detecting curvature-point features in the laser data, and keeping locally stable, sparse feature points through non-maximum suppression;
S283, using the block-wise distribution of the laser data points around each feature point as its description vector, matching feature points using these description vectors, and computing the pose relationship between the two laser frames from the feature matches with the RANSAC algorithm.
6. The integrated laser-and-vision global positioning method for an indoor robot according to claim 1, wherein computing the bag-of-words vector comprises the following steps:
clustering the feature-point description vectors of multiple frames of images or laser data to obtain a feature description dictionary;
for a given frame of image or laser data, computing the dictionary word index of every feature-point description vector;
counting the occurrence frequency of all words in the image or laser data, and using these word frequencies as the bag-of-words vector of that image or laser frame.
7. The laser and vision integrated global positioning method for an indoor robot according to claim 1, wherein combining the pose estimation and quality evaluation results of the visual global positioning module and the laser global positioning module to determine and publish the final global positioning result comprises the following steps:
S291, taking the reprojection error as the quality evaluation index of the visual pose estimation;
S292, taking the registration degree between the current frame laser data and the key frame laser data, together with the sub-map matching degree, as the quality evaluation indexes of the laser pose estimation;
S293, automatically eliminating unreliable pose estimates through pose graph optimization weighted by the combined quality evaluations, fusing the visual and laser pose estimation results, and outputting the final global positioning result.
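A deliberately simplified Python stand-in for S291-S293: rather than a full pose-graph optimization, the sketch maps quality scores to weights, drops unreliable candidates, and fuses the remaining visual and laser estimates by a quality-weighted average over (x, y, yaw); the score mapping and the rejection threshold are assumptions, not values from the patent.

```python
import numpy as np

def reprojection_quality(mean_reproj_err_px, scale=2.0):
    """S291 (illustrative): map a mean reprojection error in pixels to a (0, 1]
    quality score; smaller error -> higher quality."""
    return 1.0 / (1.0 + mean_reproj_err_px / scale)

def fuse_pose_estimates(poses, qualities, reject_below=0.2):
    """S293 (simplified): drop candidates whose quality is too low, then fuse
    the surviving visual/laser estimates (x, y, yaw) by a quality-weighted
    average, averaging the yaw angle via unit vectors."""
    poses = np.asarray(poses, dtype=float)
    q = np.asarray(qualities, dtype=float)
    keep = q >= reject_below
    if not np.any(keep):
        return None                      # no reliable estimate to publish
    poses, w = poses[keep], q[keep] / q[keep].sum()
    x, y = w @ poses[:, :2]
    yaw = np.arctan2(w @ np.sin(poses[:, 2]), w @ np.cos(poses[:, 2]))
    return np.array([x, y, yaw])
```

In the claimed method the fusion is a quality-weighted pose-graph optimization, so this averaging should be read only as the shape of the interface: several candidate poses with quality scores go in, one published global pose comes out.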
CN202011373978.4A 2020-11-30 2020-11-30 Laser and vision integrated global positioning method for indoor robot Active CN112596064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011373978.4A CN112596064B (en) 2020-11-30 2020-11-30 Laser and vision integrated global positioning method for indoor robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011373978.4A CN112596064B (en) 2020-11-30 2020-11-30 Laser and vision integrated global positioning method for indoor robot

Publications (2)

Publication Number Publication Date
CN112596064A CN112596064A (en) 2021-04-02
CN112596064B true CN112596064B (en) 2024-03-08

Family

ID=75187646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011373978.4A Active CN112596064B (en) 2020-11-30 2020-11-30 Laser and vision integrated global positioning method for indoor robot

Country Status (1)

Country Link
CN (1) CN112596064B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113140004B (en) * 2021-04-23 2024-04-23 南京航空航天大学 Laser radar-based unmanned system rapid repositioning method and device
WO2023280274A1 (en) * 2021-07-07 2023-01-12 The Hong Kong University Of Science And Technology Geometric structure aided visual localization method and system
CN113674409A (en) * 2021-07-20 2021-11-19 中国科学技术大学先进技术研究院 Vision-based multi-robot instant positioning and synchronous drawing establishing method, system and medium
CN113624239A (en) * 2021-08-11 2021-11-09 火种源码(中山)科技有限公司 Laser mapping method and device based on hierarchical switchable sparse pose map optimization
CN115267796B (en) * 2022-08-17 2024-04-09 深圳市普渡科技有限公司 Positioning method, positioning device, robot and storage medium
CN117804423A (en) * 2022-09-26 2024-04-02 华为云计算技术有限公司 Repositioning method and device
CN115657062B (en) * 2022-12-27 2023-03-17 理工雷科智途(北京)科技有限公司 Method and device for quickly relocating equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565457B2 (en) * 2017-08-23 2020-02-18 Tusimple, Inc. Feature matching and correspondence refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016085633A (en) * 2014-10-27 2016-05-19 株式会社デンソー Object identification unit, driving assist system, and vehicle and object identification method
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
US9870624B1 (en) * 2017-01-13 2018-01-16 Otsaw Digital Pte. Ltd. Three-dimensional mapping of an environment
CN107356252A (en) * 2017-06-02 2017-11-17 青岛克路德机器人有限公司 A kind of Position Method for Indoor Robot for merging visual odometry and physics odometer
WO2019040800A1 (en) * 2017-08-23 2019-02-28 TuSimple 3d submap reconstruction system and method for centimeter precision localization using camera-based submap and lidar-based global map
CN111373337A (en) * 2017-08-23 2020-07-03 图森有限公司 3D sub-map reconstruction system and method for centimeter-accurate positioning using camera-based sub-maps and LIDAR-based global maps
CN111065980A (en) * 2017-08-23 2020-04-24 图森有限公司 System and method for centimeter-accurate positioning using camera-based sub-maps and LIDAR-based global maps
CN109425348A (en) * 2017-08-23 2019-03-05 北京图森未来科技有限公司 A kind of while positioning and the method and apparatus for building figure
CN108152823A (en) * 2017-12-14 2018-06-12 北京信息科技大学 The unmanned fork truck navigation system and its positioning navigation method of a kind of view-based access control model
CN108256574A (en) * 2018-01-16 2018-07-06 广东省智能制造研究所 Robot localization method and device
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment
CN108759844A (en) * 2018-06-07 2018-11-06 科沃斯商用机器人有限公司 Robot relocates and environmental map construction method, robot and storage medium
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of the robot fast relocation method and system of view-based access control model dictionary
CN110796683A (en) * 2019-10-15 2020-02-14 浙江工业大学 Repositioning method based on visual feature combined laser SLAM
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111258313A (en) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Multi-sensor fusion SLAM system and robot
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111652179A (en) * 2020-06-15 2020-09-11 东风汽车股份有限公司 Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN111795687A (en) * 2020-06-29 2020-10-20 深圳市优必选科技股份有限公司 Robot map updating method and device, readable storage medium and robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"CPFG-SLAM:a Robust Simultaneous Localization and Mapping based on LIDAR in Off-Road Environment";Kaijin Ji 等;《2018 IEEE Intelligent Vehicles Symposium (IV)》;650-655 *
"Vision-and-Lidar Based Real-time Outdoor Localization for Unmanned Ground Vehicles without GPS";Fangyi Wu 等;《2018 IEEE International Conference on Information and Automation (ICIA)》;232-237 *
"基于激光雷达和深度相机的AGV自主定位方法研究";黄婷;《中国优秀硕士论文全文数据库》;全文 *
"多摄像机人体姿态跟踪";孙洛 等;《清华大学学报(自然科学版)》;966-971 *

Also Published As

Publication number Publication date
CN112596064A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112596064B (en) Laser and vision integrated global positioning method for indoor robot
CN112486171B (en) Robot obstacle avoidance method based on vision
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
Li et al. Structure-slam: Low-drift monocular slam in indoor environments
McDonald et al. Real-time 6-DOF multi-session visual SLAM over large-scale environments
JP2021509215A (en) Navigation methods, devices, devices, and storage media based on ground texture images
Grant et al. Efficient Velodyne SLAM with point and plane features
McDonald et al. 6-DOF multi-session visual SLAM using anchor nodes
CN113537208A (en) Visual positioning method and system based on semantic ORB-SLAM technology
WO2013117940A2 (en) Method of locating a sensor and related apparatus
Ahn et al. A practical approach for EKF-SLAM in an indoor environment: fusing ultrasonic sensors and stereo camera
CN110969648B (en) 3D target tracking method and system based on point cloud sequence data
CN111161334B (en) Semantic map construction method based on deep learning
CN114937083B (en) Laser SLAM system and method applied to dynamic environment
Zhang et al. High-precision localization using ground texture
CN113674416A (en) Three-dimensional map construction method and device, electronic equipment and storage medium
Feng et al. Visual map construction using RGB-D sensors for image-based localization in indoor environments
Lu et al. Indoor localization via multi-view images and videos
CN110827320A (en) Target tracking method and device based on time sequence prediction
Hile et al. Information overlay for camera phones in indoor environments
Li et al. Improving synthetic 3D model-aided indoor image localization via domain adaptation
Xu et al. A critical analysis of image-based camera pose estimation techniques
Strobl et al. Image-based pose estimation for 3-D modeling in rapid, hand-held motion
Li et al. Localization for intelligent vehicles in underground car parks based on semantic information
CN117213470B (en) Multi-machine fragment map aggregation updating method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant