CN113124854A - Visual positioning method, map construction method and map construction device


Info

Publication number
CN113124854A
Authority
CN
China
Prior art keywords
map
image
node
current
nodes
Prior art date
Legal status
Granted
Application number
CN201911417241.5A
Other languages
Chinese (zh)
Other versions
CN113124854B (en)
Inventor
龙学雄
易雨亭
李建禹
Current Assignee
Hangzhou Hikrobot Technology Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Application filed by Hangzhou Hikrobot Technology Co Ltd
Priority to CN201911417241.5A
Publication of CN113124854A
Application granted
Publication of CN113124854B
Legal status: Active

Classifications

    • G01C 21/005: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/165: Navigation by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G06T 17/05: Three dimensional [3D] modelling: geographic models
    • G06T 19/003: Manipulating 3D models or images for computer graphics: navigation within 3D models or images
    • G09B 29/005: Maps: map projections or methods associated specifically therewith
    • G06T 2200/04: Indexing scheme for image data processing or generation, in general, involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The visual positioning method comprises: loading a map constructed based on ground images; at any moment during travel, acquiring a current ground image and current inertial sensor data, extracting image feature points from the current ground image frame, and calculating matching information of the image feature points; searching the map for map nodes according to the current inertial sensor data; and matching the image feature points against each map point in the map nodes using the map point position information and matching information, and, if the matching succeeds, calculating the global pose of the current image frame from the matched feature points to obtain the current positioning result. The invention builds the map and performs visual positioning using ground images, thereby reducing the dependence on artificial markers.

Description

Visual positioning method, map construction method and map construction device
Technical Field
The invention relates to the field of computer-vision-based positioning and navigation, and in particular to a map construction method and a method for positioning based on the constructed map.
Background
Autonomous navigation in unknown environments is one of the key tasks for various mobile robot or smart vehicle applications. The robot needs to obtain its own positioning information in real time while roaming over long distances. With the increasing cost-effectiveness of visual sensors and the development of visual computing, vision-based navigation and positioning methods are now being widely applied in related fields. Visual navigation and positioning requires little hardware, places low demands on volume, power consumption and payload, and is suitable for a variety of complex unknown environments, so visual positioning has become the natural direction for autonomous robot navigation.
Compared with earlier electromagnetic navigation and magnetic tape navigation, two-dimensional code navigation is a common scheme at present: it positions more accurately than magnetic tape navigation and its paths are easier to lay out and change, but the two-dimensional codes must be maintained regularly, so the deployment and later maintenance costs are very high. Laser navigation does not require two-dimensional codes, color bands, magnetic strips or other artificially laid markers and thus truly leaves the operating environment unmodified; however, it places high demands on the environment, such as external lighting, the ground and visibility, it struggles with long-corridor environments, and the cost of the laser itself is relatively high.
Disclosure of Invention
The invention provides a visual positioning method which realizes positioning without depending on artificial markers.
The map construction method provided by the invention is implemented as follows:
setting an image acquisition interval,
controlling a robot body to move,
when the distance moved by the robot reaches the acquisition interval, triggering acquisition of the current ground image and the current inertial sensor output data, and taking the current acquisition position as a map node to obtain the image frame and the inertial sensor data of the map node;
extracting at least one image feature point from the image frame of the map node, and calculating matching information of each image feature point; acquiring global position information of each map point generated for the map node based on the inertial sensor data of the map node, wherein the map points are the set of all feature points in the image frame of the map node;
and storing the map node information, wherein the map node information comprises the position information of each map point and the matching information of each map point, the matching information of a map point being the matching information of the corresponding image feature point.
Preferably, the acquiring of the position information of each map point generated for the map node based on the inertial sensor data of the map node comprises,
acquiring global pose data of the robot to obtain global pose information of the map node;
acquiring the position information of each map point in the map node based on the global pose information of the map node;
the map node information comprises the global pose information of the map node;
and the storing of the map node information comprises taking each map node as a parent node and taking the adjacent map nodes having a spatial position relationship with that map node as its child nodes, so that the map nodes are stored according to a parent-child association.
Preferably, the taking of each map node as a parent node and of the map nodes having an adjacent spatial position relationship with the parent node as child nodes comprises,
taking any map node as a parent node, and taking an adjacent map node located in any of the front, back, left or right positions relative to the parent node as a child node,
and if no adjacent map node exists in one of the front, back, left or right positions relative to the parent node, the corresponding child node is empty.
Preferably, the image acquisition interval is determined according to the error magnitude of the inertial sensor,
the controlling of the robot body to move comprises controlling the robot body to move along a planned mapping path;
the triggering of the acquisition of the current ground image and the current inertial sensor output data when the distance moved by the robot reaches the acquisition interval comprises,
determining the current moving distance of the robot according to the position information in the current global pose data of the robot output by the inertial sensor,
when the current moving distance reaches the acquisition interval, triggering at least one acquisition of the current ground texture image and the current inertial sensor output data, and adding the acquired current inertial sensor data to the currently acquired image frame in the form of a watermark,
repeatedly executing the step of determining the current moving distance of the robot according to the position information in the current global pose data of the robot output by the inertial sensor until the robot completes the planned mapping path;
and taking each acquisition position as a map node, and obtaining the image frame and the inertial sensor data of each map node for off-line processing.
Preferably, the planned mapping path is an unclosed path from a starting point to an end point,
the controlling of the robot body to move along the planned mapping path comprises controlling the robot body to move from the starting point to the end point and then back from the end point to the starting point along the original path;
the acquiring of the global pose data of the robot based on the inertial sensor data of the map node comprises extracting the inertial sensor data from the image frame to which the inertial sensor data watermark was added, and obtaining the position of the map node corresponding to that image frame;
the acquiring of the global pose data of the robot based on the inertial sensor data of the map node to obtain the global pose information of the map node further comprises,
performing closed-loop detection on all image frames to obtain closed-loop frames,
for the closed-loop frames, calculating the relative pose between the closed-loop frames to obtain a second relative pose,
for the non-closed-loop frames, calculating the relative pose between two adjacent frames to obtain a first relative pose,
performing graph optimization of the map node poses based on the first relative pose, the second relative pose and all map nodes associated with them, to obtain the optimized poses of all map nodes;
the generating of the global position information of each map point of the map node based on the global pose information of the map node comprises,
for each map node, obtaining the three-dimensional position information of each map point of the map node in the world coordinate system using the optimized pose, the camera intrinsics and the installation height of the camera.
Preferably, the performing of closed-loop detection on all image frames to obtain closed-loop frames comprises,
for any image frame,
taking the other image frames among all the image frames as compared frames;
determining the position of the map node corresponding to the image frame according to the position information in the robot global pose data corresponding to the image frame;
determining the position of the map node corresponding to a compared image frame according to the position information in the robot global pose data corresponding to the compared image frame,
if the distance between the map node position corresponding to the image frame and the map node position corresponding to the compared image frame is smaller than a set distance threshold, performing image feature point matching between the image frame and the compared image frame, and if the matching condition is met, judging the two frames to be closed-loop frames;
otherwise, judging the image frame and the compared image frame to be non-closed-loop frames.
Preferably, the performing of closed-loop detection on all image frames to obtain closed-loop frames comprises,
numbering the image frames in ascending order of acquisition time,
for any image frame i,
determining a compared image frame j, wherein j < i, and i and j are natural numbers;
determining the positions of the map nodes corresponding to image frame i and image frame j according to the position information in the robot global pose data corresponding to image frame i and image frame j respectively;
if the distance between the map node position corresponding to image frame i and the map node position corresponding to image frame j is smaller than a set distance threshold, performing image feature point matching between image frame i and image frame j, and if the matching condition is met, judging image frame i and image frame j to be closed-loop frames;
otherwise, judging image frame i and image frame j to be non-closed-loop frames;
and repeating the above steps until all image frames have each been compared with the historical frames.
The invention provides a visual positioning method, which comprises the following steps,
loading a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
the current ground image and current inertial sensor data are acquired at any time during the trip,
extracting image feature points based on the current ground image frame, and calculating matching information of the image feature points;
searching map nodes based on a map according to the current inertial sensor data;
and matching the image feature points against each map point in the map nodes using the map point position information and matching information, and if the matching succeeds, calculating the global pose of the current image frame from the matched feature points to obtain the current positioning result.
Preferably, the ground image comprises at least a ground texture image, and each map node further comprises global pose information of the map node,
the searching of the map for map nodes according to the current inertial sensor data comprises,
acquiring the global pose of the robot according to the current inertial sensor data;
according to the position information in the global pose of the robot, searching, based on the global pose information of the map nodes in the map, for the map node closest to that position, and taking the found map node as a candidate map node;
the matching of the image feature points against each map point in the map nodes using the map point position information and matching information comprises,
taking the found map nodes as candidate map nodes,
and matching the image feature points against each map point in the candidate map nodes using the map point position information and matching information;
each map node in the map is taken as a parent node and is associated with the adjacent map nodes having a spatial position relationship with it, the adjacent map nodes being stored as child nodes of that parent node;
between the step of loading the map constructed based on the ground image and the step of acquiring the current ground image and current inertial sensor data at any moment during travel, the method comprises,
controlling the robot to move to the position of any map node to obtain the position of the current initial map node;
predicting a next map node according to a planned path based on the current map node and child nodes thereof;
the searching, according to the position information in the global pose of the robot, for the map node closest to that position based on the map and the taking of the found map node as a candidate map node comprise,
at any moment during travel, acquiring the current image and inertial sensor data, estimating the global pose of the current image frame, and obtaining the current position;
searching the map for the predicted map node closest to the current position, and taking the found predicted map node and its child nodes as candidate map nodes;
the method further comprises: if the matching is unsuccessful, judging that the predicted map node has not been reached, and taking the global pose of the robot acquired based on the current inertial sensor data as the current positioning result.
Preferably, the acquiring of the global pose of the robot based on the current inertial sensor data comprises obtaining an inter-frame relative pose from the inertial sensor data, and estimating the global pose of the current image frame using the inter-frame relative pose and the global pose of the last observed map node, so as to obtain the current position.
Preferably, the controlling of the robot to move to the position of any map node and acquiring the current initial map node position comprises,
collecting a ground texture image at that map node, extracting the image feature points of the current image frame, matching the image feature points against each map point in all map nodes of the map, and estimating the initial global pose of the robot based on the matched feature points.
Preferably, said matching the image feature point with each map point in all map nodes in the map comprises,
map nodes in a certain range are set, and the image feature points are matched with all map points of all map nodes in the set range.
The invention also provides a visual positioning device, comprising,
the map loading module is used for loading a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
an image acquisition module for acquiring the current ground image to obtain the current ground image frame,
an inertial sensor for acquiring current inertial sensor data,
the computing module, which extracts image feature points from the current ground image frame acquired by the image acquisition module and calculates matching information of the image feature points;
searches the map for map nodes according to the current inertial sensor data acquired by the inertial sensor;
and matches the image feature points against each map point in the map nodes using the map point position information and matching information, and if the matching succeeds, calculates the global pose of the current image frame from the matched feature points to obtain the current positioning result.
The invention provides a mobile robot, comprising,
the visual positioning device is used for outputting a positioning result through visual positioning;
the driving device outputs a motion instruction based on the positioning result output by the visual positioning device;
the execution mechanism executes the motion instruction;
wherein,
the visual positioning device comprises a visual positioning device,
the map loading module loads a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
an image acquisition module for acquiring the current ground image to obtain the current ground image frame,
an inertial sensor for acquiring current inertial sensor data,
the computing module, which extracts image feature points from the current ground image frame acquired by the image acquisition module and calculates matching information of the image feature points; searches the map for map nodes according to the current inertial sensor data acquired by the inertial sensor;
and matches the image feature points against each map point in the map nodes using the map point position information and matching information, and if the matching succeeds, calculates the global pose of the current image frame from the matched feature points to obtain the current positioning result.
The present invention provides a computer readable storage medium having stored therein a computer program which, when executed by a processor, performs any of the mapping steps described above, and/or any of the visual positioning steps described above.
According to the embodiments of the invention, the map is constructed and visual positioning is performed using ground images, the map being formed by building map node information along the planned path. This reduces the dependence on artificial markers, makes routine map maintenance convenient, lowers the map maintenance cost, and allows the coordinate system to be established arbitrarily during map construction. During visual positioning, positioning is carried out using the inertial sensor and the ground image, without relying on artificially marked images, and the positioning result is stable.
Drawings
Fig. 1 is a schematic flow chart of a process for constructing a map based on a ground texture.
Fig. 2 is a schematic diagram of a path of a robot to complete a work task.
Fig. 3 is a schematic diagram of data collected by an inertial sensor.
Fig. 4 is a schematic diagram of closed loop detection.
Fig. 5 is a pose graph after all map nodes are optimized.
Fig. 6 is a schematic diagram of a map node storage form of a two-dimensional map.
Fig. 7a to 7e are schematic diagrams of poses of the map nodes of various paths after optimization is completed.
Fig. 8 is a schematic flow chart of the robot performing visual positioning based on the constructed map.
Fig. 9 is a schematic view of a visual positioning apparatus according to an embodiment of the invention.
Fig. 10 is a schematic diagram of a mobile robot apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The visual positioning method realizes positioning by matching the acquired features in the ground texture image with map node information in a map, wherein the map is constructed based on the ground texture and comprises map node pose information, feature point matching information and feature point position information.
A method of performing map construction based on ground texture is described below. In this embodiment, the robot is equipped with an inertial sensor, such as a wheel encoder and an Inertial Measurement Unit (IMU), which obtains the pose of the robot in the world coordinate system by integrating its measurements, and is further equipped with a downward-looking lens image acquisition module for capturing ground texture images, for example a camera mounted on the bottom surface of the robot carrier with its lens facing the ground. The downward-looking lens image acquisition module further comprises a fill light for ensuring good illumination and a lens hood assembly for keeping the illumination of the downward-looking field of view stable.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a process of constructing a map based on a ground texture.
Step 101, planning a map building path according to the task requirements of the robot, so that the built map can cover all working ranges and meet the task requirements.
Taking the task requirement of fig. 2 as an example, fig. 2 is a schematic diagram of a path along which a robot completes a work task. The path is a rectangle formed by four connected sides.
Step 102, based on the planned mapping path, setting the spacing at which ground texture images are acquired according to the task requirements, for example acquiring an image every 1 m; the error magnitude of the inertial sensor can also be taken into account: the larger the inertial sensor error, the smaller the acquisition spacing should be set. The positions where images are acquired serve as map nodes representing key positions on the path, so that positioning at these positions is more accurate; this means that each map node corresponds to image data.
Preferably, for places where the path changes sharply, such as turns or path intersections, an auxiliary marker can be added on the ground to make the map nodes easier to distinguish.
Step 103, controlling the robot to acquire data along the planned path at the set acquisition interval based on the robot pose output by the inertial sensor; that is, the distance traveled by the robot is obtained from the position information in the robot pose in the world coordinate system given by the inertial sensor, and when the distance traveled reaches the acquisition interval, acquisition of an image of the current ground texture is triggered.
The robot pose output by the inertial sensor is obtained by integrating the inertial sensing data. The principle is as follows: the mean of the accelerations at the current time t and the next time t+1 is taken as the average acceleration over the interval Δt between the two times, and the velocity and position at time t+1 are then approximated from this average acceleration together with the velocity and position at the current time; similarly, the mean of the angular velocities at times t and t+1 is taken as the average angular velocity over Δt, and the attitude at time t+1 is approximated from this average angular velocity and the attitude at the current time.
A slight offset from the planned path is permissible, as long as the offset does not exceed half of the image size.
Since each image acquisition point can serve as a map node, the robot moves to each map node and acquires at least one corresponding image as data for the subsequent map. At each image acquisition point, the acquired data comprise the image of the current ground texture and the current inertial sensor data; the image data and the inertial sensor data are temporally synchronized, that is, for each image frame there is inertial sensor data corresponding to the same moment. The inertial sensor data comprise the pose data of the robot body in the world coordinate system. The inertial sensor data can be added to the picture (image frame) in the form of a watermark, or can be stored in association with the captured picture. The ground texture acquired may be a texture inherent to the ground itself, such as the texture of marble, tile or cement.
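The distance-triggered acquisition of step 103, together with the per-node storage just described, can be sketched as follows. The callback names are assumptions, and passing the pose alongside the image simply stands in for either of the two storage options (watermark or associated storage) mentioned above.

```python
import numpy as np

def collect_mapping_data(get_pose, grab_image, interval, on_node, path_done):
    """Record one ground image plus the synchronized inertial pose as a new map
    node whenever the distance travelled since the last node reaches `interval`.
    get_pose/grab_image/on_node/path_done are assumed callbacks, not patent APIs."""
    last_position = np.asarray(get_pose()['position'])
    while not path_done():                      # until the planned mapping path is finished
        pose = get_pose()                       # current pose from the inertial sensor
        position = np.asarray(pose['position'])
        if np.linalg.norm(position - last_position) >= interval:
            image = grab_image()                # current ground texture image
            on_node(image, pose)                # image frame + synchronized sensor data
            last_position = position
```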
Step 104, all the acquired data are processed off-line or on-line to construct the map. For example, for a map used for AGV robot navigation, off-line processing may be adopted, and the map obtained by off-line processing is saved so that it can be loaded by the AGVs that need it; for a map used by a sweeping robot, on-line processing may be adopted.
The processing of all the acquired data includes,
step 1041, preprocessing the collected image, including image distortion removal and/or image enhancement, to avoid that the feature points cannot be extracted due to weak ground texture, and image texture enhancement may not be performed for environment with clear ground texture.
Step 1042, extracting image feature points from the preprocessed images and calculating the matching information of the feature points; the image feature points include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), ORB (Oriented FAST and Rotated BRIEF) feature points and the like, and the matching information is the descriptor of the corresponding feature point.
Step 1043, acquiring the global pose of the robot body in the world coordinate system based on the acquired inertial sensor data. When the pose data have been added to the image in the form of a watermark, the pose data are extracted from the image, and the position information in the pose data is the position information of the map node.
Through the above steps, the data of each map node is obtained, including the extracted image feature points, the feature point matching information, and the position information of the map node, hereinafter simply referred to as map node position.
Step 1044, performing closed-loop detection on all image frames;
Because of the accumulated error of the inertial sensor, there is a large deviation between the end position and the start position as recorded by the inertial sensor. Referring to fig. 3, fig. 3 is a schematic diagram of data collected by the inertial sensor: the robot does not need to move strictly along the set path, and the end point it actually reaches is also the start point, yet the end position recorded by the inertial sensor differs from the actual start position, and this difference reflects the accumulated error of the inertial sensor. The purpose of closed-loop detection is to eliminate the accumulated error of the inertial sensor through the correlation between images: by matching the image at the end point with the image at the start point, a visual association can be established to identify images of the same position acquired at different moments, so as to determine whether several map nodes correspond to the same position in the actual environment and can therefore be treated as the same node.
Specifically, any image frame i is compared with all other acquired image frames. The map node positions are obtained from the pose data of the inertial sensor corresponding to each image frame, and the distance between map nodes is judged from those positions. If the distance between the map node position corresponding to image frame i and the map node position corresponding to a compared image frame is smaller than a set distance threshold, feature matching is performed between the two frames; if the matching condition is met, the two frames are determined to be closed-loop frames, their corresponding map nodes are the same map node in the actual environment, and the matching relationship of the two frames is stored; otherwise, the two frames are determined to be non-closed-loop frames.
In technical implementation, in order to improve the comparison efficiency, any image frame i is compared with the historical frame, for example, each image frame is numbered in ascending order according to the sequence of the acquisition time, and the image frame i is compared with the image frame j, wherein j < i, i and j are natural numbers;
if the absolute value of the difference value between the map node position corresponding to the image frame i and the map node position corresponding to the image frame j is smaller than the distance threshold, performing feature matching on the image frame i and the image frame j, and if the matching condition is met, enabling the image frame i and the image frame j to be a group of closed-loop frames;
otherwise, judging the image frame i and the image frame j as non-closed loop frames;
and repeating the steps until all the image frames are compared with the historical frames respectively, so that the closed-loop detection of all the image frames is completed.
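A compact sketch of this history-frame comparison is given below. The dictionary layout of each frame, the 30-match acceptance threshold and the use of ORB descriptors with Hamming-distance matching are illustrative assumptions; any of the descriptor types named in step 1042 could be substituted.

```python
import numpy as np
import cv2

def detect_closed_loops(frames, dist_thresh, min_matches=30):
    """Compare each frame i only with earlier frames j < i: first a distance test
    on the map-node positions from the inertial sensor, then descriptor matching.

    frames: list of dicts with keys 'position' (np.array) and 'descriptors' (uint8).
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    loops = []
    for i in range(len(frames)):
        for j in range(i):
            # Distance between the node positions given by the inertial sensor.
            d = np.linalg.norm(frames[i]['position'] - frames[j]['position'])
            if d >= dist_thresh:
                continue
            # Confirm the candidate by feature matching.
            matches = matcher.match(frames[i]['descriptors'], frames[j]['descriptors'])
            if len(matches) >= min_matches:
                loops.append((i, j))   # frames i and j observe the same place
    return loops
```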
Step 1045, for each group of closed-loop frames, calculating the relative pose of the camera between the closed-loop frames as the second relative pose; for the non-closed-loop frames, calculating the relative pose of the camera between adjacent frames as the first relative pose, the first relative pose being obtained by integrating the inertial sensor data.
Referring to fig. 4, fig. 4 is a schematic diagram of closed loop detection. After all the second relative poses are estimated, as described in step 1045, the position constraints of the start and end points are obtained, as shown by the thick black lines in fig. 4.
For the calculation of the second relative pose, a least square method can be utilized for solving.
Expressed mathematically as:
constructing a residual error:
e = X_i - T_{ij} X_j
where X_i is the three-dimensional coordinate of a feature point in image frame i in the world coordinate system, X_j is the three-dimensional coordinate of the corresponding feature point in image frame j in the world coordinate system, and T_{ij} is the second relative pose, which characterizes the closed-loop frame position constraint, i.e., the edges represented by the black bold lines in fig. 4.
The second relative pose T_{ij} is solved by the least squares method based on the constructed residual.
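Written out over the matched feature-point pairs (indexed by k; this expanded form is an illustration rather than a quotation from the patent), the least-squares problem is:

```latex
\hat{T}_{ij} = \arg\min_{T_{ij}} \sum_{k} \bigl\| X_i^{(k)} - T_{ij}\, X_j^{(k)} \bigr\|^{2}
```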
Step 1046, performing pose graph optimization using the first relative pose, the second relative pose and all map nodes associated with them, eliminating the accumulated error of the inertial sensor measurements and obtaining the optimized global poses of all map nodes.
In this step, a residual is constructed:
e_{ij} = \ln\left( T_{ij}^{-1}\, T_i^{-1}\, T_j \right)^{\vee}
where T_{ij} is the second relative pose, T_i denotes the pose of node i and T_j the pose of node j, i.e., the global poses of node i and node j to be optimized, which are the poses of the image frames of node i and node j; the symbol ∨ (vee) denotes the inverse of the skew-symmetric (hat) operation.
The global pose T_i of node i and the global pose T_j of node j are solved by the least squares method based on the constructed residual, thereby eliminating the accumulated error of the inertial sensor measurements.
To solve the global poses of the image frames quickly, graph optimization is adopted: for example, an optimization function of a bundle adjustment model is constructed, and based on this bundle adjustment optimization function the global pose of each image frame, i.e., the global pose of each map node, is solved by the steepest descent method (iteratively searching, along the negative gradient direction, for the variable value that minimizes the function). The steepest descent method guarantees that the function decreases at each iteration and descends very quickly when the initial point is far from the optimum, but converges slowly near the optimum because its iteration direction zigzags.
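Purely for illustration, the sketch below sets up a planar (x, y, yaw) version of this pose-graph problem. It hands the stacked residuals to SciPy's generic least_squares solver to keep the example short, rather than implementing the steepest-descent iteration described above, and every name in it is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def pose_graph_2d(initial, edges):
    """Minimal planar pose-graph sketch of step 1046.

    initial: (N, 3) array of node poses (x, y, yaw) from dead reckoning.
    edges:   list of (i, j, dx, dy, dyaw) relative-pose constraints
             (first relative poses between adjacent nodes, second relative
              poses between closed-loop nodes).
    """
    def residuals(x):
        poses = x.reshape(-1, 3)
        res = []
        for i, j, dx, dy, dyaw in edges:
            xi, yi, ti = poses[i]
            xj, yj, tj = poses[j]
            # Relative pose of node j expressed in the frame of node i.
            c, s = np.cos(ti), np.sin(ti)
            rx = c * (xj - xi) + s * (yj - yi)
            ry = -s * (xj - xi) + c * (yj - yi)
            rt = (tj - ti - dyaw + np.pi) % (2 * np.pi) - np.pi
            res.extend([rx - dx, ry - dy, rt])
        # Anchor the first node so the problem has a unique solution.
        res.extend(poses[0] - initial[0])
        return np.asarray(res)

    sol = least_squares(residuals, initial.ravel())
    return sol.x.reshape(-1, 3)
```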
Referring to fig. 5, fig. 5 is a pose graph after all map nodes are optimized. The accumulated error between the starting point and the end point is eliminated, so that the pose of each map node is accurate at present.
Step 1047, generating the map point information of each map node based on the image features and matching information in the image frame of that map node, the map points being the set of image feature points in the image frame.
Specifically, for each map node, the three-dimensional position of each feature point in the map node under the world coordinate system is obtained according to the camera parameters and the installation height of the camera by using the optimized pose of each map node.
The mathematical expression is:
X_{new} = T_{wc,new}\, T_{wc,old}^{-1}\, X_{old}
where T_{wc} denotes the pose of the camera in the world coordinate system, i.e., the pose of the map node in the world coordinate system; T_{wc,new} is the optimized map node pose and T_{wc,old} is the map node pose before graph optimization; X_{new} is the three-dimensional coordinate, in the world coordinate system, of a feature point (map point) after graph optimization, and X_{old} is its three-dimensional coordinate before graph optimization.
The three-dimensional position of the feature point in the world coordinate system can be obtained by the following formula:
X = R^{-1}\left( s\,K^{-1}\,[u\;\; v\;\; 1]^{\top} - t \right)
where X is the three-dimensional position of the feature point in the world coordinate system, (u, v) are its pixel coordinates in the image frame, K is the camera intrinsic matrix, and R is a rotation matrix and t a translation vector obtained from the camera extrinsics and the odometry information of the inertial sensor (the camera extrinsics being related to the camera installation height); s is the depth of the ground point along the optical axis, which for the downward-looking camera is fixed by the installation height.
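A minimal sketch of this back-projection, assuming the downward-looking camera's optical axis is perpendicular to the ground so that the depth of every ground point equals the installation height, and taking R_wc, t_wc as the camera-to-world transform, is:

```python
import numpy as np

def pixel_to_world(u, v, K, R_wc, t_wc, camera_height):
    """Recover the 3D ground position of a feature point (illustrative names).

    K            : 3x3 camera intrinsic matrix
    R_wc, t_wc   : rotation / translation of the camera in the world frame
    camera_height: camera installation height above the ground (metres)
    """
    # Back-project the pixel to a ray in the camera frame and scale it by the depth.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point_cam = camera_height * ray          # depth fixed by the mounting height
    # Transform from the camera frame into the world coordinate system.
    return R_wc @ point_cam + t_wc
```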
Thus, any map node includes the following information: the pose information of the map node in the world coordinate system, the three-dimensional position information of each map point of the map node in the world coordinate system, and the matching information of each map point of the map node (the matching information of the feature points).
Step 1048, the map is stored with the map node as the unit, each map node comprising its own pose information together with the matching information and position information of a large number of map points. The map nodes are associated and stored in the form of a tree: any map node (parent node) stores the addresses of the adjacent map nodes in front of, behind, to the left of and to the right of it (its child nodes), so that the robot can conveniently index the next map node it may reach; if an address is empty, there is no adjacent node in that direction.
Referring to fig. 6, fig. 6 is a schematic diagram of a two-dimensional map in a map node storage form. The current node is used as a father node, and the map nodes at the front, the back, the left and the right adjacent to the current node are stored as child nodes.
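One possible in-memory layout of such a stored node, with the four directional child references, might look like the following; the class and field names are assumptions, not the patent's storage format.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class MapNode:
    """Illustrative layout of one stored map node."""
    pose: np.ndarray                       # global pose of the node (e.g. 4x4 matrix)
    map_points: np.ndarray                 # Nx3 world coordinates of the map points
    descriptors: np.ndarray                # matching information of the map points
    front: Optional["MapNode"] = None      # adjacent node ahead, or None
    back: Optional["MapNode"] = None
    left: Optional["MapNode"] = None
    right: Optional["MapNode"] = None

    def children(self) -> List["MapNode"]:
        # Candidate next nodes the robot may reach from this node.
        return [n for n in (self.front, self.back, self.left, self.right) if n is not None]
```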
Through the steps, the construction of the map is completed.
The invention can be used for map construction along straight-line, right-angle, non-right-angle, arc, grid and other paths. Referring to fig. 7a to 7e, fig. 7a to 7e are schematic diagrams of the poses of the map nodes of various paths after optimization. During map construction, by collecting data back and forth at least twice, the same map node is observed at least twice, thereby achieving a closed loop on the map; in this case more than one group of closed-loop frames is obtained during closed-loop detection, and map nodes collected at the same position at different times are merged into the same map node through closed-loop detection. Fig. 7d is a schematic diagram including a non-orthogonal path, and fig. 7e is a schematic diagram of an arc path.
In the map construction method described above, features of the acquired ground texture images serve as the map point information of the map nodes; closed-loop detection through feature matching between map nodes establishes the association of identical map nodes and eliminates the accumulated error of the data acquired by the inertial sensor, without requiring any prior on relative poses from artificial markers; the data acquired by the inertial sensor are used to establish associations between different map nodes and to optimize the pose of each map node, so that every map node pose is accurate; and the information of the map nodes is stored in the form of a tree, which improves the efficiency of map node lookup. Because map construction does not depend on artificial markers, the coordinate system can be chosen arbitrarily during map construction, making the map flexible and convenient to build and easy to maintain.
Referring to fig. 8, fig. 8 is a schematic flow chart of the robot performing visual positioning based on the constructed map. The positioning method comprises the following steps of,
Step 801, loading the map, constructed based on the ground texture, of the robot's range of motion.
In this step, if a map of the robot's range of motion has not previously been stored in the robot's memory, the map is downloaded from outside into the robot's memory; if such a map is already stored in the robot's memory, the stored map is loaded and used.
Step 802, moving the robot to the position of any map node and acquiring an image, containing ground texture, at that map node; extracting the feature points of the current image and matching them against the map points in all map nodes of the map to estimate the initial global pose of the robot, so as to obtain the current initial map node position.
To speed up this process, the approximate position of the map node can be entered manually, or the robot can be moved to a map node that is convenient for initialization, for example a map node whose initialization pose data are stored in the map, thereby narrowing the node search range and accelerating the initialization of the robot pose.
Step 803, the robot predicts the next map node based on the map node associated with the current map node according to the planned path so as to perform feature point matching subsequently;
and at any time during the travel, acquiring the current image and the inertial sensor data, obtaining the inter-frame relative pose according to the inertial sensor data, and estimating the global pose of the current image frame by using the inter-frame relative pose and the global pose of the last observed map node so as to acquire the current position.
In this step, the current position acquired is typically between two map nodes, corresponding to a coarse positioning.
Step 804, preprocessing the current image frame, including image de-distortion and/or image enhancement, so that feature points can still be extracted when the ground texture is weak; image texture enhancement may be omitted in environments where the ground texture is clear.
Step 805, extracting image feature points from the preprocessed image frame and calculating the matching information of the feature points; the image feature points include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), ORB (Oriented FAST and Rotated BRIEF) feature points and the like, and the matching information is the descriptor of the corresponding feature point.
Step 806, searching for the predicted map node closest to the current position according to the current inertial sensor data, and taking the predicted map node and the map nodes associated with it as candidate map nodes; for example, for the map node shown in fig. 6 with its 4 adjacent map nodes (the associated nodes in front of, behind, to the left of and to the right of it), there are 5 candidate map nodes. The feature points extracted from the current image frame are then matched against the map points in the candidate map nodes using the map point position information and matching information.
If the matching succeeds, the global pose of the current image frame is calculated from the matched feature points to obtain the current positioning result, the positioning result being one of the candidate map nodes; through this visual positioning, the accumulated error of the inertial sensor from the previous map node to the current map node can be corrected.
If the matching does not succeed, it is judged that the predicted map node has not been reached, and the pose derived from the current inertial sensor data continues to be used as the robot pose,
and returning to the step 803 until the navigation positioning is finished.
In the above steps, the steps 803 and 804 may be performed after the candidate node is searched.
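Steps 805 and 806 could be sketched as below, reusing the MapNode layout from the earlier sketch. The ORB choice, the acceptance threshold and the function name are assumptions, and the final pose computation from the matched points (e.g. by PnP) is left out.

```python
import cv2

def locate_at_node(frame_gray, predicted_node, min_matches=25):
    """Extract features from the current ground image, gather the predicted node
    and its neighbours as candidates, and match against their map points."""
    orb = cv2.ORB_create()
    kps, desc = orb.detectAndCompute(frame_gray, None)   # kps would feed the pose solver
    if desc is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    candidates = [predicted_node] + predicted_node.children()
    best_node, best_matches = None, []
    for node in candidates:
        matches = matcher.match(desc, node.descriptors)
        if len(matches) > len(best_matches):
            best_node, best_matches = node, matches

    if len(best_matches) < min_matches:
        return None          # predicted node not reached; fall back to the inertial pose
    # The matched (keypoint, map point) pairs would then be used to solve the
    # global pose of the current image frame.
    return best_node, best_matches
```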
In the positioning method described above, the pose data output by the inertial sensor are used between nodes, and visual positioning is performed at the nodes through map point feature matching, so the robot is freed from dependence on two-dimensional codes or other artificial markers and achieves stable positioning using the inertial sensor and a visual map based on the natural texture of the ground. In addition, the visual positioning performed at the nodes through feature matching corrects the accumulated error of the inertial sensor, and exploiting the storage association between map nodes improves positioning efficiency and reduces the amount of computation during positioning.
Referring to fig. 9, fig. 9 is a schematic view of a visual positioning apparatus according to an embodiment of the present invention. The visual positioning apparatus comprises,
the map loading module is used for loading a map constructed based on the ground image; the map comprises more than one map node, and each map node comprises global position and attitude information of the map node, position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the global position information of the ground image feature points;
an image acquisition module for acquiring the current ground image to obtain the current ground image frame,
an inertial sensor for acquiring current inertial sensor data,
the computing module extracts image characteristic points and computes matching information of the image characteristic points based on the current ground image frame acquired by the image acquisition module; acquiring the global pose of a positioning device body based on the current inertial sensor data acquired by an inertial sensor;
according to the position information in the global pose of the positioning device body, searching a map node closest to the position based on a map provided by a map loading module, and taking the searched map node as a candidate map node;
and matching the image characteristic points with each map point in the candidate map nodes by map point position information and matching information, and if the matching is successful, calculating the global pose of the current image frame according to the matched characteristic points to obtain a current positioning result.
The map loading module can download the map constructed based on the ground image through a network, store the downloaded map in the storage module, or load the map constructed based on the ground image in real time through the network.
Referring to fig. 10, fig. 10 is a schematic view of a mobile robot apparatus according to an embodiment of the present invention. The mobile robot comprises a mobile robot body and a mobile robot body,
the visual positioning device is used for outputting a positioning result through visual positioning;
the driving device outputs a motion instruction based on the positioning result output by the visual positioning device;
and the execution mechanism executes the motion instruction.
Preferably, the image acquisition module comprises a downward-looking lens image acquisition assembly; the module is mounted at the bottom of the mobile robot with the lens facing the ground. The image acquisition module further comprises a fill light for ensuring good illumination, and a lens hood assembly for keeping the illumination of the downward-looking field of view stable.
Preferably, the computing module includes a processor and a Memory, and the Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
Embodiments of the present invention further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the map construction and/or the visual positioning steps described in the embodiments of the present application are implemented.
For the device/network side device/storage medium embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (15)

1. A map construction method, characterized in that the method comprises,
setting an image acquisition interval,
controlling a robot body to move,
when the distance moved by the robot reaches the acquisition interval, triggering acquisition of the current ground image and the current inertial sensor output data, and taking the current acquisition position as a map node to obtain the image frame and the inertial sensor data of the map node;
extracting at least one image feature point based on the image frame of the map node, and calculating the matching information of each image feature point;
acquiring position information of each map point generated for the map node based on inertial sensor data of the map node, wherein the map points are a set of all feature points in the map node image frame;
and storing the map node information, wherein the map node information comprises position information of each map point and matching information of each map point, and the matching information of the map points is the matching information of image feature points.
2. The method of claim 1, wherein obtaining location information for generating respective map points for a map node based on inertial sensor data for the map node comprises,
acquiring global pose data of the robot to obtain global pose information of map nodes;
acquiring position information of each map point in the map node based on the global pose information of the map node;
the map node information comprises global position and attitude information of map nodes;
and the storing of the map node information comprises taking each map node as a parent node and taking the adjacent map nodes having a spatial position relationship with that map node as its child nodes, so that the map nodes are stored according to a parent-child association.
3. The method of claim 2, wherein the taking of each map node as a parent node and of the map nodes having an adjacent spatial position relationship with the parent node as child nodes comprises,
taking any map node as a parent node, and taking an adjacent map node located in any of the front, back, left or right positions relative to the parent node as a child node,
and if no adjacent map node exists in one of the front, back, left or right positions relative to the parent node, the corresponding child node is empty.
4. The method of claim 1, wherein the image acquisition spacing is determined in conjunction with an error magnitude of an inertial sensor,
the control of the robot body to move comprises the step of controlling the robot body to move according to a planned mapping path;
the triggering of the acquisition of the current ground image and the current inertial sensor output data when the distance moved by the robot reaches the acquisition interval comprises,
determining the current moving distance of the robot according to the position information in the current global pose data of the robot output by the inertial sensor,
when the current moving distance reaches the acquisition interval, triggering at least one acquisition of the current ground texture image and the current inertial sensor output data, and adding the acquired current inertial sensor data to the currently acquired image frame in the form of a watermark,
repeatedly executing the step of determining the current moving distance of the robot according to the position information in the current global pose data of the robot output by the inertial sensor until the robot finishes planning a mapping path;
and taking each acquisition position as a map node, and obtaining the image frame and the inertial sensor data of each map node for off-line processing.
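A minimal Python sketch of the distance-triggered acquisition loop of claim 4, assuming get_imu_pose, grab_frame and path_finished callables supplied by the robot platform; rendering the pose as a text watermark with cv2.putText is only one simple way of attaching the inertial data to the frame.

    import cv2
    import numpy as np

    def mapping_acquisition_loop(get_imu_pose, grab_frame, path_finished,
                                 acquisition_interval=0.30):
        """Trigger a capture each time the travelled distance, measured from the
        inertial-sensor global pose, grows by one acquisition interval; the pose is
        stamped into the frame as a text watermark so offline processing can recover it."""
        frames = []
        last_xy = np.array(get_imu_pose()[:2])
        while not path_finished():
            x, y, yaw = get_imu_pose()
            if np.hypot(x - last_xy[0], y - last_xy[1]) >= acquisition_interval:
                frame = grab_frame()
                stamp = f"x={x:.3f} y={y:.3f} yaw={yaw:.3f}"
                cv2.putText(frame, stamp, (5, 20), cv2.FONT_HERSHEY_SIMPLEX,
                            0.4, (255, 255, 255), 1)      # illustrative watermark only
                frames.append((frame, (x, y, yaw)))       # one entry per map node
                last_xy = np.array([x, y])
        return frames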
5. The method of claim 4, wherein the planned mapping path is an unclosed path from a starting point to an end point,
wherein said controlling the robot body to move according to the planned mapping path comprises controlling the robot body to move from the starting point to the end point and then back from the end point to the starting point along the original path;
wherein said obtaining the global pose data of the robot based on the inertial sensor data of the map nodes comprises extracting the inertial sensor data from the image frames carrying the inertial sensor data watermark, and obtaining the positions of the map nodes corresponding to the image frames;
the steps of obtaining the global pose data of the robot based on the inertial sensor data of the map nodes and obtaining the global pose information of the map nodes further comprise,
performing closed-loop detection on all image frames to obtain closed-loop frames,
for the closed-loop frames, calculating the relative pose between the closed-loop frames to obtain a second relative pose,
for the non-closed loop frame, calculating the relative pose between two adjacent frames to obtain a first relative pose,
performing graph optimization of the poses of the map nodes based on the first relative pose and the second relative pose and all map nodes associated with the first relative pose and the second relative pose to obtain the optimized poses of all map nodes;
wherein said acquiring, based on the global pose information of the map node, the position information of each map point generated for the map node comprises,
for each map node, obtaining three-dimensional position information of each map point of the map node in the world coordinate system by using the optimized pose, the intrinsic parameters of the camera and the installation height of the camera.
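A minimal Python sketch of the map-point projection step of claim 5, assuming a downward-looking camera mounted at a known height so that every ground feature lies on the plane z = 0; the function name and argument layout are illustrative, not taken from the specification.

    import numpy as np

    def map_points_to_world(pixel_points, node_pose_xy_yaw, K, camera_height):
        """Back-project ground-image feature points to 3-D world coordinates,
        assuming a downward-looking camera mounted at a known height so that every
        feature lies on the ground plane z = 0."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        x, y, yaw = node_pose_xy_yaw
        c, s = np.cos(yaw), np.sin(yaw)
        world = []
        for (u, v) in pixel_points:
            # The ray through the pixel meets the ground plane at depth = camera_height.
            xc = (u - cx) / fx * camera_height
            yc = (v - cy) / fy * camera_height
            # Rotate from the node-aligned camera frame into the world frame.
            world.append((x + c * xc - s * yc, y + s * xc + c * yc, 0.0))
        return np.array(world)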
6. The method of claim 5, wherein said performing closed-loop detection on all image frames, obtaining closed-loop frames, comprises,
for any one image frame,
taking the image frames other than said image frame among all the image frames as compared frames;
determining the position of a map node corresponding to the image frame according to the position information in the global pose data of the robot corresponding to the image frame;
determining the position of the map node corresponding to the compared image frame according to the position information in the global pose data of the robot corresponding to the compared image frame,
if the absolute value of the difference between the positions of the map nodes corresponding to the image frame and the positions of the map nodes corresponding to the compared image frame is smaller than a set distance threshold, matching image feature points of the image frame and the compared image frame, and if the matching condition is met, judging the two frames to be closed-loop frames;
otherwise, judging the image frame and the compared image frame to be non-closed loop frames.
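A minimal Python sketch of the closed-loop detection of claim 6, assuming ORB descriptors and a brute-force Hamming matcher; the distance_threshold and min_matches values are placeholders, not values from the specification.

    import cv2
    import numpy as np

    def detect_loop_closures(nodes, distance_threshold=0.5, min_matches=30):
        """nodes: list of dicts holding 'position' (x, y) from the inertial-sensor
        global pose and 'descriptors' (ORB). A pair of nodes closer than the distance
        threshold whose descriptors match well enough is reported as a closed loop."""
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        loops = []
        for i in range(len(nodes)):
            for j in range(i):
                dx = nodes[i]["position"][0] - nodes[j]["position"][0]
                dy = nodes[i]["position"][1] - nodes[j]["position"][1]
                if np.hypot(dx, dy) >= distance_threshold:
                    continue                              # too far apart: non-closed-loop pair
                matches = matcher.match(nodes[i]["descriptors"], nodes[j]["descriptors"])
                if len(matches) >= min_matches:           # matching condition met
                    loops.append((i, j))
        return loops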
7. The method of claim 5, wherein said performing closed-loop detection on all image frames, obtaining closed-loop frames, comprises,
numbering the image frames according to the sequence of the acquisition time in ascending order,
for any one of the image frames i,
determining a compared image frame j, wherein j < i, i and j are natural numbers;
determining the positions of map nodes corresponding to the image frame i and the image frame j respectively according to the position information in the robot global pose data corresponding to the image frame i and the image frame j respectively;
if the distance between the map node position corresponding to the image frame i and the map node position corresponding to the image frame j is smaller than a set distance threshold, performing image feature point matching on the image frame i and the image frame j, and if the matching condition is met, judging that the image frame i and the image frame j are closed-loop frames;
otherwise, judging the image frame i and the image frame j as non-closed loop frames;
and repeating the steps until all the image frames are respectively compared with the historical frames.
8. A visual positioning method, comprising,
loading a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
acquiring the current ground image and the current inertial sensor data at any moment during traveling,
extracting image feature points based on the current ground image frame, and calculating matching information of the image feature points;
searching map nodes based on a map according to the current inertial sensor data;
and matching the image feature points with the respective map points in the map node according to the map point position information and the matching information, and if the matching is successful, calculating the global pose of the current image frame according to the matched feature points to obtain the current positioning result.
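A minimal Python sketch of the matching and pose-calculation step of claim 8, assuming the current-frame feature points have already been converted to metric ground coordinates (for example as in the projection sketch above); fit_rigid_2d, localize_against_node and the match-count test are illustrative choices, not the claimed method itself.

    import numpy as np

    def fit_rigid_2d(src, dst):
        """Least-squares 2-D rigid transform (rotation R, translation t) mapping src onto dst."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                          # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def localize_against_node(frame_points, frame_desc, node, matcher, min_matches=20):
        """Match current-frame feature points against the map points of one node and,
        if enough matches are found, return the global pose (R, t) of the current frame."""
        matches = matcher.match(frame_desc, node["descriptors"])
        if len(matches) < min_matches:
            return None                                   # matching failed
        src = np.array([frame_points[m.queryIdx] for m in matches])        # metric frame coords
        dst = np.array([node["point_positions"][m.trainIdx] for m in matches])
        return fit_rigid_2d(src, dst)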
9. The method of claim 8, wherein the ground image comprises at least a ground texture image, and each map node further comprises global pose information of the map node,
said map-based search for map nodes based on current inertial sensor data comprises,
acquiring the global pose of the robot according to the current inertial sensor data;
according to position information in the global pose of the robot, searching a map node closest to the position based on the global pose information of the map nodes in the map, and taking the searched map node as a candidate map node;
the matching of the map point position information and the matching information between the image feature points and each map point in the map nodes comprises,
the searched map nodes are used as candidate map nodes,
matching the map point position information and the matching information of the image characteristic points with each map point in the candidate map nodes;
each map node in the map is taken as a parent node and is associated with an adjacent map node having a spatial positional relationship with the map node, the adjacent map node being stored as a child node of the parent node;
and between the step of loading the map constructed based on the ground image and the step of acquiring the current ground image and the current inertial sensor data at any moment during traveling, the method comprises,
controlling the robot to move to the position of any map node to obtain the position of the current initial map node;
and predicting the next map node according to the planned path based on the current map node and the child nodes thereof.
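A minimal Python sketch of the next-node prediction of claim 9, assuming each stored node carries its position and its four directional children and that the planned path supplies a heading vector; all names are illustrative assumptions.

    import numpy as np

    def predict_next_node(current_node, nodes_by_id, planned_heading):
        """Among the children of the current map node, pick the one whose direction
        agrees best with the planned path heading (a unit vector in the world frame)."""
        best, best_dot = None, -np.inf
        cx, cy = current_node["position"]
        for child_id in current_node["children"].values():
            if child_id is None:                          # no neighbour in that direction
                continue
            nx, ny = nodes_by_id[child_id]["position"]
            v = np.array([nx - cx, ny - cy])
            norm = np.linalg.norm(v)
            if norm == 0.0:
                continue
            dot = float((v / norm) @ np.asarray(planned_heading))
            if dot > best_dot:
                best, best_dot = child_id, dot
        return best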
10. The method of claim 9, wherein the obtaining a global pose of the robot from the current inertial sensor data comprises,
obtaining inter-frame relative poses according to inertial sensor data, estimating the global poses of the current image frame by using the inter-frame relative poses and the global poses of the map nodes observed last time, and obtaining the current position;
wherein said searching, according to the position information in the global pose of the robot, for the map node closest to that position based on the global pose information of the map nodes in the map, and taking the searched map node as a candidate map node, comprises,
at any moment in the process of traveling, acquiring a current image and inertial sensor data, estimating the global pose of the current image frame, and acquiring the current position;
searching a prediction map node closest to the current position based on the global pose information of the map node, and taking the searched prediction map node and the child nodes of the prediction map node as candidate map nodes;
the method further comprises the step of judging that the predicted map node is not reached if the matching is unsuccessful, and taking the global pose of the robot acquired based on the current inertial sensor data as a current positioning result.
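A minimal Python sketch of the fallback of claim 10: when matching fails, the current pose is dead-reckoned by chaining the inter-frame relative poses from the inertial sensor onto the pose of the last observed map node. The (x, y, yaw) pose representation is an assumption.

    import numpy as np

    def compose_pose(pose, delta):
        """Compose a 2-D pose (x, y, yaw) with a relative motion given in the body frame."""
        x, y, yaw = pose
        dx, dy, dyaw = delta
        c, s = np.cos(yaw), np.sin(yaw)
        return (x + c * dx - s * dy, y + s * dx + c * dy, yaw + dyaw)

    def current_pose_estimate(last_node_pose, relative_poses_since_node):
        """Predict the current frame's global pose by chaining the inter-frame relative
        poses from the inertial sensor onto the pose of the last observed map node; when
        map-point matching fails, this estimate serves as the current positioning result."""
        pose = last_node_pose
        for delta in relative_poses_since_node:
            pose = compose_pose(pose, delta)
        return pose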
11. The method of claim 9, wherein said controlling the robot to move to the location of any one of the map nodes, obtaining a current initial map node location, comprises,
collecting the ground texture image at the map node, extracting the image feature point of the current image frame, matching the image feature point with each map point in all map nodes in the map, and estimating the initial global pose of the robot based on the matched feature point.
12. The method of claim 11, wherein said matching the image feature point to respective map points in all map nodes in a map comprises,
setting a range of map nodes, and matching the image feature points with all map points of all map nodes within the set range.
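A minimal Python sketch of the range-limited initial matching of claim 12; the search_radius value and the node dictionary layout are assumptions.

    import numpy as np

    def initial_match_candidates(rough_xy, nodes, search_radius=2.0):
        """Restrict the initial relocalisation to map nodes within a set range of a
        rough starting position instead of matching against every node in the map."""
        rough = np.asarray(rough_xy, dtype=float)
        return [n for n in nodes
                if np.linalg.norm(np.asarray(n["position"]) - rough) <= search_radius]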
13. A visual positioning device, comprising,
the map loading module is used for loading a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
an image acquisition module for acquiring the current ground image to obtain the current ground image frame,
an inertial sensor for acquiring current inertial sensor data,
the computing module is used for extracting image feature points based on the current ground image frame acquired by the image acquisition module and computing matching information of the image feature points;
searching map nodes based on the map according to the current inertial sensor data acquired by the inertial sensor;
and matching the image feature points with the respective map points in the map node according to the map point position information and the matching information, and if the matching is successful, calculating the global pose of the current image frame according to the matched feature points to obtain the current positioning result.
14. A mobile robot, characterized by comprising,
the visual positioning device is used for outputting a positioning result through visual positioning;
the driving device outputs a motion instruction based on the positioning result output by the visual positioning device;
the execution mechanism executes the motion instruction;
wherein,
wherein the visual positioning device comprises,
the map loading module loads a map constructed based on the ground image; the map comprises more than one map node, each map node comprises position information of each map point and matching information of each map point, wherein the map points are a set of ground image feature points, the matching information of the map points is the matching information of the ground image feature points, and the position information of the map points is the position information of the ground image feature points;
an image acquisition module for acquiring the current ground image to obtain the current ground image frame,
an inertial sensor for acquiring current inertial sensor data,
the computing module is used for extracting image feature points based on the current ground image frame acquired by the image acquisition module and computing matching information of the image feature points;
searching map nodes based on the map according to the current inertial sensor data acquired by the inertial sensor;
and matching the image feature points with the respective map points in the map node according to the map point position information and the matching information, and if the matching is successful, calculating the global pose of the current image frame according to the matched feature points to obtain the current positioning result.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the mapping step according to any one of claims 1 to 7 and/or the visual positioning step according to any one of claims 8 to 12.
CN201911417241.5A 2019-12-31 2019-12-31 Visual positioning method, map construction method and device Active CN113124854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911417241.5A CN113124854B (en) 2019-12-31 2019-12-31 Visual positioning method, map construction method and device

Publications (2)

Publication Number Publication Date
CN113124854A true CN113124854A (en) 2021-07-16
CN113124854B CN113124854B (en) 2023-04-25

Family

ID=76769768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911417241.5A Active CN113124854B (en) 2019-12-31 2019-12-31 Visual positioning method, map construction method and device

Country Status (1)

Country Link
CN (1) CN113124854B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379092A1 (en) * 2015-06-26 2016-12-29 Intel Corporation System for building a map and subsequent localization
CN107193279A * 2017-05-09 2017-09-22 复旦大学 Robot localization and map construction system based on monocular vision and IMU information
CN107160395A * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map construction method and robot control system
CN107869989A * 2017-11-06 2018-04-03 东北大学 Localization method and system based on fusion of visual and inertial navigation information
CN108489482A * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 Implementation method and system of a visual-inertial odometer
CN110044354A * 2019-03-28 2019-07-23 东南大学 Binocular vision indoor positioning and mapping method and device
CN110322500A * 2019-06-28 2019-10-11 Oppo广东移动通信有限公司 Optimization method and device, medium, and electronic device for simultaneous localization and mapping

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
詹文强; 陈绪兵; 陈凯; 王盛兴; 余良伟; 韩桂荣: "Research on indoor positioning technology of robots based on depth camera" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230050492A1 (en) * 2021-08-12 2023-02-16 I-Sprint Innovations Pte Ltd Graphical watermark, method and apparatus for generating same, and method and apparatus for authenticating same
CN114913295A (en) * 2022-03-31 2022-08-16 阿里巴巴(中国)有限公司 Visual mapping method, device, storage medium and computer program product
CN114742884A (en) * 2022-06-09 2022-07-12 杭州迦智科技有限公司 Texture-based mapping, mileage calculation and positioning method and system

Also Published As

Publication number Publication date
CN113124854B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN109887057B (en) Method and device for generating high-precision map
CN112183171B (en) Method and device for building beacon map based on visual beacon
JP5987823B2 (en) Method and system for fusing data originating from image sensors and motion or position sensors
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN113124854B (en) Visual positioning method, map construction method and device
CN109903330B (en) Method and device for processing data
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN112734852A (en) Robot mapping method and device and computing equipment
CN112254729B (en) Mobile robot positioning method based on multi-sensor fusion
US11158065B2 (en) Localization of a mobile unit by means of a multi hypothesis kalman filter method
CN109631911B (en) Satellite attitude rotation information determination method based on deep learning target recognition algorithm
CN110260861B (en) Pose determination method and device and odometer
CN113447949B (en) Real-time positioning system and method based on laser radar and prior map
CN110260866A Vision-sensor-based robot localization and obstacle avoidance method
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN113763549A (en) Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
CN113188557A (en) Visual inertial integrated navigation method fusing semantic features
CN112184906A (en) Method and device for constructing three-dimensional model
CN111256689B (en) Robot positioning method, robot and storage medium
Wei et al. Rapid and robust initialization for monocular visual inertial navigation within multi-state Kalman filter
Wudenka et al. Towards robust monocular visual odometry for flying robots on planetary missions
Hu et al. Efficient Visual-Inertial navigation with point-plane map
CN114323038B (en) Outdoor positioning method integrating binocular vision and 2D laser radar
CN114842224A (en) Monocular unmanned aerial vehicle absolute vision matching positioning scheme based on geographical base map
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant