CN112129282B - Method and device for converting positioning results among different navigation modes


Info

Publication number
CN112129282B
CN112129282B
Authority
CN
China
Prior art keywords
pose
key frame
sample
beacon
keyframe
Prior art date
Legal status
Active
Application number
CN202011062611.0A
Other languages
Chinese (zh)
Other versions
CN112129282A (en)
Inventor
党志强
易雨亭
李建禹
贾永华
吴永海
李必勇
白寒
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Priority to CN202011062611.0A
Publication of CN112129282A
Application granted
Publication of CN112129282B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a method and a device for converting positioning results between different navigation modes. By using a pre-established pose mapping relationship between a first coordinate system used for a first navigation mode and a second coordinate system used for a second navigation mode, the invention compensates for the deviation that arises when maps built in different navigation modes are aligned directly by a single two-dimensional planar pose transformation, a deviation caused by factors such as differences in mapping scale and configuration distortion, and thereby improves the accuracy of the mapped positioning result. Compared with relying on beacons as mapping reference points, the method takes the sample keyframes used to create the pose mapping relationship as references, which avoids jumps in the mapped positioning output caused by beacons that are spaced too far apart or distributed unevenly.

Description

Method and device for converting positioning results among different navigation modes
Technical Field
The invention relates to the field of robot navigation, in particular to a method and a device for converting positioning results among different navigation modes.
Background
With the development of navigation technology, an increasing number of mature navigation modes are being applied to robot positioning, for example laser navigation, visual texture navigation and beacon navigation.
In practical applications, the navigation mode adopted on the robot side often needs to be updated as the environment changes. For example, a beacon-based navigation mode may have to be replaced by a more advanced one because the beacons wear out, while the existing planning and scheduling topology and the positioning mode on the scheduling system side should not be changed frequently. Likewise, robots in different areas served by the same scheduling system may need to adopt different navigation modes, while the scheduling system side usually adopts a single navigation mode to simplify planning and scheduling.
However, the positioning results of different navigation modes can differ greatly even in the same physical environment. For example, in a use scene transitioning from existing two-dimensional code navigation to visual texture navigation, the positioning result output by visual texture navigation must be matched, through some transformation, to the coordinate system of the existing two-dimensional code map. This makes it necessary to convert positioning results between different navigation modes.
Disclosure of Invention
The invention provides a method and a device for converting positioning results among different navigation modes, which are used for solving the problem of pose mapping of coordinate spaces in different navigation modes.
The invention provides a method for converting positioning results among different navigation modes, comprising the following steps:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one sample key frame from the sample key frame set as a reference key frame, wherein the correlation degree between the reference key frame and the first positioning result is higher than the correlation degree between other sample key frames in the sample key frame set and the first positioning result;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
Preferably, obtaining a first positioning result of the mobile robot in a first coordinate system for the first navigation mode includes: acquiring a sampling key frame contained in sampling image data acquired by a camera of the mobile robot; determining the first positioning result using the sampling key frame.
Preferably, selecting one of the sample key frames from the sample key frame set as a reference key frame comprises: determining a spatial distance between the first keyframe pose of the sample keyframes in the set of sample keyframes with respect to the first positioning result; determining the sample key frame with the smallest spatial distance as the reference key frame.
Preferably, selecting one of the sample keyframes from the sample keyframe set as a reference keyframe comprises: screening out the sample keyframes whose first keyframe pose falls within a preset neighborhood range of the first positioning result; determining the similarity between each screened sample keyframe and the sampling keyframe used to determine the first positioning result, wherein the sampling keyframe is obtained from sampling image data collected by the mobile robot; and determining the sample keyframe with the highest similarity as the reference keyframe.
Preferably, the pose mapping relationship is created by: obtaining a sample keyframe set containing sample keyframes from sample image data; determining the first keyframe pose of each sample keyframe in the sample keyframe set; extracting beacon keyframes including beacons from the sample keyframes in the sample keyframe set; determining the constraint of the beacons on the beacon keyframes based on the position information of the beacons in the beacon keyframes; performing graph optimization on the first keyframe pose of each sample keyframe at least through the constraint of the beacons on the beacon keyframes; and creating the pose mapping relationship using the first keyframe pose of each sample keyframe and the second keyframe pose obtained after the graph optimization.
The invention also provides a method for constructing the pose mapping relation among different navigation modes, which comprises the following steps,
a sample key frame set containing sample key frames is obtained from the sample image data,
determining a first keyframe pose for each sample keyframe of the sample keyframe set under a first coordinate system for a first navigation mode,
extracting beacon key frames including beacons from the sample key frames of the sample key frame set,
determining a beacon's constraint on the beacon key frame based on location information of beacons in the beacon key frame,
performing graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacon on the beacon keyframe to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and establishing a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
Preferably, determining the constraint of the beacon on the beacon key frame based on the location information of the beacon in the beacon key frame comprises: for each beacon in each beacon key frame, determining a difference between a first displacement measurement of the beacon relative to the camera of the beacon key frame in the first coordinate system and a second displacement measurement of the beacon relative to the camera of the beacon key frame in the second coordinate system, wherein the beacon key frame in the second coordinate system is mapped from the beacon key frame in the first coordinate system, and accumulating the differences of all beacons in all beacon key frames to obtain the constraint of the beacon on the beacon key frame.
Preferably, the first displacement measurement value is determined according to a camera model relationship between the feature point in the beacon key frame and the three-dimensional space point; and the second displacement measurement value is obtained according to the position information of the beacon in the second coordinate system, the pose of the key frame in the second coordinate system and the camera external parameters.
Preferably, performing graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacons on the beacon keyframes comprises: performing graph optimization on the first keyframe pose of each sample keyframe through the constraint of the beacons on the beacon keyframes, the odometry inter-frame constraints of the sample keyframes, and the closed-loop constraints of the sample keyframes, wherein the odometry inter-frame constraints comprise visual odometry inter-frame constraints and/or inertial odometry inter-frame constraints and are obtained from collected odometry data.
Preferably, performing graph optimization on the first keyframe pose of each sample keyframe through the constraint of the beacons on the beacon keyframes, the odometry inter-frame constraints of the sample keyframes, and the closed-loop constraints of the sample keyframes comprises: constructing an objective function that is the sum of the constraint of the beacons on the beacon keyframes, the odometry inter-frame constraints of all sample keyframes, and the closed-loop constraints of all sample keyframes; and determining, through nonlinear optimization, the poses that minimize the objective function as the second keyframe poses obtained after the graph optimization.
The invention also provides a device for converting positioning results among different navigation modes, which comprises:
an information acquisition module, configured to obtain a first positioning result of the mobile robot in a first coordinate system for a first navigation manner and a pre-created pose mapping relationship between the first coordinate system and a second coordinate system for a second navigation manner, where the pose mapping relationship includes a first keyframe pose of each sample keyframe in a sample keyframe set obtained by pre-sampling in the first coordinate system and a second keyframe pose of the sample keyframe in the second coordinate system,
a reference selection module, configured to select one of the sample key frames from the sample key frame set as a reference key frame, where a degree of association between the reference key frame and the first positioning result is higher than a degree of association between other sample key frames in the sample key frame set and the first positioning result;
a transformation execution module, configured to transform the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference key frame between the first key frame pose and the second key frame pose in the pose mapping relationship.
The present invention also provides a mobile robot comprising a processor configured to perform:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one sample key frame from the sample key frame set as a reference key frame, wherein the correlation degree between the reference key frame and the first positioning result is higher than the correlation degree between other sample key frames in the sample key frame set and the first positioning result;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
The present invention also provides a data processing apparatus comprising a processor configured to perform:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one sample key frame from the sample key frame set as a reference key frame, wherein the correlation degree between the reference key frame and the first positioning result is higher than the correlation degree between other sample key frames in the sample key frame set and the first positioning result;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
The invention also provides a device for constructing the pose mapping relation among different navigation modes, which comprises,
an information collection module to obtain a sample key frame set including sample key frames from the sample image data,
a pose determination module that determines a first keyframe pose for each sample keyframe of the sample keyframe set under a first coordinate system for a first navigation mode,
a beacon extraction module extracting a beacon key frame including a beacon from the sample key frames of the sample key frame set,
a constraint determination module that determines a constraint of a beacon on the beacon key frame based on location information of the beacon in the beacon key frame,
a pose optimization module that performs graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacons on the beacon keyframes, to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and the mapping creation module is used for creating a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
The present invention also provides another data processing apparatus comprising a processor configured to perform:
a sample key frame set containing sample key frames is obtained from the sample image data,
determining a first keyframe pose for each sample keyframe of the sample keyframe set under a first coordinate system for a first navigation mode,
extracting beacon key frames including beacons from the sample key frames of the sample key frame set,
determining a beacon's constraint on the beacon key frame based on location information of beacons in the beacon key frame,
performing graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacon on the beacon keyframe to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and establishing a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
The invention uses a pre-established pose mapping relationship between the first coordinate system used for the first navigation mode and the second coordinate system used for the second navigation mode to compensate for the deviation in positioning results that arises when maps built in different navigation modes are aligned directly by a single two-dimensional planar pose transformation, a deviation caused by factors such as differences in mapping scale and configuration distortion, and thereby improves the accuracy of the mapped positioning result. Compared with relying on beacons as mapping reference points, taking the sample keyframes used to create the pose mapping relationship as references avoids jumps in the mapped positioning output caused by beacons spaced too far apart or distributed unevenly. In addition, when the pose mapping relationship is created, the invention performs graph optimization on the sample keyframe poses using the constraints of the beacons on the beacon keyframes, so no subordinate containment relationship between the physical spaces of the maps in different coordinate systems is required, and more usage scenarios can be accommodated. Since the mapping transformation is applied to the full pose of the robot's planar motion, the method has high practical value.
Drawings
Fig. 1 is an exemplary flowchart illustrating a method for constructing a pose mapping relationship between different navigation modes in an embodiment of the present invention.
Fig. 2 is a schematic view of an example process for constructing a pose mapping relationship between a two-dimensional code navigation mode and a visual texture navigation mode based on the exemplary process shown in fig. 1.
Fig. 3 is a schematic diagram of pose graph optimization by adding beacon constraints to a first keyframe.
Fig. 4 is a schematic diagram of a correlation of physical quantities for constructing a pose mapping relationship.
Fig. 5 is an exemplary flowchart illustrating a method for converting positioning results between different navigation modes according to an embodiment of the present invention.
Fig. 6 is a schematic flow chart of switching the positioning result between different navigation modes based on the flow chart shown in fig. 5 in the mobile robot positioning process.
Fig. 7 is a schematic diagram showing a correlation between physical quantities at the time of conversion.
Fig. 8 is a schematic diagram of a device for converting positioning results between different navigation modes according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a mobile robot according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of an apparatus for constructing a pose mapping relationship between different navigation modes according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The applicant has found that positioning results differ greatly between navigation modes, even in the same physical environment, because the positioning maps built for different navigation modes are distorted as a whole. For example, the distance between point A and point B may be L1 in the two-dimensional code navigation map but L2 in the visual texture map, and the difference between the two is nonlinear. The strong precondition that the positioning results of different navigation modes can be aligned globally by a single rigid-body pose transformation is therefore not satisfied, which has made converting positioning results between navigation modes a technical obstacle.
In order to solve the above technical problem, in the embodiments of the present application, rigid body pose transformation constraints satisfied by positioning maps in different navigation modes are sought, and pose mapping relationships in different navigation modes are constructed based on the rigid body pose transformation constraints, so that a positioning result is converted based on the pose mapping relationships.
The following notation is used in relation to the present invention:

$T_{CK}$: the camera extrinsics;

$\bar{t}^{W'}_{M_j}$: the coordinate position of beacon $M_j$ in the second coordinate system $\{W'\}$;

$r_{M_j K_i}$: the pose constraint of beacon $M_j$ on beacon keyframe $K_i$ in the visual texture navigation mode;

$\hat{t}^{C_i}_{M_j}$: the displacement measurement between the camera and the beacon at the moment the beacon keyframe was collected;

$\hat{T}_{K_i K_{i+1}}$: the odometry relative pose measurement between adjacent sample keyframes;

$T_{WK}$: the global pose of a sample keyframe in a given coordinate system;

$T_{WK_i}$: the first keyframe global pose of the $i$-th sample keyframe in the first coordinate system;

$T_{WK_{i+1}}$: the first keyframe global pose of the $(i+1)$-th sample keyframe in the first coordinate system;

$T_{WK_i}$ and $T_{WK_j}$: the first keyframe poses of the two sample keyframes forming a loop;

$\hat{T}_{K_i K_j}$: the relative pose measurement between two sample keyframes obtained by loop detection;

$r_{O_i}$: the odometry inter-frame constraint of sample keyframe $i$;

$r_{C_{ij}}$: the closed-loop constraint of the two sample keyframes $i$ and $j$ obtained through loop detection;

$K_{\min}$: the reference keyframe;

$T_{WA}$: the global pose of the mobile robot in the first coordinate system;

$T_{W'A'}$: the second positioning result of the mobile robot in the second coordinate system;

$T_{W'K_{\min}}$: the second keyframe pose of the reference keyframe in the second coordinate system;

$T_{WK_{\min}}$: the first keyframe pose of the reference keyframe in the first coordinate system.
Fig. 1 is an exemplary flowchart illustrating a method for constructing a pose mapping relationship between different navigation modes in an embodiment of the present invention. Referring to fig. 1, in this embodiment, the method for constructing the pose mapping relationship between different navigation manners may include the following steps (for example, executed by a data processing device capable of communicating with a mobile robot):
step 101, a sample key frame set containing sample key frames is obtained from sample image data.
The sample image data in this step may be acquired in the first navigation mode, for example, acquired by the mobile robot in the first navigation mode, and provided to the data processing device.
Step 102, determining a first keyframe pose of each sample keyframe in the sample keyframe set under a first coordinate system for a first navigation mode.
Step 103, extracting a beacon key frame including a beacon from the sample key frames of the sample key frame set.
And step 104, determining the constraint of the beacon to the beacon key frame based on the position information of the beacon in the beacon key frame.
And 105, performing graph optimization on the first keyframe pose of each sample keyframe at least through the constraint of the beacon on the beacon keyframe to obtain a second keyframe pose in a second coordinate system for a second navigation mode.
And 106, establishing a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
The pose mapping relation created in the step can be stored in the data processing equipment, so that the data processing equipment can map and convert the positioning result provided by the mobile robot under the first coordinate system to the second coordinate system; or, the pose mapping relationship created in this step may also be sent to the mobile robot, so that the mobile robot can map and convert the positioning result of the mobile robot in the first coordinate system to the second coordinate system.
As can be seen from the above flow, when creating the pose mapping relationship, this embodiment performs graph optimization on the poses of the sample keyframes using the constraints of the beacons on the beacon keyframes, so no subordinate containment relationship between the physical spaces of the maps in different coordinate systems is required, and more usage scenarios can be accommodated. Since the mapping transformation is applied to the full pose of the robot's planar motion, the method has high practical value.
To facilitate understanding of the above pose-mapping creation process, the following description takes as an example the conversion of a positioning result in the visual texture navigation mode (the first navigation mode) into a positioning result in the two-dimensional code navigation mode (the second navigation mode). It should be understood that the invention is not limited to this direction of conversion; in the same way, a positioning result in the two-dimensional code navigation mode may be converted into one in the visual texture navigation mode. Nor is the invention limited to these two specific navigation modes: based on the principles of the present application, conversion between other navigation modes can also be accommodated, for example between a beacon navigation mode and a visual navigation mode, where visual navigation modes include laser navigation, SLAM (Simultaneous Localization and Mapping) navigation, and the like.
Since positioning results are produced in real time during navigation, the pose mapping relationships between different navigation modes can be constructed in advance to improve conversion efficiency; that is, the pose mapping relationship is built offline.
Fig. 2 is a schematic view of an example process for constructing a pose mapping relationship between a two-dimensional code navigation mode and a visual texture navigation mode based on the exemplary process shown in fig. 1. Referring to fig. 2, in this embodiment, the navigation mode of the robot side is visual texture navigation, and the construction method may include the following steps (for example, executed by a data processing device):
step 201, a sample key frame set containing sample key frames is obtained from sample image data in a visual texture navigation mode.
The sample image data mentioned in this step may be texture image data, and the step may further acquire odometer data (a visual odometer and an inertial odometer) in a visual texture navigation mode.
In order to improve the mapping accuracy of the pose mapping relationship, the acquired texture image data and odometry data may preferably include texture image data and odometry data on all possible task paths of the mobile robot. All possible task paths may be paths that the mobile robot may theoretically move through.
The sample keyframes selected from the acquired sample image data (texture image data) may be keyframes chosen by a preset sampling rule, for example a rule that samples at fixed time intervals, or a location-based rule that samples more densely in regions of the localization map with greater distortion. A minimal sketch of a time-interval rule is given below.
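By way of a non-limiting illustration, a time-interval sampling rule of the kind just mentioned might look like the following Python sketch; the 0.5 s threshold and the `timestamp` field on frames are assumptions for illustration only, not part of the patent.

```python
# Hypothetical time-interval sampling rule; threshold and frame fields
# are assumptions, not prescribed by the patent.
def select_sample_keyframes(frames, min_interval_s=0.5):
    """Keep a frame as a sample keyframe when at least `min_interval_s`
    seconds have elapsed since the last kept keyframe."""
    keyframes, last_t = [], None
    for frame in frames:
        if last_t is None or frame.timestamp - last_t >= min_interval_s:
            keyframes.append(frame)
            last_t = frame.timestamp
    return keyframes
```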
Step 202, based on the sample image data, a first key frame pose of each sample key frame in the sample key frame set in a first coordinate system for the visual texture navigation mode is solved.
Optionally, the calculation of the first keyframe poses in this step may further incorporate odometry data.
The first keyframe poses of the sample keyframes in the set form a set of first keyframe poses, denoted S; the global coordinate system (world coordinate system) of the texture map is referred to as the first coordinate system {W}.
The first keyframe poses are estimated from the image data. In subsequent steps, pose-graph optimization is applied to them using the inter-frame constraints provided by an inertial odometer (such as an encoder-based odometer) and/or a visual odometer, together with the closed-loop constraints generated from visual features, to obtain the second keyframe poses.
Step 203, for each sample key frame in the sample key frame set, identifying a beacon in a two-dimensional code form in the sample key frame, and determining the identified sample key frame containing the beacon as the beacon key frame.
In this step, the image data of the sample key frame is parsed to extract a beacon containing two-dimensional code identification information.
Step 204, establishing the constraint of the beacon to the beacon key frame.
In this step, for a beacon keyframe $K_i$ that contains a valid beacon $M_j$, the position of beacon $M_j$ relative to the camera of keyframe $K_i$ can be obtained from the camera-model relationship between feature points in the keyframe and three-dimensional space points, for example by solving a PnP problem. This position, which can also be understood as the position of beacon $M_j$ in the camera coordinate system of that frame, is recorded as $\hat{t}^{C_i}_{M_j}$; it is the displacement observation between the camera and the beacon, i.e. the displacement measurement between the camera and the beacon at the moment the beacon keyframe was collected. The beacon keyframe here is in the first coordinate system.

Since the global coordinates (in the world coordinate system) of all beacons are known, their coordinates under the two-dimensional code navigation mode are determined; the corresponding coordinate system is recorded as the second coordinate system $\{W'\}$. A constraint of the two-dimensional code on the beacon keyframe pose can therefore be formed: beacon keyframe $K_i$ in the visual texture navigation mode maps to beacon keyframe $K'_i$ in the two-dimensional code navigation mode, with pose $T_{W'K_i}$ in the second coordinate system. The constraint characterizes the difference between the first displacement measurement of the beacon relative to the camera of the beacon keyframe in the visual navigation mode (in the first coordinate system) and the second displacement measurement of the beacon relative to the camera of the beacon keyframe in the two-dimensional code navigation mode (in the second coordinate system). Expressed mathematically:

$$r_{M_j K_i} = \hat{t}^{C_i}_{M_j} - \left(T_{W'K_i}\, T_{CK}^{-1}\right)^{-1} \bar{t}^{W'}_{M_j}$$

where $T_{CK}$ is the camera extrinsics, $\bar{t}^{W'}_{M_j}$ is the coordinate position of beacon $M_j$ in the second coordinate system $\{W'\}$, and $r_{M_j K_i}$ denotes the pose constraint of beacon $M_j$ on beacon keyframe $K_i$ in the visual texture navigation mode.

Traversing the image data of all sample keyframes obtained in step 201 in the manner of steps 203 and 204 yields the pose constraint of each valid beacon on the beacon keyframes containing that beacon. The combined pose constraint of all beacons on their beacon keyframes is recorded as $r_M$:

$$r_M = \sum_i \sum_j \left\| r_{M_j K_i} \right\|^2$$

where $i$ runs over the beacon keyframes containing beacons and $j$ over the beacons in all beacon keyframes. A sketch of this computation is given below.
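As an illustration, the following sketch contrasts the measured camera-to-beacon displacement (obtained, for example, via a PnP solve as described above) with the displacement predicted from the beacon's known position in {W'}; the 4x4 homogeneous-matrix conventions and all variable names are assumptions chosen for this sketch, not prescribed by the patent.

```python
import numpy as np

# Sketch of the beacon constraint r_{M_j K_i}. Conventions assumed here:
# T_WpK maps keyframe body coordinates to {W'}; T_CK (camera extrinsics)
# maps keyframe body coordinates to the camera frame.
def beacon_residual(t_meas_C, T_WpK, T_CK, t_beacon_Wp):
    """t_meas_C: beacon position measured in the camera frame (e.g. via PnP);
    t_beacon_Wp: known global beacon position in {W'}. Returns r_{M_j K_i}."""
    T_WpC = T_WpK @ np.linalg.inv(T_CK)             # camera pose in {W'}
    p = np.linalg.inv(T_WpC) @ np.append(t_beacon_Wp, 1.0)
    return t_meas_C - p[:3]                         # measured minus predicted

def total_beacon_constraint(terms):
    """Accumulate squared residuals over all beacons in all beacon
    keyframes to form r_M."""
    return sum(float(r @ r) for r in (beacon_residual(*t) for t in terms))
```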
Step 205, using the constraint of the beacons on the beacon keyframe poses, combined with the odometry inter-frame constraints $r_O$ (visual odometry inter-frame constraints and/or inertial odometry inter-frame constraints) and the closed-loop constraints $r_C$ generated from visual features, optimize the first keyframe poses by a graph-optimization method. The optimized keyframe poses serve as the second keyframe poses in the second coordinate system; the set of second keyframe poses is denoted S'.
Referring to fig. 3, fig. 3 is a schematic diagram of pose-graph optimization after adding beacon constraints to the sample keyframes. Among the constraints, the two-dimensional code constraint is the link that associates the two-dimensional code navigation mode with the visual texture navigation mode, and is therefore mandatory during optimization. As an optional implementation, the camera extrinsics and the coordinate positions of the beacons in the second coordinate system are held fixed within the two-dimensional code constraint. The closed-loop constraints generated from visual features eliminate odometry error. Within the odometry inter-frame constraints, the inertial odometry inter-frame constraint (e.g. from an encoder odometer) and the visual odometry inter-frame constraint are optional constraints.
Preferably, to improve optimization accuracy, the inertial odometry inter-frame constraint (e.g. the encoder odometry inter-frame constraint) and the visual odometry inter-frame constraint may both be selected.
Based on these constraints, an objective function accumulating the constraints is constructed:

$$T_{WK}^{*} = \arg\min_{T_{WK}} \left( r_M + r_O + r_C \right)$$

where $T_{WK}$ denotes the first keyframe poses, i.e. the global poses of the sample keyframes in the first coordinate system.

The odometry inter-frame constraint $r_{O_i}$ of sample keyframe $i$ can be obtained by:

$$r_{O_i} = \left\| \hat{T}_{K_i K_{i+1}} - T_{WK_i}^{-1}\, T_{WK_{i+1}} \right\|^2$$

where $\hat{T}_{K_i K_{i+1}}$ is the odometry relative-pose observation between the adjacent $i$-th and $(i+1)$-th sample keyframes, i.e. the odometry relative pose measurement between adjacent keyframes; $T_{WK_i}$ is the first keyframe pose (global pose) of the $i$-th sample keyframe in the first coordinate system, and $T_{WK_{i+1}}$ is that of the $(i+1)$-th sample keyframe.

The closed-loop constraint $r_{C_{ij}}$ can be obtained by:

$$r_{C_{ij}} = \left\| \hat{T}_{K_i K_j} - T_{WK_i}^{-1}\, T_{WK_j} \right\|^2$$

where $T_{WK_i}$ and $T_{WK_j}$ are the first keyframe poses of the $i$-th and $j$-th sample keyframes forming a loop, and $\hat{T}_{K_i K_j}$ is the relative-pose observation between the two sample keyframes obtained by loop detection, i.e. the relative pose measurement between the two keyframes returned by loop detection.

The total inter-frame constraint $r_O$ is then the sum of the individual inter-frame constraints:

$$r_O = \sum_i r_{O_i}$$

and the total closed-loop constraint $r_C$ is the sum of the keyframe constraints of each loop:

$$r_C = \sum_{(i,j)} r_{C_{ij}}$$

Substituting the first keyframe poses into the objective function and solving, by nonlinear optimization such as the least squares method, for the poses at which the objective function reaches its minimum yields the second keyframe poses in the second coordinate system.
In summary, position constraints between the beacons and the beacon keyframes are built by extracting beacon information from the sample keyframes; the global poses of the sample keyframes in the first coordinate system serve as nodes, and the odometry observations and beacon observations serve as constraint edges. A nonlinear graph-optimization algorithm iteratively optimizes the first keyframe poses and outputs the optimal estimate of the keyframe poses in the second coordinate system. The optimization algorithm gives the whole system tolerance to sensor noise and improves the accuracy and consistency of the resulting keyframe poses. A compact sketch of such an optimization is given below.
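The following is a minimal sketch of this graph optimization, assuming an SE(2) parameterization (x, y, theta) for each keyframe pose and using `scipy.optimize.least_squares` as a stand-in for whatever nonlinear solver an implementation would actually use; the data layouts and the `predict` callback for beacon terms are assumptions, and angle wrap-around in the residual subtraction is ignored for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def se2_rel(a, b):
    """Relative pose a^{-1} * b for SE(2) poses (x, y, theta)."""
    dx, dy, dth = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy,
                     np.arctan2(np.sin(dth), np.cos(dth))])

def residuals(x, n, odo_meas, loop_meas, beacon_terms):
    poses = x.reshape(n, 3)
    res = []
    for i, meas in odo_meas:           # odometry constraints r_O (frames i, i+1)
        res.append(se2_rel(poses[i], poses[i + 1]) - meas)
    for (i, j), meas in loop_meas:     # closed-loop constraints r_C
        res.append(se2_rel(poses[i], poses[j]) - meas)
    for i, t_meas, predict in beacon_terms:   # beacon constraints r_M
        res.append(t_meas - predict(poses[i]))
    return np.concatenate(res)

# x0 stacks the first keyframe poses; the solution approximates the second
# keyframe poses in {W'}:
# sol = least_squares(residuals, x0,
#                     args=(n, odo_meas, loop_meas, beacon_terms))
# second_poses = sol.x.reshape(n, 3)
```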
And step 206, establishing a pose mapping relation by using the first key frame pose of each sample key frame in the first coordinate system for the visual texture navigation mode and the second key frame pose in the second coordinate system for the two-dimensional code navigation mode.
The theoretical basis is that, within a certain neighborhood of a keyframe position, the local two-dimensional code coordinate map and the local texture map satisfy a rigid-body pose transformation constraint; on this basis the mapping relationship between the first keyframe pose in the first coordinate system and the second keyframe pose in the second coordinate system is constructed.
In an optional embodiment, for convenience of lookup, the mapping relationship between the first keyframe pose set S in the first coordinate system and the second keyframe pose set S' in the second coordinate system is established and stored according to the frame identifiers of the keyframes. For example, the mapping relationship may take the following form:

| Frame identifier | First keyframe pose (in {W}) | Second keyframe pose (in {W'}) |
| --- | --- | --- |
| $K_1$ | $T_{WK_1}$ | $T_{W'K_1}$ |
| $K_2$ | $T_{WK_2}$ | $T_{W'K_2}$ |
| ... | ... | ... |
| $K_n$ | $T_{WK_n}$ | $T_{W'K_n}$ |
through the steps, the mapping relation between the first key frame pose in the first coordinate system and the second key frame pose in the second coordinate system can be established, so that the mobile robot pose in the texture map can be converted into the mobile robot pose in the two-dimensional code coordinate map by taking the first key frame pose and the second key frame pose as reference poses in the positioning process.
For ease of understanding of the construction process, refer to fig. 4, a schematic diagram of the relationships between the physical quantities used to construct the pose mapping relationship. The first keyframe poses are obtained from the image data and odometry data in the first coordinate system, and the beacon keyframes containing beacons are extracted from them. From a beacon keyframe, the position $\hat{t}^{C_i}_{M_j}$ of the beacon relative to the camera that acquired the keyframe is obtained; from the beacon's global position information in the second coordinate system, $\bar{t}^{W'}_{M_j}$ is obtained. From $\hat{t}^{C_i}_{M_j}$ and $\bar{t}^{W'}_{M_j}$ the constraint of the beacon on the beacon keyframe is formed; the first keyframe poses are then optimized together with the odometry inter-frame constraints and the closed-loop constraints generated from visual features, and the optimized first keyframe poses are taken as the second keyframe poses in the second coordinate system.
Fig. 5 is an exemplary flowchart illustrating a method for converting positioning results between different navigation modes according to an embodiment of the present invention. Referring to fig. 5, in this embodiment, the method for converting the positioning result between different navigation manners may include the following steps (for example, executed by a mobile robot or a data processing device):
step 501, obtaining a first positioning result of the mobile robot in a first coordinate system for a first navigation mode, and a pose mapping relationship between the first coordinate system and a second coordinate system for a second navigation mode, wherein the pose mapping relationship includes a first keyframe pose of each sample keyframe in a sample keyframe set obtained by pre-sampling in the first coordinate system and a second keyframe pose in the second coordinate system.
If the process is executed by a mobile robot, the first positioning result that can be obtained in this step may be determined by the mobile robot, for example, a sampling key frame included in the sampling image data collected by a camera of the mobile robot is obtained, and the sampling key frame is used to determine the first positioning result.
If the process is executed by a data processing device such as a server, the first positioning result determined by the mobile robot may be directly obtained from the mobile robot.
Step 502, selecting a sample key frame from the sample key frame set as a reference key frame, wherein the correlation degree between the reference key frame and the first positioning result is higher than the correlation degree between other sample key frames in the sample key frame set and the first positioning result.
Preferably, the degree of association may be determined from spatial position: determine the spatial distance between the first keyframe pose of each sample keyframe in the set and the first positioning result, and take the sample keyframe with the smallest spatial distance as the reference keyframe. The smallest spatial distance to the first positioning result indicates the highest degree of association with it.
As another preference, this step may determine the degree of association from the image similarity of the local region: screen out the sample keyframes whose first keyframe pose falls within a preset neighborhood range of the first positioning result; determine the similarity between each screened sample keyframe and the sampling keyframe used to determine the first positioning result (the sampling keyframe is obtained from the sampling image data collected by the mobile robot); and take the sample keyframe with the highest similarity as the reference keyframe. The highest similarity to the sampling keyframe that determined the first positioning result likewise indicates the highest degree of association with it. A sketch of both selection strategies follows.
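The two selection strategies just described might be sketched as follows, reusing the `KeyframePosePair` records assumed earlier; the `similarity` callback (e.g. a bag-of-words score) and the 2.0 m neighborhood radius are placeholders, not values taken from the patent.

```python
import numpy as np

def nearest_keyframe(keyframes, robot_xy):
    """Strategy 1: reference keyframe = smallest spatial distance in {W}."""
    return min(keyframes,
               key=lambda kf: np.hypot(kf.pose_W[0] - robot_xy[0],
                                       kf.pose_W[1] - robot_xy[1]))

def most_similar_keyframe(keyframes, robot_xy, sampled_frame,
                          similarity, radius=2.0):
    """Strategy 2: among keyframes within `radius` of the first positioning
    result, pick the one most similar to the current sampling keyframe."""
    nearby = [kf for kf in keyframes
              if np.hypot(kf.pose_W[0] - robot_xy[0],
                          kf.pose_W[1] - robot_xy[1]) <= radius]
    return max(nearby, key=lambda kf: similarity(kf, sampled_frame))
```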
Step 503, based on the pose deviation between the first key frame pose and the second key frame pose in the pose mapping relationship of the reference key frame, converting the first positioning result into a second positioning result in the second coordinate system.
Thereafter, the position information of the mobile robot may be published, wherein the position information includes the second positioning result. For example, the position information of the mobile robot may be published to the robot scheduling system, so that the robot scheduling system can implement the position monitoring and scheduling of the second navigation mode according to the second positioning result.
Namely, the first positioning result representing the robot pose of the mobile robot in the first navigation mode is converted into the second positioning result representing the robot pose of the mobile robot in the second navigation mode and then is issued.
As can be seen from the above, this process uses the pre-created pose mapping relationship between the first coordinate system for the first navigation mode and the second coordinate system for the second navigation mode (e.g. the relationship created by the process shown in fig. 1) to compensate for the deviation that arises, due to factors such as mapping-scale differences and configuration distortion, when maps are aligned directly by a single two-dimensional planar pose transformation, thereby improving the accuracy of the mapped positioning result.
In this process the sample keyframes used to create the pose mapping relationship serve as references. Compared with relying on beacons as mapping reference points, using the sample keyframes as references avoids jumps in the mapped positioning output caused by beacons spaced too far apart or distributed unevenly.
Referring to fig. 6, fig. 6 is a schematic flow chart illustrating a process of converting positioning results between different navigation modes based on the flow chart shown in fig. 5 in the mobile robot positioning process. Taking the example that the first navigation mode is a visual texture navigation mode and the second navigation mode is a two-dimensional code navigation mode, after initialization such as reading texture maps and keyframe pose mapping relationships, the following steps are executed (for example, executed by a mobile robot or a data processing device):
step 600, obtaining current texture image data and odometer data, and processing the obtained texture image data and the obtained odometer data through a visual texture navigation positioning algorithm to obtain a first seat in a visual texture navigation modeThe first positioning result under the coordinate system, namely the global pose of the mobile robot under the first coordinate system, is recorded as TWA
Step 601, obtaining a first positioning result of the mobile robot in a first coordinate system for a visual texture navigation mode and a pre-created pose mapping relation.
Step 602, based on the first positioning result of the mobile robot, search the sample keyframe set S in the first coordinate system for the sample keyframe closest to the first positioning result, obtaining the reference keyframe $K_{\min}$.
As an alternative, this step may first search the sample keyframe set for the sample keyframes whose first keyframe pose falls within a neighborhood range of the first positioning result, then determine the similarity between each found sample keyframe and the sampling keyframe used to determine the first positioning result, and take the sample keyframe with the highest similarity as the reference keyframe $K_{\min}$. The shape and size of the neighborhood range can be set according to the specific situation.
Step 603, based on the reference keyframe, look up in the pose mapping relationship the first keyframe pose $T_{WK_{\min}}$ of the reference keyframe in the first coordinate system and its second keyframe pose $T_{W'K_{\min}}$ in the second coordinate system.
Step 604, taking the looked-up reference keyframe poses as reference poses, map the first positioning result in the first coordinate system (texture map coordinate system) for the visual texture navigation mode to a second positioning result in the second coordinate system (two-dimensional code coordinate system) for the two-dimensional code navigation mode.

This step relies on the fact that, within a certain neighborhood of a specific position (such as a keyframe position), the local maps of the different navigation modes satisfy a rigid-body pose transformation, i.e. they are related linearly within that neighborhood of the keyframe. This means that the first keyframe pose of the reference keyframe in the first coordinate system can be put in correspondence with its second keyframe pose in the second coordinate system, so the reference keyframe can serve as the reference. In one embodiment, the relative pose between the current positioning result of the mobile robot and the keyframe pose of the reference keyframe is taken to be the same in the first and second coordinate systems; that is, within a certain neighborhood of the first positioning result, the relative pose of the robot's positioning result in the two-dimensional code map with respect to the second keyframe pose of the reference keyframe is consistent with the relative pose of its positioning result in the texture map with respect to the first keyframe pose of the reference keyframe. Expressed mathematically:

$$T_{K'_{\min}A'} = T_{K_{\min}A}$$

where $T_{K'_{\min}A'}$ is the relative pose of the robot positioning result in the second coordinate system with respect to the second keyframe pose of the reference keyframe, and $T_{K_{\min}A}$ is the relative pose of the positioning result in the first coordinate system with respect to the first keyframe pose of the reference keyframe.

The relative pose between the robot pose in the first coordinate system and the first keyframe pose of the reference keyframe follows from the global pose of the reference keyframe in the first coordinate system and the first positioning result (robot pose) of the mobile robot in the first coordinate system:

$$T_{K_{\min}A} = T_{WK_{\min}}^{-1}\, T_{WA}$$

The second positioning result of the mobile robot in the second coordinate system is then obtained by coordinate transformation from the first keyframe pose of the reference keyframe in the first coordinate system, the second keyframe pose of the reference keyframe in the second coordinate system obtained from the mapping relationship, and the first positioning result (global robot pose) in the first coordinate system:

$$T_{W'A'} = T_{W'K_{\min}}\, T_{WK_{\min}}^{-1}\, T_{WA}$$

where $T_{W'A'}$ is the second positioning result in the second coordinate system, $T_{W'K_{\min}}$ is the second keyframe pose of the reference keyframe obtained from the pose mapping relationship, and $T_{WK_{\min}}$ is the first keyframe pose of the reference keyframe in the first coordinate system.
In this way, the first positioning result of visual texture navigation is converted into the second positioning result under the two-dimensional code map, i.e. the robot pose of the mobile robot in the second coordinate system. A minimal sketch of this final transformation follows.
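Assuming planar poses represented as 3x3 homogeneous SE(2) matrices (the parameterization is an assumption of this sketch), the final transformation $T_{W'A'} = T_{W'K_{\min}}\, T_{WK_{\min}}^{-1}\, T_{WA}$ reduces to three matrix products:

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Build a 3x3 homogeneous matrix for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def convert_pose(T_WA, T_WKmin, T_WpKmin):
    """T_{W'A'} = T_{W'Kmin} * T_{WKmin}^{-1} * T_{WA}: map the first
    positioning result into the second coordinate system using the
    reference keyframe's pose pair from the mapping table."""
    return T_WpKmin @ np.linalg.inv(T_WKmin) @ T_WA

# Example with hypothetical values, kf being a KeyframePosePair record:
# T_WpAp = convert_pose(se2_matrix(1.2, 3.4, 0.1),
#                       se2_matrix(*kf.pose_W), se2_matrix(*kf.pose_Wp))
```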
Thereafter, the position information of the mobile robot may be issued to the robot scheduling system, wherein the position information includes the second positioning result.
To facilitate understanding of the quantities involved in the conversion, fig. 7 is a schematic diagram of their relationships. After the robot obtains a positioning result in the first coordinate system, it searches for the reference keyframe closest to that result, queries the pose of the reference keyframe in the second coordinate system by its frame identifier, and obtains the conversion result, i.e. the second positioning result in the second coordinate system, from the first positioning result in the first coordinate system, the global pose (first keyframe pose) of the reference keyframe in the first coordinate system, and the global pose (second keyframe pose) of the reference keyframe in the second coordinate system.
This embodiment, which converts the positioning result of visual texture navigation into a positioning result under the two-dimensional code map, can perform the conversion in real time during positioning, and solves the problem of converting positioning results when the navigation mode adopted on the robot side differs from that adopted on the robot-scheduling-system side. In a use scene transitioning from existing two-dimensional code navigation to visual texture navigation, the planning and scheduling topology of the existing robot scheduling system can be reused after pose mapping, and mixed operation with the two-dimensional code positioning mode becomes possible, avoiding the need to redraw the scheduling topology because the navigation mode changed.
Referring to fig. 8, fig. 8 is a schematic diagram of a device for converting positioning results between different navigation modes according to an embodiment of the present invention. The conversion device comprises a conversion device and a control device,
an information obtaining module 801, configured to obtain a first positioning result of the mobile robot in a first coordinate system used for a first navigation manner, and a pose mapping relationship between the first coordinate system and a second coordinate system used for a second navigation manner, where the pose mapping relationship includes a first keyframe pose of each sample keyframe in a sample keyframe set obtained by pre-sampling in the first coordinate system and a second keyframe pose in the second coordinate system,
a reference selection module 802, configured to select a sample key frame from the sample key frame set as a reference key frame, where a correlation degree between the reference key frame and the first positioning result is higher than a correlation degree between other sample key frames in the sample key frame set and the first positioning result;
a conversion executing module 803, configured to convert the first positioning result into a second positioning result in the second coordinate system based on a pose deviation between a first key frame pose and a second key frame pose of the reference key frame in the pose mapping relationship.
In addition, the conversion apparatus may further include an information distribution module 804 (indicated by a dotted line) configured to distribute the position information of the mobile robot, where the position information includes the second positioning result.
Preferably, the reference selection module 802 may be configured to determine the degree of association based on spatial position, for example by determining the spatial distance between the first key frame pose of each sample key frame in the sample key frame set and the first positioning result, and selecting the sample key frame with the smallest spatial distance as the reference key frame; the smallest spatial distance to the first positioning result indicates the highest degree of association with it.
Alternatively, the reference selection module 802 may be configured to determine the degree of association based on image similarity within a local region, for example by screening out the sample key frames whose first key frame poses fall within a preset neighborhood of the first positioning result, determining the similarity between each screened sample key frame and the sampling key frame from which the first positioning result was determined (the sampling key frame is obtained from the sampling image data collected by the mobile robot), and selecting the sample key frame with the highest similarity as the reference key frame; the highest similarity to that sampling key frame likewise indicates the highest degree of association with the first positioning result.
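As a sketch only, the two selection strategies might look as follows, assuming each sample key frame record carries its first key frame pose as a pose_first attribute of the form (x, y, theta) and assuming a caller-supplied image-similarity function; the neighborhood radius and all names are assumptions, not taken from the patent.

import math

def nearest_keyframe(keyframes, first_result):
    # Strategy 1: smallest spatial distance between the first key frame
    # pose and the first positioning result (translation part only).
    x, y = first_result[0], first_result[1]
    return min(keyframes,
               key=lambda kf: math.hypot(kf.pose_first[0] - x,
                                         kf.pose_first[1] - y))

def most_similar_keyframe(keyframes, first_result, query_frame,
                          similarity, radius=2.0):
    # Strategy 2: restrict to key frames whose first key frame pose falls
    # within a preset neighborhood of the first positioning result, then
    # rank them by similarity to the sampling key frame that produced it.
    x, y = first_result[0], first_result[1]
    nearby = [kf for kf in keyframes
              if math.hypot(kf.pose_first[0] - x,
                            kf.pose_first[1] - y) <= radius]
    if not nearby:
        return None
    return max(nearby, key=lambda kf: similarity(kf, query_frame))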
In practical applications, the conversion device may reside on the mobile robot side or on the side of a data processing device such as a server.
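Tying the sketches above together, an end-to-end use on either side could look like this (publish is a hypothetical stand-in for reporting to the robot scheduling system, and pose_second is the assumed attribute holding the key frame's second key frame pose):

ref = nearest_keyframe(keyframes, first_result)
second_result = convert(first_result, ref.pose_first, ref.pose_second)
publish(second_result)  # position information including the second positioning result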
Referring to fig. 9, fig. 9 is a schematic diagram of a mobile robot according to an embodiment of the present invention, where the mobile robot includes a camera 900 and a processor 901, and the processor 901 may be configured to perform:
obtaining a first positioning result of the mobile robot in a first coordinate system for a first navigation mode, and a pose mapping relation between the first coordinate system and a second coordinate system for a second navigation mode, wherein the first positioning result may be determined from sampling key frames contained in the sampling image data collected by the camera 900, and the pose mapping relation comprises, for each sample key frame in a pre-sampled sample key frame set, a first key frame pose in the first coordinate system and a second key frame pose in the second coordinate system;
selecting a sample key frame from the sample key frame set as a reference key frame, wherein the degree of association between the reference key frame and the first positioning result is higher than that between any other sample key frame in the set and the first positioning result;
and converting the first positioning result into a second positioning result in the second coordinate system based on the pose deviation between the first key frame pose and the second key frame pose of the reference key frame in the pose mapping relation.
In addition, the mobile robot may publish its position information, which includes the second positioning result.
The mobile robot shown in fig. 9 may also include a non-transitory computer-readable storage medium 902 that stores instructions; when a portion of these instructions is executed by the processor 901, the processor 901 performs the steps listed above.
In another embodiment, a data processing device is also provided. It may contain another processor with substantially the same functions as the processor 901 of the mobile robot shown in fig. 9, and may also include a storage medium storing instructions similar to those of the non-transitory computer-readable storage medium 902.
Referring to fig. 10, fig. 10 is a schematic diagram of an apparatus for constructing a pose mapping relationship between different navigation modes according to an embodiment of the present invention. The apparatus comprises:
an information collection module 1001, which obtains a sample key frame set including sample key frames from sample image data,
a pose determination module 1002, which determines a first key frame pose of each sample key frame in the sample key frame set under the first coordinate system used by a first navigation mode,
a beacon extraction module 1003, which extracts beacon key frames, i.e. sample key frames containing a beacon, from the sample key frame set,
a constraint determination module 1004, which determines the constraint a beacon imposes on its beacon key frame based on the beacon's location information in that key frame,
a pose optimization module 1005, which graph-optimizes the first key frame pose of each sample key frame under at least the constraints of the beacons on the beacon key frames, to obtain a second key frame pose in a second coordinate system used by a second navigation mode (a sketch of this step follows the list),
a mapping creation module 1006, which creates the pose mapping relationship from the first key frame pose and the second key frame pose of each sample key frame.
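The modules above describe the optimization only through its constraint terms. The following is a minimal pose-graph sketch under strong simplifying assumptions: planar (x, y, theta) poses, unweighted squared residuals, beacon observations reduced to 2D displacements with the camera extrinsics folded into the measurements, and closed-loop constraints entering as extra relative-pose terms of the same form as the odometry terms. Every name and residual form here is an illustration, not the patented formulation; the solver is scipy.optimize.least_squares.

import numpy as np
from scipy.optimize import least_squares

def beacon_residual(kf_pose, beacon_xy_map, beacon_xy_meas):
    # Beacon position predicted in the key frame's local frame minus the
    # displacement actually measured from the image.
    x, y, th = kf_pose
    c, s = np.cos(th), np.sin(th)
    dx, dy = beacon_xy_map[0] - x, beacon_xy_map[1] - y
    predicted = np.array([c * dx + s * dy, -s * dx + c * dy])
    return predicted - np.asarray(beacon_xy_meas)

def relative_residual(pose_i, pose_j, rel_meas):
    # Relative pose of key frame j seen from key frame i minus the measured
    # relative pose; used for odometry and closed-loop constraints alike.
    xi, yi, thi = pose_i
    xj, yj, thj = pose_j
    c, s = np.cos(thi), np.sin(thi)
    dx, dy = xj - xi, yj - yi
    rel = np.array([c * dx + s * dy,
                    -s * dx + c * dy,
                    np.arctan2(np.sin(thj - thi), np.cos(thj - thi))])
    err = rel - np.asarray(rel_meas)
    err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap the angle
    return err

def optimize_poses(first_poses, beacon_obs, relative_obs):
    # first_poses: (N, 3) first key frame poses, used as the initial guess.
    # beacon_obs: list of (kf_index, beacon_xy_map, beacon_xy_meas).
    # relative_obs: list of (i, j, rel_meas) odometry or loop constraints.
    n = len(first_poses)

    def residuals(flat):
        poses = flat.reshape(n, 3)
        parts = [beacon_residual(poses[k], m, z) for k, m, z in beacon_obs]
        parts += [relative_residual(poses[i], poses[j], r)
                  for i, j, r in relative_obs]
        return np.concatenate(parts)

    sol = least_squares(residuals, np.asarray(first_poses, float).ravel())
    return sol.x.reshape(n, 3)

Because the beacon positions are known in the second coordinate system, the beacon terms are what anchor the optimized graph to that system; the solution therefore plays the role of the second key frame poses.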
In another embodiment, another data processing apparatus is provided. It may include a processor for performing the steps of the flow shown in fig. 1, and may also include a non-transitory computer-readable storage medium storing the instructions that the processor executes to perform those steps.
Each of the aforementioned processors may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for converting positioning results between different navigation modes.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for constructing a pose mapping relationship between different navigation modes.
Since the device, network-side device, and storage medium embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
In this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. The terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The above description presents only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (15)

1. A method for converting positioning results between different navigation modes is characterized by comprising the following steps:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one of the sample key frames from the set of sample key frames as a reference key frame, wherein the reference key frame is associated with the first positioning result more closely than other sample key frames in the set of sample key frames;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
2. The method of converting of claim 1, wherein obtaining a first positioning result of the mobile robot in a first coordinate system for a first navigation mode comprises:
acquiring a sampling key frame contained in sampling image data acquired by a camera of the mobile robot;
determining the first positioning result using the sampling key frame.
3. The conversion method of claim 1, wherein selecting one of the sample key frames from the set of sample key frames as a reference key frame comprises:
determining a spatial distance between the first keyframe pose of each sample keyframe in the set of sample keyframes and the first positioning result;
determining the sample key frame with the smallest spatial distance as the reference key frame.
4. The conversion method of claim 1, wherein selecting one of the sample key frames from the set of sample key frames as a reference key frame comprises:
screening out the sample keyframes whose first keyframe pose falls within the preset neighborhood range of the first positioning result;
determining similarity between the screened sample key frame and a sampling key frame for determining the first positioning result, wherein the sampling key frame is obtained from sampling image data acquired by the mobile robot,
and determining the sample key frame with the highest similarity as the reference key frame.
5. The transformation method according to claim 1, wherein the pose mapping relationship is created by:
obtaining the sample key frame set including the sample key frame from sample image data,
determining the first keyframe pose for each of the sample keyframes in the set of sample keyframes,
extracting beacon key frames including beacons from the sample key frames of the sample key frame set,
determining a beacon's constraint on the beacon key frame based on location information of beacons in the beacon key frame,
graph-optimizing the first keyframe pose of each of the sample keyframes through at least the constraints of the beacons on the beacon keyframes,
and establishing and obtaining the pose mapping relation by using the first key frame pose of each sample key frame and the second key frame pose obtained after the graph optimization.
6. A method for constructing a pose mapping relation among different navigation modes is characterized by comprising the following steps:
obtaining a sample key frame set containing sample key frames from sample image data,
determining a first keyframe pose of each sample keyframe in the set of sample keyframes under a first coordinate system for a first navigation mode,
extracting beacon key frames including beacons from the sample key frames of the sample key frame set,
determining a beacon's constraint on the beacon key frame based on location information of beacons in the beacon key frame,
performing graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacon on the beacon keyframe to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and establishing a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
7. The method of claim 6, wherein determining beacon constraints for the beacon key frames based on location information for beacons in the beacon key frames comprises:
for each beacon in each of the beacon keyframes,
determining a difference between a first displacement measurement of the beacon relative to the camera of the beacon keyframe in the first coordinate system and a second displacement measurement of the beacon relative to the camera of the beacon keyframe in the second coordinate system, wherein the beacon keyframe in the second coordinate system is mapped from the beacon keyframe in the first coordinate system,
and accumulating the differences of all beacons in all beacon key frames to obtain the constraint of the beacons on the beacon key frames.
8. The method of claim 7,
the first displacement measurement value is determined according to a camera model relation between the feature point in the beacon key frame and the three-dimensional space point;
and the second displacement measurement value is obtained according to the position information of the beacon in the second coordinate system, the pose of the beacon key frame in the second coordinate system and the camera external parameters.
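As one plausible formalization of claims 7 and 8 (all symbols are introduced here for illustration and are not taken from the patent): for a beacon $i$ observed in beacon key frame $j$, the first displacement measurement $t^{(1)}_{ij}$ comes from the camera model relating the beacon's feature points to three-dimensional space points, while the second displacement measurement can be written as

$t^{(2)}_{ij} = T_{\mathrm{cam}}^{-1}\, T_j^{-1}\, p_i$

where $p_i$ is the beacon's position in the second coordinate system, $T_j$ is the pose of beacon key frame $j$ in the second coordinate system, and $T_{\mathrm{cam}}$ denotes the camera extrinsics. Accumulating the differences over all beacons in all beacon key frames then gives the constraint, for example as a sum of squared norms:

$C_{\mathrm{beacon}} = \sum_{j} \sum_{i} \bigl\| t^{(1)}_{ij} - t^{(2)}_{ij} \bigr\|^2$

The squared-norm form is an assumption; claim 7 only requires that the differences be accumulated.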
9. The method of claim 6, wherein graph-optimizing the first keyframe pose of each of the sample keyframes through at least the constraints of the beacons on the beacon keyframes comprises:
graph-optimizing the first keyframe pose of the sample keyframes through the constraints of the beacons on the beacon keyframes, the odometry interframe constraints of each of the sample keyframes, and the closed-loop constraints of each of the sample keyframes,
wherein the odometry interframe constraints comprise visual odometry interframe constraints and/or inertial odometry interframe constraints, and the odometry interframe constraints are obtained from the acquired odometry data.
10. The method of claim 9, wherein graph-optimizing the first keyframe pose of the sample keyframes through the constraints of the beacons on the beacon keyframes, the odometry interframe constraints of each of the sample keyframes, and the closed-loop constraints of each of the sample keyframes comprises:
constructing an objective function that is the sum of the constraints of the beacons on the beacon keyframes, the odometry interframe constraints of all the sample keyframes, and the closed-loop constraints of all the sample keyframes,
and determining, through nonlinear optimization, the poses that minimize the objective function as the second keyframe poses obtained after the graph optimization.
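Written out, the objective of claim 10 takes the following shape (notation as in the sketch after claim 8, with $\mathcal{O}$ the set of odometry inter-frame constraints and $\mathcal{L}$ the set of closed-loop constraints; the exact residual forms and weights are not specified by the claim):

$X^{*} = \arg\min_{X} \Bigl[\, C_{\mathrm{beacon}}(X) + \sum_{(i,j)\in\mathcal{O}} \bigl\| e^{\mathrm{odom}}_{ij}(X) \bigr\|^2 + \sum_{(i,j)\in\mathcal{L}} \bigl\| e^{\mathrm{loop}}_{ij}(X) \bigr\|^2 \,\Bigr]$

where $X$ collects the poses of all sample keyframes and the minimizer $X^{*}$ supplies the second keyframe poses obtained after the graph optimization.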
11. A device for converting positioning results between different navigation modes, the device comprising:
an information acquisition module, configured to obtain a first positioning result of the mobile robot in a first coordinate system for a first navigation manner and a pre-created pose mapping relationship between the first coordinate system and a second coordinate system for a second navigation manner, where the pose mapping relationship includes a first keyframe pose of each sample keyframe in a sample keyframe set obtained by pre-sampling in the first coordinate system and a second keyframe pose of the sample keyframe in the second coordinate system,
a reference selection module, configured to select one of the sample key frames from the sample key frame set as a reference key frame, where a degree of association between the reference key frame and the first positioning result is higher than a degree of association between other sample key frames in the sample key frame set and the first positioning result;
a transformation execution module, configured to transform the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference key frame between the first key frame pose and the second key frame pose in the pose mapping relationship.
12. A mobile robot, comprising a processor configured to perform:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one of the sample key frames from the set of sample key frames as a reference key frame, wherein the reference key frame is associated with the first positioning result more closely than other sample key frames in the set of sample key frames;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
13. A data processing device, characterized in that the data processing device comprises a processor configured to perform:
obtaining a first positioning result of the mobile robot in a first coordinate system used for a first navigation mode and a pre-created pose mapping relation between the first coordinate system and a second coordinate system used for a second navigation mode, wherein the pose mapping relation comprises a first key frame pose of each sample key frame in a sample key frame set obtained by pre-sampling in the first coordinate system and a second key frame pose in the second coordinate system,
selecting one of the sample key frames from the set of sample key frames as a reference key frame, wherein the reference key frame is associated with the first positioning result more closely than other sample key frames in the set of sample key frames;
converting the first positioning result into a second positioning result in the second coordinate system based on a pose deviation of the reference keyframe between the first keyframe pose and the second keyframe pose in the pose mapping relationship.
14. A device for constructing a pose mapping relation between different navigation modes is characterized by comprising,
an information collection module to obtain a sample key frame set including sample key frames from the sample image data,
a pose determination module that determines a first keyframe pose of each sample keyframe in the set of sample keyframes under a first coordinate system for a first navigation mode,
a beacon extraction module extracting a beacon key frame including a beacon from the sample key frames of the sample key frame set,
a constraint determination module that determines a constraint of a beacon on the beacon key frame based on location information of the beacon in the beacon key frame,
a pose optimization module that performs graph optimization on the first keyframe pose of each sample keyframe through at least beacon-to-beacon keyframe constraints to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and the mapping creation module is used for creating a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
15. A data processing device, characterized in that the data processing device comprises a processor configured to perform:
obtaining a sample key frame set containing sample key frames from sample image data,
determining a first keyframe pose of each sample keyframe in the set of sample keyframes under a first coordinate system for a first navigation mode,
extracting beacon key frames including beacons from the sample key frames of the sample key frame set,
determining a beacon's constraint on the beacon key frame based on location information of beacons in the beacon key frame,
performing graph optimization on the first keyframe pose of each sample keyframe through at least the constraint of the beacon on the beacon keyframe to obtain a second keyframe pose in a second coordinate system for a second navigation mode,
and establishing a pose mapping relation by using the first key frame pose and the second key frame pose of each sample key frame.
CN202011062611.0A 2020-09-30 2020-09-30 Method and device for converting positioning results among different navigation modes Active CN112129282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062611.0A CN112129282B (en) 2020-09-30 2020-09-30 Method and device for converting positioning results among different navigation modes

Publications (2)

Publication Number Publication Date
CN112129282A CN112129282A (en) 2020-12-25
CN112129282B (en) 2021-06-18

Family

ID=73843565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011062611.0A Active CN112129282B (en) 2020-09-30 2020-09-30 Method and device for converting positioning results among different navigation modes

Country Status (1)

Country Link
CN (1) CN112129282B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113483755B (en) * 2021-07-09 2023-11-07 北京易航远智科技有限公司 Multi-sensor combination positioning method and system based on non-global consistent map
CN115502971B (en) * 2022-09-15 2023-06-27 杭州蓝芯科技有限公司 Navigation docking method, system and equipment for coping with positioning switching jump

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN107705333A (en) * 2017-09-21 2018-02-16 歌尔股份有限公司 Space-location method and device based on binocular camera
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment
CN109974712A (en) * 2019-04-22 2019-07-05 广东亿嘉和科技有限公司 It is a kind of that drawing method is built based on the Intelligent Mobile Robot for scheming optimization
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
CN110501017A (en) * 2019-08-12 2019-11-26 华南理工大学 A kind of Mobile Robotics Navigation based on ORB_SLAM2 ground drawing generating method
CN111462207A (en) * 2020-03-30 2020-07-28 重庆邮电大学 RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN111561923A (en) * 2020-05-19 2020-08-21 北京数字绿土科技有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.