CN107516326B - Robot positioning method and system fusing monocular vision and encoder information


Info

Publication number
CN107516326B
CN107516326B (application CN201710574132.9A)
Authority
CN
China
Prior art keywords
pose, robot, monocular, coordinate system, initial value
Prior art date
2017-07-14
Legal status
Active
Application number
CN201710574132.9A
Other languages
Chinese (zh)
Other versions
CN107516326A (en)
Inventor
韦伟
李小娟
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
2017-07-14
Filing date
2017-07-14
Publication date
2020-04-03
Application filed by Institute of Computing Technology of CAS
Priority to CN201710574132.9A
Publication of CN107516326A
Application granted
Publication of CN107516326B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F 17/12 Simultaneous equations, e.g. systems of linear equations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Abstract

The invention relates to a robot positioning method and system fusing monocular vision and encoder data, comprising real-time calibration of the extrinsic parameters and the visual scale factor based on a sliding-window method, and loosely coupled pose fusion based on the same sliding-window method. A sliding-window pose graph is established from the pose information output by monocular vision and by the encoder; the relative pose between the vision sensor and the encoder is estimated automatically, and the scale factor of monocular vision is calculated. At the same time, the constructed sliding-window pose graph fuses the monocular visual positioning information with the encoder positioning information, which addresses both the poor long-term positioning accuracy of the encoder and the poor robustness of visual positioning; the sliding-window pose-graph formulation effectively ensures that the computational cost of the fusion method does not grow over time, making the algorithm suitable for the onboard embedded devices of mobile robots.

Description

Robot positioning method and system fusing monocular vision and encoder information
Technical Field
The invention relates to the technical field of mobile robot positioning, in particular to a robot positioning method and system integrating monocular vision and encoder information.
Background
With the continuous development of robotics, the demand for autonomous positioning and navigation of service robots in natural environments keeps growing. Odometry computed from an encoder according to a robot kinematic model is a common method in the robotics field, but its accumulated error cannot be eliminated, so positioning accuracy degrades after long runs. Monocular visual positioning has developed rapidly in recent years, with visual odometry methods such as SVO (Semi-direct Visual Odometry) and DSO (Direct Sparse Odometry) appearing in succession. These methods feature small accumulated error and high positioning accuracy, but they are easily affected by external conditions such as illumination, so their robustness is poor, and monocular vision cannot recover the true scale factor. Fusing the encoder with monocular vision for positioning therefore exploits the advantages of both sensors and localizes the robot robustly.
Current fusion positioning approaches fall into two categories: filtering and pose-graph optimization. Most filtering methods adopt an Extended Kalman Filter (EKF) framework, in which the encoder odometry predicts the robot pose and visual measurements update it. In pose-graph optimization, the robot poses form the nodes of a graph and the sensor measurements form its edges, and the robot state is optimized iteratively over the constructed pose graph. A standard EKF uses each measurement only once, whereas pose-graph optimization can relinearize the measurements many times, so graph optimization achieves higher positioning accuracy and has been widely adopted recently. However, the computational complexity of the pose graph grows as nodes are added, which hinders real-time computation on robot embedded devices.
Disclosure of Invention
In order to solve the above technical problem, the invention limits the number of pose-graph nodes with a sliding window, removing the oldest pose node while losing as little as possible of the information carried by the removed node, so as to meet the real-time positioning requirement of the robot.
Specifically, the invention discloses a robot positioning method fusing monocular vision and encoder data, which comprises the following steps:
step 1, collecting encoder data through a plurality of positioning sensors of a robot, calculating an encoding pose change matrix of a robot coordinate system according to the encoder data and a robot motion model, collecting a monocular visual image through a camera carried by the robot, processing the monocular visual image according to a computer visual geometric algorithm, and calculating a visual pose change matrix of the camera coordinate system;
step 2, calculating a transformation matrix initial value, a monocular scale factor initial value and a translation vector between the camera coordinate system and the robot coordinate system according to the visual pose change matrix and the coding pose change matrix;
step 3, constructing a pose graph by taking the pose of the robot in the robot coordinate system, the initial value of the transformation matrix and the initial value of the monocular scale factor as nodes, and taking the pose change measured by each sensor as edge constraint in the pose graph;
and step 4, when the number of nodes in the pose graph exceeds the size of a preset sliding window, removing the oldest pose node and the edge constraint thereof in the pose graph, adding the removed edge constraint as a prior constraint into the pose graph, adding the newest pose node and the edge constraint thereof into the pose graph, and performing iterative solution on the current pose graph by adopting a nonlinear least-squares method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot.
The robot positioning method fusing the monocular vision and the encoder data, wherein the encoder calculates the pose of the robot according to the encoder data and the odometer model; the camera is used as a monocular odometer, and the pose of the camera is calculated according to the monocular visual image and a visual odometry method.
The robot positioning method fusing monocular vision and encoder data, wherein the step 2 comprises: constructing a system of constraint equations by the hand-eye calibration formulation according to the visual pose change matrix and the coding pose change matrix, and directly calculating the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the scale factor of the monocular odometer.
The robot positioning method fusing the monocular vision and the encoder data is characterized in that each positioning sensor works independently, and the positioning information output by each positioning sensor is collected to be used as the encoder data.
The robot positioning method fusing monocular vision and encoder data, wherein the step 4 comprises: and when the number of the nodes in the pose graph exceeds the sliding window, removing the oldest node through marginalization.
The invention also provides a robot positioning system fusing monocular vision and encoder data, which comprises:
the change matrix establishing module is used for acquiring encoder data through a plurality of positioning sensors of the robot, calculating an encoding pose change matrix of a robot coordinate system according to the encoder data and a robot motion model, acquiring a monocular visual image through a camera carried by the robot, processing the monocular visual image according to a computer visual geometric algorithm, and calculating a visual pose change matrix of the camera coordinate system;
the calculation module is used for calculating a transformation matrix initial value, a monocular scale factor initial value and a translation vector between the camera coordinate system and the robot coordinate system according to the visual pose change matrix and the coding pose change matrix;
the pose graph constructing module is used for constructing a pose graph taking the pose of the robot in the robot coordinate system, the initial value of the transformation matrix and the initial value of the monocular scale factor as nodes, and taking the pose change measured by each sensor as edge constraint in the pose graph;
and the pose information calculation module is used for removing the oldest pose node and the edge constraint thereof in the pose graph when the number of nodes in the pose graph exceeds the size of a preset sliding window, adding the removed edge constraint as a prior constraint into the pose graph, simultaneously adding the newest pose node and the edge constraint thereof into the pose graph, and iteratively solving the current pose graph by adopting a nonlinear least-squares method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot.
The robot positioning system fusing the monocular vision and the encoder data, wherein the encoder calculates the pose of the robot according to the encoder data and the odometer model; the camera is used as a monocular odometer, and the pose of the camera is calculated according to the monocular visual image and a visual odometry method.
The robot positioning system fusing monocular vision and encoder data, wherein the calculation module comprises: and according to the visual pose change matrix and the coding pose change matrix, constructing a constraint equation set by a hand-eye calibration solution, and directly calculating the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the scale factor of the monocular odometer.
The robot positioning system fusing the monocular vision and the encoder data is characterized in that each positioning sensor works independently, and the positioning information output by each positioning sensor is collected to be used as the encoder data.
The robot positioning system fusing monocular vision and encoder data, wherein the pose information calculation module is further configured to: when the number of nodes in the pose graph exceeds the sliding window, remove the oldest node through marginalization.
The beneficial effects of the invention include: the transformation matrix between the camera coordinate system and the robot coordinate system and the scale factor of the monocular odometer can be calculated and optimized automatically; and the sliding-window pose graph ensures that the computational load of the system does not grow over time while fusing the monocular visual odometry information with the encoder odometry information, effectively eliminating the accumulated odometry error and yielding a better positioning result.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of a system coordinate system;
FIG. 3 is a schematic diagram of the pose graph;
FIG. 4 is a schematic flow chart of the sliding-window pose graph of the present invention.
Detailed Description
In order to make the aforementioned features and effects of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
FIG. 1 shows the structure of the present invention. The extrinsic parameter matrix between the camera coordinate system and the robot coordinate system and the initial value of the scale factor are solved from the positioning information calculated by the encoder and by the monocular vision module. After this solving is completed, a sliding-window pose graph is constructed from the initial values and the positioning information of each module, and solving the pose graph by nonlinear least squares yields the optimized extrinsic parameter matrix, scale factor, and positioning information.
In FIG. 2, 21 is the world coordinate system, 22 is the robot coordinate system, and 23 is the camera coordinate system. The world coordinate system coincides with the robot coordinate system at the initial moment. Taking a two-wheel differential-drive mobile robot as an example, the origin of the robot coordinate system is the midpoint between the two wheels, the x axis points straight ahead of the robot, the z axis points vertically upward, and the y axis follows from the right-hand rule. The origin of the camera coordinate system is the camera optical center; the z axis is perpendicular to the image plane, the x axis points right in the image plane, and the y axis points down in the image plane.
The whole process of the present invention will be described in detail with reference to fig. 3 and 4.
Step 1: encoder data are collected through a plurality of positioning sensors of the robot, and an encoding pose change matrix of the robot coordinate system is calculated from the encoder data and the robot motion model; a monocular visual image is collected by the camera carried by the robot and processed by a computer-vision geometric algorithm to calculate the visual pose change matrix of the camera coordinate system. Each positioning sensor operates independently, and the positioning information output by each positioning sensor is gathered as the encoder data.
Step 1 specifically comprises acquiring the camera images and encoder data at adjacent time points t and t+1 and performing pose calculation on them. From the robot motion model and the encoder data collected at adjacent time points, the pose change matrix from the robot coordinate system r_i at time i to the robot coordinate system r_j at time j is calculated:

^{r_i}T_{r_j}

The monocular visual images collected at adjacent time points by the camera of the monocular odometer are processed by a computer-vision geometric algorithm to calculate the pose change matrix from the camera coordinate system c_i at time i to the camera coordinate system c_j at time j:

^{c_i}T_{c_j}

where the camera is a monocular camera, and each pose change matrix has the form

T = [ R  t ; 0  1 ]

in which R is a rotation matrix and t = [t_x t_y t_z]^T is the translation along the x, y and z axes.
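By way of illustration only, and not as part of the claimed embodiment, the following Python sketch computes such an encoder pose change matrix for the two-wheel differential-drive model of FIG. 2 under a planar-motion assumption; the function name encoder_pose_change, the wheel-travel inputs and the wheel_base parameter are hypothetical.

    import numpy as np

    def encoder_pose_change(d_left, d_right, wheel_base):
        """Planar pose change ^{r_i}T_{r_j} of the robot frame between
        adjacent time points, from left/right wheel travel
        (differential-drive kinematic model).
        Returns a 3x3 homogeneous matrix [[R, t], [0, 1]]."""
        d = 0.5 * (d_left + d_right)               # travel of the frame origin
        dtheta = (d_right - d_left) / wheel_base   # heading change
        if abs(dtheta) < 1e-9:                     # straight-line motion
            dx, dy = d, 0.0
        else:                                      # motion along a circular arc
            r = d / dtheta
            dx, dy = r * np.sin(dtheta), r * (1.0 - np.cos(dtheta))
        c, s = np.cos(dtheta), np.sin(dtheta)
        return np.array([[c, -s, dx],
                         [s,  c, dy],
                         [0.0, 0.0, 1.0]])

    # Example: wheel travel (in metres) over one sampling interval
    T_ri_rj = encoder_pose_change(0.10, 0.12, wheel_base=0.35)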
Step 2: from the visual pose change matrix and the coding pose change matrix, the initial value of the transformation matrix between the camera coordinate system and the robot coordinate system, the initial value of the monocular scale factor, and the translation vector are calculated. Using the series of pose-change-matrix pairs measured by the encoder and by monocular vision, collected in step 1 while the robot moves over a period of time, the initial value of the transformation matrix between the camera coordinate system and the robot coordinate system and the initial value of the scale factor of the monocular odometer are solved. Because the extrinsic parameter matrix between the camera and the robot coordinate system cannot be calculated from a single group of data, data must be accumulated over a period of motion.
From the visual pose change matrix and the coding pose change matrix, a system of constraint equations is constructed following the hand-eye calibration formulation, and the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the scale factor of the monocular odometer are calculated directly. The robot coordinate system and the camera coordinate system are both fixed to the robot and related by a rigid-body transformation, so the robot pose changes calculated by the encoder and by the monocular visual odometer satisfy the constraint equations:

^{r_i}q_{r_j} ⊗ q_{rc} = q_{rc} ⊗ ^{c_i}q_{c_j}

(^{r_i}R_{r_j} - I) ^r t_c = λ ^r R_c ^{c_i}t_{c_j} - ^{r_i}t_{r_j}

where q is the quaternion corresponding to the rotation matrix R and ⊗ denotes quaternion multiplication. Solving these constraints yields the rotation matrix ^r R_c between the camera coordinate system c and the robot coordinate system r, the components t_x and t_y of the translation vector ^r t_c, and the monocular scale factor λ; here T denotes the rigid-body transformation matrix, and this solution serves as the initial value of the scale-factor variable. Because the robot generally moves on a single plane, the translation t_z along the z axis cannot be estimated; however, t_z has no effect on the subsequent calculation and can therefore be neglected.
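For illustration, the translation-and-scale part of these constraints can be stacked over many motion pairs and solved by linear least squares. The sketch below assumes planar motion and assumes the extrinsic yaw yaw_rc has already been recovered from the quaternion constraint; the function name and the motions container are hypothetical, not the patent's prescribed procedure.

    import numpy as np

    def extrinsic_initial_value(motions, yaw_rc):
        """Least-squares initial value of (t_x, t_y, lambda) from the planar
        hand-eye constraint (R_r - I) t_rc = lambda * R_rc t_c - t_r.
        motions: iterable of (R_r 2x2, t_r 2-vector, t_c 2-vector) pairs,
        one per relative motion measured by encoder and monocular vision."""
        c, s = np.cos(yaw_rc), np.sin(yaw_rc)
        R_rc = np.array([[c, -s], [s, c]])         # extrinsic yaw, assumed known
        A_rows, b_rows = [], []
        for R_r, t_r, t_c in motions:
            # Two rows per motion: [(R_r - I) | -R_rc t_c] [t_x t_y lam]^T = -t_r
            A_rows.append(np.hstack([R_r - np.eye(2),
                                     (-(R_rc @ t_c)).reshape(2, 1)]))
            b_rows.append(-t_r)
        A, b = np.vstack(A_rows), np.hstack(b_rows)
        (t_x, t_y, lam), *_ = np.linalg.lstsq(A, b, rcond=None)
        return t_x, t_y, lam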
Step 3: a pose graph is constructed with the pose of the robot in the robot coordinate system, the initial value of the transformation matrix, and the initial value of the monocular scale factor as nodes, and with the pose changes measured by each sensor as edge constraints in the pose graph. The robot body coordinate system at time t = 0 is defined as the world coordinate system; the poses of the robot coordinate system in the world coordinate system, the transformation matrix from the camera coordinate system to the robot coordinate system, and the scale factor are taken as the nodes of the pose graph, the pose changes between adjacent time points measured by the sensors are taken as the edge constraints in the pose graph, and a fusion framework based on the pose graph is thereby constructed. The encoder calculates the pose of the robot from the encoder data and the odometry model; the camera serves as a monocular odometer, and the pose of the camera is calculated from the monocular visual images by a visual odometry method.
FIG. 3 is a schematic diagram of the pose graph. The robot pose

x_i = ^w T_{r_i}

is a node of the pose graph: a circle in FIG. 3 represents the pose of the robot coordinate system r in the world coordinate system w at time i. The sensor measurements constrain the poses and form the edges between the nodes of the graph, drawn as square blocks in the figure. The general relationship between a sensor measurement and the nodes it connects is:

z_ij = h_ij(x_i, x_j) + n_ij

where h_ij(x_i, x_j) maps the poses of node i and node j to the measured value z_ij, and n_ij is zero-mean white Gaussian noise, n_ij ~ N(0, Σ_ij). The problem of optimizing the poses under the node and edge constraints of the pose graph is then converted into solving the maximum-likelihood estimation:

X̂ = argmax_X ∏_{i,j} p(z_ij | x_i, x_j)
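Under the Gaussian-noise assumption, this maximum-likelihood estimation is equivalent to a weighted nonlinear least-squares problem. The toy sketch below illustrates that equivalence on a 1-D pose graph (not the invention's full planar formulation); the edge list, the noise level and the use of scipy.optimize.least_squares are illustrative choices.

    import numpy as np
    from scipy.optimize import least_squares

    # Toy 1-D pose graph: nodes x0, x1, x2 and relative measurements
    # z_ij = (x_j - x_i) + noise. Maximizing the product of Gaussian
    # likelihoods p(z_ij | x_i, x_j) equals minimizing the weighted
    # squared residuals below.
    edges = [(0, 1, 1.05), (1, 2, 0.98), (0, 2, 2.10)]   # (i, j, z_ij)
    sigma = 0.1                                          # measurement std-dev

    def residuals(free):
        x = np.concatenate(([0.0], free))   # anchor x0 = 0 to fix the gauge
        return [(z - (x[j] - x[i])) / sigma for i, j, z in edges]

    sol = least_squares(residuals, x0=np.zeros(2))
    print("optimized poses:", np.concatenate(([0.0], sol.x)))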
More specifically, because the extrinsic parameter transformation matrix and the scale factor must also be optimized in the present invention, the node variables of the pose graph are:

X = {x_i, x_{i+1}, ..., x_{i+N}, s}

where s comprises ^r T_c, the transformation matrix from the camera coordinate system c to the robot coordinate system r, together with the scale factor. The measurement functions corresponding to the encoder odometer and to the monocular visual odometer are, respectively:

^r z_ij = ^r h_ij(x_i, x_j) + ^r n_ij

^c z_ij = ^c h_ij(x_i, x_j, s) + ^c n_ij

where ^r z_ij and ^c z_ij are the measurements between node i and node j from the encoder and from the monocular camera respectively, and ^r n_ij, ^c n_ij are zero-mean white noise. The corresponding measurement functions express the relative robot pose between times i and j and, through the extrinsic transformation ^r T_c and the scale factor λ, the relative camera pose predicted from it.
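One plausible concrete form of these two measurement functions on planar homogeneous transforms is sketched below; the helper names and the placement of the scale factor lam are assumptions for illustration, not the patent's exact expressions.

    import numpy as np

    def inv(T):
        """Inverse of a 3x3 planar homogeneous transform [[R, t], [0, 1]]."""
        Ti = np.eye(3)
        R, t = T[:2, :2], T[:2, 2]
        Ti[:2, :2], Ti[:2, 2] = R.T, -R.T @ t
        return Ti

    def h_encoder(T_wi, T_wj):
        """Predicted encoder measurement: relative robot pose x_i^{-1} x_j."""
        return inv(T_wi) @ T_wj

    def h_camera(T_wi, T_wj, T_rc, lam):
        """Predicted monocular measurement: the relative robot pose carried
        into the camera frame by the extrinsics T_rc, with the translation
        divided by the scale factor lam (monocular odometry reports
        up-to-scale translation); one plausible form only."""
        T_cicj = inv(T_rc) @ inv(T_wi) @ T_wj @ T_rc
        T_cicj[:2, 2] = T_cicj[:2, 2] / lam
        return T_cicj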
and 4, when the number of nodes in the pose graph exceeds the size of a preset sliding window, removing the oldest pose node and the edge constraint thereof in the pose graph, adding the removed edge constraint as prior constraint into the pose graph, adding the newest pose node and the edge constraint thereof into the pose graph, performing iterative solution on the current pose graph by adopting a nonlinear least square method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot. Specifically, when the number of the pose graph nodes exceeds the size of a sliding window, the oldest pose nodes are removed, the edge constraints of the removed pose nodes are converted into a priori constraint to be added into the pose graph, and the newly observed pose nodes and the constraints thereof are added into the pose graph. And performing optimization iterative solution on the constructed pose graph, updating the pose of the robot, completing the fusion of pose information, and outputting positioning information, wherein the sliding window is set by a user according to the hardware level of the system and the actual precision requirement, and the larger the sliding window is, the higher the calculation precision is, but the calculation amount is increased.
As shown in FIG. 4, the first five states are taken as an example to explain the update process of the loosely coupled monocular-vision and encoder positioning method based on the sliding-window pose graph; the sliding-window size in the figure is set to 4. A circular node x_i represents a robot pose; s represents the extrinsic parameter matrix from the camera coordinate system to the robot coordinate system together with the monocular visual scale factor; dotted circular nodes represent nodes that have been removed from the pose graph. The solid black squares represent the edges formed by encoder measurements, the open squares represent the edges formed by monocular visual odometry measurements, and the slashed squares represent the prior edges newly added after nodes are removed.
Starting from the initial time, the robot builds four pose nodes and one extrinsic-parameter node in the pose graph, X = {x_0, x_1, x_2, x_3, s}. Before the fifth pose x_4 and its associated measurements are added to the pose graph, the oldest node x_0 in the current pose graph must be removed by marginalization:

p(x_b | z_m) = ∫ p(x_m, x_b | z_m) dx_m

where x_m = {x_0} denotes the 0th node, the one to be removed from the pose graph, x_b = {x_1, s} denotes the node variables connected to the 0th node, and z_m denotes the measurement constraints directly between x_m and x_b;

x_b ~ N(x̂_b, Σ_b)

denotes that after i-1 nodes have been removed, x_b satisfies a Gaussian distribution. To solve for this Gaussian distribution, the following maximum-likelihood problem is computed:

x̂_b = argmax_{x_b} max_{x_m} p(x_m, x_b | z_m)

which is equivalent to solving the nonlinear least-squares problem:

(x̂_m, x̂_b) = argmin_{x_m, x_b} Σ_{(i,j)} || z_ij - h_ij(x_i, x_j) ||²_{Σ_ij}

After node x_m and its associated edge constraints are removed,

N(x̂_b, Σ_b)

serves as the prior value of x_b, and a new prior edge connected to x_b is constructed, shown as the slashed square block in the figure.

The new pose node x_4 is then added to the pose graph, the newly constructed pose graph is converted into a nonlinear least-squares problem, and it is solved iteratively:

X̂ = argmin_X ( || x_b - x̂_b ||²_{Σ_b} + Σ_{(i,j)} || ^r z_ij - ^r h_ij(x_i, x_j) ||²_{^r Σ_ij} + Σ_{(i,j)} || ^c z_ij - ^c h_ij(x_i, x_j, s) ||²_{^c Σ_ij} )

The optimal result output after the iteration converges comprises the extrinsic parameter matrix, the monocular scale factor, and the fused pose (positioning) information of the robot.
The following is a system embodiment corresponding to the above method embodiment, and the two can be implemented in cooperation with each other. The technical details mentioned in the above embodiments remain valid in this system embodiment and, to reduce repetition, are not described again here; correspondingly, the technical details mentioned in this system embodiment also apply to the above embodiments.
The invention also provides a robot positioning system fusing monocular vision and encoder data, which comprises:
the change matrix establishing module is used for acquiring encoder data through a plurality of positioning sensors of the robot, calculating an encoding pose change matrix of a robot coordinate system according to the encoder data and a robot motion model, acquiring a monocular visual image through a camera carried by the robot, processing the monocular visual image according to a computer visual geometric algorithm, and calculating a visual pose change matrix of the camera coordinate system;
the calculation module is used for calculating a transformation matrix initial value, a monocular scale factor initial value and a translation vector between the camera coordinate system and the robot coordinate system according to the visual pose change matrix and the coding pose change matrix;
the pose graph constructing module is used for constructing a pose graph taking the pose of the robot in the robot coordinate system, the initial value of the transformation matrix and the initial value of the monocular scale factor as nodes, and taking the pose change measured by each sensor as edge constraint in the pose graph;
and the pose information calculation module is used for removing the oldest pose node and the edge constraint thereof in the pose graph when the number of the nodes in the pose graph exceeds the size of a preset sliding window, adding the removed edge constraint as prior constraint into the pose graph, simultaneously adding the newest pose node and the edge constraint thereof into the pose graph, iteratively solving the current pose graph by adopting a nonlinear least square method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot.
The robot positioning system fusing the monocular vision and the encoder data, wherein the encoder calculates the pose of the robot according to the encoder data and the odometer model; the camera is used as a monocular odometer, and the pose of the camera is calculated according to the monocular visual image and a visual odometry method.
The robot positioning system fusing monocular vision and encoder data, wherein the calculation module comprises: and according to the visual pose change matrix and the coding pose change matrix, constructing a constraint equation set by a hand-eye calibration solution, and directly calculating the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the scale factor of the monocular odometer.
The robot positioning system fusing the monocular vision and the encoder data is characterized in that each positioning sensor works independently, and the positioning information output by each positioning sensor is collected to be used as the encoder data.
The robot positioning system fusing monocular vision and encoder data, wherein the pose information calculation module is further configured to: when the number of nodes in the pose graph exceeds the sliding window, remove the oldest node through marginalization.
Although the present invention has been described in terms of the above embodiments, the embodiments are merely illustrative, and not restrictive, and various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention is defined by the appended claims.

Claims (10)

1. A robot positioning method fusing monocular vision and encoder data is characterized by comprising the following steps:
step 1, collecting encoder data through a plurality of positioning sensors of a robot, calculating an encoding pose change matrix of a robot coordinate system according to the encoder data and a robot motion model, collecting a monocular visual image through a camera carried by the robot, processing the monocular visual image according to a computer visual geometric algorithm, and calculating a visual pose change matrix of the camera coordinate system;
step 2, calculating a transformation matrix initial value, a monocular scale factor initial value and a translation vector between the camera coordinate system and the robot coordinate system according to the visual pose change matrix and the coding pose change matrix;
step 3, constructing a pose graph by taking the pose of the robot in a robot coordinate system, the initial value of the transformation matrix and the initial value of the monocular scale factor as nodes, and taking the pose change measured by each sensor as edge constraint in the pose graph;
and 4, when the number of nodes in the pose graph exceeds the size of a preset sliding window, removing the oldest pose node and the edge constraint thereof in the pose graph, adding the edge constraint as a prior constraint into the pose graph, adding the newest pose node and the edge constraint thereof into the pose graph, and performing iterative solution on the current pose graph by adopting a nonlinear least-squares method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot.
2. The method of claim 1, wherein the encoder calculates the pose of the robot based on the encoder data and the odometer model; the camera is used as a monocular odometer, and the pose of the camera is calculated according to the monocular visual image and a visual odometry method.
3. The method of claim 1, wherein the step 2 comprises: and according to the visual pose change matrix and the coding pose change matrix, constructing a constraint equation set by a hand-eye calibration solution, and directly calculating the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the initial value of the scale factor of the monocular odometer.
4. The method of claim 1, wherein each of the positioning sensors operates independently, and the positioning information output by each of the positioning sensors is collected as the set of encoder data.
5. The method of claim 1, wherein the step 4 comprises: and when the number of the nodes in the pose graph exceeds the sliding window, removing the oldest node through marginalization.
6. A robot positioning system that fuses monocular vision and encoder data, comprising:
the change matrix establishing module is used for acquiring encoder data through a plurality of positioning sensors of the robot, calculating an encoding pose change matrix of a robot coordinate system according to the encoder data and a robot motion model, acquiring a monocular visual image through a camera carried by the robot, processing the monocular visual image according to a computer visual geometric algorithm, and calculating a visual pose change matrix of the camera coordinate system;
the calculation module is used for calculating a transformation matrix initial value, a monocular scale factor initial value and a translation vector between the camera coordinate system and the robot coordinate system according to the visual pose change matrix and the coding pose change matrix;
the pose graph constructing module is used for constructing a pose graph taking the pose of the robot in the robot coordinate system, the initial value of the transformation matrix and the initial value of the monocular scale factor as nodes, and taking the pose change measured by each sensor as edge constraint in the pose graph;
and the pose information calculation module is used for removing the oldest pose node and the edge constraint thereof in the pose graph when the number of nodes in the pose graph exceeds the size of a preset sliding window, adding the edge constraint as a prior constraint into the pose graph, adding the newest pose node and the edge constraint thereof into the pose graph, and iteratively solving the current pose graph by adopting a nonlinear least-squares method to generate an external parameter matrix, a monocular scale factor and robot pose information, wherein the initial value of the transformation matrix is updated by the external parameter matrix, the initial value of the monocular scale factor is updated by the monocular scale factor, and the robot pose information is output as a positioning result of the robot.
7. The monocular vision and encoder data fusing robot positioning system of claim 6, wherein the encoder calculates the pose of the robot based on the encoder data and the odometer model; the camera is used as a monocular odometer, and the pose of the camera is calculated according to the monocular visual image and a visual odometry method.
8. A monocular vision and encoder data fusing robot positioning system according to claim 6, wherein the calculation module comprises: and according to the visual pose change matrix and the coding pose change matrix, constructing a constraint equation set by a hand-eye calibration solution, and directly calculating the initial value of the transformation matrix from the camera coordinate system to the robot coordinate system and the initial value of the scale factor of the monocular odometer.
9. A monocular vision and encoder data fusing robot positioning system according to claim 6, wherein each of said positioning sensors works independently, and the positioning information output by each of said positioning sensors is gathered as said set of encoder data.
10. The monocular vision and encoder data fusing robot positioning system of claim 6, wherein the pose information calculating module comprises: when the number of nodes in the pose graph exceeds the sliding window, the oldest node needs to be removed through marginalization.
CN201710574132.9A 2017-07-14 2017-07-14 Robot positioning method and system fusing monocular vision and encoder information Active CN107516326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710574132.9A CN107516326B (en) 2017-07-14 2017-07-14 Robot positioning method and system fusing monocular vision and encoder information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710574132.9A CN107516326B (en) 2017-07-14 2017-07-14 Robot positioning method and system fusing monocular vision and encoder information

Publications (2)

Publication Number Publication Date
CN107516326A CN107516326A (en) 2017-12-26
CN107516326B (en) 2020-04-03

Family

ID=60721872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710574132.9A Active CN107516326B (en) 2017-07-14 2017-07-14 Robot positioning method and system fusing monocular vision and encoder information

Country Status (1)

Country Link
CN (1) CN107516326B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108534757B (en) * 2017-12-25 2021-01-15 达闼科技(北京)有限公司 Cloud-based visual map scale detection method and device
CN110967027B (en) * 2018-09-30 2021-12-03 北京地平线信息技术有限公司 Map correction method and device and electronic equipment
CN109671120A (en) * 2018-11-08 2019-04-23 南京华捷艾米软件科技有限公司 A kind of monocular SLAM initial method and system based on wheel type encoder
CN109540140B (en) * 2018-11-23 2021-08-10 宁波智能装备研究院有限公司 Mobile robot positioning method integrating SSD target identification and odometer information
CN109579844B (en) * 2018-12-04 2023-11-21 电子科技大学 Positioning method and system
CN112183171A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Method and device for establishing beacon map based on visual beacon
CN110827395B (en) * 2019-09-09 2023-01-20 广东工业大学 Instant positioning and map construction method suitable for dynamic environment
CN110689513B (en) * 2019-09-26 2022-09-02 石家庄铁道大学 Color image fusion method and device and terminal equipment
CN111080699B (en) * 2019-12-11 2023-10-20 中国科学院自动化研究所 Monocular vision odometer method and system based on deep learning
CN111932637B (en) * 2020-08-19 2022-12-13 武汉中海庭数据技术有限公司 Vehicle body camera external parameter self-adaptive calibration method and device
CN112269187A (en) * 2020-09-28 2021-01-26 广州视源电子科技股份有限公司 Robot state detection method, device and equipment
CN112700505B (en) * 2020-12-31 2022-11-22 山东大学 Binocular three-dimensional tracking-based hand and eye calibration method and device and storage medium
CN113624226A (en) * 2021-04-28 2021-11-09 上海有个机器人有限公司 Plane motion constraint method, electronic equipment and storage medium
CN113494886A (en) * 2021-08-05 2021-10-12 唐山市宝凯科技有限公司 Coke oven cart positioning system and method based on visual camera and rotary encoder


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN105629970A (en) * 2014-11-03 2016-06-01 贵州亿丰升华科技机器人有限公司 Robot positioning obstacle-avoiding method based on supersonic wave
CN106767833A (en) * 2017-01-22 2017-05-31 电子科技大学 A kind of robot localization method of fusion RGBD depth transducers and encoder

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dimitrios G. et al., "An Iterative Kalman Smoother for Robust 3D Localization on Mobile and Wearable Devices," 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 16-20. *
Meixiang Quan et al., "Robust Visual-Inertial SLAM: Combination of EKF and Optimization Method," arXiv:1706.03648v1 [cs.RO], 2017, pp. 1-11. *
李亭 et al., "Design and Implementation of an Autonomous Path-Tracking Smart Vehicle Based on Monocular Vision," Journal of Chongqing University of Technology (Natural Science), vol. 30, no. 10, 2016, pp. 6336-6340. *

Also Published As

Publication number Publication date
CN107516326A (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN107516326B (en) Robot positioning method and system fusing monocular vision and encoder information
CN107941217B (en) Robot positioning method, electronic equipment, storage medium and device
Zhang et al. Pose estimation for ground robots: On manifold representation, integration, reparameterization, and optimization
CN110595466B (en) Lightweight inertial-assisted visual odometer implementation method based on deep learning
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
CN111795686A (en) Method for positioning and mapping mobile robot
CN112254729A (en) Mobile robot positioning method based on multi-sensor fusion
CN111890373A (en) Sensing and positioning method of vehicle-mounted mechanical arm
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN114494629A (en) Three-dimensional map construction method, device, equipment and storage medium
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
CN114323033B (en) Positioning method and equipment based on lane lines and feature points and automatic driving vehicle
CN115218906A (en) Indoor SLAM-oriented visual inertial fusion positioning method and system
CN113155152B (en) Camera and inertial sensor spatial relationship self-calibration method based on lie group filtering
Juan-Rou et al. The implementation of imu/stereo vision slam system for mobile robot
CN110967017A (en) Cooperative positioning method for rigid body cooperative transportation of double mobile robots
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
Sibley Sliding window filters for SLAM
CN117388830A (en) External parameter calibration method, device, equipment and medium for laser radar and inertial navigation
CN114046800B (en) High-precision mileage estimation method based on double-layer filtering frame
CN114723920A (en) Point cloud map-based visual positioning method
Tian et al. Adaptive-frame-rate monocular vision and imu fusion for robust indoor positioning
CN117392241B (en) Sensor calibration method and device in automatic driving and electronic equipment
Zuo et al. Robust Visual-Inertial Odometry Based on Deep Learning and Extended Kalman Filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant