CN114119752A - Indoor and outdoor linked robot positioning method based on GNSS and vision - Google Patents

Indoor and outdoor linked robot positioning method based on GNSS and vision

Info

Publication number
CN114119752A
CN114119752A (application CN202111447907.9A)
Authority
CN
China
Prior art keywords
robot
image
gnss
points
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111447907.9A
Other languages
Chinese (zh)
Inventor
王浩
高建文
张京博
吕梅柏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guodian Ruiyuan Technology Development Co ltd
Original Assignee
Beijing Guodian Ruiyuan Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guodian Ruiyuan Technology Development Co ltd filed Critical Beijing Guodian Ruiyuan Technology Development Co ltd
Priority to CN202111447907.9A priority Critical patent/CN114119752A/en
Publication of CN114119752A publication Critical patent/CN114119752A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an indoor and outdoor linked robot positioning method based on GNSS and vision, and belongs to the field of positioning and navigation of mobile robots at distribution network engineering sites. The method runs on a robot operating system. The robot operating system parses the GNSS data and judges the environment of the robot from the GNSS state: if the robot is outdoors, GNSS and a visual odometer are fused for positioning; if the robot is indoors, the visual odometry estimate from the vision sensor is used as the robot pose. In local indoor and outdoor environments, decisions for robot motion are made from the result of guide-label template matching, and during motion the robot operating system detects positioning labels in the form of two-dimensional codes to provide accurate positioning information for the robot. The invention achieves decimeter-level positioning accuracy for the mobile robot in local indoor and outdoor environments of distribution network project sites and realizes adaptive navigation between these environments.

Description

Indoor and outdoor linked robot positioning method based on GNSS and vision
Technical Field
The invention belongs to the field of positioning and navigation of mobile robots in distribution network engineering sites, and particularly relates to an indoor and outdoor linked robot positioning method based on GNSS and vision.
Background
Existing robot navigation methods include visual navigation and satellite navigation.
The vision sensors used for visual navigation have the advantages of simple equipment, low cost, good economy and a wide range of applications, and the development of computer hardware provides support for high-throughput data processing. However, in visual navigation, factors such as illumination changes and moving targets introduce noise into image processing, and mismatched feature points produce errors that accumulate, so the positioning accuracy of the robot gradually degrades during motion; positioning may even be lost when an image contains too few feature points. A single vision sensor therefore cannot accomplish the positioning of a mobile robot on its own. Moreover, the positioning performance of visual navigation often differs greatly between indoor and outdoor environments.
Global navigation satellite systems (GNSS) can provide positioning accuracy down to the centimeter level in open outdoor areas. In cities, however, the man-made environment, for example reflective road surfaces and tall buildings, often causes the receiver to pick up not only direct satellite signals but also signals reflected by the surroundings, producing multipath errors. Many methods exist for mitigating multipath error, but none can eliminate it completely. Under severe occlusion, and particularly in indoor environments, satellite signals may not be received at all and satellite positioning fails entirely.
At present, combined GNSS-vision navigation mainly integrates GNSS with visual SLAM. GNSS and visual SLAM are usually fused to different degrees by two approaches: loose coupling, in which motion is estimated separately, and tight coupling, in which the states of both are combined for overall state estimation. There are also methods that fuse image-processing-assisted GNSS with a visual odometer; these are generally designed for structured roads, detect gross errors by processing route markers such as lane lines, and fuse GNSS and visual odometry in various ways.
For methods combining GNSS with visual SLAM, whether loosely or tightly coupled, navigation and positioning of a mobile robot can achieve good results both indoors and outdoors. However, the visual SLAM framework comprises a front-end visual odometer, back-end optimization, loop detection and mapping; it is a computation-intensive algorithm with a heavy CPU load during operation, so its dependence on the CPU is high. In addition, scene similarity strongly affects loop detection: when scenes are too similar, false loop closures occur, which degrades the navigation and positioning of the robot.
Adaptive robot navigation generally requires building a global or local map. On the one hand, visual SLAM cannot directly build the grid map needed for autonomous navigation of the robot; on the other hand, building a grid map visually requires dense stereo matching, dimensionality reduction of the dense spatial point cloud, and finally map construction with a binary Bayes filter. This places high demands on the GPU and CPU, and real-time performance is often difficult to achieve on an embedded system.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an indoor and outdoor linked robot positioning method based on GNSS and vision.
In order to achieve the above purpose, the invention provides the following technical scheme:
The indoor and outdoor linked robot positioning method based on GNSS and vision comprises the following steps:
when the true value of the robot position cannot be obtained, the pose and speed of the robot are determined from the GNSS signal and the output image of the vision sensor, and the robot is positioned according to this pose and speed; otherwise, positioning labels in the form of two-dimensional codes in the surroundings are identified through the robot operating system to obtain the true position of the robot, and the estimated information is overwritten with the true position to relocate the robot.
Preferably, the step of determining the pose and the speed of the robot through the GNSS signal and the output image of the vision sensor, and performing robot positioning according to the pose and the speed of the robot includes:
judging the pose and the speed of the robot through the GNSS signal;
performing feature matching on an output image of the vision sensor to obtain a robot pose;
performing timestamp synchronization on the GNSS signal and the output signal of the vision sensor through a robot operating system;
the robot operating system fuses the pose and speed of the robot determined from the GNSS signal with the pose and speed of the robot obtained from the vision sensor to obtain estimated pose and speed information, and the robot is positioned using the estimated information.
Preferably, the step of determining the pose and the speed of the robot through the GNSS signal includes:
analyzing GPGGA statement information of the GNSS, and acquiring GNSS states and the number of used satellites;
when the GNSS state is 0 and the number of used satellites is 00, the robot is judged to be indoors; when the GNSS state is 1 or 2 and the number of the used satellites is not less than 04, the robot is judged to be outdoors, when the GNSS positioning data is received, the environment where the robot is located is judged to have signals, otherwise, the environment where the robot is located is judged to have no signals;
when the robot is outdoors and has good signals, the position and the speed of the robot are calculated by utilizing GNSS information, and the method comprises the following steps:
setting the starting point of the robot motion as s_0; denoting any two consecutive points during the motion by s_n and s_{n+1}, the time interval between them by t_{n,n+1}, the longitude and latitude of s_n by (lon_{s_n}, lat_{s_n}) and the longitude and latitude of s_{n+1} by (lon_{s_{n+1}}, lat_{s_{n+1}});
the distance d_{n,n+1} between s_n and s_{n+1} is given by
d_{n,n+1} = 2R · arcsin( sqrt( sin^2((lat_{s_{n+1}} - lat_{s_n})/2) + cos(lat_{s_n}) · cos(lat_{s_{n+1}}) · sin^2((lon_{s_{n+1}} - lon_{s_n})/2) ) )
wherein R is the mean radius of the earth;
the azimuth angle θ_n of s_{n+1} relative to s_n is given by
θ_n = arctan2( sin(lon_{s_{n+1}} - lon_{s_n}) · cos(lat_{s_{n+1}}), cos(lat_{s_n}) · sin(lat_{s_{n+1}}) - sin(lat_{s_n}) · cos(lat_{s_{n+1}}) · cos(lon_{s_{n+1}} - lon_{s_n}) )
taking s_0 as the origin of the coordinate system and the due-north direction as its x axis, a GNSS coordinate system following the right-hand rule is established;
the robot pose at any moment is estimated from the distance and azimuth angle between two consecutive points as
x_{n+1} = x_n + d_{n,n+1} · cos θ_n,  y_{n+1} = y_n + d_{n,n+1} · sin θ_n,  and the heading at s_{n+1} is taken as θ_n
the robot speed at any moment is estimated from the distance, azimuth angle and time interval between two consecutive points as
v_x = d_{n,n+1} · cos θ_n / t_{n,n+1},  v_y = d_{n,n+1} · sin θ_n / t_{n,n+1},  ω = (θ_n - θ_{n-1}) / t_{n,n+1}
where v_x is the speed of the robot along the x axis of the GNSS coordinate system, v_y is the speed of the robot along the y axis of the GNSS coordinate system, and ω is the angular speed of the robot in the GNSS coordinate system.
Preferably, the step of performing feature matching on the output image of the vision sensor to obtain the pose of the robot includes:
detecting a plurality of ORB corner points; calculating the direction of each ORB corner point; calculating a descriptor of each ORB corner point;
performing brute-force matching of the ORB corner points in two adjacent frames via the descriptors to obtain successfully matched 2d landmark points;
acquiring a disparity map of the output image using the SGBM algorithm in OpenCV;
performing depth calculation on the pixels with non-zero values in the disparity map to obtain a depth map;
acquiring matched 3d landmark points through the depth map and the 2d landmark points;
and decomposing the 3d landmark points through SVD to obtain the pose of the robot.
Preferably,
the step of detecting and obtaining the ORB corner point includes:
setting a threshold t and denoting the brightness of any pixel point p on the output image by I_p;
Drawing a circle by taking each pixel point as a circle center and r as a radius;
taking r +1 pixels uniformly distributed on each circle, and setting the brightness of any pixel in the r +1 pixels as I;
if the number of pixels on a circle satisfying |I - I_p| < t is less than r, the pixel at the center of that circle is not an ORB corner point; otherwise all pixels on the circle are checked, and if the number of consecutive pixels satisfying |I - I_p| < t is greater than or equal to r_2, the pixel at the center of the circle is an ORB corner point, and otherwise it is not;
the step of calculating the ORB corner direction comprises the following steps:
for each ORB corner point, take an image block A centered on the corner point with radius R and calculate the moments of the image block A,
m_{pq} = Σ_{(x,y)∈A} x^p · y^q · I(x, y)
where p, q ∈ {0, 1}; x is the abscissa of a pixel in image block A, y is the ordinate of a pixel in image block A, and I(x, y) is the gray value of the pixel at coordinates (x, y);
the centroid of the image block a is calculated by the moments of the image block a,
C = (c_x, c_y) = (m_{10}/m_{00}, m_{01}/m_{00})
connecting the geometric center and the centroid of each image block to obtain a vector, defining the direction of the vector as the direction of each ORB corner point, expressed as,
α = arctan2(c_y, c_x) = arctan2(m_{01}, m_{10})
where c_y is the ordinate of the vector and c_x is the abscissa of the vector;
the step of calculating the descriptor of each ORB corner point includes:
setting a neighborhood window by taking each ORB corner point as a center;
any 128 pairs of pixel points are selected in each neighborhood window, and the feature corresponding to each pair of pixels is calculated as
τ = 1 if p(x) < p(y), and τ = 0 otherwise
where p(x) and p(y) are the pixel values of the two pixels in a pair;
the feature vector formed by the 128 pairs of pixels in the neighborhood window of each ORB corner point is the descriptor of that ORB corner point.
Preferably, the brute-force matching of the ORB corner points in two adjacent frames via the descriptors includes:
calculating the Hamming distance between each ORB corner point of one image and all the ORB corner points of the other image in every two adjacent images;
judging the point with the closest Hamming distance in two adjacent frame images as the matching point in the two frame images;
eliminating the matching points of which the Hamming distance is more than two times of the minimum Hamming distance from all the matching points;
and acquiring a disparity map of an output image by using an SGBM algorithm in opencv.
Preferably, the depth calculation is performed on the pixels with non-zero pixel values in the disparity map according to the following formula:
Z = f · b / d
wherein Z is the pixel depth, d is the pixel parallax, b is the binocular baseline length, and f is the camera focal length.
Preferably, the step of obtaining the pose of the robot by decomposing the 3d landmark points through SVD includes:
two groups of 3d landmark points which are matched at any two moments are obtained through the depth map and the 2d landmark points,
P = {p_1, …, p_n}
P′ = {p_1′, …, p_n′}
the coordinate transformation between the two sets of 3d landmark points is expressed as
p_i = R · p_i′ + t,  i = 1, …, n
wherein R and t are respectively a rotation matrix and a translation vector of the camera at any two moments;
pairing the two groups of 3d landmark points, the error between each pair of landmark points p_i and p_i′ is expressed as
e_i = p_i - (R · p_i′ + t)
from the errors of all pairs of 3d landmark points, a least-squares problem is constructed,
min_{R,t} J = (1/2) · Σ_{i=1}^{n} || p_i - (R · p_i′ + t) ||^2
solving the least square problem by adopting an SVD algorithm to obtain an optimal rotation matrix and translation vector, wherein the SVD algorithm is expressed as follows,
W = Σ_{i=1}^{n} q_i · q_i′^T = U · Σ · V^T
the optimal rotation matrix is expressed as follows,
R = U · V^T
the optimal translation vector is represented as follows,
t = p - R · p′
wherein,
q_i = p_i - p,   p = (1/n) · Σ_{i=1}^{n} p_i
q_i′ = p_i′ - p′,   p′ = (1/n) · Σ_{i=1}^{n} p_i′
and performing robot pose estimation through the rotation matrix and the translation vector of the camera at each moment.
Preferably, the method further comprises the following steps: and guiding the robot to move by using the guide label template of the robot according to the positioning information.
Preferably, the step of guiding the robot to move by using the guide label template of the robot includes:
obtaining an ROI image of the output image;
creating an image pyramid of the ROI image and the template image;
obtaining similarity measurement of the output image and the template image by using the image pyramid; judging the most matched template image according to the similarity measurement;
guiding the robot to move according to the guide label of the template image; wherein the guidance tag information includes: translation, rotation, stopping and turning around;
the step of creating the image pyramid of the ROI image and the template image is as follows:
creating 4 layers of pyramid images corresponding to the ROI image and the template image;
down-sampling the 4-layer pyramid images;
taking a 5 × 5 convolution kernel K, traversing each pixel of the down-sampled 4-layer pyramid images, and performing mean smoothing filtering, where the convolution kernel K is expressed as
K = (1/25) · 1_{5×5}, i.e. a 5 × 5 matrix with every entry equal to 1/25
obtaining similarity measurement of an output image and a template image by using an image pyramid; the step of determining the best matching template image according to the similarity measure comprises:
setting a threshold value;
calculating a similarity value of each template image and the ROI image by using a similarity measurement criterion of the normalized correlation coefficient,
NCC(I_1, I_2) = Σ_x ( (I_1(x) - u_1) · (I_2(x) - u_2) ) / sqrt( Σ_x (I_1(x) - u_1)^2 · Σ_x (I_2(x) - u_2)^2 )
where I_1 is the template image, I_2 is the ROI image, u_1 is the mean of image I_1, u_2 is the mean of image I_2, and x indexes the pixel points of the images;
and taking the template image with the highest similarity of the normalized correlation coefficient and larger than a threshold value as the best matched template image.
The indoor and outdoor linked robot positioning method based on GNSS and vision provided by the invention has the following beneficial effects: accurate relocation is realized through the positioning tags, eliminating the growth of sensor estimation error over time; high-precision and fast positioning is achieved in different indoor and outdoor scenes, providing a reliable basis for the robot navigation system; the navigation system is switched adaptively when the robot moves from an indoor environment to an outdoor environment or from an outdoor environment to an indoor one; autonomous navigation is realized using the guide tags; and a safe path is planned for the robot in local indoor and outdoor safe areas, realizing local navigation.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some embodiments of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart of the GNSS and vision based indoor and outdoor linked robot positioning method according to embodiment 1 of the present invention;
fig. 2 is a flowchart of obtaining a robot pose by performing feature matching on an output image of a visual sensor according to embodiment 1 of the present invention;
fig. 3 is a flowchart of guiding the robot to move by using the guide tag template of the robot according to embodiment 1 of the present invention.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention and can practice the same, the present invention will be described in detail with reference to the accompanying drawings and specific examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
Using the ROS framework and exploiting the difference between indoor and outdoor GNSS signals, the number of received satellites is used to judge whether the robot is indoors or outdoors; on this basis, an adaptive indoor-outdoor local relocation function is realized through loose coupling of GNSS and vision.
Referring to fig. 1, the present invention provides a GNSS and vision based indoor and outdoor linked robot positioning method, comprising the following steps: when the true value of the robot position cannot be obtained, the pose and speed of the robot are determined from the GNSS signal and the output image of the vision sensor, and the robot is positioned according to this pose and speed; otherwise, positioning labels in the form of two-dimensional codes in the surroundings are identified through the robot operating system to obtain the true position of the robot, and the estimated information is overwritten with the true position to relocate the robot.
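As a purely illustrative sketch, such a two-dimensional-code positioning label can, for example, be read with OpenCV's QRCodeDetector; the label payload convention "x,y,yaw" assumed below is hypothetical and is not specified by the filing.

```python
# Illustrative sketch only: decoding a positioning label (QR code) with OpenCV.
# The "x,y,yaw" payload format is an assumed convention, not taken from the filing.
import cv2

def read_positioning_label(image_bgr):
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(image_bgr)
    if not payload:
        return None                      # no label visible: keep the estimated pose
    x, y, yaw = (float(v) for v in payload.split(","))
    return x, y, yaw                     # true pose used to overwrite the estimate
```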
Specifically, the steps of determining the pose and speed of the robot from the GNSS signal and the output image of the vision sensor and positioning the robot accordingly comprise: determining the pose and speed of the robot from the GNSS signal; performing feature matching on the output image of the vision sensor to obtain the robot pose; performing timestamp synchronization of the GNSS signal and the output signal of the vision sensor through the robot operating system; fusing, in the robot operating system, the pose and speed determined from the GNSS signal with the pose and speed obtained from the vision sensor to obtain estimated pose and speed information, and positioning the robot with the estimated information; and guiding the robot motion with the guide label templates of the robot according to the positioning information.
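A minimal ROS 1 (rospy) sketch of the timestamp-synchronization step is shown below for illustration; the topic names /gnss/fix and /vo/odom are assumptions, and the fusion itself is only indicated by a placeholder callback.

```python
# Sketch: approximate time synchronization of GNSS and visual-odometry messages in ROS 1.
import rospy
import message_filters
from sensor_msgs.msg import NavSatFix
from nav_msgs.msg import Odometry

def fuse(gnss_msg, vo_msg):
    # loosely couple the two pose/speed estimates here (e.g. a weighted update)
    pass

rospy.init_node("gnss_vo_fusion")
gnss_sub = message_filters.Subscriber("/gnss/fix", NavSatFix)
vo_sub = message_filters.Subscriber("/vo/odom", Odometry)
sync = message_filters.ApproximateTimeSynchronizer([gnss_sub, vo_sub],
                                                   queue_size=10, slop=0.05)
sync.registerCallback(fuse)
rospy.spin()
```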
Specifically, the step of determining the pose and speed of the robot from the GNSS signal comprises: parsing the GPGGA sentence of the GNSS to obtain the GNSS state and the number of satellites in use; when the GNSS state is 0 and the number of satellites in use is 00, the robot is judged to be indoors; when the GNSS state is 1 or 2 and the number of satellites in use is not less than 04, the robot is judged to be outdoors; when GNSS positioning data are received, the environment of the robot is judged to have signal, otherwise the GNSS signal of the environment is judged to be untrustworthy. When the robot is outdoors and the signal is good, the pose and speed of the robot are calculated using the GNSS information.
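A short Python sketch of this GPGGA-based environment judgment is given below for illustration; the field indices follow the standard NMEA-0183 GGA layout (fix quality in field 6, satellites in use in field 7).

```python
# Sketch: parse fix quality and satellite count from a $GPGGA sentence and
# classify the environment as described above.
def parse_gpgga(sentence: str):
    fields = sentence.split(",")
    quality = int(fields[6] or 0)      # 0 = no fix, 1 = GPS fix, 2 = differential fix
    num_sats = int(fields[7] or 0)     # number of satellites in use
    return quality, num_sats

def robot_environment(sentence: str) -> str:
    quality, num_sats = parse_gpgga(sentence)
    if quality == 0 and num_sats == 0:
        return "indoor"
    if quality in (1, 2) and num_sats >= 4:
        return "outdoor"
    return "untrusted"
```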
The pose and speed of the robot are calculated from the GNSS information as follows: set the starting point of the robot motion as s_0; denote any two consecutive points during the motion by s_n and s_{n+1}, the time interval between them by t_{n,n+1}, the longitude and latitude of s_n by (lon_{s_n}, lat_{s_n}) and the longitude and latitude of s_{n+1} by (lon_{s_{n+1}}, lat_{s_{n+1}}).
The distance d_{n,n+1} between s_n and s_{n+1} is given by
d_{n,n+1} = 2R · arcsin( sqrt( sin^2((lat_{s_{n+1}} - lat_{s_n})/2) + cos(lat_{s_n}) · cos(lat_{s_{n+1}}) · sin^2((lon_{s_{n+1}} - lon_{s_n})/2) ) )
wherein R is the earth mean radius.
The azimuth angle θ_n of s_{n+1} relative to s_n is given by
θ_n = arctan2( sin(lon_{s_{n+1}} - lon_{s_n}) · cos(lat_{s_{n+1}}), cos(lat_{s_n}) · sin(lat_{s_{n+1}}) - sin(lat_{s_n}) · cos(lat_{s_{n+1}}) · cos(lon_{s_{n+1}} - lon_{s_n}) )
Taking s_0 as the origin of the coordinate system and the due-north direction as its x axis, a GNSS coordinate system following the right-hand rule is established; the robot pose at any moment is estimated from the distance and azimuth angle between two consecutive points as
x_{n+1} = x_n + d_{n,n+1} · cos θ_n,  y_{n+1} = y_n + d_{n,n+1} · sin θ_n,  and the heading at s_{n+1} is taken as θ_n
The robot speed at any moment is estimated from the distance, azimuth angle and time interval between two consecutive points as
v_x = d_{n,n+1} · cos θ_n / t_{n,n+1},  v_y = d_{n,n+1} · sin θ_n / t_{n,n+1},  ω = (θ_n - θ_{n-1}) / t_{n,n+1}
where v_x is the speed of the robot along the x axis of the GNSS coordinate system, v_y is the speed of the robot along the y axis of the GNSS coordinate system, and ω is the angular speed of the robot in the GNSS coordinate system.
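For illustration, a numerical sketch of this GNSS dead-reckoning step is given below, assuming the standard haversine distance and forward-azimuth formulas and a mean Earth radius of 6371 km.

```python
# Sketch: incremental pose and speed from two consecutive GNSS fixes.
import math

R_EARTH = 6371000.0  # mean Earth radius in metres (assumed value)

def distance_and_azimuth(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    d = 2.0 * R_EARTH * math.asin(math.sqrt(a))          # great-circle distance
    theta = math.atan2(math.sin(dlmb) * math.cos(phi2),
                       math.cos(phi1) * math.sin(phi2)
                       - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    return d, theta                                      # metres, radians from north

def dead_reckoning_step(x, y, lat1, lon1, lat2, lon2, dt):
    d, theta = distance_and_azimuth(lat1, lon1, lat2, lon2)
    x_new = x + d * math.cos(theta)                      # x axis points due north
    y_new = y + d * math.sin(theta)
    vx, vy = d * math.cos(theta) / dt, d * math.sin(theta) / dt
    return x_new, y_new, theta, vx, vy
```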
Referring to fig. 2, the step of performing feature matching on the output image of the vision sensor to obtain the robot pose includes: detecting a plurality of ORB corner points; calculating the direction of each ORB corner point; calculating a descriptor of each ORB corner point; performing brute-force matching of the ORB corner points in two adjacent frames via the descriptors to obtain successfully matched 2d landmark points; acquiring a disparity map of the output image using the SGBM algorithm in OpenCV; performing depth calculation on the pixels with non-zero values in the disparity map to obtain a depth map; acquiring matched 3d landmark points from the depth map and the 2d landmark points; and decomposing the 3d landmark points through SVD to obtain the pose of the robot.
Specifically, the step of detecting ORB corner points includes: setting a threshold t and denoting the brightness of any pixel point p on the output image by I_p; drawing a circle around each pixel point with radius r; taking r + 1 pixels uniformly distributed on each circle and denoting the brightness of any one of these pixels by I; if the number of pixels on a circle satisfying |I - I_p| < t is less than r, the pixel at the center of that circle is not an ORB corner point; otherwise all pixels on the circle are checked, and if the number of consecutive pixels satisfying |I - I_p| < t is greater than or equal to r_2, the pixel at the center of the circle is an ORB corner point, and otherwise it is not.
Specifically, the step of calculating the ORB corner direction includes: for each ORB corner point, take an image block A centered on the corner point with radius R and calculate the moments of the image block A,
m_{pq} = Σ_{(x,y)∈A} x^p · y^q · I(x, y)
where p, q ∈ {0, 1}; x is the abscissa of a pixel in image block A, y is the ordinate of a pixel in image block A, and I(x, y) is the gray value of the pixel at coordinates (x, y).
The centroid of the image block a is calculated by the moments of the image block a,
C = (c_x, c_y) = (m_{10}/m_{00}, m_{01}/m_{00})
connecting the geometric center and the centroid of each image block to obtain a vector, defining the direction of the vector as the direction of each ORB corner point, expressed as,
α = arctan2(c_y, c_x) = arctan2(m_{01}, m_{10})
where c_y is the ordinate of the vector and c_x is the abscissa of the vector.
Specifically, the step of calculating the descriptor of each ORB corner includes: setting a neighborhood window by taking each ORB corner point as a center; any 128 pairs of pixel points are selected in each neighborhood window, the feature vector corresponding to each pair of pixel points is calculated by the following formula,
τ = 1 if p(x) < p(y), and τ = 0 otherwise
where p(x) and p(y) are the pixel values of the two pixels in a pair; the feature vector formed by the 128 pairs of pixels in the neighborhood window of each ORB corner point is the descriptor of that ORB corner point.
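The corner test, intensity-centroid orientation and binary descriptor described above correspond to the ORB feature; for orientation, a minimal sketch using OpenCV's built-in ORB is given below (note that OpenCV's default descriptor uses 256 point pairs rather than the 128 pairs described here, and the file name is hypothetical).

```python
# Sketch: ORB corner detection, orientation and descriptor computation with OpenCV.
import cv2

img = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file name
orb = cv2.ORB_create(nfeatures=1000)
keypoints = orb.detect(img, None)                      # FAST corners with orientation
keypoints, descriptors = orb.compute(img, keypoints)   # binary (rBRIEF) descriptors
```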
Specifically, the brute-force matching of ORB corner points in two adjacent frames via the descriptors includes: calculating the Hamming distance between each ORB corner point of one image and all ORB corner points of the other image in every two adjacent frames; taking the points with the smallest Hamming distance in the two adjacent frames as matching points of the two frames; eliminating, from all matching points, those whose Hamming distance is more than twice the minimum Hamming distance; and acquiring a disparity map of the output image using the SGBM algorithm in OpenCV.
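A sketch of this brute-force Hamming matching, including rejection of matches worse than twice the minimum distance, is given below; desc1 and desc2 stand for the descriptor matrices of two consecutive frames.

```python
# Sketch: brute-force Hamming matching with the 2x-minimum-distance rejection rule.
import cv2

def match_descriptors(desc1, desc2):
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    if not matches:
        return []
    d_min = min(m.distance for m in matches)
    return [m for m in matches if m.distance <= 2.0 * d_min]
```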
Specifically, depth calculation is performed on pixels with non-zero pixel values in the disparity map according to the following formula:
Z = f · b / d
wherein Z is the pixel depth, d is the pixel parallax, b is the binocular baseline length, and f is the camera focal length.
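A sketch of the SGBM disparity computation and the Z = f · b / d conversion is shown below; the block-matching parameters are placeholder values rather than ones taken from the filing.

```python
# Sketch: disparity from SGBM and depth from Z = f * b / d (non-zero disparities only).
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, fx, baseline):
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = fx * baseline / disparity[valid]
    return depth
```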
In this embodiment, the step of decomposing the 3d landmark points by SVD to obtain the pose of the robot includes: two groups of landmark points at any two moments are obtained through the depth map,
P = {p_1, …, p_n}
P′ = {p_1′, …, p_n′}
the coordinate transformation existing between the two sets of landmark points is expressed as follows,
p_i = R · p_i′ + t,  i = 1, …, n
wherein, R and t are the rotation matrix and translation vector of the camera at any two moments respectively.
Pairing the two groups of landmark points, the error between each pair of landmark points p_i and p_i′ is expressed as
e_i = p_i - (R · p_i′ + t)
From the errors of all pairs of landmark points, a least-squares problem is constructed,
min_{R,t} J = (1/2) · Σ_{i=1}^{n} || p_i - (R · p_i′ + t) ||^2
solving the least square problem by adopting an SVD algorithm to obtain an optimal rotation matrix and translation vector, wherein the SVD algorithm is expressed as follows,
W = Σ_{i=1}^{n} q_i · q_i′^T = U · Σ · V^T
the optimal rotation matrix is expressed as follows,
R = U · V^T
the optimal translation vector is represented as follows,
t = p - R · p′
wherein,
q_i = p_i - p,   p = (1/n) · Σ_{i=1}^{n} p_i
q_i′ = p_i′ - p′,   p′ = (1/n) · Σ_{i=1}^{n} p_i′
and performing robot pose estimation through the rotation matrix and the translation vector of the camera at each moment.
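A numerical sketch of this SVD solution (the standard Kabsch/Umeyama-style alignment of two matched 3d point sets) is shown below for illustration.

```python
# Sketch: solve min over R, t of sum ||p_i - (R p_i' + t)||^2 by SVD.
import numpy as np

def estimate_pose_svd(P, P_prime):
    """P, P_prime: (n, 3) arrays of matched 3d landmarks with P approx R @ P' + t."""
    p_bar = P.mean(axis=0)
    p_bar_prime = P_prime.mean(axis=0)
    Q = P - p_bar                        # q_i
    Q_prime = P_prime - p_bar_prime      # q_i'
    W = Q.T @ Q_prime                    # W = sum_i q_i q_i'^T
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        U[:, -1] *= -1.0
        R = U @ Vt
    t = p_bar - R @ p_bar_prime
    return R, t
```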
Referring to fig. 3, the step of guiding the robot to move using the guide tag template of the robot includes: obtaining an ROI image of the output image; creating an image pyramid of the ROI image and the template image; obtaining similarity measurement of the output image and the template image by using the image pyramid; judging the most matched template image according to the similarity measurement; guiding the robot to move according to the guide label of the template image; wherein the guidance tag information includes: translation, rotation, stop and u-turn.
Further, the steps of creating the image pyramids of the ROI image and the template image are as follows: creating 4-layer pyramid images corresponding to the ROI image and the template image; down-sampling the 4-layer pyramid images; taking a 5 × 5 convolution kernel K, traversing each pixel of the down-sampled 4-layer pyramid images, and performing mean smoothing filtering, where the convolution kernel K is expressed as
K = (1/25) · 1_{5×5}, i.e. a 5 × 5 matrix with every entry equal to 1/25
specifically, an image pyramid is used for obtaining similarity measurement of an output image and a template image; the step of determining the best matching template image according to the similarity measure comprises: setting a threshold value; calculating a similarity value of each template image and the ROI image by using a similarity measurement criterion of the normalized correlation coefficient,
NCC(I_1, I_2) = Σ_x ( (I_1(x) - u_1) · (I_2(x) - u_2) ) / sqrt( Σ_x (I_1(x) - u_1)^2 · Σ_x (I_2(x) - u_2)^2 )
where I_1 is the template image, I_2 is the ROI image, u_1 is the mean of image I_1, u_2 is the mean of image I_2, and x indexes the pixel points of the images; the template image whose normalized correlation coefficient is the highest and greater than the threshold is taken as the best matching template image.
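For illustration, a sketch of the guide-label template matching with a 4-level pyramid, 5 × 5 mean smoothing and the normalized correlation coefficient is given below; the label set and the acceptance threshold of 0.8 are assumptions, not values from the filing.

```python
# Sketch: coarse-to-fine template matching of guide labels with OpenCV.
import cv2
import numpy as np

KERNEL = np.ones((5, 5), np.float32) / 25.0          # 5x5 mean-smoothing kernel

def build_pyramid(img, levels=4):
    pyr = [cv2.filter2D(img, -1, KERNEL)]
    for _ in range(levels - 1):
        pyr.append(cv2.filter2D(cv2.pyrDown(pyr[-1]), -1, KERNEL))
    return pyr

def best_guide_label(roi, templates, threshold=0.8):
    """templates: dict such as {"translate": img, "rotate": img, "stop": img, "u_turn": img}."""
    best_name, best_score = None, threshold
    for name, tmpl in templates.items():
        score = 0.0
        for r, t in zip(build_pyramid(roi), build_pyramid(tmpl)):
            if r.shape[0] < t.shape[0] or r.shape[1] < t.shape[1]:
                continue                              # template larger than ROI at this level
            score = max(score, float(cv2.matchTemplate(r, t, cv2.TM_CCOEFF_NORMED).max()))
        if score > best_score:
            best_name, best_score = name, score
    return best_name                                  # None if no template exceeds the threshold
```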
The robot mainly comprises a motion chassis and a motion control system, and realizes autonomous positioning and navigation with the assistance of laser, vision, IMU and GNSS sensors. For example, an indoor scene is identified by judging the satellite signals and autonomous positioning and navigation are realized with the help of the other sensors; when the robot moves from indoors to outdoors and satellite signals are received, the GNSS and vision based indoor and outdoor linked robot positioning method automatically switches to outdoor navigation, and autonomous navigation is realized under the guidance of the labels. Positioning is accurate and navigation precision is markedly improved across different scenes.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of the technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. Indoor and outdoor linked robot positioning method based on GNSS and vision is characterized by comprising the following steps:
when the true value of the position of the robot cannot be obtained, the pose and the speed of the robot are judged through the GNSS signals and the output image of the vision sensor, and the robot is positioned according to the pose and the speed of the robot;
otherwise, identifying the positioning labels in the form of the peripheral two-dimensional codes through the robot operating system to obtain a true position value of the robot, and covering estimation information by using the true position value to perform repositioning.
2. The GNSS and vision-based indoor-outdoor linked robot positioning method of claim 1, wherein the step of determining the pose and the speed of the robot from the GNSS signal and the output image of the vision sensor, and performing robot positioning based on the pose and the speed of the robot comprises:
judging the pose and the speed of the robot through the GNSS signal;
performing feature matching on an output image of the vision sensor to obtain a robot pose;
performing timestamp synchronization on the GNSS signal and the output signal of the vision sensor through a robot operating system;
the position and the speed of the robot judged by the robot operating system and the GNSS signal are fused, the position and the speed of the robot are obtained by the vision sensor, the estimated information of the position and the speed of the robot is obtained, and the robot is positioned by the estimated information.
3. The GNSS and vision based indoor-outdoor linked robot positioning method of claim 2, wherein the step of determining the pose and the speed of the robot through GNSS signals comprises:
analyzing GPGGA statement information of the GNSS, and acquiring GNSS states and the number of used satellites;
when the GNSS state is 0 and the number of used satellites is 00, the robot is judged to be indoors; when the GNSS state is 1 or 2 and the number of the used satellites is not less than 04, the robot is judged to be outdoors, when the GNSS positioning data is received, the environment where the robot is located is judged to have signals, otherwise, the environment where the robot is located is judged to have no signals;
when the robot is outdoors and has good signals, the position and the speed of the robot are calculated by utilizing GNSS information, and the method comprises the following steps:
setting the starting point of the robot motion as s_0; denoting any two consecutive points during the motion by s_n and s_{n+1}, the time interval between them by t_{n,n+1}, the longitude and latitude of s_n by (lon_{s_n}, lat_{s_n}) and the longitude and latitude of s_{n+1} by (lon_{s_{n+1}}, lat_{s_{n+1}});
the distance d_{n,n+1} between s_n and s_{n+1} is given by
d_{n,n+1} = 2R · arcsin( sqrt( sin^2((lat_{s_{n+1}} - lat_{s_n})/2) + cos(lat_{s_n}) · cos(lat_{s_{n+1}}) · sin^2((lon_{s_{n+1}} - lon_{s_n})/2) ) )
wherein R is the mean radius of the earth;
the azimuth angle θ_n of s_{n+1} relative to s_n is given by
θ_n = arctan2( sin(lon_{s_{n+1}} - lon_{s_n}) · cos(lat_{s_{n+1}}), cos(lat_{s_n}) · sin(lat_{s_{n+1}}) - sin(lat_{s_n}) · cos(lat_{s_{n+1}}) · cos(lon_{s_{n+1}} - lon_{s_n}) )
taking s_0 as the origin of the coordinate system and the due-north direction as its x axis, a GNSS coordinate system following the right-hand rule is established;
the robot pose at any moment is estimated from the distance and azimuth angle between two consecutive points as
x_{n+1} = x_n + d_{n,n+1} · cos θ_n,  y_{n+1} = y_n + d_{n,n+1} · sin θ_n,  and the heading at s_{n+1} is taken as θ_n
the robot speed at any moment is estimated from the distance, azimuth angle and time interval between two consecutive points as
v_x = d_{n,n+1} · cos θ_n / t_{n,n+1},  v_y = d_{n,n+1} · sin θ_n / t_{n,n+1},  ω = (θ_n - θ_{n-1}) / t_{n,n+1}
where v_x is the speed of the robot along the x axis of the GNSS coordinate system, v_y is the speed of the robot along the y axis of the GNSS coordinate system, and ω is the angular speed of the robot in the GNSS coordinate system.
4. The GNSS and vision based indoor-outdoor linked robot positioning method of claim 2, wherein the step of performing feature matching on the output image of the vision sensor to obtain the robot pose comprises:
detecting a plurality of ORB corner points; calculating the direction of each ORB corner point; calculating a descriptor of each ORB corner point;
performing brute-force matching of the ORB corner points in two adjacent frames via the descriptors to obtain successfully matched 2d landmark points;
acquiring a disparity map of an output image by using an SGBM algorithm in opencv;
performing depth calculation on pixel points with non-zero pixel values in the parallax image to obtain a depth image;
acquiring matched 3d landmark points through the depth map and the 2d landmark points;
and decomposing the 3d landmark points through SVD to obtain the pose of the robot.
5. The GNSS and vision based indoor-outdoor linked robot positioning method of claim 4, wherein the step of detecting to obtain the ORB corner point comprises:
setting a threshold t and denoting the brightness of any pixel point p on the output image by I_p;
Drawing a circle by taking each pixel point as a circle center and r as a radius;
taking r +1 pixels uniformly distributed on each circle, and setting the brightness of any pixel in the r +1 pixels as I;
if the number of pixels on a circle satisfying |I - I_p| < t is less than r, the pixel at the center of that circle is not an ORB corner point; otherwise all pixels on the circle are checked, and if the number of consecutive pixels satisfying |I - I_p| < t is greater than or equal to r_2, the pixel at the center of the circle is an ORB corner point, and otherwise it is not;
the step of calculating the ORB corner direction comprises the following steps:
for each ORB corner point, take an image block A centered on the corner point with radius R and calculate the moments of the image block A,
m_{pq} = Σ_{(x,y)∈A} x^p · y^q · I(x, y)
where p, q ∈ {0, 1}; x is the abscissa of a pixel in image block A, y is the ordinate of a pixel in image block A, and I(x, y) is the gray value of the pixel at coordinates (x, y);
the centroid of the image block a is calculated by the moments of the image block a,
C = (c_x, c_y) = (m_{10}/m_{00}, m_{01}/m_{00})
connecting the geometric center and the centroid of each image block to obtain a vector, defining the direction of the vector as the direction of each ORB corner point, expressed as,
α = arctan2(c_y, c_x) = arctan2(m_{01}, m_{10})
where c_y is the ordinate of the vector and c_x is the abscissa of the vector;
the step of calculating the descriptor of each ORB corner point includes:
setting a neighborhood window by taking each ORB corner point as a center;
any 128 pairs of pixel points are selected in each neighborhood window, and the feature corresponding to each pair of pixels is calculated as
τ = 1 if p(x) < p(y), and τ = 0 otherwise
where p(x) and p(y) are the pixel values of the two pixels in a pair;
the feature vector formed by the 128 pairs of pixels in the neighborhood window of each ORB corner point is the descriptor of that ORB corner point.
6. The GNSS and vision based indoor and outdoor linked robot positioning method of claim 4, wherein the step of performing brute-force matching of ORB corner points in two adjacent frames via the descriptors comprises:
calculating the Hamming distance between each ORB corner point of one image and all the ORB corner points of the other image in every two adjacent images;
judging the point with the closest Hamming distance in two adjacent frame images as the matching point in the two frame images;
eliminating the matching points of which the Hamming distance is more than two times of the minimum Hamming distance from all the matching points;
and acquiring a disparity map of an output image by using an SGBM algorithm in opencv.
7. The GNSS and vision based indoor and outdoor linked robot positioning method of claim 4, wherein the depth calculation is performed for the pixel points with non-zero pixel values in the disparity map according to the following formula:
Z = f · b / d
wherein Z is the pixel depth, d is the pixel parallax, b is the binocular baseline length, and f is the camera focal length.
8. The GNSS and vision based indoor and outdoor linked robot positioning method of claim 4, wherein the step of obtaining robot pose by SVD decomposition of 3d landmark points comprises:
two groups of 3d landmark points which are matched at any two moments are obtained through the depth map and the 2d landmark points,
P = {p_1, …, p_n}
P′ = {p_1′, …, p_n′}
the coordinate transformation existing between the two sets of 3d landmark points is represented as follows,
p_i = R · p_i′ + t,  i = 1, …, n
wherein R and t are respectively a rotation matrix and a translation vector of the camera at any two moments;
pairing the two groups of 3d landmark points, the error between each pair of landmark points p_i and p_i′ is expressed as
e_i = p_i - (R · p_i′ + t)
from the errors of all pairs of 3d landmark points, a least-squares problem is constructed,
min_{R,t} J = (1/2) · Σ_{i=1}^{n} || p_i - (R · p_i′ + t) ||^2
solving the least square problem by adopting an SVD algorithm to obtain an optimal rotation matrix and translation vector, wherein the SVD algorithm is expressed as follows,
W = Σ_{i=1}^{n} q_i · q_i′^T = U · Σ · V^T
the optimal rotation matrix is expressed as follows,
R = U · V^T
the optimal translation vector is represented as follows,
t = p - R · p′
wherein,
q_i = p_i - p,   p = (1/n) · Σ_{i=1}^{n} p_i
q_i′ = p_i′ - p′,   p′ = (1/n) · Σ_{i=1}^{n} p_i′
and performing robot pose estimation through the rotation matrix and the translation vector of the camera at each moment.
9. The GNSS and vision based indoor and outdoor linked robot positioning method of claim 1, further comprising the step of: guiding the robot to move by using the guide label template of the robot according to the positioning information.
10. The GNSS and vision based indoor and outdoor linked robot positioning method of claim 9, wherein the step of guiding the robot to move by using the guide label template of the robot comprises:
obtaining an ROI image of the output image;
creating an image pyramid of the ROI image and the template image;
obtaining similarity measurement of the output image and the template image by using the image pyramid; judging the most matched template image according to the similarity measurement;
guiding the robot to move according to the guide label of the template image; wherein the guidance tag information includes: translation, rotation, stopping and turning around;
the step of creating the image pyramid of the ROI image and the template image is as follows:
creating 4 layers of pyramid images corresponding to the ROI image and the template image;
down-sampling the 4-layer pyramid images;
taking a 5 × 5 convolution kernel K, traversing each pixel of the down-sampled 4-layer pyramid images, and performing mean smoothing filtering, where the convolution kernel K is expressed as
K = (1/25) · 1_{5×5}, i.e. a 5 × 5 matrix with every entry equal to 1/25
obtaining similarity measurement of an output image and a template image by using an image pyramid; the step of determining the best matching template image according to the similarity measure comprises:
setting a threshold value;
calculating a similarity value of each template image and the ROI image by using a similarity measurement criterion of the normalized correlation coefficient,
NCC(I_1, I_2) = Σ_x ( (I_1(x) - u_1) · (I_2(x) - u_2) ) / sqrt( Σ_x (I_1(x) - u_1)^2 · Σ_x (I_2(x) - u_2)^2 )
where I_1 is the template image, I_2 is the ROI image, u_1 is the mean of image I_1, u_2 is the mean of image I_2, and x indexes the pixel points of the images;
and taking the template image with the highest similarity of the normalized correlation coefficient and larger than a threshold value as the best matched template image.
CN202111447907.9A 2021-11-30 2021-11-30 Indoor and outdoor linked robot positioning method based on GNSS and vision Pending CN114119752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111447907.9A CN114119752A (en) 2021-11-30 2021-11-30 Indoor and outdoor linked robot positioning method based on GNSS and vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111447907.9A CN114119752A (en) 2021-11-30 2021-11-30 Indoor and outdoor linked robot positioning method based on GNSS and vision

Publications (1)

Publication Number Publication Date
CN114119752A true CN114119752A (en) 2022-03-01

Family

ID=80368995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111447907.9A Pending CN114119752A (en) 2021-11-30 2021-11-30 Indoor and outdoor linked robot positioning method based on GNSS and vision

Country Status (1)

Country Link
CN (1) CN114119752A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578426A (en) * 2022-10-25 2023-01-06 哈尔滨工业大学 Indoor service robot repositioning method based on dense feature matching
CN115578426B (en) * 2022-10-25 2023-08-18 哈尔滨工业大学 Indoor service robot repositioning method based on dense feature matching
CN117036753A (en) * 2023-07-18 2023-11-10 北京观微科技有限公司 SAR image expansion method based on template matching and InfoGAN


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination