CN108180912B - Mobile robot positioning system and method based on hybrid navigation band - Google Patents

Mobile robot positioning system and method based on hybrid navigation band

Info

Publication number
CN108180912B
Authority
CN
China
Prior art keywords
image
positioning
camera
datamatrix code
coordinate system
Prior art date
Legal status
Active
Application number
CN201711497983.4A
Other languages
Chinese (zh)
Other versions
CN108180912A (en)
Inventor
陈智君
李超
曹雏清
高云峰
Current Assignee
Wuhu Hit Robot Technology Research Institute Co Ltd
Original Assignee
Wuhu Hit Robot Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhu Hit Robot Technology Research Institute Co Ltd
Priority to CN201711497983.4A
Publication of CN108180912A
Application granted
Publication of CN108180912B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/04 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means

Abstract

The invention belongs to the technical field of robot positioning and provides a mobile robot positioning system based on a hybrid navigation band. The system comprises a DataMatrix code band and a color band arranged on the ground along the walking path of a mobile robot: the color band is arranged on paths in high-speed movement areas, and the DataMatrix code band is arranged on paths in accurate positioning areas. The circumscribed rectangle of the DataMatrix code consists of two adjacent solid line edges and two adjacent dotted line edges, and the DataMatrix code image carries the path distance of a calibration point. The mobile robot is provided with a camera and a positioning sensor, the shooting plane of the camera being parallel to the ground; based on the image shot by the camera, the positioning sensor identifies whether a DataMatrix code exists in the image: if so, positioning is performed based on the DataMatrix code in the image, and if not, positioning is performed based on the color band. Accurate coordinate information of the robot is obtained in areas requiring accurate positioning, while stable and reliable position information is obtained in areas requiring high-speed movement; the positioning speed is high and meets the requirement of high-speed movement.

Description

Mobile robot positioning system and method based on hybrid navigation band
Technical Field
The invention belongs to the technical field of robot positioning, and provides a mobile robot positioning system and method based on a hybrid navigation band.
Background
Mobile robots are increasingly used in fields such as automated factories and intelligent warehouse logistics, and accurate positioning is the key to their precise operation. Positioning methods for mobile robots include encoder-based methods, radio frequency identification (RFID), and visual positioning. The encoder method accumulates errors when the robot slips during movement. The RFID method requires tags to be arranged at high density, but densely arranged tags interfere with each other during positioning and cause errors. Visual positioning based on artificial landmarks is currently one of the most reliable positioning methods for mobile robots: the DataMatrix code is small and carries a large amount of information, allowing the sensor to be positioned accurately, but its positioning speed is low; color-ribbon-based visual positioning is simple in principle, fast, and highly reliable, but positions the sensor with lower precision. Combining the two methods exploits the stability of color-ribbon positioning at high speed while effectively compensating for the shortcomings of DataMatrix code band positioning.
The technical solution described in patent document CN203241826U lays the color ribbon continuously on the ground, with DataMatrix codes laid on the ribbon mainly to correct the traveling distance of the mobile robot. This layout makes the interval between two DataMatrix codes too large, so a camera installed at the bottom of the robot cannot guarantee that at least one DataMatrix code is captured at any moment, and a high-precision positioning result cannot be obtained.
In addition, when the color ribbon and the DataMatrix code are used to position the mobile robot, the relationship between the robot coordinate system and the image coordinate system must be calibrated in advance, which requires the height between the camera and the ground to remain unchanged while the robot runs and thus limits the operating environment of the mobile robot.
Disclosure of Invention
The embodiment of the invention provides a mobile robot positioning method based on a hybrid navigation band, and aims to solve the problems of low positioning precision and limited application of the existing visual positioning method.
The present invention is achieved in such a way that a hybrid navigation band-based mobile robot positioning system includes:
a DataMatrix code band and a color band arranged on the ground along the walking path of the mobile robot, wherein the color band is arranged on the path of the high-speed movement area and the DataMatrix code band is arranged on the path of the accurate positioning area;
the circumscribed rectangle of the DataMatrix code consists of two adjacent solid line edges and two adjacent dotted line edges, and the DataMatrix code image carries the path distance of the corresponding DataMatrix code calibration point, wherein the calibration point is the intersection point of the two adjacent solid line edges or the intersection point of the two adjacent dotted line edges;
the mobile robot is provided with a camera and a positioning sensor, the shooting plane of the camera is parallel to the ground, the camera sends the acquired image to the positioning sensor, the positioning sensor identifies whether a DataMatrix code exists in the image, if so, the positioning is carried out based on the DataMatrix code in the image, and if not, the positioning is carried out based on a color band.
The invention is realized in such a way that a mobile robot positioning method based on a hybrid navigation band comprises the following steps:
S1, identifying whether a DataMatrix code exists in the image; if so, performing positioning based on the path distance of the DataMatrix code calibration point, and if not, performing positioning based on the color band.
Further, the positioning based on the color band specifically includes the following steps:
S11, extracting a navigation path: extracting the color band center line and the average pixel width using a skeleton extraction algorithm, and fitting the center line to obtain the navigation path, the navigation path being a straight line;
S12, calculating the deflection angle of the camera plane relative to the navigation path;
S13, calculating the lateral offset of the robot relative to the navigation path.
Further, the deflection angle calculation method specifically includes:
acquiring the two intersection points P1 and P2 of the navigation path straight line with the image boundary; the deflection angle is calculated as θ = atan2(y2 − y1, x2 − x1), where (x1, y1) are the image coordinates of point P1 and (x2, y2) are the image coordinates of point P2.
Further, the step S13 includes the following steps:
S131, calculating the intersection point P3 of the straight line on which the navigation path lies with the straight line y = height/2 in the image, the image coordinates of P3 being (x3, y3), where height is the image height;
S132, the lateral offset of the robot with respect to the navigation path is dist = (x3 − width/2) × ratio, where width is the image width, ratio = lenWrd/lenImg, lenWrd is the real width of the color band, and lenImg is the pixel width of the color band in the image.
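For illustration, the computation in S12-S13 can be sketched in a few lines of Python (a minimal sketch; the function name, the guard against a horizontal line, and the example values are illustrative, not from the patent):

```python
import math

def color_band_pose(p1, p2, width, height, len_wrd, len_img):
    """Deflection angle (S12) and lateral offset (S13) from the fitted line.

    p1, p2        -- image coordinates of the navigation line's intersections
                     with the image boundary
    width, height -- image size in pixels
    len_wrd       -- real width of the color band (e.g. in mm)
    len_img       -- pixel width of the color band in the image
    """
    (x1, y1), (x2, y2) = p1, p2
    theta = math.atan2(y2 - y1, x2 - x1)               # deflection angle

    # S131: intersect the fitted line with the horizontal line y = height / 2
    if y2 == y1:
        raise ValueError("line is parallel to y = height/2")
    x3 = x1 + (x2 - x1) * (height / 2 - y1) / (y2 - y1)

    ratio = len_wrd / len_img                          # mm per pixel, from the band itself
    dist = (x3 - width / 2) * ratio                    # S132: lateral offset
    return theta, dist

# hypothetical usage: 640x480 image, 50 mm wide band imaged 25 px wide
theta, dist = color_band_pose((320, 0), (310, 479), 640, 480, 50.0, 25.0)
```

Because ratio is recomputed from the band's observed pixel width in every image, no prior pixel-to-distance calibration is needed, which is what allows the camera height to vary.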
Further, the positioning based on the DataMatrix code in the image specifically includes the following steps:
S21, identifying the rectangular area of the DataMatrix code in the image, namely the Roi area;
S22, determining the four boundary intersection points and the calibration point of the DataMatrix code based on the rectangular area and the circumscribed rectangle fitted to the rectangular area;
S23, identifying the DataMatrix code image in the Roi area, and acquiring the path distance of the calibration point carried by the DataMatrix code image;
S24, calculating the coordinates of the four boundary intersection points of the DataMatrix code in the ground coordinate system based on the path distance of the calibration point;
S25, calculating the position of the robot in the ground coordinate system based on the coordinates of the four boundary intersection points of the DataMatrix code in the image coordinate system, the coordinates of the four boundary intersection points in the ground coordinate system, and the camera's intrinsic parameter matrix M_cam.
Further, the step S21 includes the following steps:
S211, dividing the image into blocks and calculating the average gray level of each block;
S212, adaptively binarizing each block according to its average gray level;
S213, fitting a minimum circumscribed rectangle to each binarized connected domain;
S214, if the size of the fitted circumscribed rectangle matches the size of the circumscribed rectangle of the DataMatrix code, the corresponding rectangular area is the Roi area.
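A minimal OpenCV sketch of S211-S214 (the block size, the expected rectangle side, and the size tolerance are hypothetical parameters chosen for illustration):

```python
import cv2
import numpy as np

def find_dm_roi(gray, block=64, expected_side=120, tol=0.3):
    """S211-S214: locate candidate DataMatrix regions."""
    h, w = gray.shape
    binary = np.zeros_like(gray)
    # S211/S212: binarize each block against its own average gray level
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = gray[y:y + block, x:x + block]
            binary[y:y + block, x:x + block] = np.where(blk > blk.mean(), 255, 0)

    # S213: fit a minimum circumscribed rectangle to every connected domain
    n, labels = cv2.connectedComponents(binary)
    rois = []
    for lbl in range(1, n):
        pts = np.column_stack(np.where(labels == lbl))[:, ::-1].astype(np.float32)
        rect = cv2.minAreaRect(pts)          # ((cx, cy), (rw, rh), angle)
        rw, rh = rect[1]
        # S214: keep rectangles whose size matches the code's circumscribed rectangle
        if rw and rh and all(abs(s - expected_side) / expected_side < tol
                             for s in (rw, rh)):
            rois.append(rect)
    return rois
```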
Further, the step S25 specifically includes the following steps:
S251, calculating the height from the camera center to the ground, the calculation formula being h = dist_w / dist_ic, where dist_w is the distance between boundary intersection point i and boundary intersection point j in the ground coordinate system, and dist_ic is the distance between boundary intersection point i and boundary intersection point j in a defined coordinate system, the defined coordinate system being X_ic = M_cam^(-1)·X̃_i, where X̃_i is the homogeneous coordinate of the image coordinate X_i and M_cam is the intrinsic parameter matrix of the camera;
S252, calculating the coordinates of the four boundary intersection points in the camera coordinate system based on the equation σ·X̃_i = M_cam·X_c, where X_i is the coordinate of a boundary intersection point in the image coordinate system, X_c is its coordinate in the camera coordinate system, and σ is the photographing depth factor; when the shooting plane of the camera is parallel to the ground, the height from the camera center to the ground equals the photographing depth factor of the camera;
S253, when the camera shooting plane is parallel to the ground, the conversion from ground plane coordinates to camera plane coordinates can be simplified to the affine transformation X_c = R_w-c·X_w + t_w-c; the rotation matrix R_w-c and the translation vector t_w-c of this affine transformation can be determined from the coordinates of three boundary intersection points in the camera coordinate system and their coordinates in the ground coordinate system;
S254, the coordinate of the camera center is X_cam = −R_w-c·t_w-c, and the angle between the camera coordinate system and the ground coordinate system is θ = arctan(R_w-c(2,1)/R_w-c(1,1));
S255, based on the relation between the camera center and the robot position, X_cam = R_r-w·X_robot + t_r-w, the coordinates of the robot are obtained as X_robot = R_r-w^(-1)·(X_cam − t_r-w).
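Under the parallel-plane assumption, steps S251-S252 reduce to a few lines of linear algebra. A NumPy sketch (averaging h over all point pairs is an added stabilization, not stated in the patent; inputs are placeholders):

```python
import numpy as np

def camera_height_and_points(img_pts, gnd_pts, m_cam):
    """S251-S252: camera height and camera-frame coordinates of the intersections.

    img_pts -- 4x2 image coordinates of the boundary intersection points
    gnd_pts -- 4x2 ground coordinates of the same points
    m_cam   -- 3x3 camera intrinsic parameter matrix
    """
    # defined coordinate system: X_ic = M_cam^(-1) * homogeneous(X_i)
    homog = np.hstack([img_pts, np.ones((4, 1))])
    x_ic = (np.linalg.inv(m_cam) @ homog.T).T        # each row is (x, y, 1)

    # S251: h = dist_w / dist_ic; average over all pairs (i, j) for stability
    hs = [np.linalg.norm(gnd_pts[i] - gnd_pts[j]) /
          np.linalg.norm(x_ic[i, :2] - x_ic[j, :2])
          for i in range(4) for j in range(i + 1, 4)]
    h = float(np.mean(hs))

    # S252: sigma * homogeneous(X_i) = M_cam * X_c with sigma = h, so X_c = h * X_ic
    x_c = h * x_ic
    return h, x_c
```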
Further, the step S22 specifically includes the following steps:
S221, detecting the number of black-white transitions along the four sides of the rectangular area to identify the solid line edges and the dotted line edges;
S222, determining the four boundary intersection points of the DataMatrix code from the circumscribed rectangle fitted to the rectangular region, wherein the intersection point of two adjacent solid line edges or of two adjacent dotted line edges is the calibration point of the DataMatrix code.
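A sketch of S221-S222 (how densely the edges are sampled and the transition-count threshold separating solid from dotted edges are illustrative choices):

```python
import numpy as np

def count_transitions(binary, p0, p1, samples=64):
    """Count black/white transitions along the segment p0 -> p1 of a 0/255 image."""
    xs = np.clip(np.linspace(p0[0], p1[0], samples).round().astype(int),
                 0, binary.shape[1] - 1)
    ys = np.clip(np.linspace(p0[1], p1[1], samples).round().astype(int),
                 0, binary.shape[0] - 1)
    vals = binary[ys, xs] > 0
    return int(np.count_nonzero(vals[1:] != vals[:-1]))

def calibration_point(binary, corners, dotted_min=6):
    """S221: edges with many transitions are dotted, few are solid.
    S222: the calibration point is a corner joining two edges of the same kind."""
    kinds = ["dotted" if count_transitions(binary, corners[k], corners[(k + 1) % 4])
             >= dotted_min else "solid" for k in range(4)]
    for k in range(4):
        # corner k joins edge k-1 and edge k; solid-solid or dotted-dotted qualifies
        if kinds[k - 1] == kinds[k]:
            return corners[k], kinds[k]
    return None, None
```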
The positioning method provided by the embodiment of the invention has the following beneficial effects:
1. accurate coordinate information of the robot is obtained in areas requiring accurate positioning, while stable and reliable position information is obtained in areas requiring high-speed movement; the positioning speed is high and meets the requirement of high-speed movement;
2. positioning is performed based on the DataMatrix code and the color band, and the actual physical distance corresponding to a pixel does not need to be calibrated in advance, so the camera can change its shooting height arbitrarily within a preset height range and still obtain accurate positioning results;
3. the DataMatrix-code-based positioning method makes full use of the multiple boundary intersection points of each DataMatrix code in the calculation, so the positioning result is more accurate.
Drawings
Fig. 1 is a flowchart of a hybrid navigation band-based mobile robot positioning method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a mobile robot positioning system based on a hybrid navigation band, which comprises:
a DataMatrix code (two-dimensional code) band and a color band arranged on the ground along the walking path of the mobile robot, wherein the color band is arranged on the path of the high-speed movement area and the DataMatrix code band is arranged on the path of the accurate positioning area;
the circumscribed rectangle of the DataMatrix code consists of two adjacent solid line edges and two adjacent dotted line edges, and the DataMatrix code image carries the path distance of the corresponding DataMatrix code calibration point;
the mobile robot is provided with a camera and a positioning sensor, with the shooting plane of the camera parallel to the ground; the camera sends the acquired image to the positioning sensor, and the positioning sensor identifies whether a DataMatrix code exists in the image: if so, positioning is performed based on the DataMatrix code in the image, and if not, positioning is performed based on the color band. The color of the color band and its real width are calibrated before use, and within an accurate positioning area every image shot by the mobile robot contains at least one DataMatrix code.
Fig. 1 is a flowchart of the hybrid navigation band-based positioning method according to an embodiment of the present invention; for convenience of description, the method is presented as the following steps:
S1, identifying whether a DataMatrix code is present in the image; if present, positioning is performed based on the path distance of the DataMatrix code calibration point, and if not, positioning is performed based on the color band.
In the embodiment of the present invention, the positioning based on the color band specifically includes the following steps:
S11, extracting a navigation path: extracting the color band center line and the average pixel width using a skeleton extraction algorithm, and fitting the center line to obtain the navigation path, the navigation path being a straight line;
S12, calculating the deflection angle of the camera plane relative to the navigation path;
S13, calculating the lateral offset of the robot relative to the navigation path.
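A minimal sketch of the path extraction in S11 (the HSV range standing in for the calibrated band color is a placeholder, scikit-image's skeletonize stands in for the unspecified skeleton extraction algorithm, and estimating the average pixel width as band area divided by skeleton length is one plausible reading):

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def extract_navigation_line(bgr, hsv_lo=(20, 80, 80), hsv_hi=(35, 255, 255)):
    """S11: segment the color band, skeletonize it, and fit a straight line."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    skel = skeletonize(mask > 0)                       # center line of the band
    pts = np.column_stack(np.where(skel))[:, ::-1].astype(np.float32)  # (x, y)
    if len(pts) < 2:
        return None, 0.0

    len_img = np.count_nonzero(mask) / len(pts)        # average pixel width
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return (float(vx), float(vy), float(x0), float(y0)), float(len_img)
```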
In the embodiment of the present invention, the calculation method of the deflection angle specifically includes:
acquiring the two intersection points P1 and P2 of the navigation path straight line with the image boundary; the deflection angle is calculated as θ = atan2(y2 − y1, x2 − x1), where (x1, y1) are the image coordinates of point P1 and (x2, y2) are the image coordinates of point P2.
In the embodiment of the present invention, step S13 specifically includes the following steps:
S131, calculating the intersection point P3 of the straight line on which the navigation path lies with the straight line y = height/2 in the image, the image coordinates of P3 being (x3, y3), where height is the image height;
S132, the lateral offset of the robot with respect to the navigation path is dist = (x3 − width/2) × ratio, where width is the image width, ratio = lenWrd/lenImg, lenWrd is the real width of the color band, and lenImg is the pixel width of the color band in the image.
In the embodiment of the present invention, the positioning based on the DataMatrix code in the image specifically includes the following steps:
S21, identifying the rectangular area of the DataMatrix code in the image, namely the Roi area;
in the embodiment of the present invention, step S21 includes the following steps:
S211, dividing the image into blocks and calculating the average gray level of each block;
S212, adaptively binarizing each block according to its average gray level;
S213, fitting a minimum circumscribed rectangle to each binarized connected domain;
S214, if the size of the fitted circumscribed rectangle matches the size of the circumscribed rectangle of the DataMatrix code, the corresponding rectangular area is the Roi area;
S22, determining the four boundary intersection points and the calibration point of the DataMatrix code based on the rectangular area and the circumscribed rectangle fitted to the rectangular area;
in the embodiment of the present invention, step S22 specifically includes the following steps:
S221, detecting the number of black-white transitions along the four sides of the rectangular area to identify the solid line edges and the dotted line edges;
S222, determining the four boundary intersection points of the DataMatrix code from the circumscribed rectangle fitted to the rectangular region, wherein the intersection point of two adjacent solid line edges or of two adjacent dotted line edges is the calibration point of the DataMatrix code.
S23, identifying the DataMatrix code image in the Roi area, and acquiring the path distance of the calibration point carried by the DataMatrix code image;
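S23 does not prescribe a particular decoder; as one possibility, the pylibdmtx library can read the code from the Roi, assuming the payload is simply the calibration point's path distance as a number (both the library choice and the payload format are assumptions):

```python
import numpy as np
from pylibdmtx.pylibdmtx import decode

def read_path_distance(gray_roi: np.ndarray):
    """S23: decode the DataMatrix inside the Roi and parse the path distance."""
    results = decode(gray_roi)          # accepts grayscale numpy arrays
    if not results:
        return None
    # assumed payload: the calibration point's path distance as plain text
    return float(results[0].data.decode("ascii"))
```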
S24, calculating the coordinates of the four boundary intersection points of the DataMatrix code in the ground coordinate system based on the path distance of the calibration point;
in the embodiment of the present invention, the four boundary intersections include: the x coordinate of the four boundary intersection points is the path distance of the calibration point and is used for identifying the moving distance of the current camera center along the path, and the y coordinate is the offset distance of the boundary intersection points relative to the path.
S25, calculating the position of the robot in the ground coordinate system based on the coordinates of the four boundary intersection points of the DataMatrix code in the image coordinate system, the coordinates of the four boundary intersection points in the ground coordinate system, and the camera's intrinsic parameter matrix M_cam.
In the embodiment of the present invention, step S25 specifically includes the following steps:
S251, calculating the height from the camera center to the ground, the calculation formula being h = dist_w / dist_ic, where dist_w is the distance between boundary intersection point i and boundary intersection point j in the ground coordinate system, and dist_ic is the distance between boundary intersection point i and boundary intersection point j in a defined coordinate system, the defined coordinate system being X_ic = M_cam^(-1)·X̃_i, where X̃_i is the homogeneous coordinate of the image coordinate X_i and M_cam is the intrinsic parameter matrix of the camera;
S252, calculating the coordinates of the four boundary intersection points in the camera coordinate system based on the equation σ·X̃_i = M_cam·X_c, where X_i is the coordinate of a boundary intersection point in the image coordinate system, X_c is its coordinate in the camera coordinate system, and σ is the photographing depth factor; when the shooting plane of the camera is parallel to the ground, the height from the camera center to the ground equals the photographing depth factor of the camera;
S253, when the camera shooting plane is parallel to the ground, the conversion from ground plane coordinates to camera plane coordinates can be simplified to the affine transformation X_c = R_w-c·X_w + t_w-c; the rotation matrix R_w-c and the translation vector t_w-c of this affine transformation can be determined from the coordinates of three boundary intersection points in the camera coordinate system and their coordinates in the ground coordinate system;
S254, the coordinate of the camera center is X_cam = −R_w-c·t_w-c, and the angle between the camera coordinate system and the ground coordinate system is θ = arctan(R_w-c(2,1)/R_w-c(1,1)), which facilitates real-time adjustment of the orientation of the robot relative to the navigation path;
S255, based on the relation between the camera center and the robot position, X_cam = R_r-w·X_robot + t_r-w, the coordinates of the robot are obtained as X_robot = R_r-w^(-1)·(X_cam − t_r-w).
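A sketch of S253-S255 (estimating the rotation by orthogonal Procrustes over all four point pairs is a choice — the patent only requires three; the robot-to-camera extrinsics R_r-w, t_r-w are taken as known from the camera mounting; the camera-center line applies the inverse rotation, which is how the stated affine relation inverts):

```python
import numpy as np

def robot_pose(x_c, gnd_pts, r_rw, t_rw):
    """S253-S255: affine fit ground->camera, camera center, heading, robot pose.

    x_c        -- 4x3 camera-frame coordinates of the intersections (from S252)
    gnd_pts    -- 4x2 ground coordinates of the same points
    r_rw, t_rw -- known 2x2 rotation and 2-vector of the robot->camera relation
    """
    pc, pw = x_c[:, :2], np.asarray(gnd_pts, float)    # planar coordinates
    cc, cw = pc - pc.mean(0), pw - pw.mean(0)

    # S253: X_c = R_w-c * X_w + t_w-c, rotation by orthogonal Procrustes
    u, _, vt = np.linalg.svd(cc.T @ cw)
    r_wc = u @ vt
    if np.linalg.det(r_wc) < 0:                        # keep a proper rotation
        r_wc = u @ np.diag([1.0, -1.0]) @ vt
    t_wc = pc.mean(0) - r_wc @ pw.mean(0)

    # S254: camera center in ground coordinates and heading angle
    x_cam = -r_wc.T @ t_wc
    theta = np.arctan2(r_wc[1, 0], r_wc[0, 0])         # R(2,1)/R(1,1), 1-based

    # S255: X_cam = R_r-w * X_robot + t_r-w  =>  X_robot = R_r-w^(-1)(X_cam - t_r-w)
    x_robot = np.linalg.inv(r_rw) @ (x_cam - t_rw)
    return x_robot, float(theta)
```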
The positioning method provided by the embodiment of the invention has the following beneficial effects:
1. accurate coordinate information of the robot is obtained in areas requiring accurate positioning, while stable and reliable position information is obtained in areas requiring high-speed movement; the positioning speed is high and meets the requirement of high-speed movement;
2. positioning is performed based on the DataMatrix code and the color band, and the actual physical distance corresponding to a pixel does not need to be calibrated in advance, so the camera can change its shooting height arbitrarily within a preset height range and still obtain accurate positioning results;
3. the DataMatrix-code-based positioning method makes full use of the multiple boundary intersection points of each DataMatrix code in the calculation, so the positioning result is more accurate.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (1)

1. A mobile robot positioning method based on hybrid navigation, characterized in that a mobile robot positioning system based on hybrid navigation comprises: a DataMatrix code band and a color band arranged on the ground along a traveling path of the mobile robot, the color band being arranged on the path of a high-speed movement area and the DataMatrix code band being arranged on the path of an accurate positioning area, wherein the circumscribed rectangle of the DataMatrix code consists of two adjacent solid line edges and two adjacent dotted line edges, the path distance corresponding to a DataMatrix code calibration point is carried in the DataMatrix code image, and the calibration point refers to the intersection point of the two adjacent solid line edges or the intersection point of the two adjacent dotted line edges; the mobile robot is provided with a camera and a positioning sensor, the shooting plane of the camera is parallel to the ground, the camera sends the acquired image to the positioning sensor, and the positioning sensor identifies whether a DataMatrix code exists in the image; if so, positioning is performed based on the DataMatrix code in the image, and if not, positioning is performed based on the color band; the positioning method of the mobile robot positioning system based on hybrid navigation comprises the following steps:
S1, identifying whether a DataMatrix code is present in the image; if so, performing positioning based on the path distance of the DataMatrix code calibration point, and if not, performing positioning based on the color band:
the positioning based on the color band specifically comprises the following steps:
S11, extracting a navigation path: extracting the color band center line and the average pixel width using a skeleton extraction algorithm, and fitting the center line to obtain the navigation path, the navigation path being a straight line;
S12, calculating the deflection angle of the camera plane relative to the navigation path;
S13, calculating the lateral offset of the robot relative to the navigation path;
the deflection angle calculation method specifically comprises the following steps:
acquiring the two intersection points P1 and P2 of the navigation path straight line with the image boundary; the deflection angle is calculated as θ = atan2(y2 − y1, x2 − x1), where (x1, y1) are the image coordinates of point P1 and (x2, y2) are the image coordinates of point P2;
the step S13 includes the following steps:
S131, calculating the intersection point P3 of the straight line on which the navigation path lies with the straight line y = height/2 in the image, the image coordinates of P3 being (x3, y3), where height is the image height;
S132, the lateral offset of the robot with respect to the navigation path is dist = (x3 − width/2) × ratio, where width is the image width, ratio = lenWrd/lenImg, lenWrd is the real width of the color band, and lenImg is the pixel width of the color band in the image;
the positioning based on the DataMatrix code in the image specifically comprises the following steps:
S21, identifying the rectangular area of the DataMatrix code in the image, namely the Roi area;
S22, determining the four boundary intersection points and the calibration point of the DataMatrix code based on the rectangular area and the circumscribed rectangle fitted to the rectangular area;
S23, identifying the DataMatrix code image in the Roi area, and acquiring the path distance of the calibration point carried by the DataMatrix code image;
S24, calculating the coordinates of the four boundary intersection points of the DataMatrix code in the ground coordinate system based on the path distance of the calibration point;
S25, calculating the position of the robot in the ground coordinate system based on the coordinates of the four boundary intersection points of the DataMatrix code in the image coordinate system, the coordinates of the four boundary intersection points in the ground coordinate system, and the camera's intrinsic parameter matrix M_cam;
the step S21 includes the following steps:
S211, dividing the image into blocks and calculating the average gray level of each block;
S212, adaptively binarizing each block according to its average gray level;
S213, fitting a minimum circumscribed rectangle to each binarized connected domain;
S214, if the size of the fitted circumscribed rectangle matches the size of the circumscribed rectangle of the DataMatrix code, the corresponding rectangular area is the Roi area;
the step S25 specifically includes the following steps:
S251, calculating the height from the camera center to the ground, the calculation formula being h = dist_w / dist_ic, where dist_w is the distance between boundary intersection point i and boundary intersection point j in the ground coordinate system, and dist_ic is the distance between boundary intersection point i and boundary intersection point j in a defined coordinate system, the defined coordinate system being X_ic = M_cam^(-1)·X̃_i, where X̃_i is the homogeneous coordinate of the image coordinate X_i and M_cam is the intrinsic parameter matrix of the camera;
S252, calculating the coordinates of the four boundary intersection points in the camera coordinate system based on the equation σ·X̃_i = M_cam·X_c, where X_i is the coordinate of a boundary intersection point in the image coordinate system, X_c is its coordinate in the camera coordinate system, and σ is the photographing depth factor; when the shooting plane of the camera is parallel to the ground, the height from the camera center to the ground equals the photographing depth factor of the camera;
S253, when the camera shooting plane is parallel to the ground, the conversion from ground plane coordinates to camera plane coordinates can be simplified to the affine transformation X_c = R_w-c·X_w + t_w-c; the rotation matrix R_w-c and the translation vector t_w-c of this affine transformation can be determined from the coordinates of three boundary intersection points in the camera coordinate system and their coordinates in the ground coordinate system;
S254, the coordinate of the camera center is X_cam = −R_w-c·t_w-c, and the angle between the camera coordinate system and the ground coordinate system is θ = arctan(R_w-c(2,1)/R_w-c(1,1));
S255, based on the relation between the camera center and the robot position, X_cam = R_r-w·X_robot + t_r-w, the coordinates of the robot are obtained as X_robot = R_r-w^(-1)·(X_cam − t_r-w).
The step S22 specifically includes the following steps:
S221, detecting the number of black-white transitions along the four sides of the rectangular area to identify the solid line edges and the dotted line edges;
S222, determining the four boundary intersection points of the DataMatrix code from the circumscribed rectangle fitted to the rectangular region, wherein the intersection point of two adjacent solid line edges or of two adjacent dotted line edges is the calibration point of the DataMatrix code.
CN201711497983.4A 2017-12-31 2017-12-31 Mobile robot positioning system and method based on hybrid navigation band Active CN108180912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711497983.4A CN108180912B (en) 2017-12-31 2017-12-31 Mobile robot positioning system and method based on hybrid navigation band

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711497983.4A CN108180912B (en) 2017-12-31 2017-12-31 Mobile robot positioning system and method based on hybrid navigation band

Publications (2)

Publication Number Publication Date
CN108180912A CN108180912A (en) 2018-06-19
CN108180912B true CN108180912B (en) 2021-03-05

Family

ID=62549698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711497983.4A Active CN108180912B (en) 2017-12-31 2017-12-31 Mobile robot positioning system and method based on hybrid navigation band

Country Status (1)

Country Link
CN (1) CN108180912B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109129397A (en) * 2018-09-07 2019-01-04 北京特种机械研究所 Its system for carrying mechanical arm position is demarcated using AGV location information
CN109387194B (en) * 2018-10-15 2020-10-09 浙江明度智控科技有限公司 Mobile robot positioning method and positioning system
CN109596120A (en) * 2018-12-25 2019-04-09 芜湖哈特机器人产业技术研究院有限公司 A kind of combined positioning and navigating sensing system
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium
CN111045431B (en) * 2019-12-31 2022-05-27 芜湖哈特机器人产业技术研究院有限公司 Ribbon-based mobile robot navigation method and system
CN111103801B (en) * 2019-12-31 2022-05-17 芜湖哈特机器人产业技术研究院有限公司 Mobile robot repositioning method based on genetic algorithm and mobile robot
CN111551176A (en) * 2020-04-09 2020-08-18 成都双创时代科技有限公司 Robot indoor positioning method based on double-color bar and two-dimensional code
CN111578930B (en) * 2020-05-21 2022-06-21 深圳市海柔创新科技有限公司 Navigation method and navigation device
CN114279461B (en) * 2022-03-02 2022-07-08 中科开创(广州)智能科技发展有限公司 Mileage positioning method, unit, device, equipment and storage medium of robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6249415A (en) * 1985-08-29 1987-03-04 Fanuc Ltd Line-off detecting device for unmanned carrying car
CN103064417A (en) * 2012-12-21 2013-04-24 上海交通大学 Global localization guiding system and method based on multiple sensors
CN103294059A (en) * 2013-05-21 2013-09-11 无锡普智联科高新技术有限公司 Hybrid navigation belt based mobile robot positioning system and method thereof
CN104197899A (en) * 2014-09-24 2014-12-10 中国科学院宁波材料技术与工程研究所 Mobile robot location method and system
CN206075136U (en) * 2016-08-29 2017-04-05 深圳市劲拓自动化设备股份有限公司 Vision navigation control system based on fuzzy algorithmic approach


Also Published As

Publication number Publication date
CN108180912A (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN108180912B (en) Mobile robot positioning system and method based on hybrid navigation band
CN109791052B (en) Method and system for classifying data points of point cloud by using digital map
Yang et al. 3D local feature BKD to extract road information from mobile laser scanning point clouds
CN107850445B (en) Method and system for generating and using positioning reference data
CN105700532B (en) The Intelligent Mobile Robot navigator fix control method of view-based access control model
CN111801711A (en) Image annotation
CN108286970B (en) Mobile robot positioning system, method and device based on DataMatrix code band
CN110197157B (en) Pavement crack growth detection method based on historical crack data
CN105260699A (en) Lane line data processing method and lane line data processing device
JP2011511281A (en) Map matching method with objects detected by sensors
CN110307791B (en) Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame
JP6593088B2 (en) Vehicle position estimation apparatus and program
CN104217427A (en) Method for positioning lane lines in traffic surveillance videos
CN102592454A (en) Intersection vehicle movement parameter measuring method based on detection of vehicle side face and road intersection line
CN103206957B (en) The lane detection and tracking method of vehicular autonomous navigation
CN104794425B (en) A kind of car statistics method based on driving trace
CN102354457A (en) General Hough transformation-based method for detecting position of traffic signal lamp
CN110197173B (en) Road edge detection method based on binocular vision
CN102243705A (en) Method for positioning license plate based on edge detection
CN105444741A (en) Double view window based route characteristic identifying, deviation measuring, and accurate positioning method
RU2686279C1 (en) Ledges detection device and the ledges detection method
CN115100292A (en) External parameter online calibration method between laser radar and camera in road environment
CN109544607B (en) Point cloud data registration method based on road sign line
KR102137043B1 (en) Positioning accuracy improvement system
CN116958218A (en) Point cloud and image registration method and equipment based on calibration plate corner alignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant