CN108469823B - Homography-based mobile robot formation following method - Google Patents

Homography-based mobile robot formation following method Download PDF

Info

Publication number
CN108469823B
CN108469823B CN201810301612.2A CN201810301612A CN108469823B CN 108469823 B CN108469823 B CN 108469823B CN 201810301612 A CN201810301612 A CN 201810301612A CN 108469823 B CN108469823 B CN 108469823B
Authority
CN
China
Prior art keywords
robot
following
image
formation
homography
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810301612.2A
Other languages
Chinese (zh)
Other versions
CN108469823A (en
Inventor
刘山
曹雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201810301612.2A priority Critical patent/CN108469823B/en
Publication of CN108469823A publication Critical patent/CN108469823A/en
Application granted granted Critical
Publication of CN108469823B publication Critical patent/CN108469823B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0287Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0291Fleet control

Abstract

The invention discloses a homography-based mobile robot formation following method, which utilizes a homography matrix to construct a virtual robot capable of reflecting the real-time pose of a random robot in an ideal formation on the premise of giving an ideal formation spacing distance and an ideal expected image, and converts the original formation problem into the track tracking problem of the virtual robot. In the formation following process, the speed of the pilot robot is estimated, and the pilot speed can be accurately estimated by utilizing a relation model between the homography and the speed and the real-time speed of the following robot, so that the formation experiment cost is saved by avoiding adopting a local communication mode. The method is simple and feasible, and can meet the requirement of formation and following of the mobile robots.

Description

Homography-based mobile robot formation following method
Technical Field
The invention relates to a trajectory tracking method, in particular to a homography-based mobile robot formation following method.
Background
The mobile robot formation control is a typical multi-robot cooperation problem, and has a wide application prospect in the aspects of military, life, industry and the like, the research on the problems is mainly concentrated in the intelligent traffic fields such as scheduling and tracking, the problems are always hot spots and difficulties of multi-robot cooperation related research in recent decades, and the research on the problems plays an important role when multiple robots need to complete operation together.
Formation following essentially belongs to the problem of trajectory tracking, so most of the current research on formation control is performed on the basis of trajectory tracking research, and the traditional research methods mainly comprise: the original nonlinear system is decomposed into a plurality of low-order subsystems, partial Lyapunov functions are designed for the subsystems, then a tracking controller is designed, or the system is converted into a serial or chain system form, and a track tracking controller is designed by a backstepping method. These traditional methods are all performed on the premise that the pose information of the robot is known, and the acquisition of the pose information of the robot needs to be considered in an actual scene, and are generally realized by adopting laser and radar equidistant sensors or establishing a local communication network and the like.
In recent years, robot vision servo control based on a monocular camera is rapidly developed, so that some related researches are gradually implemented around monocular vision feedback, and common methods include: a monocular camera is used for shooting a series of peripheral images, and depth information acquired by a laser sensor is used for constructing a local three-dimensional map to position the pose information of the robot in real time so as to complete a formation task, so that the cost is high and the design difficulty of a controller is high; estimating the related pose by adding mark points on the premise that related information needs to be marked and recorded in advance, and more prior knowledge is provided; methods based on multi-view geometry are more extensive, such as pole and homography decomposition, but there are also problems affecting system stability, such as pole odd xor decomposition is not unique.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a mobile robot formation following method based on homography, which has the following specific technical scheme:
a mobile robot formation following method based on homography is used for formation following control of mobile robots in a multi-robot cooperation system, a piloting-following model is adopted in the method, the piloting robot and the following robot are both wheel robots, and the following robot is provided with a monocular camera which can collect image information in real time, and the method is characterized by comprising the following steps:
step 1: at an initial moment, giving an expected image and distance information representing the pose of the following random robot in an ideal formation, acquiring a current image as an initial image by using a monocular camera installed on the following robot, and calculating and recording a homography matrix according to the expected image and the initial image;
step 2: after a homography matrix between the initial image and the expected image is obtained in the step 1, selecting a plurality of characteristic points in a certain plane area of the initial image, which belongs to the pilot robot and is vertical to the optical axis of the camera;
and step 3: giving a signal to enable the following robot to start to operate, continuously tracking the characteristic points in the operation process to calculate a homography matrix between the current image and the expected image, and further constructing a system error according to homography matrix elements;
and 4, step 4: calculating and recording the linear velocity and the angular velocity of the random robot in the current running period according to the homography matrix of the current image and the expected image obtained by calculation in the step 3, and calculating the homography matrix between the next current image and the expected image when the running period passes; estimating the linear velocity and the angular velocity of the piloting robot by combining the homography matrix of the previous period and the current period and the linear velocity and the angular velocity of the following robot of the previous period;
and 5: the real-time system error obtained in the step (3) and the linear velocity and the angular velocity of the piloting robot estimated in the step (4) are jointly used as input signals of a controller, then the controller outputs signals to drive the following robot to move, and the piloting robot always moves autonomously in the running process;
step 6: and (3) the following robot receives the driving signal to move, simultaneously acquires a real-time current image by using the monocular camera, and repeats the steps 3, 4 and 5, and the following steps form an ideal queue appointed by a desired image with the pilot robot gradually.
Preferably, the calculation formula of the homography matrix in step 3 is as follows:
Figure GDA0002358991160000021
wherein (x)e,zee) Representing the relative pose of the following robot to its desired position, d*And the distance between the expected position and the plane area is represented, namely the distance between the random robot and the pilot robot in the ideal formation.
Preferably, the systematic error constructed in step 3 is calculated by the following equation:
Figure GDA0002358991160000022
wherein h is11,h13,h31,h33Are elements in the homography matrix.
Preferably, in step 4, the linear velocity and the angular velocity of the moving robot during operation can be estimated according to the following formulas:
Figure GDA0002358991160000031
Figure GDA0002358991160000032
wherein (v)ff) And following the linear speed and the angular speed of the robot for the current operation period.
Preferably, the speed signal output by the controller for driving the following robot in step 5 is calculated by the following formula:
Figure GDA0002358991160000033
wherein k is1,k2The gain is controlled, and the value is taken according to an actual experiment.
The invention has the beneficial effects that: the method and the device have the advantages that the system error is obtained by combining the characteristic point tracking mode with the homography transfer characteristic, the real-time performance of system operation is improved, the speed of the piloting robot is accurately estimated by utilizing a relation model between the homography and the speed and the real-time speed of the following robot, and no additional information interaction is needed.
Drawings
FIG. 1 is a flow chart of a homography based mobile robot formation following method of the present invention;
FIG. 2 is a schematic diagram of the calculation of homographies between a current image and a desired image;
FIG. 3 is a schematic view of a local coordinate system of the following robot;
FIG. 4 is a representation of a desired image following an ideal pose of a robot;
FIG. 5 is a real-time image taken following the robot;
FIG. 6 is a diagram of a path followed by a robot;
fig. 7 is a system error graph.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, and the objects and effects of the present invention will become more apparent, and the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a homography-based mobile robot formation following method is used for formation following control of a mobile robot in a multi-robot cooperative system, a piloting-following model is adopted, the piloting robot and the following robot are both wheeled robots, and the following robot is provided with a monocular camera to collect image information in real time, and the method comprises the following steps:
step 1: at an initial moment, giving an expected image and distance information representing the pose of the following random robot in an ideal formation, acquiring a current image by using a monocular camera installed on the following robot, taking the current image as an initial image, and calculating and recording a homography matrix according to the expected image and the current initial image;
step 2: after a homography matrix between the initial image and the expected image is obtained in the step 1, selecting a plurality of characteristic points in a certain plane area of the initial image, which belongs to the pilot robot and is vertical to the optical axis of the camera;
the pixel coordinates of a plurality of matching point pairs in two images of the expected image and the initial image can be obtained by adopting a characteristic point matching mode based on SIFT, a homography matrix between the two images is calculated by utilizing the pixel coordinates, the planar attribute of homography is considered, and the relationship between the following robot pose and the navigation robot pose is described, so that the pixel points are correspondingly positioned in a certain planar area of the navigation robot and vertical to the optical axis of the camera.
And step 3: giving a signal to enable the following robot to start to operate, continuously tracking the characteristic points in the operation process to calculate a homography matrix between the current image and the expected image, and further constructing a system error according to homography matrix elements;
the feature point tracking used in the step is realized by adopting an LK sparse optical flow method, firstly, feature points are searched and recorded, angular points with obvious features and sub-pixel angular points are selected from images, then, the movement from the feature points of a current frame to the pixel position of a next frame, namely, the optical flow, is calculated based on the LK optical flow method, and finally, the correct positions of the feature points in the next frame of images are screened out through the judgment of certain indexes, so that the tracking of the feature points between two adjacent frames of images is realized. According to the method, the homography between two adjacent images can be rapidly calculated, and in view of the transfer characteristic of the homography, any two non-adjacent images can be calculated in an accumulative multiplication mode, so that the homography matrix between the real-time current image and the expected image can be further rapidly calculated, and the process is represented by fig. 2:
in the figure HitRepresenting a homography matrix between the initial image and the expected image, obtained in step 1, wherein a subscript k represents a k-th frame image, and a homography matrix of two adjacent frames is Hk+1 kReferred to as incremental homography, then the incremental homographies H of a series of adjacent frames between the current image and the initial imagek+1 kObtaining the homography H between the current image and the initial image by multiplicationciAnd then according to Hct=HciHitObtaining the homography H of the current image and the expected imagect
A local coordinate system is established by taking the current position of the following robot as an origin and taking the orientation as a z-axis as shown in figure 3, a virtual robot is constructed to represent the ideal pose of the following robot, and the coordinate is (x) under the current local coordinate systeme,zee) Then, it is not difficult to find that the rotation matrix and the translation vector between the following robot and the virtual robot at the current moment are respectively:
Figure GDA0002358991160000041
the homography matrix between the following robot pose and the virtual robot pose, i.e., the expected pose, can be calculated by the following formula:
Figure GDA0002358991160000051
wherein a certain plane which belongs to the piloting robot and is vertical to the optical axis of the camera is set as a reference plane, d*The distance between the expected position and the reference plane is represented, namely the distance between the random robot and the pilot robot in the ideal formation,
Figure GDA0002358991160000052
the normal vector of the reference plane in the expected pose coordinate system is shown, in the practical situation, the optical axis of the monocular camera on the virtual robot, namely the z-axis of the expected pose coordinate system is vertical to the reference plane of the pilot robot, namely, the normal vector of the reference plane is shown as the normal vector in the expected pose coordinate system
Figure GDA0002358991160000053
Then the homography matrix can be further represented as:
Figure GDA0002358991160000054
the homography H represented by the above formula is the homography H of the current image and the desired image calculated based on the feature point trackingctIt can be seen that only four elements in the homography matrix are variables, respectively h11,h13,h31,h33The error variables are constructed using these four elements as follows:
Figure GDA0002358991160000055
and 4, step 4: calculating and recording the linear velocity and the angular velocity of the random robot in the current running period according to the homography matrix of the current image and the expected image obtained by calculation in the step 3, and calculating the homography matrix between the next current image and the expected image when the running period passes; estimating the linear velocity and the angular velocity of the piloting robot by combining the homography matrix of the previous period and the current period and the linear velocity and the angular velocity of the following robot of the previous period;
in world coordinatesUnder the system, the coordinate of the following robot is assumed to be (x)f,zff) The coordinates of the virtual robot are (x)v,zvv) The coordinate of the piloted robot is (x)l,zll) And the coordinates of the latter two satisfy:
Figure GDA0002358991160000061
the linear velocity and angular velocity of the following robot and the pilot robot are assumed to be (v) respectivelyff),(vldld) Under the local coordinate system of the following robot, the relative position and posture coordinates of the virtual robot and the following robot are (x)e,zee) The relative coordinate and the inertial coordinate of the following virtual robot satisfy the following relation:
Figure GDA0002358991160000062
the following robot has a kinematic model of
Figure GDA0002358991160000063
The virtual robot does not satisfy the general wheel type robot kinematics model
Figure GDA0002358991160000064
Taking out the non-constant element h in the homography matrix11,h13,h31,h33And estimating the linear velocity and the angular velocity of the pilot robot according to the formula (6) and the following and virtual robot kinematics model, wherein the calculation method comprises the following steps:
Figure GDA0002358991160000065
Figure GDA0002358991160000066
and 5: the real-time system error obtained in the step (3) and the linear velocity and the angular velocity of the piloting robot estimated in the step (4) are jointly used as input signals of a controller, then the controller outputs signals to drive the following robot to move, and the piloting robot always moves autonomously in the running process;
given two control gains k greater than 01,k2Combining e in systematic error1,e2And estimated piloting robot velocity vldldThe speed input signal following the next operating cycle of the robot can be calculated as follows:
Figure GDA0002358991160000067
k1,k2as an empirical value, a suitable value is determined by actual experiments.
Step 6: and (3) the following robot receives the driving signal to move, simultaneously acquires a real-time current image by using the monocular camera, and repeats the steps 3, 4 and 5, and the following steps form an ideal queue appointed by a desired image with the pilot robot gradually.
Aiming at a mobile robot formation system represented by a piloting-following model, the invention adopts a formation following control scheme based on homography, and utilizes a homography matrix to construct a virtual robot capable of reflecting the real-time pose of a following random robot in an ideal formation on the premise of giving an ideal formation spacing distance and an ideal expected image, so that the original formation problem is converted into the track tracking problem of the virtual robot. In the formation following process, the speed of the pilot robot is estimated, and the pilot speed can be accurately estimated by utilizing a relation model between the homography and the speed and the real-time speed of the following robot, so that the formation experiment cost is saved by avoiding adopting a local communication mode. The method is simple and feasible, and can meet the requirement of formation and following of the mobile robots.
Given the expected image representation of the pose of the following robot in the ideal formation as shown in FIG. 4, the following robot has a wheel spacing of48cm, a Rochtech S5500 network camera is adopted as the monocular camera fixed at the geometric center of the monocular camera, and an internal reference matrix of the camera is obtained according to calibration
Figure GDA0002358991160000071
The following robot in the embodiment is a track robot, the piloting robot is a simple model, the piloting robot is equivalent to a general wheel type robot in terms of kinematics, the track robot is powered by a human and is pulled to move to meet incomplete constraint, and the control gain is k1=0.045,k2The distance of the desired image is d 0.03*=0.35m。
Fig. 5,6 and 7 show the operation results of the method, fig. 5 and fig. 6 respectively show a real-time image taken by the following robot in the operation process and a robot moving route (partially intercepted image) taken by an experimenter, in the experiment, the following robot firstly moves straight under artificial dragging, then turns left, and finally moves straight to stop, and it can be seen from the real-time image that the following robot gradually approaches to the following robot (because the initial pose is farther than the expected pose), the following robot and the leading robot keep a certain relative angle during turning to generate an angular velocity signal to drive the following robot to turn, and the part from the last moving straight to the stop and the last real-time image are very similar to the expected image, which also shows that the following robot and the leading robot keep an ideal formation to realize the task of formation following.
The error curve of the formation following system in the experiment is shown in FIG. 7, and the final error converges to (0.0061, -0.0042, -0.0078), the upper two are e1,e2Despite the large curve fluctuation, the overall trend shows that the position error e1,e2The action on the control law gradually converges to near 0 and tends to remain, with a directional deviation e3And also gradually converges to 0. Therefore, the method can effectively realize the conventional formation following task of the mobile robot, and has stable control effect and small convergence error.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (1)

1. A mobile robot formation following method based on homography is used for formation following control of mobile robots in a multi-robot cooperation system, a piloting-following model is adopted in the method, the piloting robot and the following robot are both wheel robots, and the following robot is provided with a monocular camera which can collect image information in real time, and the method is characterized by comprising the following steps:
step 1: at an initial moment, giving an expected image and distance information representing the pose of the following random robot in an ideal formation, acquiring a current image as an initial image by using a monocular camera installed on the following robot, and calculating and recording a homography matrix according to the expected image and the initial image;
step 2: after a homography matrix between the initial image and the expected image is obtained in the step 1, selecting a plurality of characteristic points in a certain plane area of the initial image, which belongs to the pilot robot and is vertical to the optical axis of the camera;
and step 3: giving a signal to enable the following robot to start to operate, continuously tracking the characteristic points in the operation process to calculate a homography matrix between the current image and the expected image, and further constructing a system error according to homography matrix elements;
the calculation formula of the homography matrix is as follows:
Figure FDA0002358991150000011
wherein (x)e,zee) Representing the relative pose of the following robot to its desired position, d*Presentation periodThe distance from the expected position to the plane area is the distance between the random robot and the pilot robot in the ideal formation;
the system error is calculated by the following formula:
Figure FDA0002358991150000012
wherein h is11,h13,h31,h33Are elements in a homography matrix;
and 4, step 4: calculating and recording the linear velocity and the angular velocity of the random robot in the current running period according to the homography matrix of the current image and the expected image obtained by calculation in the step 3, and calculating the homography matrix between the next current image and the expected image when the running period passes; and estimating the moving linear velocity and angular velocity of the pilot robot in the running process according to the following formulas by combining the homography matrix of the previous period and the current period and the linear velocity and angular velocity of the following robot of the previous period:
Figure FDA0002358991150000013
Figure FDA0002358991150000014
wherein (v)ff) Following the linear velocity and the angular velocity of the robot for the current operation cycle;
and 5: the real-time system error obtained in the step (3) and the linear velocity and the angular velocity of the piloting robot estimated in the step (4) are jointly used as input signals of a controller, then the controller outputs signals to drive the following robot to move, and the piloting robot always moves autonomously in the running process;
wherein the controller outputs a velocity signal for driving the following robot calculated by the following formula:
Figure FDA0002358991150000021
wherein k is1,k2The gain is controlled, and the value is taken according to an actual experiment;
step 6: and (3) the following robot receives the driving signal to move, simultaneously acquires a real-time current image by using the monocular camera, and repeats the steps 3, 4 and 5, and the following steps form an ideal queue appointed by a desired image with the pilot robot gradually.
CN201810301612.2A 2018-04-04 2018-04-04 Homography-based mobile robot formation following method Expired - Fee Related CN108469823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810301612.2A CN108469823B (en) 2018-04-04 2018-04-04 Homography-based mobile robot formation following method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810301612.2A CN108469823B (en) 2018-04-04 2018-04-04 Homography-based mobile robot formation following method

Publications (2)

Publication Number Publication Date
CN108469823A CN108469823A (en) 2018-08-31
CN108469823B true CN108469823B (en) 2020-03-20

Family

ID=63262482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810301612.2A Expired - Fee Related CN108469823B (en) 2018-04-04 2018-04-04 Homography-based mobile robot formation following method

Country Status (1)

Country Link
CN (1) CN108469823B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210232151A1 (en) * 2018-09-15 2021-07-29 Qualcomm Incorporated Systems And Methods For VSLAM Scale Estimation Using Optical Flow Sensor On A Robotic Device
CN109491381B (en) * 2018-11-06 2020-10-27 中国科学技术大学 Observer-based multi-mobile-robot self-adaptive formation tracking control method
CN110244772B (en) * 2019-06-18 2021-12-03 中国科学院上海微系统与信息技术研究所 Navigation following system and navigation following control method of mobile robot
CN111208830B (en) * 2020-02-23 2023-04-25 陕西理工大学 Three-closed-loop formation track tracking control method for wheeled mobile robot
CN112099505B (en) * 2020-09-17 2021-09-28 湖南大学 Low-complexity visual servo formation control method for mobile robot
US11429112B2 (en) * 2020-12-31 2022-08-30 Ubtech North America Research And Development Center Corp Mobile robot control method, computer-implemented storage medium and mobile robot
CN113110495B (en) * 2021-05-08 2022-11-08 广东省智能机器人研究院 Formation control method of mobile robots under consideration of external interference
CN115542904B (en) * 2022-09-27 2023-09-05 安徽对称轴智能安全科技有限公司 Grouping type collaborative fire-fighting robot fire scene internal grouping queue driving control method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679172A (en) * 2013-10-10 2014-03-26 南京理工大学 Method for detecting long-distance ground moving object via rotary infrared detector
CN104808590A (en) * 2015-02-14 2015-07-29 浙江大学 Mobile robot visual servo control method based on key frame strategy
WO2018049581A1 (en) * 2016-09-14 2018-03-22 浙江大学 Method for simultaneous localization and mapping

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679172A (en) * 2013-10-10 2014-03-26 南京理工大学 Method for detecting long-distance ground moving object via rotary infrared detector
CN104808590A (en) * 2015-02-14 2015-07-29 浙江大学 Mobile robot visual servo control method based on key frame strategy
WO2018049581A1 (en) * 2016-09-14 2018-03-22 浙江大学 Method for simultaneous localization and mapping

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Vision-based Control for Car Platooning using Homography Decomposition;Selim Benhimane等;《Proceedings of the 2005 IEEE International Conference on Robotics and Automation》;20050430;第2161-2166页 *
Visual Servo Control for Wheeled Robot Platooning Based on Homography;Cao Yu等;《2017 IEEE 6th Data Driven Control and Learning Systems Conference》;20170527;第628-630页 *
基于单应性矩阵的移动机器人视觉伺服切换控制;曹雨等;《控制理论与应用》;20170115;第34卷(第1期);第109-119页 *

Also Published As

Publication number Publication date
CN108469823A (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN108469823B (en) Homography-based mobile robot formation following method
Wang et al. A hybrid visual servo controller for robust grasping by wheeled mobile robots
WO2017088720A1 (en) Method and device for planning optimal following path and computer storage medium
CN110989639B (en) Underwater vehicle formation control method based on stress matrix
Hao et al. Planning and control of UGV formations in a dynamic environment: A practical framework with experiments
CN105196292B (en) Visual servo control method based on iterative duration variation
CN112097769B (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
Li et al. Localization and navigation for indoor mobile robot based on ROS
CN104808590A (en) Mobile robot visual servo control method based on key frame strategy
CN103112015A (en) Operating object position and posture recognition method applicable to industrial robot
CN105241449A (en) Vision navigation method and system of inspection robot under parallel architecture
CN114721275B (en) Visual servo robot self-adaptive tracking control method based on preset performance
Miao et al. Low-complexity leader-following formation control of mobile robots using only FOV-constrained visual feedback
Liang et al. Adaptive image-based visual servoing of wheeled mobile robots with fixed camera configuration
Barbosa et al. Robust image-based visual servoing for autonomous row crop following with wheeled mobile robots
KR101981641B1 (en) Method and system for formation control of multiple mobile robots
CN115993089B (en) PL-ICP-based online four-steering-wheel AGV internal and external parameter calibration method
CN109542094B (en) Mobile robot vision stabilization control without desired images
Wang et al. A LiDAR based end to end controller for robot navigation using deep neural network
Barbosa et al. Vision-based autonomous crop row navigation for wheeled mobile robots using super-twisting sliding mode control
Salhi et al. Fuzzy-PID hybrid controller for mobile robot using point cloud and low cost depth sensor
Yu et al. Visual confined-space navigation using an efficient learned bilinear optic flow approximation for insect-scale robots
Cao et al. Visual Servo Control for wheeled robot platooning based on homography
Hmeyda et al. Camera-based autonomous mobile robot path planning and trajectory tracking using PSO algorithm and PID Controller
Mehta et al. Finite-time visual servo control for robotic fruit harvesting in the presence of fruit motion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200320