CN109816687A - Concurrent depth identification for visual servo trajectory tracking of wheeled mobile robots


Info

Publication number
CN109816687A
CN109816687A (application CN201711171646.6A)
Authority
CN
China
Prior art keywords: mobile robot, depth, coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711171646.6A
Other languages
Chinese (zh)
Inventor
李宝全 (Li Baoquan)
邱雨 (Qiu Yu)
师五喜 (Shi Wuxi)
徐壮 (Xu Zhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University filed Critical Tianjin Polytechnic University
Priority to CN201711171646.6A priority Critical patent/CN109816687A/en
Publication of CN109816687A publication Critical patent/CN109816687A/en
Pending legal-status Critical Current


Abstract

A concurrent depth identification method for visual servo trajectory tracking of a wheeled mobile robot. The invention designs a visual tracking control method for wheeled mobile robots subject to nonholonomic motion constraints, in which the depth information of the scene is identified while the servo trajectory is tracked. First, an image video of coplanar feature points is recorded to represent the desired trajectory to be tracked. Then, by comparing the feature points in a static reference image, in the real-time image, and in the prerecorded image video, homography matrices in the Euclidean coordinate frame are established using the perspective-geometry relationships. By decomposing the obtained homography matrices, a motion control law and an adaptive update law for the unknown depth parameter are designed. A concurrent learning algorithm is incorporated into the adaptive update law, so that both historical and current system data are used to recover the unknown depth information. Finally, the Lyapunov method and the extended Barbalat lemma prove that the trajectory tracking error and the depth identification error converge simultaneously; the method can therefore effectively identify the depth information of the scene.

Description

Concurrent depth identification for visual servo trajectory tracking of wheeled mobile robots
Technical field
The invention belongs to the technical fields of computer vision and mobile robots, and more particularly to a concurrent adaptive depth identification method for a mobile robot performing visual servo trajectory tracking, which can track a desired time-varying trajectory while identifying the depth information of the scene.
Background technique
Wheeled mobile robots often work in dangerous environments. Because of the uncertain factors in such working environments, many researchers have developed a variety of solutions to improve the autonomous control ability of these systems. Recently, driven by advances in image-processing technology and control-algorithm theory, many researchers have adopted autonomous control techniques based on visual sensors to realize the autonomous navigation and control of such systems.
For a mobile-robot system, introducing a visual sensor can greatly enhance its intelligence, flexibility and environment-sensing capability. Controlling the motion of the mobile robot using real-time image feedback, i.e. visual servoing, can be widely applied in fields such as intelligent transportation and environment exploration; for these reasons, the technique has attracted particular attention and has become a research hotspot in robotics. Because a visual sensor images the scene according to the perspective projection model, the loss of depth information is its main defect; consequently, for a monocular-camera vision system it is difficult to completely recover the exterior three-dimensional scene information and the displacement of the mobile robot. In addition, because the mobile robot is subject to nonholonomic motion constraints, the design of a pose controller is very challenging. The missing depth information and the nonholonomic constraints therefore make the visual control task of a mobile robot exceptionally arduous. However, most existing methods only design compensation modules for the unknown scene information on top of an existing visual servo controller; in that sense, the scene model is still unavailable after the visual servo task is completed. Since the workspace information cannot be obtained completely, the further application and popularization of such robot systems is limited. In summary, how to identify depth information concurrently with visual servo control is a difficult but very valuable problem in the field of robot control.
Summary of the invention
A concurrent depth identification method for visual servo trajectory tracking of a wheeled mobile robot. The invention designs a visual tracking control method for wheeled mobile robots under nonholonomic motion constraints, capable of identifying the scene depth information while performing visual servo trajectory tracking. First, an image video of coplanar feature points is recorded to represent the desired trajectory to be tracked. Then, by comparing the feature points in the static reference image, in the current real-time image, and in the prerecorded image video, homography matrices in the Euclidean coordinate frame are established using the perspective-geometry relationships. By decomposing the obtained homography matrices, a motion control law and an adaptive update law for the identified depth parameter are designed. In addition, an extended concurrent learning method uses historical and current system data to build the adaptive update law that recovers the unknown depth information. Finally, the Lyapunov method and the extended Barbalat lemma prove that the system tracking error and the depth identification error converge simultaneously and that the system is globally stable, so the depth information of the scene can be effectively identified.
A concurrent adaptive depth identification method for simultaneous visual servo trajectory tracking of a wheeled mobile robot, characterized by comprising the following steps:
Step 1: define the system coordinate frames, comprising:
Step 1.1: establish the system model.
The coordinate frames of the wheeled mobile robot are defined as follows: F* denotes the reference frame of the camera with respect to the static feature points, F denotes the frame of the current pose of the wheeled mobile robot, and F_d denotes the rectangular frame corresponding to the desired pose of the wheeled mobile robot. The plane determined by the coplanar feature points P_i is the reference plane π, whose unit normal vector is n*. The 3-D Euclidean coordinates of P_i expressed in F, F_d and F* are denoted P_i(t), P_i^d(t) and P_i^*, respectively.
Assume that the distance from the origin of each frame to the feature points along the optical axis is always positive. The rotation matrix from F to F* is R(t), and the translation vector from F to F*, expressed in F, is cT*(t). Similarly, R_d(t) denotes the time-varying desired rotation matrix from F_d to F*, and the desired translation vector from F_d to F*, expressed in F_d, is dT*(t). cT*(t) and dT*(t) are defined as cT*(t) = [cT*x, 0, cT*z]^T and dT*(t) = [dT*x, 0, dT*z]^T.
R(t) and R_d(t) are rotations about the axis perpendicular to the plane of motion: θ(t) denotes the right-handed rotation angle of the rotation matrix from F to F*, and θ_d(t) denotes that of the rotation matrix from F_d to F*. The time derivative of θ_d(t) is governed by ω_d(t), the desired angular velocity of the WMR expressed in F_d. The distance from F* to the plane π along its unit normal is denoted d*, with d* = n*^T P_i^*, where n* is the unit normal vector of π expressed in F*.
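As a numeric illustration of the geometry above, the sketch below builds a planar rotation about the axis normal to the motion plane, a translation with zero component along that axis, and the plane depth d* = n*^T P_i^*. All numerical values are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def rot_y(theta):
    """Right-handed rotation about the camera y-axis (the axis normal
    to the plane of motion for a planar wheeled robot)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# Illustrative pose of the current frame F relative to the reference F*.
theta  = 0.3                          # rotation angle (rad), assumed
cTstar = np.array([0.5, 0.0, 1.2])    # translation [cT*x, 0, cT*z]^T

# Reference plane pi: unit normal n* and one coplanar feature point P_i*.
n_star = np.array([0.0, 0.0, 1.0])    # normal expressed in F*
P_star = np.array([0.2, -0.1, 2.0])   # feature point expressed in F*

d_star = float(n_star @ P_star)       # d* = n*^T P_i*
print(d_star)                         # prints 2.0
```

The zero y-component of the translation and the rotation confined to the y-axis encode the planar-motion constraint of the wheeled robot.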
Step 1.2: Euclidean reconstruction.
First the feature points are reconstructed, i.e. the normalized Euclidean coordinates of P_i expressed in F, F_d and F* are recovered. To obtain the Euclidean coordinates, each feature point is observed as projected pixel coordinates: v_i(t) in F (the actual time-varying image points), v_di(t) in F_d (the desired-trajectory image points), and v_i^* in F* (the constant reference image points). The normalized Euclidean coordinates of the feature points are related to the image points through the pinhole lens model, where A ∈ R^{3×3} is the known, constant camera intrinsic calibration matrix. The rotation matrices and translation vectors between the frames can then be expressed through the normalized Euclidean coordinates, where H(t) and H_d(t) denote the Euclidean homography matrices and cT*h(t) and dT*h(t) are the translation vectors containing the scale factor, i.e. the translations scaled by the depth d*. H and H_d are then decomposed by the Euclidean reconstruction technique to obtain cT*h(t), dT*h(t), θ(t) and θ_d(t).
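The Euclidean homography relation can be checked numerically. Under the planar-scene model used here, H = R + (T/d*) n*^T maps normalized reference coordinates to normalized current coordinates up to scale; the pose, plane normal and feature point below are illustrative assumptions.

```python
import numpy as np

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Assumed pose of F relative to F* and reference-plane parameters.
R      = rot_y(0.3)
T      = np.array([0.5, 0.0, 1.2])   # cT*(t), zero y-component
n_star = np.array([0.0, 0.0, 1.0])   # plane normal expressed in F*
d_star = 2.0                         # plane depth d* = n*^T P_i*

# Euclidean homography built from the scaled translation cT*h = cT*/d*.
H = R + np.outer(T / d_star, n_star)

# A coplanar point in F*: its normalized coordinates must map through H.
P_star = np.array([0.2, -0.1, d_star])   # lies on pi, since n*^T P = d*
m_star = P_star / P_star[2]              # normalized reference coordinates
P_cur  = R @ P_star + T                  # same point expressed in F
m_cur  = P_cur / P_cur[2]                # normalized current coordinates

Hm = H @ m_star
assert np.allclose(Hm / Hm[2], m_cur)    # m is proportional to H m*
```

Decomposing such an H back into R, T/d* and n* is what the Euclidean reconstruction step performs on the measured image data.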
Step 3: construct the adaptive controller.
Based on the open-loop dynamic equation of the system, a controller and an adaptive update law are designed for the camera-equipped mobile-robot system. The purpose of the control is to ensure that frame F tracks the time-varying trajectory of F_d. The translation and rotation tracking errors e(t) = [e1, e2, e3]^T are defined from cT*h1(t), cT*h2(t), dT*h1(t) and dT*h2(t) (defined in (12)) and from θ(t), θ_d(t) (defined in (13)), together with auxiliary variables, which give the open-loop error equations.
The depth estimation error is defined from the depth estimate d̂*(t); when this error tends to zero, the unknown depth information is effectively identified. The linear and angular velocities of the wheeled mobile robot are then designed accordingly.
Following the concurrent learning method, an adaptive update law is designed for the depth estimate d̂*, where Γ1, Γ2 are update gains, N is a positive constant, and t_k ∈ [0, t] are time points between the initial time and the current time. The projection function Proj(χ) keeps d̂* above a positive lower bound, so the estimate remains positive for all time. From the preceding derivation, the closed-loop error system is obtained.
This completes the simultaneous visual servo trajectory tracking and concurrent adaptive depth identification of the mobile robot.
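The concurrent-learning idea behind the update law — driving the depth estimate with recorded history data as well as the current measurement, under a projection that keeps the estimate above a positive lower bound — can be sketched as follows. The scalar regressor model and all gains are simplified placeholders, not the patent's exact update law.

```python
import numpy as np

def proj(chi, dhat, d_min):
    """Projection: zero the update when it would push dhat below d_min."""
    if dhat <= d_min and chi < 0.0:
        return 0.0
    return chi

def concurrent_update(dhat, phi_now, err_now, history, g1, g2, d_min, dt):
    """One Euler step of a concurrent-learning scalar estimator.

    history holds (phi_k, y_k) pairs recorded over earlier sampling
    periods, with y_k = phi_k * d_true measured at time t_k."""
    # Instantaneous adaptive term (placeholder gradient-type term).
    chi = g1 * phi_now * err_now
    # Concurrent-learning term: replay recorded data against the current
    # estimate, so the update stays excited even when err_now is zero.
    chi += g2 * sum(p * (y - p * dhat) for p, y in history)
    return dhat + dt * proj(chi, dhat, d_min)

# Illustrative run: identify d* = 2.0 from recorded data alone.
rng = np.random.default_rng(0)
d_true, dhat = 2.0, 0.5
history = [(p, p * d_true) for p in rng.uniform(0.5, 1.5, size=20)]
for _ in range(2000):
    dhat = concurrent_update(dhat, 0.0, 0.0, history, 1.0, 0.05, 0.1, 0.01)
print(round(dhat, 3))   # prints 2.0
```

The run deliberately sets the instantaneous error to zero: the recorded data alone drive the estimate to the true depth, which is the practical benefit of the concurrent-learning term.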
Advantages of the present invention
The invention proposes a simultaneous visual servoing and adaptive depth identification method for wheeled mobile robots. The main contributions are the following: 1. the depth information in the field of view is successfully identified, giving the vision system an excellent perception of the external environment; 2. homography matrices in the Euclidean coordinate frame are established and then decomposed, driving the robot effectively to the desired pose; 3. the combined controller and depth-identification module solve the global stability problem of the system, because the errors converge simultaneously.
Detailed description of the drawings:
Fig. 1 shows the defined coordinate-frame relationships;
Fig. 2 is a simulation result: the desired and current motion paths of the wheeled mobile robot;
Fig. 3 is a simulation result: the image trajectories of the feature points [dotted line: desired trajectory; solid line: current trajectory];
Fig. 4 is a simulation result: the system errors converge to zero [dotted line: desired value; solid line: true value];
Fig. 5 is a simulation result: the angular and linear velocities of the wheeled mobile robot;
Fig. 6 shows an experimental result: the evolution of the depth estimate obtained by the parameter adaptive update law [solid line: estimated value; dotted line: true value of d*].
Specific embodiment:
Embodiment 1
1. A concurrent adaptive depth identification method for simultaneous visual servo trajectory tracking of a wheeled mobile robot, characterized by comprising the following steps:
Step 1: define the system coordinate frames, comprising:
The coordinate frames of the wheeled mobile robot are defined as follows: F* denotes the reference frame of the camera with respect to the static feature points, F denotes the frame of the current pose of the wheeled mobile robot, and F_d denotes the rectangular frame corresponding to the desired pose of the wheeled mobile robot. The plane determined by the coplanar feature points P_i is the reference plane π, whose unit normal vector is n*. The 3-D Euclidean coordinates of P_i expressed in F, F_d and F* are denoted P_i(t), P_i^d(t) and P_i^*, respectively.
Assume that the distance from the origin of each frame to the feature points along the optical axis is always positive. The rotation matrix from F to F* is R(t), and the translation vector from F to F*, expressed in F, is cT*(t). Similarly, R_d(t) denotes the time-varying desired rotation matrix from F_d to F*, and the desired translation vector from F_d to F*, expressed in F_d, is dT*(t). cT*(t) and dT*(t) are defined as cT*(t) = [cT*x, 0, cT*z]^T and dT*(t) = [dT*x, 0, dT*z]^T.
R(t) and R_d(t) are rotations about the axis perpendicular to the plane of motion: θ(t) denotes the right-handed rotation angle of the rotation matrix from F to F*, and θ_d(t) denotes that of the rotation matrix from F_d to F*. The time derivative of θ_d(t) is governed by ω_d(t), the desired angular velocity of the WMR expressed in F_d. The distance from F* to the plane π along its unit normal is denoted d*, with d* = n*^T P_i^*, where n* is the unit normal vector of π expressed in F*.
Step 1.2: Euclidean reconstruction.
First the feature points are reconstructed, i.e. the normalized Euclidean coordinates of P_i expressed in F, F_d and F* are recovered. To obtain the Euclidean coordinates, each feature point is observed as projected pixel coordinates: v_i(t) in F (the actual time-varying image points), v_di(t) in F_d (the desired-trajectory image points), and v_i^* in F* (the constant reference image points). The normalized Euclidean coordinates of the feature points are related to the image points through the pinhole lens model, where A ∈ R^{3×3} is the known, constant camera intrinsic calibration matrix. The rotation matrices and translation vectors between the frames can then be expressed through the normalized Euclidean coordinates, where H(t) and H_d(t) denote the Euclidean homography matrices and cT*h(t) and dT*h(t) are the translation vectors containing the scale factor. H and H_d are then decomposed by the Euclidean reconstruction technique to obtain cT*h(t), dT*h(t), θ(t) and θ_d(t).
Step 3: construct the adaptive controller.
Based on the open-loop dynamic equation of the system, a controller and an adaptive update law are designed for the camera-equipped mobile-robot system. The purpose of the control is to ensure that frame F tracks the time-varying trajectory of F_d. The translation and rotation tracking errors e(t) = [e1, e2, e3]^T are defined from cT*h1(t), cT*h2(t), dT*h1(t) and dT*h2(t) (defined in (10)) and from θ(t), θ_d(t) (defined in (4)), together with auxiliary variables, which give the open-loop error equations.
The depth estimation error is defined from the depth estimate d̂*(t); when this error tends to zero, the unknown depth information is effectively identified, and the linear and angular velocities of the wheeled mobile robot are designed accordingly.
Following the concurrent learning method, an adaptive update law is designed for the depth estimate d̂*, in the following form:
where Γ1, Γ2 are update gains, N is a positive constant, and t_k ∈ [0, t] are time points between the initial time and the current time. The concurrent-learning adaptive update law uses data recorded over N sampling periods; an optimal smoother is therefore needed to estimate the exact values of ω_c(t_k), cT*h2(t_k) and v_c(t_k). To this extent, a significant improvement in the parameter estimation can be observed.
The projection function Proj(χ) keeps the depth estimate above a positive lower bound, so d̂*(t) remains positive for all time.
In fact, the control parameters k_v, k_ω mainly affect the motion control of the robot, while the update gains Γ1, Γ2 affect the depth identification. Since the number of parameters is small, they are easy to tune, which makes the method suitable for practical applications. From the preceding derivation, the closed-loop error system is obtained.
This completes the simultaneous visual servo trajectory tracking and concurrent adaptive depth identification of the mobile robot.
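As a minimal kinematic illustration of trajectory tracking for the unicycle model (not the patent's v_c, ω_c law — a standard posture-tracking law with assumed gains is substituted), the following sketch integrates the robot and the desired trajectory and checks that the pose error decays:

```python
import numpy as np

def step(pose, v, w, dt):
    """Unicycle kinematics: pose = (x, z, theta) in the motion plane."""
    x, z, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     z + v * np.sin(th) * dt,
                     th + w * dt])

# Desired trajectory: straight line at constant speed (an assumption).
v_d, w_d, dt = 0.3, 0.0, 0.01
pose_d = np.array([0.0, 0.0, 0.0])
pose   = np.array([-0.2, 0.15, 0.3])      # initial tracking error

for _ in range(3000):
    # Tracking error expressed in the robot frame.
    dx, dz = pose_d[0] - pose[0], pose_d[1] - pose[1]
    th = pose[2]
    ex =  np.cos(th) * dx + np.sin(th) * dz   # forward error
    ez = -np.sin(th) * dx + np.cos(th) * dz   # lateral error
    eth = pose_d[2] - th                      # heading error
    # Placeholder smooth tracking law (Kanayama-style, assumed gains).
    v = v_d * np.cos(eth) + 1.0 * ex
    w = w_d + 3.0 * ez + 2.0 * np.sin(eth)
    pose   = step(pose, v, w, dt)
    pose_d = step(pose_d, v_d, w_d, dt)

err = np.linalg.norm(pose - pose_d)   # residual pose error after 30 s
```

The persistent forward motion (v_d > 0) is what lets a smooth law overcome the nonholonomic constraint here; the patent's controller plays the analogous role using image-space errors.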
Step 4: stability analysis
Theorem 1: the control law and the parameter update law ensure that the tracking errors of the wheeled mobile robot asymptotically converge to zero; the mobile robot is regulated so as to track the desired path while the depth identification is carried out, i.e. the following holds:
It is assumed that the time derivative of the desired path satisfies the following condition:
Proof:
Define the non-negative Lyapunov function as follows:
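The Lyapunov function itself did not survive into this text; for combined tracking-and-identification proofs of this kind, a candidate of the standard form (an assumption, not necessarily the patent's exact function) is:

```latex
V(t) \;=\; \frac{1}{2}\left(e_1^{2} + e_2^{2} + e_3^{2}\right)
\;+\; \frac{1}{2\Gamma}\,\tilde d^{*2},
\qquad \tilde d^{*}(t) \;=\; d^{*} - \hat d^{*}(t),
```

whose derivative along the closed-loop error dynamics, after substituting the control and update laws, is negative semidefinite; an invariance argument then gives the asymptotic convergence claimed in Theorem 1.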
Differentiating the Lyapunov function and substituting the open-loop error equations yields:
Substituting (16) into (24) above gives the following expression:
It can be seen that:
From (23) and (26) it follows that e1(t), e3(t) and their time derivatives are bounded. From (14), d* is constant; by construction, dT*h1(t), dT*h2(t) and θ_d(t) are bounded functions, and using the expressions in (11), (12) and (16) it can be proved that e2(t), cT*h1(t) and cT*h2(t) are bounded. Based on the foregoing development, (13), (15) and (16) show that v_c(t) and ω_c(t) are bounded. Therefore all system state variables remain bounded. In addition, define Φ as the set of all points at which the derivative of the Lyapunov function vanishes:
Define M as the largest invariant set in Φ; from (25) it is known that the points of M satisfy:
Therefore:
Substituting (28) and (29) into the closed-loop dynamic equations of (19) gives:
Hence, from the assumption above and within the set M, it can be proved from (12) and (29) that e2 = 0. Because the projection function (17) makes the depth estimate piecewise smooth yet continuous, it can be seen from (14), (16) and the given initial conditions (28) that:
the positive bound of the depth estimation error is constant, and the largest invariant set M contains only the equilibrium point, whose form is as follows:
According to LaSalle's invariance principle, the tracking errors and the depth identification error of the mobile robot asymptotically converge to zero, i.e.
Step 5: system simulation
In this section, simulation results are provided to verify the performance of the proposed method. To obtain the homography matrix, four coplanar feature points are randomly selected, and the intrinsic parameters of the camera are set as follows:
Image noise with standard deviation σ = 0.15 is added to test the stability of the controller and the disturbance rejection of the depth identification. The control parameters are set to k_v = 0.605, k_ω = 0.100 and Γ1 = Γ2 = 0.900; N is set to 100, and the data of the first 100 sampling periods are recorded.
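The image-noise test can be reproduced in outline: zero-mean Gaussian noise with σ = 0.15 pixels is added to the projected feature points, which are then mapped back to normalized coordinates through the inverse calibration matrix. The intrinsic matrix below is an illustrative assumption, since the patent's calibration values did not survive into this text.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed pinhole intrinsic calibration matrix A (focal lengths and
# principal point in pixels) -- illustrative, not the patent's values.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
A_inv = np.linalg.inv(A)

sigma = 0.15                        # pixel-noise standard deviation
m = np.array([0.1, -0.05, 1.0])     # a normalized feature coordinate

p = A @ m                           # pinhole projection: p = A m
p_noisy = p + np.append(rng.normal(0.0, sigma, 2), 0.0)
m_noisy = A_inv @ p_noisy           # back to normalized coordinates

# A 0.15-pixel perturbation is only ~0.15/800 ≈ 2e-4 in normalized
# coordinates; this is the disturbance the controller must reject.
err = np.abs(m_noisy - m).max()
```

The tiny normalized-coordinate magnitude of the injected noise explains why the identification can still converge cleanly in the reported simulation.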
After the simulation, the path of the wheeled mobile robot in Fig. 2 shows that the robot successfully tracks the desired trajectory; Fig. 3 shows that the current feature-point image trajectories effectively track the desired image trajectories; Fig. 4 shows that the system errors converge to zero; Fig. 5 shows v_c(t) and ω_c(t) of the wheeled mobile robot; and Fig. 6 shows that the depth estimate rapidly and efficiently converges to its true value, which means the unknown scene depth information is completely identified. The robot tracks the desired trajectory with high efficiency and a small steady-state error.

Claims (1)

1. A concurrent adaptive depth identification method for simultaneous visual servo trajectory tracking of a wheeled mobile robot, characterized by comprising the following steps:
Step 1: define the system coordinate frames, comprising:
Step 1.1: establish the system model.
The coordinate frames of the wheeled mobile robot are defined as follows: F* denotes the reference frame of the camera with respect to the static feature points, F denotes the frame of the current pose of the wheeled mobile robot, and F_d denotes the rectangular frame corresponding to the desired pose of the wheeled mobile robot. The plane determined by the coplanar feature points P_i is the reference plane π, whose unit normal vector is n*; the 3-D Euclidean coordinates of P_i expressed in F, F_d and F* are denoted P_i(t), P_i^d(t) and P_i^*, respectively.
Assume that the distance from the origin of each frame to the feature points along the optical axis is always positive. The rotation matrix from F to F* is R(t), and the translation vector from F to F*, expressed in F, is cT*(t); similarly, R_d(t) denotes the time-varying desired rotation matrix from F_d to F*, and the desired translation vector from F_d to F*, expressed in F_d, is dT*(t). cT*(t) and dT*(t) are defined as follows:
cT*(t)=[cT*x, 0,cT*z]T,dT*(t)=[dT*x, 0,dT*z]T. (2)
R(t) and R_d(t) are rotations about the axis perpendicular to the plane of motion: θ(t) denotes the right-handed rotation angle of the rotation matrix from F to F*, and θ_d(t) denotes that of the rotation matrix from F_d to F*; the time derivative of θ_d(t) is governed by ω_d(t), the desired angular velocity of the WMR expressed in F_d. The distance from F* to the plane π along its unit normal is denoted d*, so that
d*=n*TPi * (6)
where n* denotes the unit normal vector of π expressed in F*.
Step 1.2: Euclidean reconstruction.
First the feature points are reconstructed, i.e. the normalized Euclidean coordinates of P_i expressed in F, F_d and F* are recovered. To obtain the Euclidean coordinates, each feature point is observed as projected pixel coordinates: v_i(t) in F (the actual time-varying image points), v_di(t) in F_d (the desired-trajectory image points), and v_i^* in F* (the constant reference image points). The normalized Euclidean coordinates of the feature points are related to the image points through the pinhole lens model, where A ∈ R^{3×3} is the known, constant camera intrinsic calibration matrix. The rotation matrices and translation vectors between the frames can then be expressed through the normalized Euclidean coordinates, where H(t) and H_d(t) denote the Euclidean homography matrices and cT*h(t) and dT*h(t) are the translation vectors containing the scale factor. H and H_d are then decomposed by the Euclidean reconstruction technique to obtain cT*h(t), dT*h(t), θ(t) and θ_d(t).
Step 3: construct the adaptive controller.
Based on the open-loop dynamic equation of the system, a controller and an adaptive update law are designed for the camera-equipped mobile-robot system; the purpose of the control is to ensure that frame F tracks the time-varying trajectory of F_d. The translation and rotation tracking errors e(t) = [e1, e2, e3]^T are defined from cT*h1(t), cT*h2(t), dT*h1(t) and dT*h2(t) (defined in (10)) and from θ(t), θ_d(t) (defined in (4)), together with auxiliary variables, which give the open-loop error equations.
The depth estimation error is defined from the depth estimate d̂*(t); when this error tends to zero, the unknown depth information is effectively identified, and the linear and angular velocities of the wheeled mobile robot are designed accordingly.
Following the concurrent learning method, an adaptive update law is designed for the depth estimate d̂*, where Γ1, Γ2 are update gains, N is a positive constant, and t_k ∈ [0, t] are time points between the initial time and the current time. The projection function Proj(χ) keeps d̂* above a positive lower bound, so the estimate remains positive for all time. From the preceding derivation, the closed-loop error system is obtained.
This completes the simultaneous visual servo trajectory tracking and concurrent adaptive depth identification of the mobile robot.
CN201711171646.6A 2017-11-20 2017-11-20 The concurrent depth identification of wheeled mobile robot visual servo track following Pending CN109816687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711171646.6A CN109816687A (en) 2017-11-20 2017-11-20 The concurrent depth identification of wheeled mobile robot visual servo track following

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711171646.6A CN109816687A (en) 2017-11-20 2017-11-20 The concurrent depth identification of wheeled mobile robot visual servo track following

Publications (1)

Publication Number Publication Date
CN109816687A true CN109816687A (en) 2019-05-28

Family

ID=66601029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711171646.6A Pending CN109816687A (en) 2017-11-20 2017-11-20 The concurrent depth identification of wheeled mobile robot visual servo track following

Country Status (1)

Country Link
CN (1) CN109816687A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103121451A (en) * 2013-03-19 2013-05-29 大连理工大学 Tracking and controlling method for lane changing trajectories in crooked road
CN105096341A (en) * 2015-07-27 2015-11-25 浙江大学 Mobile robot pose estimation method based on trifocal tensor and key frame strategy
US20170019653A1 (en) * 2014-04-08 2017-01-19 Sun Yat-Sen University Non-feature extraction-based dense sfm three-dimensional reconstruction method
CN106774309A (en) * 2016-12-01 2017-05-31 天津工业大学 A kind of mobile robot is while visual servo and self adaptation depth discrimination method
CN106737774A (en) * 2017-02-23 2017-05-31 天津商业大学 One kind is without demarcation mechanical arm Visual servoing control device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIU Y: "Homography-based visual servo tracking control of wheeled mobile robots with simultaneous depth identification" *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112123370B (en) * 2019-06-24 2024-02-06 内蒙古汇栋科技有限公司 Mobile robot vision stabilization control with desired pose change
CN112123370A (en) * 2019-06-24 2020-12-25 天津工业大学 Mobile robot vision stabilization control with expected pose change
CN111340854A (en) * 2019-12-19 2020-06-26 南京理工大学 Mobile robot target tracking method based on ICamshift algorithm
CN111024003B (en) * 2020-01-02 2021-12-21 安徽工业大学 3D four-wheel positioning detection method based on homography matrix optimization
CN111024003A (en) * 2020-01-02 2020-04-17 安徽工业大学 3D four-wheel positioning detection method based on homography matrix optimization
CN111283683A (en) * 2020-03-04 2020-06-16 湖南师范大学 Servo tracking accelerated convergence method for robot visual feature planning track
CN111338347A (en) * 2020-03-05 2020-06-26 大连海事大学 Monocular vision-based finite time continuous control method for water surface vehicle
CN111338347B (en) * 2020-03-05 2023-08-25 大连海事大学 Monocular vision-based limited time continuous control method for water surface aircraft
CN111496848A (en) * 2020-03-19 2020-08-07 中山大学 Mobile robot repeated positioning precision test based on Euclidean distance
CN111496848B (en) * 2020-03-19 2022-03-15 中山大学 Mobile robot repeated positioning precision testing method based on Euclidean distance
CN111578947B (en) * 2020-05-29 2023-12-22 国网浙江省电力有限公司台州市椒江区供电公司 Unmanned plane monocular SLAM (selective liquid level adjustment) expandable frame with depth recovery capability
CN111578947A (en) * 2020-05-29 2020-08-25 天津工业大学 Unmanned aerial vehicle monocular SLAM extensible framework with depth recovery capability
WO2022143626A1 (en) * 2020-12-31 2022-07-07 深圳市优必选科技股份有限公司 Method for controlling mobile robot, computer-implemented storage medium, and mobile robot
CN113051767A (en) * 2021-04-07 2021-06-29 绍兴敏动科技有限公司 AGV sliding mode control method based on visual servo
CN115629550A (en) * 2022-12-22 2023-01-20 西北工业大学 Self-adaptive attitude tracking control and parameter identification method for service spacecraft
CN115629550B (en) * 2022-12-22 2023-04-18 西北工业大学 Self-adaptive attitude tracking control and parameter identification method for service spacecraft

Similar Documents

Publication Publication Date Title
CN109816687A (en) The concurrent depth identification of wheeled mobile robot visual servo track following
US10268201B2 (en) Vehicle automated parking system and method
CN106774309B (en) A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously
Wang et al. A hybrid visual servo controller for robust grasping by wheeled mobile robots
Malis et al. 2 1/2 D visual servoing
López-Nicolás et al. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints
Tian et al. RGB-D based cognitive map building and navigation
CN109102525A (en) A kind of mobile robot follow-up control method based on the estimation of adaptive pose
Cai et al. Uncalibrated 3d stereo image-based dynamic visual servoing for robot manipulators
Gratal et al. Visual servoing on unknown objects
Zhao et al. Vision-based tracking control of quadrotor with backstepping sliding mode control
CN102736626A (en) Vision-based pose stabilization control method of moving trolley
Mühlig et al. Automatic selection of task spaces for imitation learning
CN105096341A (en) Mobile robot pose estimation method based on trifocal tensor and key frame strategy
Poonawala et al. Formation control of wheeled robots with vision-based position measurement
MacKunis et al. Unified tracking and regulation visual servo control for wheeled mobile robots
Nadi et al. Visual servoing control of robot manipulator with Jacobian matrix estimation
Goronzy et al. QRPos: Indoor positioning system for self-balancing robots based on QR codes
Siradjuddin et al. A real-time model based visual servoing application for a differential drive mobile robot using beaglebone black embedded system
Coaguila et al. Selecting vantage points for an autonomous quadcopter videographer
CN110722547B (en) Vision stabilization of mobile robot under model unknown dynamic scene
López-Nicolás et al. Vision-based exponential stabilization of mobile robots
Gava et al. Nonlinear control techniques and omnidirectional vision for team formation on cooperative robotics
CN109542094B (en) Mobile robot vision stabilization control without desired images
Wong et al. Ant Colony Optimization and image model-based robot manipulator system for pick-and-place tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190528