CN106774309B - Simultaneous visual servoing and adaptive depth identification method for mobile robots - Google Patents

Simultaneous visual servoing and adaptive depth identification method for mobile robots Download PDF

Info

Publication number
CN106774309B
CN106774309B CN201611085990.9A CN201611085990A CN 106774309 B
Authority
CN
China
Prior art keywords
robot
mobile robot
depth
pose
adaptive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611085990.9A
Other languages
Chinese (zh)
Other versions
CN106774309A (en
Inventor
Li Baoquan (李宝全)
Shi Wuxi (师五喜)
Qiu Yu (邱雨)
Guo Lijin (郭利进)
Chen Yimei (陈奕梅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University filed Critical Tianjin Polytechnic University
Priority to CN201611085990.9A priority Critical patent/CN106774309B/en
Publication of CN106774309A publication Critical patent/CN106774309A/en
Application granted granted Critical
Publication of CN106774309B publication Critical patent/CN106774309B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Abstract

A simultaneous visual servoing and adaptive depth identification method for mobile robots, belonging to the technical fields of computer vision and mobile robotics. First, an open-loop kinematic model of the regulation error is derived from a polar-coordinate representation of the robot pose. Then, following a concurrent learning strategy, an adaptive update law is designed under which the identified depth converges, and on this basis a visual stabilizing control law for the mobile robot is constructed. The designed parameter-adaptive update law learns during the initial stage of the regulation motion and then identifies the depth information online during the subsequent robot motion. Using the Lyapunov method and LaSalle's invariance principle, the pose error and the depth identification error are proven to converge simultaneously. While the mobile robot completes the visual point stabilization task, the present invention accurately and reliably identifies the depth information; that is, the controller and the identification module are proven to converge at the same time.

Description

Simultaneous visual servoing and adaptive depth identification method for mobile robots
Technical field
The invention belongs to the technical fields of computer vision and mobile robotics, and more particularly relates to a simultaneous visual servoing and adaptive depth identification method for mobile robots.
Background technique
For a mobile-robot system, introducing a visual sensor greatly enhances its intelligence, flexibility, and environment-sensing ability [1-3] (referring to the appended references [1-3]; bracketed numbers below likewise refer to the appended literature). Controlling the motion of a mobile robot with real-time image feedback, i.e., visual servoing, is widely applicable in fields such as intelligent transportation and environment exploration. For these reasons, the technique has attracted particular attention and has become a research hotspot in robotics. For a visual sensor, which images the scene according to the perspective projection model, the loss of depth information is the major drawback. A monocular-camera vision system therefore has difficulty fully recovering the three-dimensional scene information and the displacement of the mobile robot. In addition, the nonholonomic constraint of the mobile robot makes the design of a pose controller very challenging. Together, the missing depth information and the nonholonomic constraint make the visual control task for mobile robots exceptionally demanding. However, most existing methods merely design a compensation module for the unknown depth on top of an original visual servo controller. In that sense, the complete scene model is still unavailable after the visual servoing task is finished. Since the workspace information cannot be obtained completely, further application of the robot system is limited. Therefore, how to identify the depth information while performing visual servo control is a difficult but very valuable problem in the robotics and control fields.
So far, many solutions have been proposed to handle the loss of depth information in mobile robot visual servoing tasks. In [4], Zhang et al. proposed a two-stage controller based on the backstepping method that drives the mobile robot to the target pose under unknown camera extrinsic parameters, where an adaptive update law compensates for the unknown depth of the feature-point plane. In [5], after compensating for the depth of a single feature point, a combined visual tracking and regulation controller was designed for mobile robots. Mariottini et al. set the distance parameter according to ground-truth values during visual servoing [6], and the same approach is used in [7]. In [8], Becerra et al. designed an appropriate control gain in a super-twisting controller to absorb the unknown depth information, while in [9] and [10] the unknown depth is canceled against the control gain through entries of the homography matrix. Li et al. used vision-based model predictive control to drive the robot to the desired pose, obtaining the depth information from a range sensor such as a laser [11]. Unfortunately, apart from installing range sensors, existing methods cannot identify the depth information through such compensation schemes. On the other hand, although adding a range sensor solves the depth identification problem, it inevitably increases system complexity and cost. To keep practical systems convenient to use, identifying the depth information from the image data and the system states remains the preferable approach.
Recently, some research results have been achieved on range identification for robotic systems. Hu et al. measured the Euclidean coordinates of an object with a nonlinear observer, in which the range information is asymptotically identified using known motion parameters [12]. Dani et al. designed a reduced-order nonlinear observer with global exponential stability to identify the distance between a stationary object and a moving camera [13]. In addition, range information and camera motion can be estimated with nonlinear observers, as in the results of [14] and [15]. For the visual tracking tasks of robot manipulators, range observers have been designed to measure the pose of the end effector [16], the pose of a moving object [17], and the velocity of the robot [18]. For common visual targets, Spica et al. proposed a dynamic estimation method that can run with a desired transient response and improve performance during visual servoing tasks [19]. Compared with manipulators, the nonholonomic constraint must be taken into account when driving a mobile robot, which makes depth identification more challenging. In [20], an adaptive algorithm was constructed to estimate the mobile robot pose from target features in a visual trajectory tracking task. De Luca et al. designed a nonlinear observer to asymptotically recover the feature depth during mobile robot visual servoing [21]. However, existing methods generally require a persistent excitation condition and achieve only asymptotic convergence. Consequently, the observation error cannot be guaranteed to converge to zero before the control error does, so global stability of the combined controller/observer system cannot be guaranteed. Therefore, accomplishing control and depth identification at the same time remains a challenging task.
In order to identify the relevant system parameters consistently with the designed controller, many researchers have turned to concurrent learning structures. Chowdhary et al. developed a concurrent-learning model-reference controller that adapts with current and recorded data simultaneously, guaranteeing global exponential stability for unknown linear dynamic systems without a persistent excitation condition [22]. They also applied the concurrent-learning adaptive controller to aircraft, where the adaptive law bounds the weight updates and thereby improves the flight performance [23]. In addition, concurrent learning structures can be used to identify the unknown parameters of neural networks and then obtain near-optimal behavior in control tasks, such as mobile robot path tracking [24] and nonlinear system control [25]. To reconstruct the scene information during the control process, Parikh et al. devised a concurrent-learning adaptive control strategy for the trajectory tracking task of a manipulator, where a data-enhanced adaptive update law guarantees exponential tracking and feature-depth estimation without persistent excitation [26]. Moreover, because of the nonholonomic constraint and the finite length of the motion path, identifying the depth information during the visual servoing of a wheeled mobile robot faces additional challenges. Inspired by [26] and [27], the present invention develops an adaptive visual servoing method that simultaneously accomplishes the pose regulation and depth identification tasks for a wheeled mobile robot.
Summary of the invention
The purpose of the present invention is to address the shortcomings of existing mobile robot visual depth identification and to provide a simultaneous visual servoing and adaptive depth identification method for mobile robots.
The invention proposes a novel simultaneous visual servoing and adaptive depth identification method for mobile robots. The most distinctive feature of the method is that visual servoing and depth identification are completed at the same time. It thereby solves the problem of identifying the depth information during mobile robot visual servoing without an additional range sensor, and without increasing system complexity or cost. Specifically, the mobile robot pose error is first defined from measurable signals, and a polar-coordinate kinematic model containing the unknown feature depth is then derived. Next, following the concurrent learning strategy, an enhanced adaptive update law for the unknown feature depth is designed using both recorded data and current data. An adjusting controller in polar-coordinate form then drives the mobile robot to the specified pose under the nonholonomic constraint. Finally, using the Lyapunov method and LaSalle's invariance principle, the pose error and the depth identification error are proven to converge simultaneously, which solves the global stability problem of the controller and the identification module as a whole. Simulation and experimental results demonstrate that the method is effective and reliable.
The present invention mainly makes the following contributions: 1. the depth information in the field of view is successfully identified, giving the vision system an excellent perception of the external environment; 2. a continuous controller in polar-coordinate form effectively drives the robot to the desired pose; 3. the combined controller and depth identification module solve the global stability of the system, since the errors converge simultaneously.
In [28], Fang et al. constructed a smooth time-varying controller to regulate the robot to the expected pose, where the unmeasurable feature depth is compensated by an adaptive update law. Likewise, by designing an adaptive controller, Zhang et al. accomplished the pose regulation task effectively and naturally. Compared with both of these methods, the present method proves that the depth identification error converges to zero during the control process.
The simultaneous visual servoing and adaptive depth identification method for mobile robots provided by the invention comprises the following steps:
Step 1: define the system coordinate systems
Step 1.1: system coordinate system description
The coordinate system of the onboard camera is defined to coincide with that of the mobile robot; let F* denote the rectangular coordinate frame of the desired robot/camera pose, whose origin lies at the center of the wheel axis and at the optical center of the camera; the z* axis coincides with the optical axis of the camera lens and with the forward direction of the robot; the x* axis is parallel to the wheel axis; the y* axis is perpendicular to the x*z* plane; let Fc denote the coordinate frame of the current camera/robot pose;
The distance between the desired position and the current position is denoted e(t); θ(t) denotes the rotation angle of Fc relative to F*; α(t) denotes the angle between the current robot heading and the vector from Fc to F*; φ(t) denotes the angle between the desired robot heading and the same vector; the directions of θ(t), α(t), φ(t) are marked in Fig. 1, where they take positive values;
In addition, there are N static coplanar feature points Pi (i = 1, 2, ..., N) in the field of view; let n* denote the unit normal vector of the feature plane expressed in F*, and let d* be the unknown distance from the origin of F* to the feature plane along n*; here we assume that the feature plane does not pass through the origin of F*, i.e., d* ≠ 0;
Therefore, the purpose of the present invention is, on the basis of the defined system coordinate systems, to drive the mobile robot with a novel visual servoing method so that Fc coincides with F*, while identifying d* in real time;
Step 1.2: coordinate system transformation
Without loss of generality, the method takes F* as the reference frame; the rotation matrix and translation vector from F* to Fc are denoted *Rc(t) and *Tc(t), respectively; considering the planar motion constraint of the mobile robot, *Rc(t) and *Tc(t) can be written in the following form:
where *Tcx(t) and *Tcz(t) denote the x and z coordinates of the origin of Fc in F*, respectively; the current robot pose is therefore expressed as (*Tcz(t), *Tcx(t), θ(t));
Step 2: construct the system model
Step 2.1: measurable signals
For the feature points Pi, the images acquired at F* and Fc are the desired image and the current image, respectively; using homography matrix estimation and fast decomposition techniques, the relative pose of the robot, i.e., *Rc(t), *Tc(t)/d* and n*, is obtained from the current and desired images; the robot coordinates up to a scale factor in the Cartesian frame are then obtained, in the form
To facilitate the controller design, the Cartesian coordinates of the robot are converted into polar form; define e(t) as the norm of *Tc(t), i.e.,
According to the assumption d* ≠ 0 above, the measurable scaled distance es(t) is defined as:
In addition, φ(t) and α(t) are calculated as:
α=φ-θ (4)
Step 2.2: establish the robot kinematic equations
In this section, the measurable signals es(t), φ(t), α(t) are selected to construct the model of the camera-equipped mobile robot system that executes the visual servoing task; in polar coordinates, the kinematic equations of the mobile robot are expressed with e(t), φ(t), α(t) as:
where vr(t) and wr(t) denote the linear and angular velocities of the mobile robot, respectively;
Substituting the definition (2) of es(t) into (5) yields the following open-loop dynamics of the system:
In addition, the identified value of the depth is introduced, and the depth identification error is defined as
When es(t), φ(t), α(t) converge to 0, the mobile robot is regulated to the desired pose; when the depth identification error is 0, the system has successfully identified the depth information;
Step 3: construct the adaptive controller
Based on the open-loop dynamics above, the controller and the adaptive update law are designed for the camera-equipped mobile robot system;
Following the concurrent learning method, the adaptive update law for the identified depth is designed in the following form:
where Γ1, Γ2 denote the update gains (positive constants) and tk ∈ [0, t] are recorded time points between the initial time and the current time;
The projection function Proj(χ) is defined as:
where the parameter in (9) is a positive lower bound of d*;
From (9), the identified depth never falls below this lower bound, and its initial value should be chosen greater than the lower bound; furthermore, it can be seen that:
To achieve the pose regulation goal, the linear and angular velocities of the mobile robot are designed as:
where ke, kα, kφ are control gains;
It should be pointed out that, since the recorded data are used in the concurrent learning term of the adaptive update law (8), an optimal smoothing filter is used to provide an accurate estimate of the time derivative of es at the recorded instants; the parameter estimate is thereby significantly improved.
In addition, only a few control and update parameters (ke, kα, kφ, Γ1, Γ2) are involved: ke, kα, kφ mainly affect the robot regulation, while Γ1, Γ2 mainly affect the depth identification. The system parameters are therefore easy to tune, which also makes the invention suitable for practical applications.
Theorem 1: the control laws (11), (12) and the parameter update law (8) regulate the mobile robot to the desired pose while performing depth identification, i.e., the following holds:
This completes the simultaneous visual servoing and adaptive depth identification for the mobile robot.
The advantages and beneficial effects of the present invention:
The present invention mainly makes the following contributions: 1. the depth information in the field of view is successfully identified, giving the vision system an excellent perception of the external environment; 2. a continuous controller in polar-coordinate form effectively drives the robot to the desired pose; 3. the combined controller and depth identification module solve the global stability of the system, since the errors converge simultaneously.
Detailed description of the invention:
Fig. 1 shows the coordinate frame relationships of the visual servoing task;
Fig. 2 shows simulation results: the motion path of the mobile robot [bold triangle: desired pose];
Fig. 3 shows simulation results: the evolution of the mobile robot pose [solid line: robot pose; dotted line: desired pose (zero)];
Fig. 4 shows simulation results: the evolution of the identified depth obtained by the parameter update law (8) [solid line: identified value; dotted line: true value of d*];
Fig. 5 shows experimental results of the invention: the motion path of the mobile robot [bold triangle: desired pose];
Fig. 6 shows experimental results: the evolution of the mobile robot pose [solid line: robot pose; dotted line: desired pose (zero)];
Fig. 7 shows experimental results: the velocities of the mobile robot [dotted line: zero];
Fig. 8 shows experimental results: the image paths of the feature points;
Fig. 9 shows experimental results: the values of d* computed by vision measurement during the first 6 seconds of the control process;
Fig. 10 shows experimental results: the evolution of the identified depth obtained by the parameter update law (8) [solid line: identified value; dotted line: mean of the d* values computed in Fig. 9];
Specific embodiment:
Embodiment 1:
Step 1: define the system coordinate systems
Step 1.1: system coordinate system description
The coordinate system of the onboard camera is defined to coincide with that of the mobile robot. Let F* denote the rectangular coordinate frame of the desired robot/camera pose, whose origin lies at the center of the wheel axis and at the optical center of the camera. The z* axis coincides with the optical axis of the camera lens and with the forward direction of the robot; the x* axis is parallel to the wheel axis; the y* axis is perpendicular to the x*z* plane (the plane of motion of the mobile robot). Let Fc denote the coordinate frame of the current camera/robot pose.
The distance between the desired position and the current position is denoted e(t); θ(t) denotes the rotation angle of Fc relative to F*; α(t) denotes the angle between the current robot heading and the vector from Fc to F*; φ(t) denotes the angle between the desired robot heading and the same vector. The directions of θ(t), α(t), φ(t) are marked in Fig. 1, where they take positive values.
In addition, there are N static coplanar feature points Pi (i = 1, 2, ..., N) in the field of view. Let n* denote the unit normal vector of the feature plane expressed in F*, and let d* be the unknown distance from the origin of F* to the feature plane along n*. Here we assume that the feature plane does not pass through the origin of F*, i.e., d* ≠ 0.
Therefore, the purpose of the present invention is, on the basis of the defined system coordinate systems, to drive the mobile robot with a novel visual servoing method so that Fc coincides with F*, while identifying d* in real time.
Step 1.2: coordinate system transformation
Without loss of generality, the method takes F* as the reference frame. The rotation matrix and translation vector from F* to Fc are denoted *Rc(t) and *Tc(t), respectively. Considering the planar motion constraint of the mobile robot, *Rc(t) and *Tc(t) can be written in the following form:
where *Tcx(t) and *Tcz(t) denote the x and z coordinates of the origin of Fc in F*, respectively. The current robot pose is therefore expressed as (*Tcz(t), *Tcx(t), θ(t)).
Step 2: construct the system model
Step 2.1: measurable signals
For the feature points Pi, the images acquired at F* and Fc are the desired image and the current image, respectively. Using homography matrix estimation and fast decomposition techniques, the relative pose of the robot [28], i.e., *Rc(t), *Tc(t)/d* and n*, is obtained from the current and desired images. The robot coordinates up to a scale factor in the Cartesian frame are then obtained, in the form
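The homography relation underlying this step can be checked with a small numerical sketch. For coplanar feature points the Euclidean homography satisfies H = R + (t/d*) n*ᵀ, so with R and n* known the scaled translation t/d* can be read back as (H - R) n* (since n*ᵀ n* = 1), while the scale d* itself remains unobservable, which is exactly why the adaptive depth identification is needed. The rotation, translation, and depth values below are illustrative assumptions, not the patent's data; in practice H would be estimated from point correspondences and decomposed (e.g., with OpenCV's findHomography/decomposeHomographyMat).

```python
import numpy as np

# Illustrative ground truth (assumed values, not from the patent):
theta = np.deg2rad(-25.0)                 # planar rotation about the y axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.6, 0.0, -2.1])            # translation between the two frames
n_star = np.array([0.0, 0.0, 1.0])        # unit normal of the feature plane
d_star = 1.30                             # unknown feature-plane depth

# Euclidean homography induced by the feature plane: H = R + (t / d*) n*^T
H = R + np.outer(t / d_star, n_star)

# With R and n* known, the scaled translation is (H - R) n*, because
# n*^T n* = 1.  Only t/d* is recoverable; the scale d* stays unknown.
t_over_d = (H - R) @ n_star
print(t_over_d)          # equals t / d_star
```

The decomposition step thus delivers exactly the scaled pose *Tc(t)/d* that the polar-coordinate model below works with.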
To facilitate the controller design, the Cartesian coordinates of the robot are converted into polar form. Define e(t) as the norm of *Tc(t), i.e.,
According to the assumption d* ≠ 0 above, the measurable scaled distance es(t) is defined as:
In addition, φ(t) and α(t) are calculated as:
α=φ-θ (4)
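The conversion from the scaled Cartesian pose to the polar signals can be sketched as follows. The sign and axis conventions (z forward, φ measured from the z* axis to the frame-to-frame vector, α = φ - θ per (4)) are one plausible reading of Fig. 1, which is not reproduced here, so they are assumptions rather than the patent's exact definitions.

```python
import math

def polar_signals(t_cx_s, t_cz_s, theta):
    """Convert the scaled Cartesian pose (*Tcx/d*, *Tcz/d*, theta) into the
    measurable polar signals (e_s, phi, alpha).  Conventions are assumed."""
    e_s = math.hypot(t_cx_s, t_cz_s)      # scaled distance e/d*, as in (2)
    phi = math.atan2(t_cx_s, t_cz_s)      # angle of the frame-to-frame vector
    alpha = phi - theta                   # eq. (4): alpha = phi - theta
    return e_s, phi, alpha

# Example with illustrative numbers (scaled coordinates, heading in radians):
e_s, phi, alpha = polar_signals(-0.46, -1.62, math.radians(-25.0))
print(e_s, phi, alpha)
```

All three outputs are computable from the homography decomposition alone, which is what makes them usable as feedback signals despite the unknown d*.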
Step 2.2: establish the robot kinematic equations
In this section, the measurable signals es(t), φ(t), α(t) are selected to construct the model of the camera-equipped mobile robot system that executes the visual servoing task. In polar coordinates, the kinematic equations of the mobile robot are expressed with e(t), φ(t), α(t) as [29]:
where vr(t) and wr(t) denote the linear and angular velocities of the mobile robot, respectively.
Substituting the definition (2) of es(t) into (5) yields the following open-loop dynamics of the system:
In addition, the identified value of the depth is introduced, and the depth identification error is defined as
As can be seen from Fig. 1, when es(t), φ(t), α(t) converge to 0, the mobile robot is regulated to the desired pose. When the depth identification error is 0, the system has successfully identified the depth information.
Step 3: construct the adaptive controller
Based on the open-loop dynamics above, the controller and the adaptive update law are designed for the camera-equipped mobile robot system.
Following the concurrent learning method [26], the adaptive update law for the identified depth is designed in the following form:
where Γ1, Γ2 denote the update gains (positive constants) and tk ∈ [0, t] are recorded time points between the initial time and the current time.
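The concurrent-learning idea behind update law (8), combining an instantaneous adaptation term with a term built from recorded data, can be sketched as follows. Since the open-loop dynamics make the time derivative of es proportional to -vr cos α / d*, each recorded sample gives one linear equation in the single unknown d*, and gradient steps on the recorded residuals drive the estimate to d* even without persistent excitation. This is a simplified stand-in for the patent's exact law (8), whose full expression is not reproduced in the text; all numerical values are illustrative.

```python
# Simplified concurrent-learning identification of d* (a sketch, not the
# patent's exact law (8)).  From the open-loop dynamics, each recorded sample
# satisfies  v_k * cos(alpha_k) = d* * (-de_s/dt at t_k),  i.e. a linear
# regression in the single unknown d*.
import math

d_true = 1.30                    # "unknown" depth used to synthesize data
records = []                     # (regressor W_k, measurement Y_k) pairs
for k in range(100):
    v_k = 0.2 + 0.001 * k                     # assumed recorded linear velocity
    alpha_k = 0.3 * math.exp(-0.05 * k)       # assumed recorded alpha values
    de_s = -v_k * math.cos(alpha_k) / d_true  # recorded estimate of e_s'(t_k)
    records.append((-de_s, v_k * math.cos(alpha_k)))

d_hat = 0.5                      # initial guess (above the lower bound)
gamma = 2.0                      # update gain (Gamma_2 analogue, assumed)
for _ in range(200):             # repeated gradient steps on recorded residuals
    grad = sum(W * (Y - W * d_hat) for W, Y in records)
    d_hat += gamma * grad / len(records)
print(d_hat)                     # approaches d_true = 1.30
```

Because the recorded residuals carry information even after the robot slows down, no persistent excitation of the current trajectory is required, which is the key property the patent attributes to the concurrent-learning term.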
The projection function Proj(χ) is defined as:
where the parameter in (9) is a positive lower bound of d*.
From (9), the identified depth never falls below this lower bound, and its initial value should be chosen greater than the lower bound. Furthermore, it can be seen that:
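The projection operation can be sketched as a simple guard that keeps the depth estimate at or above its positive lower bound. The exact piecewise form of Proj in (9) is not reproduced in the text, so the version below is a minimal stand-in with only the property the stability proof relies on (the estimate stays positive and bounded below); the bound and gain values are assumed.

```python
D_LOWER = 0.2   # assumed positive lower bound, chosen below the true d*

def proj_update(d_hat, chi, d_lower=D_LOWER):
    """Projected update step: apply the raw adaptation increment chi, but
    never let the estimate fall below its positive lower bound."""
    if d_hat <= d_lower and chi < 0.0:
        return d_hat             # on the boundary, reject decreasing updates
    return max(d_hat + chi, d_lower)

print(proj_update(1.0, -0.3))    # 0.7: interior update passes through
print(proj_update(0.2, -0.1))    # 0.2: boundary, decreasing update rejected
```

Keeping the estimate strictly positive is what later allows the Lyapunov derivative in (18) to be bounded, so this guard is structural rather than cosmetic.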
To achieve the pose regulation goal, the linear and angular velocities of the mobile robot are designed as:
where ke, kα, kφ are control gains.
It should be pointed out that, since the recorded data are used in the concurrent learning term of the adaptive update law (8), an optimal smoothing filter is used to provide an accurate estimate of the time derivative of es at the recorded instants; the parameter estimate is thereby significantly improved [26].
In addition, only a few control and update parameters (ke, kα, kφ, Γ1, Γ2) are involved: ke, kα, kφ mainly affect the robot regulation, while Γ1, Γ2 mainly affect the depth identification. The system parameters are therefore easy to tune, which also makes the invention suitable for practical applications.
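The overall regulation scheme can be exercised in a small closed-loop simulation. Since the patent's equations (5)-(12) are not reproduced in this text, the model below uses the classic polar-coordinate unicycle kinematics and a regulator that only mirrors the structure described (linear velocity proportional to the scaled distance times the identified depth, angular velocity combining α and φ terms); the signs, gains, and exact law are assumptions, and the sketch merely illustrates the kind of convergence reported in Figs. 2-4.

```python
import math

def regulate(e0, alpha0, phi0, d_true=1.30, d_hat=1.0,
             ke=0.4, ka=1.0, kh=1.0, dt=0.01, steps=10_000):
    """Euler simulation of polar-coordinate pose regulation.  The kinematics
    are the classic unicycle polar model; the controller mirrors the structure
    of the patent's (11)-(12) but is NOT its exact law (assumed form)."""
    e, alpha, phi = e0, alpha0, phi0
    for _ in range(steps):
        e_s = e / d_true                          # measurable scaled distance
        sinc = math.sin(alpha) / alpha if abs(alpha) > 1e-9 else 1.0
        v = ke * d_hat * e_s * math.cos(alpha)    # linear velocity command
        w = ka * alpha + ke * math.cos(alpha) ** 2 * sinc * (alpha + kh * phi)
        ratio = v * math.sin(alpha) / e if e > 1e-12 else 0.0
        de, dalpha, dphi = -v * math.cos(alpha), -w + ratio, ratio
        e, alpha, phi = e + dt * de, alpha + dt * dalpha, phi + dt * dphi
    return e, alpha, phi

# Illustrative run: note the controller uses only e_s and d_hat, never d_true.
e_f, alpha_f, phi_f = regulate(2.18, 0.4, 0.6)
print(e_f, alpha_f, phi_f)       # all three approach zero
```

Note that a wrong depth estimate (here d_hat = 1.0 against d_true = 1.30) only rescales the effective translational gain, which is consistent with the text's claim that the regulation and the identification can proceed together.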
Theorem 1: the control laws (11), (12) and the parameter update law (8) regulate the mobile robot to the desired pose while performing depth identification, i.e., the following holds:
Step 4: proof of Theorem 1
The proof of Theorem 1 is provided here.
Proof: using (6) and (7), the adaptive update law (8) can be rewritten as
On the other hand, substituting (11) and (12) into (6) yields the closed-loop dynamics:
Then the Lyapunov function candidate V(t) is selected as follows:
Differentiating (16) with respect to time and substituting the closed-loop dynamics (15):
Applying the update law (14) to (17) and using the relation (10) gives:
Since the projection function ensures that the identified depth is positive, it follows from (18) that:
Therefore, from (16) and (19):
It can be seen that by (7)Then from (11) (12) and (20), we have vc(t),
In addition it is obtained by (7) (8) and (15):
Therefore, the variation of all system modes is all bounded.
Furthermore, define Φ as the set of all points at which the Lyapunov derivative vanishes:
Let M be the largest invariant set in Φ. From (18), the points in M satisfy the following relations:
Therefore, it is known that:
Then substituting (23) and (24) into the dynamic equations (15) yields:
d* ke kφ φ = 0 (25)
Therefore, by the assumption d* ≠ 0 above, φ = 0 holds in the set M.
Although the projection function in (8) makes the identified depth only piecewise smooth, it is continuous for the given initial conditions. From (7), (8) and (23) it can be seen that:
Therefore, the identified depth converges to a positive bounded constant [30].
Therefore, the largest invariant set M contains only the equilibrium point, of the following form:
According to LaSalle's invariance principle [31], the mobile robot pose error and the depth identification error asymptotically converge to zero, i.e.,
In the present invention, the system should be excited so that the regressor in (18) satisfies the required condition; to accomplish the control task, the initial pose of the robot should not coincide with the desired pose, i.e., es(0) ≠ 0. Since es(t) decreases under the designed controller, especially during the initial stage, this condition is easily satisfied during the control process.
Although the designed controller and update law contain no singularity, by the nature of polar coordinates the robot pose is undefined when es(t) is zero. To address this, when es(t) falls below a threshold, the linear velocity is set to zero and the mobile robot performs a pure rotation to regulate its orientation.
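The near-target switching described above can be sketched as a simple guard on the commanded velocities; the threshold value and the rotation gain below are illustrative assumptions, not values from the patent.

```python
E_THRESHOLD = 0.02   # assumed threshold on the scaled distance e_s
K_ROT = 0.5          # assumed gain for the final pure-rotation stage

def command(e_s, alpha, phi, theta, v_nominal, w_nominal):
    """Return (v_r, w_r): the nominal control away from the target, but a
    pure rotation (v_r = 0) once e_s is within the threshold, so the pose
    stays well defined despite the polar-coordinate singularity at e_s = 0."""
    if e_s < E_THRESHOLD:
        return 0.0, -K_ROT * theta   # rotate to drive the heading error to 0
    return v_nominal, w_nominal

print(command(0.01, 0.0, 0.0, 0.3, 0.2, 0.1))   # (0.0, -0.15)
print(command(0.50, 0.2, 0.1, 0.3, 0.2, 0.1))   # (0.2, 0.1)
```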
Step 5: simulation and experimental results
Step 5.1: simulation results
Simulation results are provided in this section to verify the method.
First, four coplanar feature points are set randomly to compute the homography matrix; the intrinsic parameters of the camera are consistent with those used in the subsequent experiments:
The initial pose of the robot is set as:
(-2.1 m, -0.6 m, -28°) (29)
The desired pose is (0 m, 0 m, 0°). In addition, image noise with standard deviation σ = 0.15 is added to test the reliability of the controller and the disturbance rejection of the depth identification.
The control parameters ke, kα, kφ, Γ1, Γ2 are set, and N is set to 100, i.e., the data recorded at the first 100 sampling instants are used. A cubic polynomial function is fitted to es(tk) to suppress disturbances, and differentiating the cubic polynomial with respect to time gives an accurate estimate of the derivative of es(tk).
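The derivative estimate via cubic fitting can be sketched with NumPy: fit a cubic polynomial to the recorded es(tk) samples and differentiate the polynomial analytically. The synthetic signal, sample count, and time span below are illustrative assumptions standing in for the recorded data.

```python
import numpy as np

# Recorded samples of e_s over the first N = 100 sampling instants
# (synthetic data: an exponential decay standing in for the real signal).
t_k = np.linspace(0.0, 3.0, 100)
e_s_samples = 1.7 * np.exp(-0.3 * t_k)

# Fit a cubic polynomial to suppress measurement noise, then differentiate
# the polynomial analytically to estimate de_s/dt at the recorded instants.
coeffs = np.polyfit(t_k, e_s_samples, deg=3)
de_s_dt = np.polyval(np.polyder(coeffs), t_k)

true_derivative = -0.3 * 1.7 * np.exp(-0.3 * t_k)
print(np.max(np.abs(de_s_dt - true_derivative)))   # small fitting error
```

Fitting first and differentiating the fit, rather than differencing raw samples, is what keeps the derivative estimate usable under the σ = 0.15 image noise mentioned above.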
Fig. 2 illustrates the motion path of the mobile robot in Cartesian space, where the bold triangle indicates the desired pose. It can be seen that the robot moves effectively to the desired pose and the motion path is smooth. The evolution of the robot states *Tcz(t), *Tcx(t), θ(t) is shown in Fig. 3, where the steady-state errors are seen to be sufficiently small. In addition, the estimate of the depth is shown in Fig. 4. It can be seen that the depth estimate quickly converges to the true value and agrees with it well. Therefore, the depth information of the scene is successfully identified.
Step 5.2: experimental results
After the simulation tests, experimental results were further collected to validate this patent. The experiments use a Pioneer 3-DX mobile robot equipped with an onboard camera and four coplanar feature points in the field of view, which are shared vertices of two squares. All algorithms are implemented under Visual Studio 2005 with the aid of the OpenCV library. The sampling rate is 32 frames per second, which is sufficient to accomplish the visual servoing task.
The initial pose of the robot is randomly set to (-2.1 m, 0.6 m, -25°), and the desired pose is (0 m, 0 m, 0°). The control parameters are selected as ke = 0.4, kα = 0.1, kφ = 1, Γ1 = 2, Γ2 = 4. The data recording and the fitting of the derivative of es are identical to those of the simulation.
The resulting path of the robot is shown in Fig. 5; the evolution of the robot states *Tcz(t), *Tcx(t), θ(t), computed by the method of [32], is shown in Fig. 6. Fig. 7 shows the velocities of the mobile robot; it can be seen that the mobile robot reaches the target pose along a very efficient path with small steady-state error. The image paths of the features are shown in Fig. 8, where circles indicate the initial positions of the feature points in the image and stars indicate the feature points in the desired image, used as references. The image features gradually approach their desired positions, indicating that the robot moves toward the desired pose.
In addition, for the order of accuarcy of test depth identification, d*True value be to be calculated by the method in document [32] The distance between it arrives, used expectation and present image information and required known certain characteristic points.Fig. 9 expression is controlling Preceding 6 seconds d of process*Calculated result, in the periodWithDistance it is larger, make to count counted d in this way*It is relatively accurate. Then, the calculated d in Fig. 9*Average value be 1.30 meters.Figure 10 is illustratedVariation, wherein dotted line indicate d*Be averaged Value.Accordingly, it is seen thatEstimation of Depth value converge to its true value d quickly*, and steady state estimation errors are also smaller. Therefore, it can be deduced that depth identification and visual servo task while the conclusion successfully completed.
It should be noted that the foregoing is merely a preferred embodiment of the present invention and serves only to illustrate, not to limit, the scope of the patent. Obvious variations derived from the technical concept of the present invention likewise fall within the protection scope of the present invention.
Annex: bibliography
1.A.Sabnis,G.K.Arunkumar,V.Dwaracherla,and L.Vachhani, “Probabilistic approach for visual homing of a mobile robot in the presence of dynamic obstacles,”IEEE Trans.Ind.Electron.,vol.63,no.9,pp.5523-5533, Sep.2016.
2.J.-S.Hu,J.-J.Wang,and D.M.Ho,“Design of sensing system and anticipative behavior for human following of mobile robots,”IEEE Trans.Ind.Electron.,vol.61,no.4,pp.1916-1927,Apr.2014.
O.Faugeras,Q.T.Luong,and S.Maybank,“Camera self-calibration:theory and experiments,”in Proc.2nd Euro.Conf.Comput.Vis.,1992,pp.321-334.
3.T.N.Shene,K.Sridharan,and N.Sudha,“Real-time SURF-based video stabilization system for an FPGA-driven mobile robot,”IEEE Trans.Ind.Electron.,vol.63,no.8,pp.5012-5021,Sep.2016.
4.X.Zhang,Y.Fang,B.Li,and J.Wang,“Visual servoing of nonholonomic mobile robots with uncalibrated camera-to-robot parameters,”IEEE Trans.Ind.Electron.,online published,DOI:10.1109/TIE.2016.2598526.
5.B.Li,Y.Fang,G.Hu,and X.Zhang,“Model-free unified tracking and regulation visual servoing of wheeled mobile robots,”IEEE Trans.Control Syst.Technol.,vol.24,no.4,pp.1328-1339,Jul.2016.
6.G.L.Mariottini,G.Oriolo,and D.Prattichizzo,“Image-based visual servoing for nonholonomic mobile robots using epipolar geometry,”IEEE Trans.Robot.,vol.23,no.1,pp.87-100,Feb.2007.
7.B.Li,Y.Fang,and X.Zhang,“Visual servo regulation of wheeled mobile robots with an uncalibrated onboard camera,”IEEE/ASME Trans.Mechatronics, vol.21,no.5,pp.2330-2342,Oct.2016.
8.H.M.Becerra,J.B.Hayet,and C.Sagüés,“A single visual-servo controller of mobile robots with super-twisting control,”Robot.Auton.Syst.,vol.62,no.11, pp.1623-1635,Nov.2014.
9.G.López-Nicolás,N.R.Gans,S.Bhattacharya,C.Sagüés,J.J.Guerrero,and S.Hutchinson,“Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints,”IEEE Trans.Syst.Man Cybern.Part B-Cybern.,vol.40,no.4,pp.1115-1127,Aug.2010.
10.Y.Fang,X.Liu,and X.Zhang,“Adaptive active visual servoing of nonholonomic mobile robots,”IEEE Trans.Ind.Electron.,vol.59,no.1,pp.486-497, Jan.2012.
11.Z.Li,C.Yang,C.-Y.Su,J.Deng,and W.Zhang,“Vision-based model predictive control for steering of a nonholonomic mobile robot,”IEEE Trans.Control Syst.Technol.,vol.24,no.2,pp.553-564,Mar.2016.
12.G.Hu,D.Aiken,S.Gupta,and W.E.Dixon,“Lyapunov-based range identification for paracatadioptic systems,”IEEE Trans.Autom.Control,vol.53, no.7,pp.1775-1781,Aug.2008.
13.A.P.Dani,N.R.Fischer,Z.Kan,and W.E.Dixon,“Globally exponentially stable observer for vision-based range estimation,”Mechatronics,vol.22,no.4, pp.381-389,Jun.2012.
14.D.Chwa,A.P.Dani,and W.E.Dixon,“Range and motion estimation of a monocular camera using static and moving objects,”IEEE Trans.Control Syst.Technol.,vol.24,no.4,pp.1174-1183,Jul.2016.
15.A.P.Dani,N.R.Fischer,and W.E.Dixon,“Single camera structure and motion,”IEEE Trans.Autom.Control,vol.57,no.1,pp.241-246,Jan.2012.
16.X.Liang,H.Wang,Y.-H.Liu,W.Chen,G.Hu,and J.Zhao,“Adaptive task-space cooperative tracking control of networked robotic manipulators without task-space velocity measurements,”IEEE Trans.Cybern.,online published,DOI: 10.1109/TCYB.2015.2477606.
17.H.Wang,Y.-H.Liu,W.Chen,and Z.Wang,“A new approach to dynamic eye-in-hand visual tracking using nonlinear observers,”IEEE/ASME Trans.Mechatronics,vol.16,no.2,pp.387-394,Apr.2011.
18.H.Wang,Y.-H.Liu,and W.Chen,“Uncalibrated visual tracking control without visual velocity,”IEEE Trans.Control Syst.Technol.,vol.18,no.6, pp.1359-1370,Nov.2010.
19.R.Spica,P.R.Giordano,and F.Chaumette,“Active structure from motion:application to point,sphere,and cylinder,”IEEE Trans.Robot.,vol.30, no.6,pp.1499-1513,Dec.2014.
20.K.Wang,Y.Liu,and L.Li,“Visual servoing trajectory tracking of nonholonomic mobile robots without direct position measurement,”IEEE Trans.Robot.,vol.30,no.4,pp.1026-1035,Aug.2014.
21.A.D.Luca,G.Oriolo,and P.R.Giordano,“Feature depth observation for image-based visual servoing:theory and experiments,”Int.J.Robot.Res.,vol.27, no.10,pp.1093-1116,Oct.2008.
22.G.Chowdhary,T.Yucelen,M.Mühlegg,and E.N.Johnson,“Concurrent learning adaptive control of linear systems with exponentially convergent bounds,”Int.J.Adapt.Control Signal Process.,vol.27,no.4,pp.280-301,Apr.2013.
23.G.V.Chowdhary and E.N.Johnson,“Theory and flight-test validation of a concurrent-learning adaptive controller,”J.Guid.Control Dynam.,vol.34, no.2,pp.592-607,Mar.2011.
24.P.Walters,R.Kamalapurkar,L.Anderws,and W.E.Dixon,“Online approximate optimal path-following for a mobile robot,”in Proc.IEEE Conf.Decis.Control,Dec.2014,pp.4536-4541.
25.R.Kamalapurkar,P.Walters,and W.E.Dixon,“Model-based reinforcement learning for approximate optimal regulation,”Automatica,vol.64,pp.94-104, Feb.2016.
26.A.Parikh,R.Kamalapurkar,H.-Y.Chen,and W.E.Dixon,“Homography based visual servo control with scene reconstruction,”in Proc.IEEE Conf.Decis.Control,Osaka,Japan,Dec.2015,pp.6972-6977.
27.X.Zhang,Y.Fang,and N.Sun,“Visual servoing of mobile robots for posture stabilization:from theory to experiments,”Int.J.Robust Nonlinear Control,vol.25,no.1,pp.1-15,Jan.2015.
28.Y.Fang,W.E.Dixon,D.M.Dawson,and P.Chawda,“Homography-based visual servo regulation of mobile robots,”IEEE Trans.Syst.Man Cybern.Part B-Cybern., vol.35,no.5,pp.1041-1050,Oct.2005.
29.M.Aicardi,G.Casalino,A.Bicchi,and A.Balestrino,“Closed loop steering of unicycle-like vehicles via Lyapunov techniques,”IEEE Robot.Autom.Mag.,vol.2,no.1,pp.27-35,Mar.1995.
30.S.Chareyron and P.-B.Wieber,“Lasalle's invariance theorem for nonsmooth lagrangian dynamical systems,”in Proc.Euromech Nonlinear Dynamics Conf.,Eindhoven,Netherlands,Aug.2005.
31.J.J.Slotine and W.Li,Applied nonlinear control,Englewood Cliff,NJ: Prentice Hall,Inc.,1991.
32.W.MacKunis,N.Gans,K.Kaiser,and W.E.Dixon,“Unified tracking and regulation visual servo control for wheeled mobile robots,”in Proc.IEEE Int.Conf.Control Application,2007,pp.88-93.

Claims (1)

1. A mobile robot simultaneous visual servoing and adaptive depth identification method, characterized by comprising the following steps:
1st, define the system coordinate systems, comprising:
1.1st, description of the system coordinate systems
The coordinate system of the onboard camera is defined to coincide with that of the mobile robot; F* denotes the rectangular coordinate system of the desired robot/camera pose, where the origin of F* is at the center point of the wheel axle and also at the optical center of the camera; the z* axis coincides with the optical axis of the camera lens and also with the forward direction of the robot; the x* axis is parallel to the robot wheel axle; the y* axis is perpendicular to the x*z* plane; F denotes the current pose coordinate system of the camera/robot;
e(t) denotes the distance between the desired position and the current position; θ(t) denotes the rotation angle of F relative to F*; α(t) denotes the angle between the current robot orientation and the translation vector from F to F*; φ(t) denotes the angle between the desired robot orientation and the translation vector from F to F*;
In addition, there are N static coplanar feature points Pi (i = 1, 2, ..., N) in the field of view; n* is defined as the unit normal vector of the feature plane, expressed in F*; d* is the unknown distance from the origin of F* to the feature plane along n*; it is assumed here that the feature plane does not pass through the origin of F*, i.e., d* ≠ 0;
1.2nd, coordinate system transformation
Without loss of generality, the method takes F* as the reference frame; the rotation matrix and the translation vector from F to F* are denoted *Rc(t) and *Tc(t), respectively; considering the planar-motion constraint of the mobile robot, *Rc(t) and *Tc(t) can be written in the following form:
where *Tcx(t) and *Tcz(t) denote the x and z coordinates of the origin of F in F*, respectively; therefore, the current pose of the robot is expressed as (*Tcz(t), *Tcx(t), θ(t));
2nd, construct the system model, comprising:
2.1st, measurable signals
For the feature points Pi, the images acquired at F* and F are the desired image and the current image, respectively; using homography matrix estimation and fast decomposition techniques, the relative pose of the robot, i.e., the rotation *Rc(t) and the plane normal n*, is obtained from the current image and the desired image; then the robot coordinates containing a scale factor under the Cartesian coordinate system are obtained, in the form of the scaled translation *Tc(t)/d*;
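The homography-estimation step above can be sketched with a standard direct linear transform (DLT); this is a generic illustration, not the patent's specific fast-decomposition routine, and the function names are ours. In practice, decomposing the recovered H into rotation and plane normal can be done with, e.g., OpenCV's cv2.decomposeHomographyMat.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct linear transform (DLT) from >= 4 point correspondences.
    Returns the 3x3 homography H normalized so that H[2, 2] = 1."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the constraint A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Project a 2D point through a homography (homogeneous division)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

With exact correspondences from four points in general position, the DLT recovers the homography up to scale, which the H[2, 2] normalization fixes.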
To facilitate the controller design, the Cartesian coordinates of the robot are converted into polar form; e(t) is defined as the norm of *Tc(t), i.e.:
e(t) = ||*Tc(t)|| = ((*Tcx(t))² + (*Tcz(t))²)^(1/2) (1)
According to the assumption d* ≠ 0 in step 1.1, the measurable distance es(t) containing the scale factor is defined as:
es(t) = e(t)/d* (2)
In addition, φ(t) and α(t) are calculated respectively, with α(t) given by:
α = φ - θ (4)
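The polar conversion can be sketched as follows; since the claim's explicit formula for φ(t) is not reproduced in this text, the atan2 form below is an assumed, conventional choice, not necessarily the patent's exact expression:

```python
import math

def polar_state(Tcx, Tcz, theta, d_star):
    """Convert the scaled Cartesian pose into the polar signals of the model.
    The atan2 expression for phi is an assumption (the original formula (3)
    is not reproduced in this text)."""
    e = math.hypot(Tcx, Tcz)      # e(t) = ||*Tc(t)||, cf. Eq. (1)
    es = e / d_star               # scaled measurable distance, cf. Eq. (2)
    phi = math.atan2(Tcx, Tcz)    # assumed form of phi(t)
    alpha = phi - theta           # Eq. (4): alpha = phi - theta
    return e, es, phi, alpha
```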
2.2nd, establish the robot kinematic equations
The measurable signals es(t), φ(t), α(t) are selected to construct the robot-camera system model that executes the visual servoing task; the kinematic equations of the mobile robot are expressed with e(t), φ(t), α(t) under polar coordinates as follows:
vr(t) and wr(t) denote the linear velocity and the angular velocity of the mobile robot, respectively;
Substituting the definition (2) of es(t) into (5) yields the open-loop dynamic equations of the system as follows:
In addition, the depth estimate denotes the identified depth, and the depth identification error is defined as the difference between the true depth d* and its estimate;
When es(t), φ(t), α(t) converge to 0, the mobile robot is regulated to the expected pose; when the depth identification error is 0, the system has successfully identified the depth information;
3rd, construct the adaptive controller
Based on the open-loop dynamic equations of step 2.2, a controller and an adaptive update law are designed for the camera-equipped mobile robot system;
According to the concurrent learning method, an adaptive update law is designed for the depth estimate, in the following form:
where Γ1 and Γ2 are positive update gains, and tk ∈ [0, t] are sampling instants between the initial time and the current time;
The projection function Proj(x) is defined as:
where the constant appearing in (9) is a lower bound on the positive depth estimate;
It follows from (9) that the depth estimate never drops below this lower bound, so the initial value of the estimate should be chosen greater than the lower bound; furthermore, it can be seen that:
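A minimal sketch of such a projection operator, written in the common discontinuous form; the patent's exact expression (9) is not reproduced in this text, so the form below is an assumption:

```python
def proj(theta_hat, x, theta_lb):
    """Projection keeping the (positive) estimate theta_hat >= theta_lb.
    Standard discontinuous form: updates are passed through unless they
    would push the estimate below its lower bound."""
    if theta_hat > theta_lb or x >= 0.0:
        return x      # safe: either strictly above the bound, or moving up
    return 0.0        # block updates that would cross the lower bound
```

Used inside an update law such as (8), this guarantees the depth estimate stays above its lower bound as long as its initial value is chosen above that bound.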
To achieve the pose control objective, the linear velocity and the angular velocity of the mobile robot are designed as:
where kα, ke, kφ are control gains;
The control laws (11), (12) together with the parameter update law (8) regulate the mobile robot to the expected pose while simultaneously performing depth identification, i.e., the following holds:
Thus, simultaneous visual servoing and adaptive depth identification of the mobile robot are completed.
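To illustrate the regulation objective, driving (*Tcz, *Tcx, θ) to (0, 0, 0), the following sketch simulates a unicycle under the classical polar-coordinate stabilization law of Aicardi et al. [29], on which designs of this kind build. It is not the patent's control laws (11)-(12), and all gains and the initial pose are illustrative:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def simulate(x, y, theta, gamma=1.0, k=2.0, h=2.0, dt=0.01, T=30.0):
    """Euler-simulate a unicycle toward pose (0, 0, 0) under the
    polar-coordinate law of Aicardi et al. [29] (illustrative gains)."""
    for _ in range(int(T / dt)):
        e = math.hypot(x, y)
        if e < 1e-6:
            break                          # regulated: stop integrating
        psi = math.atan2(-y, -x)           # line-of-sight angle robot -> goal
        alpha = wrap(psi - theta)          # heading error w.r.t. line of sight
        phi = wrap(psi)                    # line of sight in the goal frame
        v = gamma * e * math.cos(alpha)    # linear velocity
        # sin(a)cos(a)/a -> 1 as a -> 0; guard the division
        sc = math.sin(alpha) * math.cos(alpha) / alpha if abs(alpha) > 1e-9 else 1.0
        w = k * alpha + gamma * sc * (alpha + h * phi)  # angular velocity
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta = wrap(theta + w * dt)
    return x, y, theta
```

Starting from a pose similar to the experiment's initial pose (-2.1 m, 0.6 m, -25°), the position error and the heading both converge toward zero, which is the qualitative behavior that (13) asserts for the patent's closed loop.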
CN201611085990.9A 2016-12-01 2016-12-01 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously Expired - Fee Related CN106774309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611085990.9A CN106774309B (en) 2016-12-01 2016-12-01 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611085990.9A CN106774309B (en) 2016-12-01 2016-12-01 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously

Publications (2)

Publication Number Publication Date
CN106774309A CN106774309A (en) 2017-05-31
CN106774309B true CN106774309B (en) 2019-09-17

Family

ID=58914037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611085990.9A Expired - Fee Related CN106774309B (en) 2016-12-01 2016-12-01 A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously

Country Status (1)

Country Link
CN (1) CN106774309B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109542094B (en) * 2017-09-21 2021-06-08 天津工业大学 Mobile robot vision stabilization control without desired images
CN109816687A (en) * 2017-11-20 2019-05-28 天津工业大学 The concurrent depth identification of wheeled mobile robot visual servo track following
CN109816717A (en) * 2017-11-20 2019-05-28 天津工业大学 The vision point stabilization of wheeled mobile robot in dynamic scene
CN109816709B (en) * 2017-11-21 2020-09-11 深圳市优必选科技有限公司 Monocular camera-based depth estimation method, device and equipment
CN108153301B (en) * 2017-12-07 2021-02-09 深圳市杰思谷科技有限公司 Intelligent obstacle avoidance system based on polar coordinates
CN108255063B (en) * 2018-01-26 2021-03-19 深圳禾苗通信科技有限公司 Small rotor unmanned aerial vehicle system modeling method based on closed-loop subspace identification
CN108614560B (en) * 2018-05-31 2021-04-06 浙江工业大学 Tracking control method for visual servo performance guarantee of mobile robot
CN108762219B (en) * 2018-06-15 2019-11-15 广东嘉腾机器人自动化有限公司 Single steering wheel AGV point-stabilized control method and device
CN110722533B (en) * 2018-07-17 2022-12-06 天津工业大学 External parameter calibration-free visual servo tracking of wheeled mobile robot
CN110722547B (en) * 2018-07-17 2022-11-15 天津工业大学 Vision stabilization of mobile robot under model unknown dynamic scene
CN111612843A (en) * 2019-02-26 2020-09-01 天津工业大学 Mobile robot vision stabilization control without expected image
CN112123370B (en) * 2019-06-24 2024-02-06 内蒙古汇栋科技有限公司 Mobile robot vision stabilization control with desired pose change
CN111506996A (en) * 2020-04-15 2020-08-07 郑州轻工业大学 Self-adaptive identification method of turntable servo system based on identification error limitation
CN111931387B (en) * 2020-09-23 2020-12-22 湖南师范大学 Visual servo approach method facing to moving columnar assembly
CN113172632A (en) * 2021-05-12 2021-07-27 成都瑞特数字科技有限责任公司 Simplified robot vision servo control method based on images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955954A (en) * 2014-04-21 2014-07-30 杭州电子科技大学 Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene
CN104950893A (en) * 2015-06-26 2015-09-30 浙江大学 Homography matrix based visual servo control method for shortest path
CN105144710A (en) * 2013-05-20 2015-12-09 英特尔公司 Technologies for increasing the accuracy of depth camera images
CN105729468A (en) * 2016-01-27 2016-07-06 浙江大学 Enhanced robot workbench based on multiple depth cameras

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100246899A1 (en) * 2009-03-26 2010-09-30 Rifai Khalid El Method and Apparatus for Dynamic Estimation of Feature Depth Using Calibrated Moving Camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105144710A (en) * 2013-05-20 2015-12-09 英特尔公司 Technologies for increasing the accuracy of depth camera images
CN103955954A (en) * 2014-04-21 2014-07-30 杭州电子科技大学 Reconstruction method for high-resolution depth image in combination with space diagram pairs of same scene
CN104950893A (en) * 2015-06-26 2015-09-30 浙江大学 Homography matrix based visual servo control method for shortest path
CN105729468A (en) * 2016-01-27 2016-07-06 浙江大学 Enhanced robot workbench based on multiple depth cameras

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Homography Based Visual Servo Control with Scene Reconstruction; Anup Parikh et al.; 2015 54th IEEE Conference on Decision and Control; 20160211; 6972-6977
Homography-Based Visual Servo Regulation of Mobile Robots; Yongchun Fang et al.; IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS-PART B: CYBERNETICS; 20051031; vol. 35, no. 5; 1041-1050
Visual servoing of mobile robots for posture stabilization: from theory to experiments; Xuebo Zhang et al.; INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL; 20131231; 1-15

Also Published As

Publication number Publication date
CN106774309A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106774309B (en) A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously
Lee et al. Camera-to-robot pose estimation from a single image
Zhang et al. Motion-estimation-based visual servoing of nonholonomic mobile robots
Li et al. Visual servo regulation of wheeled mobile robots with simultaneous depth identification
CN108499054B (en) A kind of vehicle-mounted mechanical arm based on SLAM picks up ball system and its ball picking method
Vahrenkamp et al. Visual servoing for humanoid grasping and manipulation tasks
Qiu et al. Visual servo tracking of wheeled mobile robots with unknown extrinsic parameters
WO2015058297A1 (en) Image-based trajectory robot programming planning approach
CN110722533B (en) External parameter calibration-free visual servo tracking of wheeled mobile robot
CN109816687A (en) The concurrent depth identification of wheeled mobile robot visual servo track following
Silveira On intensity-based nonmetric visual servoing
Lippiello et al. 3D monocular robotic ball catching with an iterative trajectory estimation refinement
Lehnert et al. 3d move to see: Multi-perspective visual servoing for improving object views with semantic segmentation
Qu et al. Dynamic visual tracking for robot manipulator using adaptive fading Kalman filter
Yang et al. Vision-based localization and mapping for an autonomous mower
Song et al. On-line stable evolutionary recognition based on unit quaternion representation by motion-feedforward compensation
Tsai et al. Robust visual tracking control system of a mobile robot based on a dual-Jacobian visual interaction model
Mehta et al. New approach to visual servo control using terminal constraints
Hebert et al. Dual arm estimation for coordinated bimanual manipulation
Ranjan et al. Identification and control of NAO humanoid robot to grasp an object using monocular vision
CN110722547B (en) Vision stabilization of mobile robot under model unknown dynamic scene
CN109542094B (en) Mobile robot vision stabilization control without desired images
Ryan et al. Probabilistic correspondence in video sequences for efficient state estimation and autonomous flight
Fuchs et al. Advanced 3-D trailer pose estimation for articulated vehicles
Usayiwevu et al. Probabilistic plane extraction and modeling for active visual-inertial mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190917

Termination date: 20211201