CN103878770A - Space robot visual delay error compensation method based on speed estimation - Google Patents

Space robot visual delay error compensation method based on speed estimation

Info

Publication number
CN103878770A
CN103878770A (application CN201410138351.9A; granted as CN103878770B)
Authority
CN
China
Legal status: Granted
Application number
CN201410138351.9A
Other languages
Chinese (zh)
Other versions
CN103878770B (en)
Inventor
王滨
李振宇
刘宏
赵京东
李志奇
王志超
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201410138351.9A priority Critical patent/CN103878770B/en
Publication of CN103878770A publication Critical patent/CN103878770A/en
Application granted granted Critical
Publication of CN103878770B publication Critical patent/CN103878770B/en
Legal status: Expired - Fee Related (anticipated expiration)

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1671Programme controls characterised by programming, planning systems for manipulators characterised by simulation, either to verify existing program or to create and verify new program, CAD/CAM oriented, graphic oriented programming systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a space robot visual delay error compensation method based on speed estimation, solving the problem of compensating the visual-measurement delay errors of a space robot on a free-floating base. The method comprises the following steps: the visual delay of the system is determined, and the mathematical relation between the delayed vision measurement data and the physically true relative position is established; from the delayed vision measurement data and the joint commands of the manipulator, the current end-effector velocity of the space robot is estimated; an error controller is designed to reduce the estimation error of the end-effector velocity; and, using the corrected end-effector velocity estimate, the current delayed vision measurement data are compensated to obtain the compensated vision measurement data. The method estimates the current end-effector velocity from historical measurement data fused with the joint angular-velocity commands and designs an error controller to reduce the velocity-estimation error, achieving precise visual delay compensation for a space robot on a free-floating base and enabling the space robot to complete fine operation tasks with high precision.

Description

Space robot visual time-delay error compensation method based on velocity estimation
Technical field
The present invention relates to a method for compensating the visual-measurement time-delay errors of a space robot.
Background technology
When a space robot (comprising a carrier spacecraft and a manipulator) on a free-floating base performs a space task, a practical problem it faces is that the vision measurement system on its manipulator often needs a long time for visual information processing. One reason is that, because of complex illumination conditions, the visual processing methods adopted by space robots are relatively complicated; another is that the computing power of the processors available for on-board visual information processing is very limited. This contradiction between the complexity of the visual processing methods and the computing power of the processing hardware means that the vision measurement information often carries a large time delay, which manifests itself as a large measurement error of the videogrammetry system. Because of this vision measurement error, the execution accuracy of the space robot is degraded when it performs tasks such as target capture. In order that the space robot can complete fine on-orbit servicing tasks quickly and stably, the compensation of visual time-delay errors must be considered in the design of the space robot system.
At present there are several prediction and estimation methods that can be used to handle the visual time-delay errors of ground-based fixed-pedestal robots. Least-squares estimation is a basic parameter-estimation method that estimates the unknown parameters of a linear model from observation data, taking the minimum of the error sum of squares as its criterion: the estimate is chosen so that the sum of squared differences between the model output and the measured output is minimized. Working with the sum of squared errors prevents positive and negative errors from cancelling and is convenient for mathematical treatment. Kalman filtering is a linear minimum-variance estimator that can estimate the past and current state of a signal, and even its future state. A Kalman prediction model estimates the desired signal from measurements correlated with it; the estimated signal is treated as the random response driven by white noise, and both the system equation between the driving source and the response and the functional relation between the measurement and the estimated quantity are assumed known. The Smith predictor is often applied to the visual time delay of ground robots. Its principle is to introduce a delay-correction element that compensates the delayed vision data, so that the pure delay is moved outside the closed loop and the control performance of the whole control system is improved. As this principle shows, the design of a Smith predictor requires that the dynamic behaviour of the controlled plant under a given input can be predicted accurately, so that the delayed part of the measurement can be compensated; the Smith predictor therefore depends heavily on an accurate mathematical model.
A space robot on a free-floating base differs from a ground robot. Because its base is not fixed, the motion of the manipulator is coupled with that of the carrier spacecraft: any motion of the space manipulator changes the position and attitude of the carrier spacecraft, and conversely a change of the carrier's position and attitude affects the positioning of the manipulator, so the dynamics computation of a space robot is very complicated. Moreover, as the carrier spacecraft continuously consumes fuel, its mass, centroid position and inertia matrix keep changing, and an accurate mathematical model is difficult to obtain. The classical prediction and estimation methods are therefore difficult to apply to space robots with good results.
Summary of the invention
The invention provides a velocity-estimation-based visual time-delay error compensation method for a space robot on a free-floating base, so as to reduce the vision measurement error produced by the delay of the space robot vision system and to help the space robot complete fine space tasks on the basis of the compensated visual information.
The space robot visual time-delay error compensation method based on velocity estimation is completed by the following steps:
Step 1: determine the visual time delay of the vision system from the adopted visual processing algorithm and the hardware used, and establish the mathematical relation between the delayed vision measurement data and the true relative pose of the measurement camera and the target.
Step 2: from the delayed vision measurement data and the joint commands of the manipulator, estimate the current end-effector velocity of the space robot (see Eq. (26)).
Step 3: design an error controller to reduce the error of the end-effector velocity estimate, obtaining the corrected end-effector velocity estimate (see Eq. (30)).
Step 4: using the corrected end-effector velocity estimate, compensate the current delayed vision measurement data to obtain the compensated vision measurement data (see Eq. (32)).
The invention has the following beneficial effects. Through an analysis of the space-robot mathematical model, the method estimates the current end-effector velocity of the space robot from historical measurement data fused with the joint angular-velocity commands. Using the delayed measurement data obtained by the space robot vision system together with the manipulator joint angular-velocity commands, the current end-effector velocity is estimated, and at the same time an estimation-error controller is designed to reduce the error of the velocity estimate. With the corrected end-effector velocity estimate, the vision measurement error of the free-floating space robot with respect to the target caused by the delay can be compensated effectively, ensuring that the space robot completes precise on-orbit operation tasks accurately.
The invention does not require an accurate mathematical model of the space robot: the end-effector velocity is estimated from the simplified relation, obtained by model analysis, between the true end-effector velocity of the manipulator and the delayed end-effector velocity, and closed-loop control is adopted to improve the precision of the visual time-delay error compensation. The method avoids complicated dynamics computation, is simple to calculate and easy to implement, realizes accurate visual time-delay compensation for a space robot on a free-floating base, and helps the space robot complete precise operation tasks accurately. It can be used in space applications such as on-orbit servicing, space-debris removal and deep-space exploration.
Brief description of the drawings
Fig. 1 is a block diagram of the visual time-delay error control system of the invention; Fig. 2 is a flow chart of the invention; Fig. 3 is a schematic diagram of the space robot system of the invention and its coordinate frames; Fig. 4 compares the compensated vision-measurement estimates with the raw measurement data and the true relative position, where Fig. 4a, Fig. 4b and Fig. 4c show the comparison along the x, y and z axes respectively;
Fig. 5 compares the estimation error of the compensated vision measurement data with the delay error of the uncompensated vision measurement data, where Fig. 5a, Fig. 5b and Fig. 5c show the comparison along the x, y and z axes respectively.
Detailed description of the invention
Embodiment 1: this embodiment is described with reference to Fig. 1, Fig. 2 and Fig. 3 and is completed by the following steps: first, determine the time delay of the vision system from the adopted visual processing algorithm and the hardware used, and establish the mathematical relation between the delayed vision measurement data and the true relative pose of the measurement camera and the target; second, from the delayed vision measurement data and the joint commands of the manipulator, estimate the current end-effector velocity of the space robot; third, design an error controller to reduce the error of the end-effector velocity estimate and obtain the corrected end-effector velocity estimate; fourth, using the corrected end-effector velocity estimate, compensate the current delayed vision measurement data to obtain the compensated vision measurement data.
Embodiment 2 differs from Embodiment 1 in the following. In Step 1, the time delay of the vision system is determined from the adopted visual processing algorithm and the hardware used, and the mathematical relation between the delayed vision measurement data and the true relative pose of the measurement camera and the target is established as follows. According to the adopted visual information processing method and the hardware platform, the delay introduced by the whole vision measurement chain is determined to be m cycles. If the duration of each system cycle is $t_s$, the visual time delay $T_d$ of the space robot system is:

$T_d = m \times t_s$  (1)
Define the vision measurement information $D_v(k)$ and the true relative-pose information $D_r(k)$ of the space robot at time k as

$D_v(k) = [x_v(k)\ y_v(k)\ z_v(k)\ \alpha_v(k)\ \beta_v(k)\ \gamma_v(k)]^T$  (2)

$D_r(k) = [x_r(k)\ y_r(k)\ z_r(k)\ \alpha_r(k)\ \beta_r(k)\ \gamma_r(k)]^T$  (3)

where k denotes an arbitrary time instant, x(k), y(k), z(k) is the relative position, and α(k), β(k), γ(k) are the Euler angles describing the relative attitude. The relation between the vision measurement information $D_v(k)$ and the true information $D_r(k)$ is

$D_r(k) = D_v(k+m)$  (4)

where m is the time delay in cycles.
If the current time is N, the vision measurement information is $D_v(N)$, and the true information $D_r(N-m)$ and everything before it can be obtained directly from past vision measurement data. Define the true velocity information $V_r(k)$ as

$V_r(k) = [\dot{x}_r(k)\ \dot{y}_r(k)\ \dot{z}_r(k)\ \dot{\alpha}_r(k)\ \dot{\beta}_r(k)\ \dot{\gamma}_r(k)]^T$  (5)

which can be computed from $D_r(k+1)$ and $D_r(k)$ as

$V_r(k) = \dfrac{D_r(k+1) - D_r(k)}{t_s}$  (6)

Therefore the true velocity $V_r(N-m-1)$ and all earlier true velocities can be computed, and the current true information $D_r(N)$ satisfies

$D_r(N) = D_v(N) + t_s \times \sum_{i=1}^{m} V_r(N-i)$  (7)
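The delay model of Eqs. (4)-(7) can be sketched numerically. The following Python toy (all names and values are illustrative, not from the patent) checks that summing m back-differenced velocities onto the delayed measurement recovers the current true relative position exactly:

```python
import numpy as np

t_s = 0.1          # sampling period t_s [s]
m = 5              # vision delay in cycles, T_d = m * t_s

# True relative position of a target receding at 4 mm/s along z (cf. the
# patent's example); x and y held constant for simplicity.
N = 30
D_r = np.zeros((N, 3))
D_r[:, 2] = 0.004 * t_s * np.arange(N)   # z_r(k) in metres

# Delayed measurement: D_v(k) = D_r(k - m), i.e. D_r(k) = D_v(k + m), Eq. (4)
D_v = np.vstack([np.repeat(D_r[:1], m, axis=0), D_r[:-m]])

# Back-differenced true velocities, Eq. (6)
V_r = np.diff(D_r, axis=0) / t_s

# Eq. (7): compensate the current delayed measurement with the last m velocities
k = N - 1
D_hat = D_v[k] + t_s * sum(V_r[k - i] for i in range(1, m + 1))

print(np.allclose(D_hat, D_r[k]))   # → True: the delay error is removed exactly
```

The sum telescopes to $D_r(N) - D_r(N-m)$, which is exactly the gap the delayed measurement is missing; with estimated rather than true velocities this becomes Eq. (8).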
Embodiment 3 differs from Embodiments 1 and 2 in the following. In Step 2, the current end-effector velocity of the space robot is estimated from the delayed vision measurement data and the joint commands of the manipulator as follows.
From the delayed vision measurement data, an estimate sequence $\tilde{V}_r$ of the true velocity sequence $V_r$ can be obtained. With this velocity-estimate sequence the current vision measurement information $D_v(N)$ can be compensated, and Eq. (7) can be rewritten as:

$\tilde{D}_r(N) = D_v(N) + t_s \times \sum_{i=1}^{m} \tilde{V}_r(N-i)$  (8)

which yields the estimate $\tilde{D}_r(N)$ of the current true information.
Below, the delayed vision measurement data of the space robot are combined with the joint commands of the manipulator to construct the estimate of the end-effector velocity at the current time.
Because a space robot in the zero-gravity state satisfies the conservation of linear and angular momentum, the mathematical relation between the joint motion state of the space robot and the generalized velocity of the end-effector can be established accordingly:

$\begin{bmatrix} v_e \\ \omega_e \end{bmatrix} = J_g(\Psi_0, \Theta, m_i, I_i)\,\dot{\Theta} = \begin{bmatrix} J_{g\_v} \\ J_{g\_\omega} \end{bmatrix}\dot{\Theta}$  (9)

where the parameter matrix $J_g$ is the generalized Jacobian matrix of the space robot, a function of the inertia matrices $I_i$, the mass parameters $m_i$, the joint angles $\Theta$ and the carrier-spacecraft attitude $\Psi_0$, and $\dot{\Theta}$ is the joint angular velocity of the robot, with

$v_e = [\dot{x}_r\ \dot{y}_r\ \dot{z}_r]^T$  (10)

$\omega_e = \begin{bmatrix} 0 & -\sin\alpha_r & \cos\alpha_r\cos\beta_r \\ 0 & \cos\alpha_r & \sin\alpha_r\cos\beta_r \\ 1 & 0 & -\sin\beta_r \end{bmatrix} \begin{bmatrix} \dot{\alpha}_r \\ \dot{\beta}_r \\ \dot{\gamma}_r \end{bmatrix}$  (11)

Here $v_e$ and $\omega_e$ are the linear and angular velocity of the space-robot end-effector, and $\dot{\alpha}_r$, $\dot{\beta}_r$, $\dot{\gamma}_r$ are the derivatives of the end-effector Euler angles.
Expanding the generalized Jacobian, the end-effector generalized velocity of the space robot is

$\begin{bmatrix} v_e \\ \omega_e \end{bmatrix} = \sum_{i=1}^{n} J_{gi}\,\dot{\theta}_i$  (12)

where $J_{gi}$ is the i-th column of the generalized Jacobian matrix.
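As a rough illustration of the column expansion in Eq. (12) — using a fixed-base planar two-link arm as a stand-in, since the patent's $J_g$ is the free-floating generalized Jacobian and additionally depends on base mass and inertia — the end-effector velocity is the sum of the Jacobian columns weighted by the joint rates. All values below are illustrative:

```python
import numpy as np

l1, l2 = 1.0, 0.5                  # link lengths (illustrative)
theta = np.array([0.3, 0.7])       # joint angles
theta_dot = np.array([0.2, -0.1])  # joint rates

# Standard planar 2-link Jacobian (stand-in for J_g, same column structure)
s1, s12 = np.sin(theta[0]), np.sin(theta.sum())
c1, c12 = np.cos(theta[0]), np.cos(theta.sum())
J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
              [ l1 * c1 + l2 * c12,  l2 * c12]])

# v_e = J @ theta_dot equals the sum of columns J_gi * theta_dot_i, Eq. (12)
v_sum = sum(J[:, i] * theta_dot[i] for i in range(2))
print(np.allclose(J @ theta_dot, v_sum))   # → True
```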
Because the true values of all kinematic and dynamic parameters of the space robot are contained in the delayed measurement data, define a time M in the neighbourhood of the current time N and expand $J_{gi}(M)$ as:

$J_{gi}(M) = J_{gi}(N) + \sum_{i=1}^{n}\sum_{j=1}^{\infty} \frac{1}{j!} \frac{\partial^j J_{gi}}{\partial \theta_i^j}[\theta_i(M)-\theta_i(N)]^j + E'_{Ji}(M)$  (13)

where $E'_{Ji}(M)$ is the error caused by the change of carrier attitude. Retaining only the first-order term, this becomes

$J_{gi}(M) = J_{gi}(N) + \sum_{i=1}^{n} \frac{\partial J_{gi}(N)}{\partial \theta_i}[\theta_i(M)-\theta_i(N)] + E_{Ji}(M)$, with $E_{Ji}(M) = \sum_{i=1}^{n}\sum_{j=2}^{\infty} \frac{1}{j!} \frac{\partial^j J_{gi}(N)}{\partial \theta_i^j}[\theta_i(M)-\theta_i(N)]^j + E'_{Ji}(M)$  (14)
The end-effector velocity of the space robot at time M can now be expressed as

$\begin{bmatrix} v_e(M) \\ \omega_e(M) \end{bmatrix} = \sum_{i=1}^{n} J_{gi}(M)\dot{\theta}_i(M) = J_g(N)\dot{\theta}(M) + \Delta J_N(M)\dot{\theta}(M) + E_J(M)\dot{\theta}(M)$  (15)

where

$\Delta J_N(M) = \left[ \frac{\partial J_{g1}(N)}{\partial \theta_1}[\theta_1(M)-\theta_1(N)] \ \cdots \ \frac{\partial J_{gn}(N)}{\partial \theta_n}[\theta_n(M)-\theta_n(N)] \right]$  (16)
The joint angular velocity of the space robot at time M is expressed as

$\dot{\theta}(M) = \dfrac{\dot{\theta}(M) \cdot \dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\,\dot{\theta}(N) + E_\theta(M)$  (17)

where

$E_\theta(M) = \dot{\theta}(M) - \dfrac{\dot{\theta}(M) \cdot \dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\,\dot{\theta}(N)$  (18)
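Eqs. (17)-(18) are an orthogonal decomposition of the joint-rate vector at time M along the direction of $\dot{\theta}(N)$. A quick numeric sketch (values illustrative) confirms that the split is exact and that the remainder is orthogonal to $\dot{\theta}(N)$:

```python
import numpy as np

theta_dot_N = np.array([0.10, -0.05, 0.20])   # joint rates at time N
theta_dot_M = np.array([0.11, -0.04, 0.19])   # slowly varying rates at time M

# First term of Eq. (17): projection of theta_dot(M) onto theta_dot(N)
coef = theta_dot_M @ theta_dot_N / np.linalg.norm(theta_dot_N) ** 2
proj = coef * theta_dot_N

# Remainder E_theta(M), Eq. (18)
E_theta = theta_dot_M - proj

print(np.allclose(proj + E_theta, theta_dot_M))  # decomposition is exact
print(abs(E_theta @ theta_dot_N) < 1e-12)        # remainder ⊥ theta_dot(N)
```

This is why, for slowly varying joint rates, the projection term dominates and $E_\theta(M)$ stays small.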
Because the joint angular velocities change slowly while the space robot moves, the first term of Eq. (17) is the dominant part of the joint angular velocity at time M, and $E_\theta(M)$ is a remainder related to the acceleration.
Substituting Eq. (17) into Eq. (15), the end-effector velocity of the space robot at time M is expressed as:

$\begin{bmatrix} v_e(M) \\ \omega_e(M) \end{bmatrix} = \dfrac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2} J_g(N)\dot{\theta}(N) + \Delta J_N(M)\dfrac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\dot{\theta}(N) + E_V(M) = \dfrac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(M)\dfrac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\dot{\theta}(N) + E_V(M)$  (19)

where

$E_V(M) = J_g(N)E_\theta(M) + \Delta J(M)E_\theta + E_J(M)\dot{\theta}(M)$  (20)
Let the current time be N and the vision measurement delay be P sampling periods. From Eq. (19), the end-effector velocity at time N−P is:

$\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} = \dfrac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(N-P)\dfrac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\dot{\theta}(N) + E_V(N-P)$  (21)

and the end-effector velocity at time N−P−1 is:

$\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix} = \dfrac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(N-P-1)\dfrac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}\dot{\theta}(N) + E_V(N-P-1)$  (22)

The rough estimate matrix $\Delta J_N$ of time N−P−1 with respect to the current time N can be expressed as:

$\Delta J_N(N-P-1) = \left[ \frac{\partial J_{g1}(N)}{\partial \theta_1}[\theta_1(N-P-1)-\theta_1(N)] \ \cdots \ \frac{\partial J_{gn}(N)}{\partial \theta_n}[\theta_n(N-P-1)-\theta_n(N)] \right] = \dfrac{[\theta_1(N-P-1)-\theta_1(N)]\cdot[\theta_1(N-P)-\theta_1(N)]}{|\theta_1(N-P)-\theta_1(N)|_2^2}\Delta J_N(N-P) + T_{\Delta J}$  (23)
where $T_{\Delta J}$ is a remainder. From the above, the current end-effector velocity is

$\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} = \dfrac{-\beta}{\alpha_1(1-\beta)}\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} + \dfrac{1}{\alpha_2(1-\beta)}\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix} + \Delta E$  (24)

where

$\beta = \dfrac{[\theta_1(N-P-1)-\theta_1(N)]\cdot[\theta_1(N-P)-\theta_1(N)]}{|\theta_1(N-P)-\theta_1(N)|_2^2}, \quad \alpha_1 = \dfrac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}, \quad \alpha_2 = \dfrac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{|\dot{\theta}(N)|_2^2}$  (25)

For convenience, $\alpha_1$ and $\alpha_2$ are called the second-order linear estimation coefficients and $\beta$ the state-difference term. Ignoring the high-order error $\Delta E$, the current end-effector velocity can be approximated as

$\begin{bmatrix} \tilde{v}_e(N) \\ \tilde{\omega}_e(N) \end{bmatrix} = \dfrac{-\beta}{\alpha_1(1-\beta)}\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} + \dfrac{1}{\alpha_2(1-\beta)}\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix}$  (26)
The estimation error is then

$\Delta E = \dfrac{-\beta}{\alpha_1(1-\beta)}E_V(N-P) + \dfrac{1}{\alpha_2(1-\beta)}\left[T_{\Delta J}\dot{\theta}(N) + E_V(N-P-1)\right]$  (27)

The estimated velocity $\tilde{V}_r(k)$ of Eq. (5) can now be computed from the estimated end-effector velocity as

$\tilde{V}_r(k) = \begin{bmatrix} \tilde{v}_e \\ \begin{bmatrix} 0 & -\sin\alpha_r & \cos\alpha_r\cos\beta_r \\ 0 & \cos\alpha_r & \sin\alpha_r\cos\beta_r \\ 1 & 0 & -\sin\beta_r \end{bmatrix}^{-1}\tilde{\omega}_e \end{bmatrix}$  (28)
When the measurement delay is small and the space robot moves smoothly, the error ΔE of the direct estimate is very small; in other cases ΔE may be larger. For the special case of a larger estimation error, an error controller is designed to correct the estimate in real time and thereby improve the estimation accuracy.
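The second-order estimate of Eqs. (24)-(26) can be exercised in the simplest case it is built around: a single joint with a locally constant Jacobian, for which the remainder ΔE vanishes and the estimate is exact. The sketch below (all values illustrative, not from the patent) checks this:

```python
import numpy as np

P = 3                                   # vision delay in samples
t = np.arange(10) * 0.1
theta = 0.5 * t + 0.05 * t**2           # joint angle trajectory theta_1(k)
theta_dot = 0.5 + 0.1 * t               # its rate
J = 0.8                                 # constant scalar "Jacobian"
v = J * theta_dot                       # tip speed v_e(k)

N = 9
# Second-order linear estimation coefficients and state-difference term, Eq. (25)
alpha1 = theta_dot[N - P] * theta_dot[N] / theta_dot[N] ** 2
alpha2 = theta_dot[N - P - 1] * theta_dot[N] / theta_dot[N] ** 2
beta = ((theta[N - P - 1] - theta[N]) * (theta[N - P] - theta[N])
        / (theta[N - P] - theta[N]) ** 2)

# Eq. (26): estimate the current (unmeasured) tip speed from the two oldest
# speeds recoverable from the delayed vision data
v_est = (-beta / (alpha1 * (1 - beta)) * v[N - P]
         + 1 / (alpha2 * (1 - beta)) * v[N - P - 1])

print(np.isclose(v_est, v[N]))   # → True (ΔE vanishes for constant J)
```

With a slowly varying Jacobian the estimate is no longer exact, which is exactly the residual ΔE of Eq. (27) that the error controller of Step 3 is designed to suppress.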
Embodiment 4 differs from Embodiment 3 in the following. The error controller of Step 3 is designed as follows: regard the velocity-estimation process of Step 2 as a prediction element with error:

$P(s) = \tilde{e}^{st_s} = e^{st_s}\left[1 + H_\Delta(s)\right]$  (29)

where $H_\Delta(s)$ is the error transfer function of the prediction element. The transfer function of the output of the visual time-delay error control system is:

$Y(s) = \dfrac{e^{-st_s}\tilde{e}^{st_s} + G(s)e^{-st_s}}{1 + G(s)e^{-st_s}}V_r(s) = V_r(s) + \dfrac{H_\Delta(s)}{1 + G(s)e^{-st_s}}V_r(s)$  (30)
where $V_r(s)$ is the true end-effector velocity of the robot, Y(s) is the corrected end-effector velocity estimate, and G(s) is the error controller.
The estimation-error transfer function of the system is:

$E(s) = \dfrac{-H_\Delta(s)e^{-st_s} - G(s)e^{-2st_s}}{1 + G(s)e^{-st_s}}V_r(s)$  (31)

By pole placement, an error controller G(s) that stabilizes E(s) is designed, so that the estimation error E(t) converges over time and the corrected end-effector velocity estimate gradually approaches the true velocity.
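A minimal discrete-time sketch of the correction loop of Eqs. (29)-(30), assuming the simplest controller G(s) = K (a pure gain, which this loop structure stabilises only for |K| < 1 in the one-sample-delay discretisation used here) and a constant 20 % multiplicative estimation error; both the controller choice and the error model are assumptions for illustration, not the patent's pole-placement design:

```python
# Loop relation Y(1 + G e^{-s t_s}) = (e~^{s t_s} + G) e^{-s t_s} V_r,
# discretised with a one-sample delay and e~^{s t_s} e^{-s t_s} = 1 + h:
#   y[k] = (1 + h) * v[k] + K * v[k-1] - K * y[k-1]
K, h = 0.8, 0.2
v_true = 1.0                      # constant true tip speed
y_prev, v_prev = 0.0, 0.0
for _ in range(200):
    y = (1 + h) * v_true + K * v_prev - K * y_prev
    y_prev, v_prev = y, v_true

# The closed loop leaves a steady error of h/(1+K) instead of h:
print(round(y - v_true, 6))       # → 0.111111  (= h/(1+K), vs. raw error 0.2)
```

The gain attenuates the estimator error $H_\Delta$ by the loop factor $1 + G$ at low frequency; the patent's pole-placement design of G(s) pursues the same effect while guaranteeing that E(s) is stable.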
Embodiment 5 differs from Embodiment 4 in the following. In Step 4, the current delayed vision measurement data are compensated with the corrected end-effector velocity estimate, and the compensated vision measurement data are computed as follows:

$\tilde{D}_r(N) = D_v(N) + t_s \times \sum_{i=1}^{m} Y(N-i)$  (32)

where Y(N−i) is the corrected end-effector velocity estimate obtained in Step 3. This completes the visual time-delay error compensation.
Content not described in detail in this specification belongs to the prior art known to those skilled in the art.
Embodiment
The present embodiment is described with reference to Fig. 1, Fig. 2 and Fig. 3. The experimental system consists of a space robot and a target satellite; the space robot comprises a six-degree-of-freedom manipulator and a carrier spacecraft, with a camera mounted on the manipulator end. The manipulator, carrying the camera, tracks and approaches the target satellite, which moves along the Z axis of the manipulator end at a speed of 4 mm/s.
To build the simulation system, the kinematic and dynamic parameters of the carrier spacecraft and the six-degree-of-freedom manipulator are given in Table 1.
Table 1. Space robot parameters
[Table 1 appears as an image in the original document.]
The visual time-delay error compensation steps of the space robot are:
Step 1: determine the time delay of the vision system from the adopted visual information processing method and the hardware environment, and establish the mathematical relation between the true information and the vision measurement information according to Eq. (4).
Step 2: compute the estimate of the current end-effector velocity of the space robot using Eq. (26).
Step 3: design the visual time-delay error controller according to Eq. (30); the controller output is the corrected end-effector velocity estimate.
Step 4: compute the compensated vision measurement data using Eq. (32).
The actual effect of the designed compensation is shown in Fig. 4 and Fig. 5. From the comparison in Fig. 4 of the compensated vision-measurement estimates with the raw measurement data and the true relative position, it can be seen that, because of the visual time delay, the vision system's measurement of the target carries a large error relative to the true value in the X, Y and Z directions, whereas after error compensation the estimated vision measurement data are very close to the true values. From the comparison in Fig. 5 of the estimation error of the compensated data with the delay error of the uncompensated data, the maximum error between the uncompensated measurements and the true values is reduced from 18 mm to 7 mm in the X direction and from 38 mm to 8 mm in the Z direction. The delay compensation method of the invention therefore clearly reduces the vision measurement error of a free-floating space robot caused by the time delay and helps ensure that the space robot completes precise on-orbit operation tasks accurately; moreover, the whole computation is simple and requires neither an accurate mathematical model of the space robot nor complicated dynamics calculation, so it can meet practical engineering needs.

Claims (5)

1. A space robot visual time-delay error compensation method based on velocity estimation, characterized in that the method is completed by the following steps:
Step 1: determining the visual time delay of the vision system from the adopted visual processing algorithm and the hardware used, and establishing the mathematical relation between the delayed vision measurement data and the true relative pose;
Step 2: estimating the current end-effector velocity of the space robot from the delayed vision measurement data and the joint commands of the manipulator;
Step 3: designing an error controller to reduce the error of the end-effector velocity estimate and obtain the corrected end-effector velocity estimate;
Step 4: compensating the current delayed vision measurement data with the corrected end-effector velocity estimate to obtain the compensated vision measurement data.
2. The space robot visual time-delay error compensation method based on velocity estimation according to claim 1, characterized in that:
In Step 1, the time delay of the vision system is determined from the adopted visual processing algorithm and the hardware used, and the mathematical relation between the delayed vision measurement data and the true relative pose of the measurement camera and the target is established as follows:
According to the adopted visual information processing method and the hardware platform, the delay introduced by the whole vision measurement chain is determined to be m cycles; if the duration of each system cycle is $t_s$, the visual time delay $T_d$ of the space robot system is:

$T_d = m \times t_s$  (1)

Define the vision measurement information $D_v(k)$ and the true relative-pose information $D_r(k)$ of the space robot at time k as:

$D_v(k) = [x_v(k)\ y_v(k)\ z_v(k)\ \alpha_v(k)\ \beta_v(k)\ \gamma_v(k)]^T$  (2)

$D_r(k) = [x_r(k)\ y_r(k)\ z_r(k)\ \alpha_r(k)\ \beta_r(k)\ \gamma_r(k)]^T$  (3)

where k denotes an arbitrary time instant, x(k), y(k), z(k) is the relative position, and α(k), β(k), γ(k) are the Euler angles describing the relative attitude; the relation between $D_v(k)$ and $D_r(k)$ is:

$D_r(k) = D_v(k+m)$  (4)

where m is the time delay in cycles;
If the current time is N, the vision measurement information is $D_v(N)$, and the true information $D_r(N-m)$ and everything before it can be obtained directly from past vision measurement data; define the true velocity information $V_r(k)$ as:

$V_r(k) = [\dot{x}_r(k)\ \dot{y}_r(k)\ \dot{z}_r(k)\ \dot{\alpha}_r(k)\ \dot{\beta}_r(k)\ \dot{\gamma}_r(k)]^T$  (5)

computed from $D_r(k+1)$ and $D_r(k)$ as

$V_r(k) = \dfrac{D_r(k+1) - D_r(k)}{t_s}$  (6)

so that the true velocity $V_r(N-m-1)$ and all earlier true velocities can be computed, and the current true information $D_r(N)$ is

$D_r(N) = D_v(N) + t_s \times \sum_{i=1}^{m} V_r(N-i)$  (7)
3. a kind of robot for space vision time delay error compensating method based on velocity estimation according to claim 1 and 2, it is characterized in that: in step 2, described according to the joint instruction of the vision measurement data with time delay and mechanical arm, estimate the tip speed of current robot for space, its process is:
Utilize the vision measurement data with time delay, obtain true velocity sequence valuation sequence
Figure FDA0000487995800000024
can Negotiation speed valuation sequence
Figure FDA0000487995800000025
to current vision measurement information D v(N) compensate, formula (7) is rewritten as:
D ~ r ( N ) = D v ( N ) + t s × Σ i = 1 m V ~ r ( N - i ) - - - ( 8 )
Thereby obtain the estimated value of current actual distance information
Figure FDA0000487995800000027
Robot for space vision measurement data with time delay are combined with the joint instruction of mechanical arm, construct the valuation of the robot for space tip speed of current time;
Because the robot for space under zero-g state meets the conservation of momentum and the conservation of angular momentum, set up the mathematical relationship between robot for space joint motions state and end-effector generalized velocity according to this, meet equation:
v e ω e = J g ( Ψ 0 , Θ , m i , I i ) Θ · = J g _ v J g _ ω Θ · - - - ( 9 )
where the parameter matrix $J_g$ is the generalized Jacobian of the space robot, a function of the inertia matrices $I_i$, mass parameters $m_i$, robot joint angles $\Theta$, and carrier spacecraft attitude $\Psi_0$; $\dot{\Theta}$ is the robot joint angular velocity. Here
$$v_e = \begin{bmatrix} \dot{x}_r & \dot{y}_r & \dot{z}_r \end{bmatrix}^T \qquad (10)$$
$$\omega_e = \begin{bmatrix} 0 & -\sin\alpha_r & \cos\alpha_r\cos\beta_r \\ 0 & \cos\alpha_r & \sin\alpha_r\cos\beta_r \\ 1 & 0 & -\sin\beta_r \end{bmatrix}\begin{bmatrix} \dot{\alpha}_r \\ \dot{\beta}_r \\ \dot{\gamma}_r \end{bmatrix} \qquad (11)$$
$v_e$ and $\omega_e$ are, respectively, the linear and angular velocity of the space robot end-effector, and $\dot{\alpha}_r$, $\dot{\beta}_r$, $\dot{\gamma}_r$ are the time derivatives of the end-effector attitude Euler angles.
Expanding the generalized Jacobian column-wise, the end-effector generalized velocity of the space robot is:
$$\begin{bmatrix} v_e \\ \omega_e \end{bmatrix} = \sum_{i=1}^{n} J_{gi}\,\dot{\theta}_i \qquad (12)$$
where $J_{gi}$ is the $i$-th column of the generalized Jacobian and $\theta_i$ is the $i$-th robot joint angle.
Because the true values of the space robot kinematic and dynamic parameters are embedded in the time-delayed measurement data, define a time M in the neighborhood of the current time N, and expand $J_{gi}(M)$ as:
$$J_{gi}(M) = J_{gi}(N) + \sum_{i=1}^{n}\sum_{j=1}^{\infty}\frac{1}{j!}\frac{\partial^j J_{gi}}{\partial\theta_i^j}\left[\theta_i(M)-\theta_i(N)\right]^j + E'_{Ji}(M) \qquad (13)$$
where $E'_{Ji}(M)$ is the error caused by changes in the carrier attitude. Retaining only the first-order term, this further becomes
$$J_{gi}(M) = J_{gi}(N) + \sum_{i=1}^{n}\frac{\partial J_{gi}(N)}{\partial\theta_i}\left[\theta_i(M)-\theta_i(N)\right] + E_{Ji}(M)$$
$$E_{Ji}(M) = \sum_{i=1}^{n}\sum_{j=2}^{\infty}\frac{1}{j!}\frac{\partial^j J_{gi}(N)}{\partial\theta_i^j}\left[\theta_i(M)-\theta_i(N)\right]^j + E'_{Ji}(M) \qquad (14)$$
The space robot end-effector velocity at time M can now be expressed as
$$\begin{bmatrix} v_e(M) \\ \omega_e(M) \end{bmatrix} = \sum_{i=1}^{n} J_{gi}(M)\,\dot{\theta}_i(M) = J_g(N)\dot{\theta}(M) + \Delta J_N(M)\dot{\theta}(M) + E_J(M)\dot{\theta}(M) \qquad (15)$$
where
$$\Delta J_N(M) = \begin{bmatrix} \dfrac{\partial J_{g1}(N)}{\partial\theta_1}\left[\theta_1(M)-\theta_1(N)\right] & \cdots & \dfrac{\partial J_{gn}(N)}{\partial\theta_n}\left[\theta_n(M)-\theta_n(N)\right] \end{bmatrix} \qquad (16)$$
The space robot joint angular velocity at time M is expressed as:
$$\dot{\theta}(M) = \frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\,\dot{\theta}(N) + E_\theta(M) \qquad (17)$$
where
$$E_\theta(M) = \dot{\theta}(M) - \frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\,\dot{\theta}(N) \qquad (18)$$
Because the joint angular velocity of a space robot varies slowly during motion, the first term of formula (17) is the dominant part of the joint angular velocity at time M, and $E_\theta(M)$ is a remainder related to acceleration.
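The decomposition of formulas (17)–(18) is an orthogonal projection of the joint-velocity vector at time M onto the direction of the current joint velocity $\dot{\theta}(N)$. A small sketch (helper names are ours):

```python
import numpy as np

def project_joint_velocity(theta_dot_M, theta_dot_N):
    # Formulas (17)-(18): split theta_dot(M) into its component along
    # theta_dot(N) plus an acceleration-related remainder E_theta(M).
    theta_dot_M = np.asarray(theta_dot_M, dtype=float)
    theta_dot_N = np.asarray(theta_dot_N, dtype=float)
    scale = theta_dot_M @ theta_dot_N / (theta_dot_N @ theta_dot_N)  # dot / |.|_2^2
    principal = scale * theta_dot_N
    remainder = theta_dot_M - principal
    return principal, remainder
```

By construction the two parts sum back to $\dot{\theta}(M)$ and the remainder is orthogonal to $\dot{\theta}(N)$, which is why it is small when the joint velocity changes slowly.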
Substituting formula (17) into formula (15), the space robot end-effector velocity at time M is expressed as:
$$\begin{bmatrix} v_e(M) \\ \omega_e(M) \end{bmatrix} = \sum_{i=1}^{n} J_{gi}(M)\,\dot{\theta}_i(M) = \frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2} J_g(N)\dot{\theta}(N) + \Delta J_N(M)\frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\dot{\theta}(N) + E_V(M) = \frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(M)\frac{\dot{\theta}(M)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\dot{\theta}(N) + E_V(M) \qquad (19)$$
where
$$E_V(M) = J_g(N)E_\theta(M) + \Delta J(M)E_\theta + E_J(M)\dot{\theta}(M) \qquad (20)$$
Let the current time be N and the vision measurement delay be P sampling periods. From formula (19), the end-effector velocity at time N−P is:
$$\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} = \frac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(N-P)\frac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\dot{\theta}(N) + E_V(N-P) \qquad (21)$$
Likewise from formula (19), the end-effector velocity at time N−P−1 is:
$$\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix} = \frac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} + \Delta J_N(N-P-1)\frac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}\dot{\theta}(N) + E_V(N-P-1) \qquad (22)$$
The rough-estimate matrix $\Delta J_N$ at time N−P−1, taken relative to the current time N, can be expressed as:
$$\Delta J_N(N-P-1) = \begin{bmatrix} \dfrac{\partial J_{g1}(N)}{\partial\theta_1}\left[\theta_1(N-P-1)-\theta_1(N)\right] & \cdots & \dfrac{\partial J_{gn}(N)}{\partial\theta_n}\left[\theta_n(N-P-1)-\theta_n(N)\right] \end{bmatrix} = \frac{\left[\theta_1(N-P-1)-\theta_1(N)\right]\cdot\left[\theta_1(N-P)-\theta_1(N)\right]}{\left|\left[\theta_1(N-P)-\theta_1(N)\right]\right|_2^2}\,\Delta J_N(N-P) + T_{\Delta J} \qquad (23)$$
where $T_{\Delta J}$ is a remainder. From formulas (19), (21), and (22), the end-effector velocity at the current time N is:
$$\begin{bmatrix} v_e(N) \\ \omega_e(N) \end{bmatrix} = -\frac{\beta}{\alpha_1(1-\beta)}\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} + \frac{1}{\alpha_2(1-\beta)}\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix} + \Delta E \qquad (24)$$
where
$$\beta = \frac{\left[\theta_1(N-P-1)-\theta_1(N)\right]\cdot\left[\theta_1(N-P)-\theta_1(N)\right]}{\left|\left[\theta_1(N-P)-\theta_1(N)\right]\right|_2^2}, \quad \alpha_1 = \frac{\dot{\theta}(N-P)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2}, \quad \alpha_2 = \frac{\dot{\theta}(N-P-1)\cdot\dot{\theta}(N)}{\left|\dot{\theta}(N)\right|_2^2} \qquad (25)$$
$\alpha_1$, $\alpha_2$ are second-order linear estimation coefficients and $\beta$ is a state difference term. Ignoring the high-order error $\Delta E$, the end-effector velocity can be approximated as
$$\begin{bmatrix} \tilde{v}_e(N) \\ \tilde{\omega}_e(N) \end{bmatrix} = -\frac{\beta}{\alpha_1(1-\beta)}\begin{bmatrix} v_e(N-P) \\ \omega_e(N-P) \end{bmatrix} + \frac{1}{\alpha_2(1-\beta)}\begin{bmatrix} v_e(N-P-1) \\ \omega_e(N-P-1) \end{bmatrix} \qquad (26)$$
The estimation error is:
$$\Delta E = -\frac{\beta}{\alpha_1(1-\beta)}E_V(N-P) + \frac{1}{\alpha_2(1-\beta)}\left[T_{\Delta J}\dot{\theta}(N) + E_V(N-P-1)\right] \qquad (27)$$
From formulas (10) and (11), the estimated velocity is:
$$\tilde{V}_r(k) = \begin{bmatrix} \dot{x}_r(k) & \dot{y}_r(k) & \dot{z}_r(k) & \dot{\alpha}_r(k) & \dot{\beta}_r(k) & \dot{\gamma}_r(k) \end{bmatrix}^T, \quad \begin{bmatrix} \dot{x}_r \\ \dot{y}_r \\ \dot{z}_r \end{bmatrix} = v_e, \quad \begin{bmatrix} \dot{\alpha}_r \\ \dot{\beta}_r \\ \dot{\gamma}_r \end{bmatrix} = \begin{bmatrix} 0 & -\sin\alpha_r & \cos\alpha_r\cos\beta_r \\ 0 & \cos\alpha_r & \sin\alpha_r\cos\beta_r \\ 1 & 0 & -\sin\beta_r \end{bmatrix}^{-1}\omega_e \qquad (28)$$
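The second-order linear extrapolation of formulas (24)–(26) can be sketched as below. Note two assumptions of ours: the sketch computes $\beta$ from full joint-displacement vectors, whereas formula (25) writes the ratio in terms of $\theta_1$, and the high-order error $\Delta E$ is ignored as in formula (26). All names are illustrative.

```python
import numpy as np

def extrapolate_end_velocity(v_NP, v_NP1, theta_dot_NP, theta_dot_NP1,
                             theta_dot_N, dtheta_NP, dtheta_NP1):
    # Formulas (24)-(26): estimate the current end-effector generalized
    # velocity from the two delayed velocities v(N-P) and v(N-P-1).
    theta_dot_N = np.asarray(theta_dot_N, dtype=float)
    nN = theta_dot_N @ theta_dot_N
    alpha1 = np.asarray(theta_dot_NP, dtype=float) @ theta_dot_N / nN
    alpha2 = np.asarray(theta_dot_NP1, dtype=float) @ theta_dot_N / nN
    dtheta_NP = np.asarray(dtheta_NP, dtype=float)
    # formula (25): state-difference ratio (full joint vector used here)
    beta = (np.asarray(dtheta_NP1, dtype=float) @ dtheta_NP) / (dtheta_NP @ dtheta_NP)
    return (-beta / (alpha1 * (1.0 - beta))) * np.asarray(v_NP, dtype=float) \
           + (1.0 / (alpha2 * (1.0 - beta))) * np.asarray(v_NP1, dtype=float)
```

A sanity check: building synthetic delayed velocities exactly according to formulas (21)–(23) with the remainder terms set to zero, the extrapolation returns the current velocity $v_e(N)$ exactly.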
4. The speed-estimation-based space robot vision time-delay error compensation method according to claim 3, characterized in that in step 3, the error controller is designed as follows:
Treat the velocity-sequence estimation process of step 2 as a differentiation element with error:
$$P(s) = \widetilde{e^{st_s}} = e^{st_s}\left[1 + H_\Delta(s)\right] \qquad (29)$$
where $H_\Delta(s)$ is the error transfer function of the differentiation element. The transfer function of the vision time-delay error control system output is:
$$Y(s) = \frac{e^{-st_s}\,\widetilde{e^{st_s}} + G(s)e^{-st_s}}{1 + G(s)e^{-st_s}}V_r(s) = V_r(s) + \frac{H_\Delta(s)}{1 + G(s)e^{-st_s}}V_r(s) \qquad (30)$$
where $V_r(s)$ is the true value of the robot end-effector velocity, $Y(s)$ is the corrected end-effector velocity estimate, and $G(s)$ is the error controller.
The system's estimation-error transfer function is:
$$E(s) = \frac{H_\Delta(s)e^{-st_s} - G(s)e^{-2st_s}}{1 + G(s)e^{-st_s}}V_r(s) \qquad (31)$$
Via pole placement, an error controller $G(s)$ is designed such that $E(s)$ is stable; the estimation error $E(t)$ then converges gradually over time, and the corrected end-effector velocity estimate gradually approaches the true velocity.
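The patent leaves the specific pole-placement design of $G(s)$ open. As a toy discrete-time illustration only (not the patent's controller), a first-order corrector with its closed-loop pole placed inside the unit circle makes the estimation error decay geometrically, which is the stability property claim 4 requires:

```python
import numpy as np

def corrected_velocity_sequence(v_est, pole=0.6):
    # Illustrative first-order discrete corrector: y[k] = p*y[k-1] + (1-p)*v[k-1].
    # With |pole| < 1 the error relative to a constant input decays as pole**k,
    # a toy stand-in for designing a stable E(s) via pole placement.
    v_est = np.asarray(v_est, dtype=float)
    y = np.zeros_like(v_est)
    for k in range(1, len(v_est)):
        y[k] = pole * y[k - 1] + (1.0 - pole) * v_est[k - 1]
    return y
```

For a constant input the output converges to that constant, with the residual shrinking by the pole factor at every step; moving the pole closer to 0 trades faster convergence against less smoothing.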
5. The speed-estimation-based space robot vision time-delay error compensation method according to claim 4, characterized in that in step 4, the current time-delayed vision measurement data are compensated using the corrected space robot end-effector velocity estimate; the compensated vision measurement data $\tilde{D}_r(N)$ are calculated as follows:
$$\tilde{D}_r(N) = D_v(N) + t_s \times \sum_{i=1}^{m} Y(N-i) \qquad (32)$$
where $Y(N-i)$ is the corrected space robot end-effector velocity estimate obtained above; this completes the vision time-delay error compensation.
CN201410138351.9A 2014-04-08 2014-04-08 Space robot visual delay error compensation method based on speed estimation Expired - Fee Related CN103878770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410138351.9A CN103878770B (en) 2014-04-08 2014-04-08 Space robot visual delay error compensation method based on speed estimation


Publications (2)

Publication Number Publication Date
CN103878770A true CN103878770A (en) 2014-06-25
CN103878770B CN103878770B (en) 2016-08-31

Family

ID=50948051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410138351.9A Expired - Fee Related CN103878770B (en) Space robot visual delay error compensation method based on speed estimation

Country Status (1)

Country Link
CN (1) CN103878770B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60151712A (en) * 1984-01-19 1985-08-09 Hitachi Ltd Calibration system for robot visual coordinate system
US4985668A (en) * 1989-09-19 1991-01-15 Kabushiki Kaisha Kobe Seiko Sho Robot controller
CN103302668A (en) * 2013-05-22 2013-09-18 东南大学 Kinect-based space teleoperation robot control system and method thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHI GUOZHEN: "Hardware Simulation Research on Visual-Servo Space Robot Motion", Journal of System Simulation, vol. 20, no. 13, 31 July 2008 (2008-07-31), pages 3566 - 3570 *
CHEN JUNJIE: "Research Progress on Overcoming Time-Delay Effects in Space Robot Teleoperation", Measurement & Control Technology, vol. 26, no. 2, 28 February 2007 (2007-02-28), pages 1 - 4 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408299A (en) * 2014-11-17 2015-03-11 广东产品质量监督检验研究院 Position error compensation method for distance recognition superfluous kinematics parameter-based robot
CN111051012A (en) * 2016-07-15 2020-04-21 快砖知识产权私人有限公司 Robot arm kinematics for end effector control
CN111051012B (en) * 2016-07-15 2024-02-02 快砖知识产权私人有限公司 Robot arm kinematics for end effector control
CN108267960A (en) * 2018-02-01 2018-07-10 南阳师范学院 A kind of motion control method of remote operating wheel robot
CN108423427A (en) * 2018-03-05 2018-08-21 菲尼克斯(南京)智能制造技术工程有限公司 Vacuum sucking device and method
CN108714914A (en) * 2018-03-19 2018-10-30 山东超越数控电子股份有限公司 A kind of mechanical arm vision system
CN108714914B (en) * 2018-03-19 2021-09-07 山东超越数控电子股份有限公司 Mechanical arm vision system
CN109606753A (en) * 2018-11-11 2019-04-12 上海宇航系统工程研究所 A kind of control method of Dual-arm space robot collaboration capture target
CN109606753B (en) * 2018-11-11 2022-03-29 上海宇航系统工程研究所 Control method for cooperatively capturing target by space double-arm robot
WO2020220469A1 (en) * 2019-04-30 2020-11-05 东南大学 Visual measurement time lag compensation method for photoelectric tracking system
US11838636B2 (en) 2019-04-30 2023-12-05 Southeast University Method for compensating for visual-measurement time lag of electro-optical tracking system
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN110900581B (en) * 2019-12-27 2023-12-22 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN112847323A (en) * 2021-01-06 2021-05-28 中国铁建重工集团股份有限公司 Robot model parameter error compensation method, device, electronic device and medium
CN113427487B (en) * 2021-07-09 2022-03-25 华南理工大学 DH parameter calibration method and system based on electromagnetic wave ranging
CN113427487A (en) * 2021-07-09 2021-09-24 华南理工大学 DH parameter calibration method and system based on electromagnetic wave ranging
CN113799127B (en) * 2021-09-15 2023-05-23 华南理工大学 Six-degree-of-freedom mechanical arm nonstandard positioning pose positioning method under optical binocular positioning system
CN113799127A (en) * 2021-09-15 2021-12-17 华南理工大学 Six-degree-of-freedom mechanical arm non-calibration pose positioning method under optical binocular positioning system

Also Published As

Publication number Publication date
CN103878770B (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN103878770A (en) Space robot visual delay error compensation method based on speed estimation
CN108241292B (en) Underwater robot sliding mode control method based on extended state observer
CN110376882A (en) Pre-determined characteristics control method based on finite time extended state observer
CN106840151B (en) Model-free deformation of hull measurement method based on delay compensation
CN107621266B (en) Space non-cooperative target relative navigation method based on feature point tracking
CN108267952B (en) Self-adaptive finite time control method for underwater robot
CN110929402A (en) Probabilistic terrain estimation method based on uncertain analysis
CN113763549A (en) Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
Bai et al. A novel plug-and-play factor graph method for asynchronous absolute/relative measurements fusion in multisensor positioning
CN110967017A (en) Cooperative positioning method for rigid body cooperative transportation of double mobile robots
CN109764876A (en) The multi-modal fusion localization method of unmanned platform
CN108871322B (en) Model-free hull deformation measuring method based on attitude angle matching
US11685049B2 (en) Robot localization using variance sampling
CN111409865B (en) Deep space probe approach segment guidance method based on intersection probability
CN110900608B (en) Robot kinematics calibration method based on optimal measurement configuration selection
CN111546344A (en) Mechanical arm control method for alignment
CN109737902B (en) Industrial robot kinematics calibration method based on coordinate measuring instrument
CN109648566A (en) The Trajectory Tracking Control method of the unknown all directionally movable robot of the parameter of electric machine
Gu et al. Dexterous obstacle-avoidance motion control of Rope Driven Snake Manipulator based on the bionic path following
CN106055818B (en) A kind of variable geometry truss robot modelling localization method
Guicheng et al. Kinematics simulation analysis of a 7-DOF series robot
Huang et al. A fast initialization method of Visual-Inertial Odometry based on monocular camera
Wang et al. Visual servoing based trajectory tracking of underactuated water surface robots without direct position measurement
Xu et al. Multi-Sensor Fusion Framework Based on GPS State Detection
CN113601508B (en) Robot motion control method and system and robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160831
