JP5733516B2 - Moving body gripping apparatus and method

Moving body gripping apparatus and method

Info

Publication number
JP5733516B2
JP5733516B2
Authority
JP
Japan
Prior art keywords
gripping
value
robot
state
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2011106822A
Other languages
Japanese (ja)
Other versions
JP2012236254A (en)
Inventor
周平 江本
Original Assignee
株式会社IHI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社IHI
Priority to JP2011106822A
Priority claimed from CN201280022935.9A
Publication of JP2012236254A
Application granted
Publication of JP5733516B2
Legal status: Active
Anticipated expiration


Description

  The present invention relates to a moving body gripping apparatus and method for gripping a moving object in automatic equipment that uses a robot or the like.

  In an automatic device that handles a moving object with a robot, it is necessary to measure the position of the object with a visual sensor, such as a camera installed on the robot or outside it, and to control the robot to follow the object based on the measurement result.

  In the robot follow-up control described above, the position of an object (for example, a workpiece) that changes from moment to moment is measured at a constant period (for example, with a 30 fps camera), and a movement command that brings the robot close to the object is output to the robot based on the measurement result.

However, even if the robot is operated with the measured workpiece position as its target, the robot may fail to follow a moving workpiece because of control delays such as sensor measurement delay, data acquisition delay, and robot operation delay.
With a visual sensor, the sensor measurement cycle is generally longer than the robot control cycle (e.g., 4 ms), so the robot cannot obtain an up-to-date measurement result in every control cycle. In particular, if image processing takes time or the workpiece leaves the camera's field of view, the update period of the measurement result becomes longer and irregular. An apparatus that handles a moving workpiece therefore suffers degraded tracking performance due to these control delays.

  In order to solve the above-described problems, various control means have already been proposed (for example, Patent Documents 1 to 4).

In the "robot device and its control method" of Patent Document 1, the arm is first made to follow the object and, once the movement state of the arm during following coincides with that of the object, the destination of the object's movement is predicted in consideration of the operation time of the arm, and the moving object is gripped there.
The "three-dimensional motion prediction device" of Patent Document 2 estimates motion parameters from position data of a measured object in simple vibration, predicts its future position, and grips the measured object with a manipulator based on that position information.
The state estimation means of Patent Document 3 estimates the internal state of the observed system from observation signals input in time series. The internal state here means state variables such as the position, posture, and deflection angle of the object.
The "motion prediction device" of Patent Document 4 performs automatic tracking and motion prediction by combining background processing and foreground processing.

Patent Document 1: Japanese Patent No. 4265088, "Robot Device and Control Method Therefor"
Patent Document 2: Japanese Patent Application Laid-Open No. 07-019818, "3D Motion Prediction Device"
Patent Document 3: Japanese Patent No. 4072017, "State Estimation Device, Method and Program, Current State Estimation Device and Future State Estimation Device"
Patent Document 4: Japanese Patent No. 4153625, "Motion Estimation Device"

  According to Patent Document 2 described above, by estimating the motion state of a target object (for example, a workpiece) from its observation history, the data can be supplemented while no measured value is obtained, and the gripping point at which the robot grips the object can be predicted.

  A Kalman filter or the like is generally used to estimate the motion state. In the state estimation means of Patent Documents 2 and 3, quantities such as the speed and acceleration of the object are defined as the "internal state", and a "state transition model" that represents how the internal state changes with time is defined in advance.

In a device that predicts the movement destination and grips there, as in Patent Documents 1 and 2, if the prediction accuracy is low the object may deviate from the predicted position and gripping may fail.
Patent Document 1 determines, before shifting to the gripping operation, that the deviation between the arm and the object has become constant (distinguishing a complete tracking state from an incomplete tracking state). This determination improves gripping accuracy because the gripping operation is performed only when the movement states of the arm and the object coincide with high accuracy. However, the determination condition applies only to constant velocity linear motion, constant velocity circular motion, and constant acceleration linear motion; it cannot be applied to other motions.

The accuracy of the predicted position of the object drops mainly in the following situations:
(1) Immediately after the start of measurement.
(2) When the object has been unmeasurable for some time.
(3) When the object moves differently from the predetermined state transition model.

  The present invention was developed to solve the above problems. Its object is to provide a moving body gripping apparatus and method that can predict the motion of an object (for example, a workpiece) from its internal state (for example, position, posture, velocity, angular velocity) even for motions other than constant velocity linear motion, constant velocity circular motion, or constant acceleration linear motion, and that can make the robot follow the object and grip it reliably without being affected by drops in prediction accuracy.

According to the present invention, there is provided a moving body gripping device for gripping a relatively moving object with a robot, comprising:
a measuring device for measuring the position or orientation of the object;
a state estimation device that performs state estimation based on a state transition model and an observation model from the measurement result of the object to obtain an estimated value of the internal state of the object, calculates the error between a predicted value of the position and orientation of the object obtained from the estimated value and the actually measured value, calculates an accuracy index value using that error and a covariance matrix that predicts the variance of the error, and determines whether the object can be gripped based on the accuracy index value;
and a robot control device that, when gripping is possible, controls the robot so as to perform a gripping operation by predicting the movement destination of the object based on the estimation result of the internal state.

According to the present invention, there is also provided a moving body gripping method for gripping a relatively moving object with a robot, in which:
(A) the position or posture of the object is measured by a measuring device,
(B) from the measurement results, a state estimation device performs state estimation based on a state transition model and an observation model, acquires an estimated value of the internal state of the object, calculates the error between a predicted value of the position and orientation of the object obtained from the estimated value and the actually measured value, and calculates an accuracy index value using a covariance matrix that predicts the variance of the error, and (C) whether the object can be gripped is determined based on the accuracy index value,
(D) when gripping is possible, the robot control device makes the robot perform a gripping operation by predicting the movement destination of the object based on the estimation result of the internal state,
(E) when gripping is impossible, steps (A) to (C) are repeated.

  According to the apparatus and method of the present invention, the measuring device and the state estimation device measure the position or orientation of the object and estimate its internal state, so the motion of the object (for example, a workpiece) can be predicted from its internal state (for example, position, posture, velocity, angular velocity) even for motions other than constant velocity linear motion, constant velocity circular motion, or constant acceleration linear motion.

  In addition, since the state estimation device estimates the internal state of the object including the accuracy index value based on the state transition model and the observation model, and determines from the accuracy index value whether the object can be gripped, the robot control device can avoid gripping failures even when the prediction accuracy drops.

Furthermore, when gripping is possible, the robot control device predicts the movement destination of the object based on the estimation result of the internal state and performs the gripping operation, so the robot can follow the object and grip it reliably.

FIG. 1 is a diagram of an embodiment of a robot system having a moving body gripping device according to the present invention. FIG. 2 is a diagram of an embodiment of the moving body gripping device according to the present invention. FIG. 3 is an overall flowchart of the moving body gripping method according to the present invention. FIG. 4 is a diagram showing the relationship between elapsed time and the accuracy index value E in an example.

  Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In each figure, common parts are given the same reference numerals, and duplicate descriptions are omitted.

FIG. 1 is a diagram showing an embodiment of a robot system having a moving body gripping device according to the present invention.
In this figure, 1 is an object (workpiece), 2 is a robot, 3 is a robot arm, 4 is a hand, 5a is a first camera fixed to the hand 4, 5b is a second camera fixed at an external position, and 10 is the moving body gripping device.
In this robot system, the workpiece 1, which moves while swinging like a pendulum, is measured by the cameras 5a and 5b, the robot 2 is controlled to follow the workpiece 1, and the workpiece 1 is gripped by the hand 4.

FIG. 2 is a diagram showing an embodiment of the moving body gripping device according to the present invention.
In this figure, the moving body gripping device 10 includes a measuring device 12, a state estimating device 14, and a robot control device 20. The state estimation device 14 includes a data storage device 16.

The measuring device 12 measures the position and orientation of the object 1.
In this example, the measurement device 12 includes two measurement devices (a first measurement device 12a and a second measurement device 12b) connected to the first camera 5a and the second camera 5b, respectively.
That is, in this example, two cameras and two measuring devices are used; images of the object 1 are captured by the first camera 5a and the second camera 5b, and the position and orientation of the object 1 are obtained by image processing. The obtained position and orientation of the object 1 are stored in the data storage device 16.
Note that the measuring device 12 is not limited to this configuration; a measuring device that can measure the three-dimensional position of the object 1 by itself, such as a laser radar, may be used.

  The state estimation device 14 estimates the internal state of the object 1, including the accuracy index value E (described later), from the measurement results of the object 1 based on the state transition model and the observation model, and determines from the accuracy index value E whether the object 1 can be gripped.

In this example, a state estimation algorithm based on the Kalman filter is implemented in the state estimation device 14.
This state estimation algorithm defines the following four items as the "state transition model":
Internal state at time t = 0: X_0
Covariance of the initial state X_0: Cov(X_0)
State transition equation: X_{t+Δt} = f(X_t, Δt) … (1)
Process noise per unit time (covariance): Q

In this algorithm, state variables such as the swing angle θ and angular velocity Δθ of the pendulum from which the object 1 is suspended, and the fulcrum positions x, y, z, are collected into an internal state X. When the model is defined, the initial conditions are not always known accurately, so a covariance Cov(X_0) is defined that predicts how large the difference between the assumed initial condition and the actual one may be.
The state transition equation (1) expresses how the internal state X changes with time. In this example, it is defined from the equation of motion of the pendulum and the assumption that the fulcrum moves in a straight line at constant velocity.

The defined state transition equation (1) does not necessarily match the actual motion of the object 1. A covariance Q is therefore defined that indicates how much the state transition calculation over a unit time (Δt = 1) may deviate from the actual motion.
The Kalman filter also defines the "observation model" of equations (2) and (3), taking the position and orientation of the object 1 calculated by the measuring device 12 as the measurement value Y.
Observation equation: Y_t = g(X_t) … (2)
Observation noise (covariance): Cov(S_t) … (3)

  The observation equation (2) relates the internal state X to the measured value Y. The observation noise (3) is a covariance representing how much measurement error is included in the measurement at time t, and is determined by the resolution of the cameras and the direction of their viewpoints. It is not a fixed value: the measuring device 12 calculates it for each image and passes it to the state estimation device 14 together with the measurement value.
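As a concrete illustration, the sketch below writes one possible state transition equation f and observation equation g for this pendulum example in Python. The state layout X = (θ, Δθ, x, y, z) follows the description above, but the pendulum length, the constant fulcrum velocity, and the planar observation mapping are hypothetical choices made only for illustration, not values from the patent.

```python
import numpy as np

# Hypothetical constants (not from the patent): pendulum length,
# constant fulcrum velocity, and gravitational acceleration.
L_PENDULUM = 0.5                              # [m]
FULCRUM_VEL = np.array([0.01, 0.0, 0.0])      # [m per unit time]
G = 9.8                                       # [m/s^2]

def f(X, dt):
    """State transition equation (1): X_{t+dt} = f(X_t, dt).

    X = [theta, dtheta, x, y, z]: swing angle, angular velocity, and
    fulcrum position of the pendulum from which the object hangs.
    Forward-Euler integration of the pendulum equation of motion;
    the fulcrum moves in a straight line at constant velocity.
    """
    theta, dtheta = X[0], X[1]
    ddtheta = -(G / L_PENDULUM) * np.sin(theta)
    return np.array([
        theta + dtheta * dt,
        dtheta + ddtheta * dt,
        X[2] + FULCRUM_VEL[0] * dt,
        X[3] + FULCRUM_VEL[1] * dt,
        X[4] + FULCRUM_VEL[2] * dt,
    ])

def g(X):
    """Observation equation (2): Y_t = g(X_t).

    Maps the internal state to the measured object position: the
    fulcrum position plus the pendulum displacement, assumed here to
    lie in the x-z plane.
    """
    theta = X[0]
    return np.array([
        X[2] + L_PENDULUM * np.sin(theta),
        X[3],
        X[4] - L_PENDULUM * np.cos(theta),
    ])
```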

  The robot control device 20 has a function of controlling the robot so that, when gripping is possible, it performs a gripping operation by predicting the movement destination of the object 1 based on the estimation result of the internal state.

  The robot control device 20 receives hand position information from the robot 2 and transmits a hand speed command value and a hand opening/closing command value. These transmissions and receptions are performed at a constant control cycle (4 ms).

Hereinafter, the moving body gripping method of the present invention will be described with reference to FIG. 3.
In the method of the present invention, (A) first, the position and orientation of the object 1 are measured by the measuring device 12.
(B) Next, from the measurement results, the state estimation device 14 estimates the internal state of the object 1, including the accuracy index value E, based on the state transition model, and (C) determines from the accuracy index value E whether the object 1 can be gripped.
(D) Next, when gripping is possible, the robot control device 20 makes the robot 2 perform a gripping operation by predicting the movement destination of the object 1 based on the estimation result of the internal state. (E) When gripping is impossible, the robot control device 20 moves the robot 2 so that measurement of the position and orientation of the object 1 can continue.

  The accuracy index value E is computed from the error between a predicted value of the position and orientation of the object 1 and the actually measured value, and from a covariance matrix that predicts the variance of that error based on the state transition model and the observation model.

In (C), it is preferable to determine that gripping is possible when the square of the Mahalanobis distance M_t, obtained by normalizing the error by the covariance matrix, is smaller than a first threshold and the determinant of the covariance matrix is smaller than a second threshold.
The first and second thresholds are arbitrary values set in advance.

Alternatively, in (C), it may be determined that gripping is possible when the square of the Mahalanobis distance M_t, obtained by normalizing the error by the covariance matrix, is smaller than a third threshold and the trace of the covariance matrix is smaller than a fourth threshold.
The third and fourth thresholds are arbitrary values set in advance; both checks are sketched below.
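Both determination rules translate directly into code. The sketch below assumes the Mahalanobis distance M of expression (15) and the covariance matrix have already been computed; the threshold values are arbitrary, as stated above.

```python
import numpy as np

def can_grip_by_determinant(M, cov, th1, th2):
    """First rule: M^2 below the first threshold and the determinant
    of the covariance matrix below the second threshold."""
    return M**2 < th1 and np.linalg.det(cov) < th2

def can_grip_by_trace(M, cov, th3, th4):
    """Second rule: M^2 below the third threshold and the trace of
    the covariance matrix below the fourth threshold."""
    return M**2 < th3 and np.trace(cov) < th4
```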

Hereinafter, the operation of the moving body gripping device of the present invention will be described.
(a1) The measuring device 12 (first measuring device 12a and second measuring device 12b) transmits a shutter signal to each of the two cameras (first camera 5a and second camera 5b) at an arbitrary timing and acquires the captured images.
(a2) The measuring device 12 obtains information on the position and orientation of the object 1 (measurement value Y) from the images by image processing, for example by extracting a white region in the image and finding its center of gravity.
(a3) The time at which the shutter signal was transmitted from the measuring device 12 is recorded as ty. The observation noise Cov(S) is also calculated from the orientations and resolutions of the cameras 5a and 5b.
(a4) The state estimation device 14 corrects the internal state X by comparing the measured value Y with the predicted value Ym of Y given by the state transition model and the observation model. It also calculates an accuracy index value E representing the accuracy with which the position of the object 1 is predicted.
(a5) The robot control device 20 refers to the result of the state estimation device 14 every control cycle, evaluates the accuracy index value E, and determines whether to start the gripping operation.
(a6) When the gripping operation is started, the future position of the object 1 is predicted and the hand speed command value is calculated aiming at the predicted gripping position. When the gripping operation is not performed, the hand speed command value is calculated so that the cameras 5a and 5b can keep the object 1 in view.

The processing of the state estimation device 14 in (a4) above is detailed below; it is a general state estimation procedure using a Kalman filter.
(b1) The time width Δt for the state transition is calculated from the measurement time ty and the current model time t. The initial value of the model time t is the time at which measurement started. The measurement time may also be defined as the elapsed time from the start, with the initial model time set to zero.
Δt = ty − t … (4)

(b2) The internal state at the measurement time ty is predicted. The predicted internal state is given by the state transition model defined above:
X_{t+Δt} = f(X_t, Δt) … (1)

(b3) The covariance of the predicted internal state at the measurement time is calculated. Here, the matrix A_t(Δt) is the partial derivative of the state transition equation f.
The covariance Cov(X_{t+Δt|t}) of the internal state before the update (b9) is given by equation (5):
Cov(X_{t+Δt|t}) = A_t(Δt) · Cov(X_t) · A_t(Δt)^T + Q·|Δt| … (5)
The partial differential matrix A_t(Δt) of the state transition equation is
A_t(Δt) = ∂f(X_t, Δt)/∂X_t … (6)

(b4) Using the predicted internal state and the observation equation, the predicted value Ym_{t+Δt} of the measurement Y is obtained by equation (7):
Ym_{t+Δt} = g(X_{t+Δt}) … (7)

(b5) From the covariance of the predicted internal state, the covariance Cov(Y_{t+Δt}) of the predicted value Ym is obtained by equation (8). Here, the matrix H_t is the partial derivative of the observation equation g.
Cov(Y_{t+Δt}) = H_t · Cov(X_{t+Δt|t}) · H_t^T … (8)
The partial differential matrix H_t of the observation equation is
H_t = ∂g(X_t)/∂X_t … (9)

(b6) Adding the camera measurement error to the covariance Cov(Y) of the predicted value Ym gives the covariance V_{t+Δt} of equation (10), where Cov(S_{t+Δt}) is the observation noise at time ty (= t + Δt) calculated by the measuring device 12.
Because the camera observation noise is added to the covariance of the predicted value Ym, V_{t+Δt} represents the expected magnitude of the difference between the predicted value Ym and the observed value Y.
V_{t+Δt} = Cov(Y_{t+Δt}) + Cov(S_{t+Δt}) … (10)

(b7) The Kalman gain K_{t+Δt} is calculated by equation (11):
K_{t+Δt} = Cov(X_{t+Δt|t}) · H_t^T · V_{t+Δt}^{-1} … (11)

(b8) The internal state X_{t+Δt} is updated by equations (12) and (13). The internal state before the update is written X_{t+Δt|t}, and Y is the observed value measured by the sensor.
eY_{t+Δt} = Ym_{t+Δt|t} − Y_{t+Δt} … (12)
X_{t+Δt} = X_{t+Δt|t} − K_{t+Δt} · eY_{t+Δt} … (13)

(b9) The covariance Cov(X_{t+Δt}) of the internal state after the update is calculated by equation (14):
Cov(X_{t+Δt}) = (I − K_{t+Δt} H_t) · Cov(X_{t+Δt|t}) … (14)

  Through the processing (b1) to (b9) above, the internal state is corrected based on the observed value at time ty. As the entire system repeats the processes (a1) to (a6), the internal state therefore gradually approaches the true value (such as the actual speed of the object).
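Steps (b1) to (b9) form one extended-Kalman-filter update. A minimal sketch under stated assumptions: f, g, and Q are defined as in the model sketch above, and numerical Jacobians stand in for the analytic partial differential matrices A_t of (6) and H_t of (9).

```python
import numpy as np

def jacobian(func, X, eps=1e-6):
    """Numerical partial differentiation, standing in for the
    analytic matrices A_t (6) and H_t (9)."""
    y0 = func(X)
    J = np.zeros((y0.size, X.size))
    for i in range(X.size):
        dX = np.zeros_like(X)
        dX[i] = eps
        J[:, i] = (func(X + dX) - y0) / eps
    return J

def ekf_step(X, CovX, Y, CovS, dt, f, g, Q):
    """One pass of (b1)-(b9) for a measurement Y taken at ty = t + dt,
    where dt = ty - t per eq. (4)."""
    # (b2)-(b3): predict the state and its covariance, eqs. (1), (5)
    A = jacobian(lambda x: f(x, dt), X)
    X_pred = f(X, dt)
    CovX_pred = A @ CovX @ A.T + Q * abs(dt)
    # (b4)-(b6): predict the measurement, eqs. (7), (8), (10)
    H = jacobian(g, X_pred)
    Ym = g(X_pred)
    CovY = H @ CovX_pred @ H.T
    V = CovY + CovS
    # (b7): Kalman gain, eq. (11)
    K = CovX_pred @ H.T @ np.linalg.inv(V)
    # (b8): update the internal state, eqs. (12), (13)
    eY = Ym - Y
    X_new = X_pred - K @ eY
    # (b9): update the covariance, eq. (14)
    CovX_new = (np.eye(X.size) - K @ H) @ CovX_pred
    return X_new, CovX_new, eY, V, CovY
```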

To determine how close the internal state is to the true value, the covariances Cov(Y) and V of the predicted values obtained in (b5) and (b6) above are evaluated. These covariances represent the size of the prediction error when the position and orientation of the object are predicted using the current estimation result.

When the predefined state transition model and observation model are accurate, the difference eY between the observed value and the predicted value follows a distribution described by the covariance V. If the actual difference eY is larger than V suggests, the predefined model is not correct. For this evaluation, the Mahalanobis distance M_t of equation (15) is used:
M_t = (eY_t^T · V_t^{-1} · eY_t)^{0.5} … (15)

The state estimation device 14 therefore calculates, after the processing (b1) to (b9), an accuracy index value E representing the accuracy of the current state estimation as follows.
(b10) The accuracy index value E is calculated by one of the following (c1), (c2), or (c3).

(c1) E = exp(−M_{t+Δt}²/2) / ((2π)^m |V_{t+Δt}|)^{0.5} … (16)
Here, m is the order of eY.
(c2) E = exp(−M_{t+Δt}²/2) / (trace(Cov(Y_{t+Δt})))^{0.5} … (17)
(c3) Expression (18), obtained by summing E of expressions (16) and (17) over all past measurement results. Here, p is the number of variables in the internal state.

Note that equation (16) of (c1) is the model-fitness evaluation formula for a normal distribution, and equation (18) of (c3) is an information criterion (AIC).
Equation (17) of (c2) is characterized by using the trace of the covariance Cov(Y) as the denominator.
In equation (16), the square root of the covariance determinant represents the volume of the distribution of eY; in equation (17), the square root of the trace represents the radius of the smallest sphere that encloses the distribution.
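Expressions (15) to (17) translate directly into code; a sketch, reusing the eY, V, and Cov(Y) returned by the filter step above:

```python
import numpy as np

def mahalanobis(eY, V):
    """Mahalanobis distance M_t of eq. (15)."""
    return float(np.sqrt(eY @ np.linalg.inv(V) @ eY))

def accuracy_index_c1(eY, V):
    """Eq. (16): model fitness of a normal distribution; det(V)
    penalizes distributions with a large volume."""
    m = eY.size
    M = mahalanobis(eY, V)
    return np.exp(-M**2 / 2) / np.sqrt((2 * np.pi)**m * np.linalg.det(V))

def accuracy_index_c2(eY, V, CovY):
    """Eq. (17): same numerator, but trace(Cov(Y)) in the denominator,
    so long, narrow distributions also yield a small E."""
    M = mahalanobis(eY, V)
    return np.exp(-M**2 / 2) / np.sqrt(np.trace(CovY))
```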

Next, the processing of the robot control device 20 in (a5) and (a6) above is described.
FIG. 3 is an overall flowchart of the moving body gripping method according to the present invention. As shown in this figure, the method includes steps S1 to S12.

In S1, hand position information and the like are received from the robot 2. Communication with the robot 2 is managed by the robot 2 so that it occurs at a fixed control period, for example 4 ms. The robot control device 20 must therefore wait until data is received and complete the transmission process of S12 within that control period.
In S2, the flag F indicating that a gripping operation is in progress is checked. The flag F is initialized to false at program start.

  In S3, when no gripping operation is in progress, processes (a1) to (a4) are performed. This processing may be executed by a separate processing system at an arbitrary cycle, with the robot control device 20 referring to the latest estimation result and accuracy index value E.

  In S4, it is determined from the accuracy index value E of the current state estimation whether to perform the gripping operation or to continue the tracking operation. One of the following determination means (d1) to (d3) is used.

(d1) It is determined whether the accuracy index value E of (c1) exceeds a certain threshold (the first threshold). In equation (16), the exponential part approaches 0 as the Mahalanobis distance M_t grows, and the denominator grows as the volume of the covariance V grows. The gripping operation is therefore started only when the Mahalanobis distance M_t is small and the volume of the covariance V is small.

When two cameras (the first camera 5a and the second camera 5b) are used as in this example, the object 1 may leave the field of view of one camera, so that measurement continues with only one camera. Measurement from a single viewpoint makes the covariances Cov(Y) and V long and narrow in the viewing direction of that camera. With means (d1), the volume (determinant) can still be small even when V is elongated, so the gripping operation may be started despite the large uncertainty along the viewing direction.

(d2) It is determined whether the accuracy index value E of (c2) exceeds a certain threshold (the third threshold). The accuracy index value E of equation (17) becomes small when Cov(Y) represents a long, narrow distribution, so the gripping operation is started only when both cameras are capturing the object 1.

  (d3) Both (d1) and (d2) judge from the latest measurement result alone, but the determination may also take into account the accuracy index values of the past several points. For example, as in (c3) above, the logarithms of the past accuracy index values may be summed, or the average over a certain past time may be taken (see the sketch below).
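A sketch of these determination means; the threshold follows the FIG. 4 discussion later in the text, while the window length for (d3) is a hypothetical choice:

```python
from collections import deque

E_THRESHOLD = 80.0        # example value, cf. the FIG. 4 discussion
HISTORY_LEN = 5           # hypothetical window size for (d3)
e_history = deque(maxlen=HISTORY_LEN)

def should_grip_latest(E, threshold=E_THRESHOLD):
    """(d1)/(d2): start the gripping operation as soon as the latest
    accuracy index value E exceeds the threshold."""
    return E > threshold

def should_grip_windowed(E, threshold=E_THRESHOLD):
    """(d3): judge on the past several points, here by requiring the
    average of the recent E values to exceed the threshold."""
    e_history.append(E)
    return (len(e_history) == e_history.maxlen
            and sum(e_history) / len(e_history) > threshold)
```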

In S5, when no gripping operation is performed, the gripping operation flag F is set to false.
In S6, when no gripping operation is performed, the arm speeds Vx, Vy, and Vz are set, for example, by equations (19) to (21):
Vx = Kpx·(mx − rx) + Kdx·(mx − rx − pr_x) … (19)
Vy = Kpy·(my − ry) + Kdy·(my − ry − pr_y) … (20)
Vz = Kpz·(mz − rz) + Kdz·(mz − rz − pr_z) … (21)

  This control is PD control, where mx, my, mz are the current measured positions [mm] of the object, rx, ry, rz are the current positions [mm] of the tracking control points (points on the line of sight of the first camera 5a), pr_x, pr_y, pr_z are the position deviations (mx − rx, my − ry, mz − rz) calculated in the previous control cycle, Kp is the position control gain, and Kd is the derivative control gain.
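Per axis, equations (19) to (21) are identical in form, so a single helper suffices. A sketch; the positions and gains in the usage lines are hypothetical values:

```python
def axis_speed(m, r, pr, Kp, Kd):
    """One axis of the PD control of eqs. (19)-(21):
    V = Kp*(m - r) + Kd*((m - r) - pr).
    m: measured object position [mm], r: tracking control point [mm],
    pr: position deviation from the previous control cycle [mm]."""
    dev = m - r
    return Kp * dev + Kd * (dev - pr), dev

# x-axis usage; the returned deviation becomes pr_x in the next
# control cycle (4 ms later).
pr_x = 0.0
vx, pr_x = axis_speed(m=105.0, r=100.0, pr=pr_x, Kp=0.8, Kd=0.2)
```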

In S7, if it is determined in S4 that the gripping operation is to be performed, the future position and orientation of the object 1 are predicted using the state transition model and the observation model, and set as the gripping position A.
In S8, the gripping operation flag F is set to true when the gripping operation starts.
In S9, the target speed of the robot arm 3 is calculated so that it moves to the gripping position A during the gripping operation.
In S10, it is determined whether the gripping operation has finished; for example, it is judged finished when the robot arm 3 has reached the gripping position A and the hand 4 has closed.
In S11, the gripping operation flag F is set to false when the gripping operation has finished. While it has not finished, the flag F remains true, so the branch at S2 continues the gripping operation in the next cycle.
In S12, the calculated target speed and hand opening/closing command value are transmitted to the robot 2.

In the example above, a Kalman filter is used to estimate the motion state of the object 1, but any other state estimation method satisfying the following conditions may be used; for example, a particle filter or the least squares method can be applied.
(e1) The future state can be predicted by estimating the motion state of the object 1.
(e2) A covariance matrix representing the variance of the prediction error of a predicted future position can be calculated.

In (c1) above, the determination uses the determinant of V, but the determinant of either V or Cov(Y) may be used. Likewise, in (c2) the trace of Cov(Y) is used, but the trace of either V or Cov(Y) may be used.
That is, the "covariance matrix in which the variance of the error is predicted" may be either V or Cov(Y).

FIG. 4 shows the relationship between elapsed time and the accuracy index value E in the example.
This is the result of calculating the accuracy index value E by equation (17) of (c2) described above.

The figure shows that the accuracy index value E gradually increases from the start of measurement and stabilizes above 80 about 2 seconds after measurement starts.
Gripping failure is therefore unlikely when the start condition for the gripping operation is that the accuracy index value E of equation (17) is 80 or more.

  The accuracy index value may also rise temporarily, as seen around 1 second of elapsed time in the figure. It is therefore preferable to make the determination considering the accuracy index values of the past several points, as in (c3) above.

For example, the condition may be that the average of the past several points exceeds 80, or that the value exceeds 80 continuously for a certain time.
For instance, "the accuracy index value exceeds 80 continuously for 120 ms or longer" may be set as the start condition for the gripping operation, as in the sketch below.
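Such a start condition is a simple window check over the control cycles. A sketch, assuming the 4 ms control period stated earlier:

```python
from collections import deque

CONTROL_PERIOD_MS = 4                  # robot control cycle
WINDOW = 120 // CONTROL_PERIOD_MS      # cycles covering 120 ms
recent_ok = deque(maxlen=WINDOW)

def start_gripping(E, threshold=80.0):
    """Start condition: the eq. (17) accuracy index value E exceeded
    the threshold in every control cycle of the past 120 ms."""
    recent_ok.append(E > threshold)
    return len(recent_ok) == recent_ok.maxlen and all(recent_ok)
```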

  According to the apparatus and method of the present invention described above, the measuring device 12 and the state estimation device 14 measure the position or orientation of the object 1 and estimate its internal state, so the motion of the object 1 (for example, a workpiece) can be predicted from its internal state (for example, position, posture, speed, angular velocity) even for motions other than constant velocity linear motion, constant velocity circular motion, or constant acceleration linear motion.

  Further, the state estimation device 14 estimates the internal state of the object 1 including the accuracy index value E based on the state transition model and the observation model, and determines from the accuracy index value E whether the object 1 can be gripped. When gripping is impossible, the robot control device 20 moves the robot 2 so that, for example, measurement of the position and orientation of the object 1 can continue, so gripping failure can be avoided even when the prediction accuracy drops.

  Further, when gripping is possible, the robot control device 20 predicts the movement destination of the object 1 based on the estimation result of the internal state and performs the gripping operation, so the robot 2 can follow the object 1 and grip it reliably.

In the example above, the object 1 moves and the robot 2 is fixed, but the present invention is not limited to this case; it also applies when the robot moves and the object does not, and when both the object and the robot move.
Thus "movement of the object" in the present invention means movement of the object relative to the robot.

  The present invention is not limited to the embodiments described above; it is defined by the claims and includes all modifications within their meaning and scope.

1 workpiece (object), 2 robot,
3 robot arm, 4 hand,
5a first camera, 5b second camera,
10 moving body gripping device, 12 measuring device,
12a first measuring device, 12b second measuring device,
14 state estimation device, 16 data storage device,
20 robot control device

Claims (5)

  1. A moving body gripping device for gripping a relatively moving object with a robot, comprising:
    a measuring device for measuring the position or orientation of the object;
    a state estimation device that performs state estimation based on a state transition model and an observation model from the measurement result of the object to obtain an estimated value of the internal state of the object, calculates the error between a predicted value of the position and orientation of the object obtained from the estimated value and the actually measured value, calculates an accuracy index value using that error and a covariance matrix that predicts the variance of the error, and determines whether the object can be gripped based on the accuracy index value; and
    a robot control device that, when gripping is possible, controls the robot so as to perform a gripping operation by predicting the movement destination of the object based on the estimation result of the internal state.
  2. The mobile body gripping apparatus according to claim 1, wherein a Kalman filter is used for the state estimation.
  3. A moving body gripping method for gripping a relatively moving object with a robot, in which:
    (A) the position or posture of the object is measured by a measuring device,
    (B) from the measurement results, a state estimation device performs state estimation based on a state transition model and an observation model, acquires an estimated value of the internal state of the object, calculates the error between a predicted value of the position and orientation of the object obtained from the estimated value and the actually measured value, and calculates an accuracy index value using a covariance matrix that predicts the variance of the error, and (C) whether the object can be gripped is determined based on the accuracy index value,
    (D) when gripping is possible, the robot control device makes the robot perform a gripping operation by predicting the movement destination of the object based on the estimation result of the internal state,
    (E) when gripping is impossible, steps (A) to (C) are repeated.
  4. The moving body gripping method according to claim 3, wherein in (C) it is determined that gripping is possible when the square of the Mahalanobis distance, obtained from the difference between the measured value and the predicted value and the covariance obtained by adding the sensor observation noise to the covariance of the predicted value, is smaller than a first threshold, and the determinant of the covariance matrix is smaller than a second threshold.
  5. The moving body gripping method according to claim 3, wherein in (C) it is determined that gripping is possible when the square of the Mahalanobis distance, obtained from the difference between the measured value and the predicted value and the covariance obtained by adding the sensor observation noise to the covariance of the predicted value, is smaller than a third threshold, and the trace of the covariance matrix is smaller than a fourth threshold.
JP2011106822A 2011-05-12 2011-05-12 Moving body gripping apparatus and method Active JP5733516B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011106822A JP5733516B2 (en) 2011-05-12 2011-05-12 Moving body gripping apparatus and method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2011106822A JP5733516B2 (en) 2011-05-12 2011-05-12 Moving body gripping apparatus and method
CN201280022935.9A CN103517789B (en) 2011-05-12 2012-04-24 motion prediction control device and method
EP12782445.6A EP2708334B1 (en) 2011-05-12 2012-04-24 Device and method for controlling prediction of motion
PCT/JP2012/060918 WO2012153629A1 (en) 2011-05-12 2012-04-24 Device and method for controlling prediction of motion
US14/078,342 US9108321B2 (en) 2011-05-12 2013-11-12 Motion prediction control device and method

Publications (2)

Publication Number Publication Date
JP2012236254A JP2012236254A (en) 2012-12-06
JP5733516B2 (en) 2015-06-10

Family

ID=47459637

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011106822A Active JP5733516B2 (en) 2011-05-12 2011-05-12 Moving body gripping apparatus and method

Country Status (1)

Country Link
JP (1) JP5733516B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6026897B2 (en) * 2013-01-25 2016-11-16 本田技研工業株式会社 Working method and working device
JP6514156B2 (en) 2016-08-17 2019-05-15 ファナック株式会社 Robot controller

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0719818A (en) * 1993-06-30 1995-01-20 Kajima Corp Three-dimensional movement predicting device
JPH1124718A (en) * 1997-07-07 1999-01-29 Toshiba Corp Device and method for controlling robot
JP3988255B2 (en) * 1998-05-20 2007-10-10 株式会社安川電機 Position command method and apparatus in numerical controller
DE102008008499B4 (en) * 2008-02-11 2009-12-31 Siemens Aktiengesellschaft Method for computer-aided calculation of the movement of an object from sensor data
JP2011209203A (en) * 2010-03-30 2011-10-20 Sony Corp Self-position estimating device and self-position estimating method

Also Published As

Publication number Publication date
JP2012236254A (en) 2012-12-06


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140219

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20141219

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20150213

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150318

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150331

R151 Written notification of patent or utility model registration

Ref document number: 5733516

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
