EP4094043A1 - Method and apparatus for estimating system state - Google Patents

Method and apparatus for estimating system state

Info

Publication number
EP4094043A1
Authority
EP
European Patent Office
Prior art keywords
system state
pose
eye
hand camera
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20915254.5A
Other languages
German (de)
French (fr)
Other versions
EP4094043A4 (en)
Inventor
Martin Rupp
Marc Patrick ZAPF
Christian Knoll
Wei Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of EP4094043A1
Publication of EP4094043A4

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/20: Analysis of motion
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/292: Multi-camera tracking
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method (500) and an apparatus (600) for estimating system state in an object tracking system (100, 200, 300). Multiple images may be received from multiple cameras in the object tracking system having at least one object and a robot, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera (510). Pose information may be extracted from the multiple images (520). System state may be estimated based on the pose information, wherein the system state comprises a robot configuration and a target object pose (530).

Description

    METHOD AND APPARATUS FOR ESTIMATING SYSTEM STATE
  • TECHNICAL FIELD
  • The present application relates to the field of object tracking and, in particular, to system state estimation in an object tracking system with multiple cameras.
  • BACKGROUND
  • In an object tracking system, either eye-in-hand cameras or eye-to-hand cameras are often used for object tracking. In cases where eye-in-hand and eye-to-hand cameras are combined for collaboration, only the object can be tracked and thus only the object pose can be measured. There may therefore be a need to also track the robot base position by using both the eye-in-hand camera and the eye-to-hand camera, in order to obtain a more accurate measurement.
  • Further, for a manipulator with six degrees of freedom (6-DOF), static or mobile, it is possible to infer the tool center point (TCP) pose from the eye-in-hand camera pose. However, for a redundant mobile manipulator with more than 6 DOF, the same TCP pose may correspond to multiple configurations of the manipulator and multiple base positions. Therefore, the configuration of the manipulator and the base position and orientation of a redundant mobile manipulator cannot be determined straightforwardly from eye-in-hand camera data alone, such as the TCP pose.
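  • To illustrate the redundancy problem, the following minimal sketch (not part of the patent; link lengths and the target pose are arbitrary example values) uses a planar 3-link arm: two clearly different joint configurations reproduce exactly the same TCP pose, so the configuration cannot be recovered from the TCP pose alone.

```python
import numpy as np

# Toy planar 3-link arm with unit link lengths, used only to illustrate
# kinematic redundancy; all numeric values are arbitrary example choices.
L1 = L2 = L3 = 1.0

def tcp_pose(q):
    """Forward kinematics: joint angles q = [q1, q2, q3] -> TCP (x, y, heading)."""
    a1 = q[0]
    a2 = q[0] + q[1]
    a3 = q[0] + q[1] + q[2]
    x = L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3)
    y = L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)
    return np.array([x, y, a3])

def ik_both_branches(x, y, heading):
    """Analytic inverse kinematics returning the elbow-up and elbow-down
    joint configurations that reach the same TCP pose."""
    # Step back along the last link to get the wrist position.
    wx, wy = x - L3 * np.cos(heading), y - L3 * np.sin(heading)
    c2 = (wx**2 + wy**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    solutions = []
    for sign in (+1.0, -1.0):  # the two elbow branches
        q2 = sign * np.arccos(np.clip(c2, -1.0, 1.0))
        q1 = np.arctan2(wy, wx) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        q3 = heading - q1 - q2
        solutions.append(np.array([q1, q2, q3]))
    return solutions

q_a, q_b = ik_both_branches(2.0, 0.5, 0.0)
print(np.round(q_a, 3))   # e.g. [-0.514  1.955 -1.441]
print(np.round(q_b, 3))   # e.g. [ 1.441 -1.955  0.514]
print(np.allclose(tcp_pose(q_a), tcp_pose(q_b)))  # True: identical TCP pose
```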
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts that are further described below in the Detailed Description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • Embodiments of the present disclosure propose a method and an apparatus for estimating system state in an object tracking system. In some implementations, multiple images from multiple cameras in the object tracking system may be received, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera. Pose information may be extracted from the multiple images. System state may be estimated based on the pose information, wherein the system state comprises a robot configuration and a target object pose.
  • It should be noted that the above one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are only indicative of the various ways in which the principles of various aspects may be employed, and this disclosure is intended to include all such aspects and their equivalents.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosed aspects will hereinafter be described in connection with the appended drawings that are provided to illustrate and not to limit the disclosed aspects.
  • FIG. 1 illustrates an exemplary block diagram of an object tracking system for estimating system state according to an embodiment.
  • FIG. 2 illustrates an exemplary measurement operation of an object tracking system according to an embodiment.
  • FIG. 3 illustrates another exemplary measurement operation for a mobile manipulator of an object tracking system according to an embodiment.
  • FIG. 4 illustrates an exemplary estimation process for a data-fusion based estimation model according to an embodiment.
  • FIG. 5 illustrates a flowchart of an exemplary method for estimating system state in an object tracking system according to an embodiment.
  • FIG. 6 illustrates an exemplary apparatus for estimating system state in an object tracking system according to an embodiment.
  • FIG. 7 illustrates an exemplary controlling device for estimating system state in an object tracking system according to an embodiment.
  • DETAILED DESCRIPTION
  • The present disclosure will now be discussed with reference to several example implementations. It is to be understood that these implementations are discussed only for enabling those skilled in the art to better understand and thus implement the embodiments of the present disclosure, rather than suggesting any limitations on the scope of the present disclosure.
  • The present application provides for estimation of at least one of an object pose and a robot configuration for a mobile manipulator, including the base position and orientation in world coordinates and the mobile manipulator joint configuration, through a multi-camera fusion technique, such as fusion of pose information extracted from images of an eye-in-hand camera and an eye-to-hand camera. Herein, the pose information may be represented by pose measurements extracted from the images and may consist of 6D pose vectors. In particular, the robot base may be tracked via the eye-to-hand camera in order to directly estimate the configuration of the manipulator and the base position/orientation from camera data. Herein, the configuration of the manipulator is represented as a vector of joint angles, also referred to as joint space coordinates.
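  • The patent does not fix a concrete parameterization for the 6D pose vectors. The sketch below assumes one common choice (translation plus an axis-angle rotation vector) and converts between that vector and a 4x4 homogeneous transform, which is the form used in the chained-transform sketches that follow.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_transform(pose6d):
    """[tx, ty, tz, rx, ry, rz] (translation + axis-angle rotation vector)
    -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(pose6d[3:]).as_matrix()
    T[:3, 3] = pose6d[:3]
    return T

def transform_to_pose(T):
    """4x4 homogeneous transform -> 6D pose vector."""
    return np.concatenate([T[:3, 3], R.from_matrix(T[:3, :3]).as_rotvec()])

# Example: a measurement 0.5 m in front of a camera, rotated 90 degrees about z.
z = np.array([0.0, 0.0, 0.5, 0.0, 0.0, np.pi / 2])
assert np.allclose(transform_to_pose(pose_to_transform(z)), z)
```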
  • The approach described here utilizes state estimation based on pose measurements obtained from camera data and image processing to estimate the pose of an object (for example, a workpiece) in world coordinates and the configuration of a redundant mobile manipulator in joint coordinates. The system is built up from an eye-to-hand camera, which observes a scene containing a mobile object and a mobile robot having a movable base, a manipulator and a manipulator end-effector, and from an eye-in-hand camera mounted on the manipulator end-effector. The eye-to-hand camera may track the object as well as the base of the robot. Based on this, 6D-pose measurements relative to the eye-to-hand camera are obtained for both the object and the base. The eye-in-hand camera tracks the object and provides a 6D-pose measurement of the object relative to its own camera coordinate frame. These three pose measurements are fused by using an Unscented Kalman Filter to obtain an estimate of the object pose in world coordinates and of the robot configuration in joint angle coordinates. In an embodiment, the Unscented Kalman Filter uses a kinematic model for the object and mobile robot motions, and the measurement models are based on the kinematic relationships of the cameras and the mobile robot in the world frame.
  • In some examples, the object pose is obtained from state estimation by fusing the pose measurement of the object relative to the eye-to-hand camera coordinate frame and the pose measurement of the object relative to the eye-in-hand camera coordinate frame. Herein, the pose of the eye-to-hand camera in world coordinates is assumed to be known beforehand. In some embodiments, the robot configuration is reconstructed from all three of the above pose measurements, that is, the pose measurement of the object relative to the eye-to-hand camera coordinate frame, the pose measurement of the base position/orientation relative to the eye-to-hand camera coordinate frame and the pose measurement of the object relative to the eye-in-hand camera coordinate frame. In some embodiments, the relative camera poses may be obtained from the pose measurement of the object relative to the eye-to-hand camera coordinate frame and the pose measurement of the object relative to the eye-in-hand camera coordinate frame. Further, by combining these with the pose measurement of the robot base relative to the eye-to-hand camera coordinate frame, the configuration of the manipulator may be reconstructed even for redundant manipulators on different mobile bases that share an identical manipulator end-effector pose.
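  • As a sketch of this reconstruction, the snippet below chains 4x4 homogeneous transforms built from the three measurements. The names T_w_ethcam (known eye-to-hand camera pose in world coordinates), T_ee_eihcam (hand-eye calibration between end-effector and eye-in-hand camera) and the inverse_kinematics helper are assumptions introduced for illustration; the patent leaves these details to the estimation model.

```python
import numpy as np

def inv(T):
    """Invert a 4x4 rigid-body transform."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

def reconstruct(T_w_ethcam, z_a, z_b, z_c, T_ee_eihcam, inverse_kinematics):
    """Reconstruct world-frame poses from the three pose measurements.

    z_a: object pose relative to the eye-to-hand camera (measurement A)
    z_b: robot base pose relative to the eye-to-hand camera (measurement B)
    z_c: object pose relative to the eye-in-hand camera (measurement C)
    """
    T_w_obj = T_w_ethcam @ z_a           # object pose in world coordinates
    T_w_base = T_w_ethcam @ z_b          # base pose in world coordinates
    # Relative camera pose: world -> object (via A), then object -> eye-in-hand (via C).
    T_w_eihcam = T_w_obj @ inv(z_c)
    # End-effector pose relative to the base, then joint angles via inverse kinematics.
    T_base_ee = inv(T_w_base) @ T_w_eihcam @ inv(T_ee_eihcam)
    q = inverse_kinematics(T_base_ee)
    return T_w_obj, T_w_base, q
```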
  • Embodiments of the present invention propose a method for estimating system state in an object tracking system. Herein, the object tracking system may be used to track an object, such as a workpiece, through multiple cameras, such as at least one eye-in-hand camera and at least one eye-to-hand camera. In some embodiments, the object tracking system may include at least one object, a robot, a data-fusion based estimation model and multiple cameras. In some examples, the robot may have at least a base, a manipulator and a manipulator end-effector on which the eye-in-hand camera is mounted. A tool on the manipulator end-effector may have a tool center point (TCP). In an example, the data-fusion based estimation model may be implemented by an Unscented Kalman Filter for estimating a system state of the object tracking system. It should be understood that the Unscented Kalman Filter is taken only as an example and not as a limitation; the estimation model described herein may be implemented by any other suitable kind of estimation model. The system state may comprise a robot configuration and an object pose. In some examples, the robot configuration may comprise the base position and orientation of the base and the configuration of the manipulator. By fusing images from multiple cameras, in particular at least one eye-in-hand camera and at least one eye-to-hand camera, both the object pose and the robot configuration may be estimated straightforwardly from these cameras. That is, the robot base and the manipulator may be tracked via the eye-to-hand camera to directly estimate the configuration of the manipulator and the base position and orientation from images captured by the cameras. This is in contrast to computationally expensive backward inference of the robot configuration, comprising a configuration of a manipulator and a base position and orientation, based on joint angle measurements and/or localization data gained from other technologies such as laser localization.
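  • A minimal sketch of how the estimated system state could be represented, assuming the robot configuration is split into a base pose and a joint-angle vector; the field names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SystemState:
    """System state estimated by the object tracking system."""
    object_pose_w: np.ndarray  # 6D object pose in world coordinates
    base_pose_w: np.ndarray    # 6D base position/orientation in world coordinates
    joint_angles: np.ndarray   # manipulator configuration in joint space coordinates

    def as_vector(self) -> np.ndarray:
        """Flatten into the process state vector used by the estimation model."""
        return np.concatenate([self.object_pose_w, self.base_pose_w, self.joint_angles])

# Example: a 7-DOF manipulator on a mobile base tracking one workpiece.
state = SystemState(np.zeros(6), np.zeros(6), np.zeros(7))
print(state.as_vector().shape)  # (19,)
```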
  • FIG. 1 illustrates an exemplary block diagram of an object tracking system 100 for estimating system state according to an embodiment.
  • In FIG. 1, there are an eye-to-hand camera 102, an object 104, an estimation model 106 and a robot 108 in the object tracking system 100.
  • The eye-to-hand camera 102 may capture images of the object 104 and the robot 108, in particular of a base of the robot 108. The object 104 herein may be a mobile object, for example a workpiece conveyed by a conveyor. In some examples, the estimation model 106 may be a data-fusion based estimation model, which may be implemented through an Unscented Kalman Filter. The robot 108 may comprise at least a base, a manipulator and a manipulator end effector, not shown in FIG. 1. As shown in FIG. 1, there is an eye-in-hand camera 110 mounted on the robot 108, for example on the manipulator end effector of the manipulator. The eye-in-hand camera 110 may capture images of the object 104. The estimation model 106 may receive images from the eye-to-hand camera 102 and the eye-in-hand camera 110, fuse information from the images based on a data fusion approach for state estimation, and generate an estimated system state. It should be understood that, although the estimation model 106 is shown in FIG. 1 as separate from the robot 108, it may also be incorporated in the robot 108 if appropriate.
  • It should be further understood that, all components shown in FIG. 1 may be exemplary, and any components may be added, omitted or replaced in FIG. 1 depending on specific application requirements.
  • FIG. 2 illustrates an exemplary measurement operation of an object tracking system 200 according to an embodiment.
  • In FIG. 2, there are an object 202, an eye-to-hand camera 204, and a robot 206 comprising a robot base 208, a manipulator 210, and an eye-in-hand camera 212 mounted on a manipulator end effector.
  • The eye-to-hand camera 204, which is placed outside of, or separate from, the robot 206, captures images of the object 202, and a pose measurement A of the object 202 relative to the eye-to-hand camera 204 may be extracted from the captured images, as shown by a solid line in FIG. 2. The eye-to-hand camera 204 may also capture images of the robot base 208, and a pose measurement B of the robot base 208 relative to the eye-to-hand camera 204 may be extracted from the captured images, as shown by a dash-dot line in FIG. 2. The eye-in-hand camera 212 may capture images of the object 202, and a pose measurement C of the object 202 relative to the eye-in-hand camera 212, or to the manipulator end effector, may be extracted from the captured images, as shown by a dotted line in FIG. 2. The captured images may be fed into an estimation model, such as the estimation model 106 in FIG. 1 (not shown in FIG. 2), to estimate a system state of the object tracking system by fusing information extracted from the images, such as the pose measurements. Herein, the estimation model may be implemented by a Kalman Filter for state estimation. The estimated system state may comprise at least one of an object pose of the object 202, a configuration of the manipulator 210, and a base position and orientation of the robot base 208. The configuration of the manipulator may be represented by joint arm parameters or vectors in a joint arm space or joint space. In an embodiment, the object pose may be represented by P_Est^W in world coordinates, where the superscript W denotes the coordinate frame relative to which the pose is represented, and the subscript Est denotes an estimation of the pose P. The estimated base position and orientation may be combined with the estimated configuration of the manipulator to generate a robot configuration estimate q_Est from the Kalman Filter, where q is the vector of joint space coordinates.
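  • The patent does not prescribe how the pose measurements A, B and C are extracted from the camera images. The sketch below assumes one possible realization: each tracked body carries known 3D model points (for example, marker corners) whose detected 2D pixel locations are fed to OpenCV's solvePnP.

```python
import numpy as np
import cv2

def pose_from_image(model_points, image_points, camera_matrix, dist_coeffs):
    """Return a 6D pose measurement (translation + rotation vector) of a tracked
    body relative to the camera, from known 3D model points and their detected
    2D image locations."""
    ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed for this frame")
    return np.concatenate([tvec.ravel(), rvec.ravel()])

# Measurement A: call with the object detection in the eye-to-hand image.
# Measurement B: call with the robot base detection in the eye-to-hand image.
# Measurement C: call with the object detection in the eye-in-hand image.
```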
  • It should be understood that, although the robot base shown in FIG. 2 appears stationary, the robot base and/or the manipulator may be mobile or movable, as described below with reference to FIG. 3.
  • FIG. 3 illustrates an exemplary measurement operation for a mobile manipulator of an object tracking system 300 according to an embodiment. The mobile manipulator in FIG. 3 may have more than 6 DOF. When a robot base and/or a manipulator of a robot moves to another position or pose, it is possible that an end effector with a tool and/or an eye-in-hand camera mounted on the end effector does not move, and thus different base positions/orientations and/or different configurations of the manipulator may correspond to the same pose of the manipulator tool center point (TCP). Therefore, different base positions/orientations and/or different configurations of the manipulator cannot be inferred from the same TCP pose. According to some embodiments of the present invention, this problem may be addressed by tracking the different positions/orientations of the same robot base and the different configurations of the same manipulator via an eye-to-hand camera, as described below with reference to FIG. 3.
  • As shown in FIG. 3, in the object tracking system 300, there are an object 302, an eye-to-hand camera 304 placed outside a robot 306, the robot 306 having a base 308, a mobile manipulator 310 with a manipulator end effector, and an eye-in-hand camera 312 mounted on the manipulator end effector. The eye-to-hand camera 304 captures images of the object 302, and a pose measurement A of the object 302 relative to the eye-to-hand camera 304 may be extracted from the captured images, as shown by a solid line in FIG. 3. The eye-to-hand camera 304 may capture images of the robot base 308, and a pose measurement B of the robot base 308 relative to the eye-to-hand camera 304 may be extracted from the captured images, as shown by dash-dot lines in FIG. 3. The eye-in-hand camera 312 may capture images of the object 302, and a pose measurement C of the object relative to the eye-in-hand camera 312, or to the manipulator end effector, may be extracted from the captured images.
  • The robot base 308 may be moved to another position, such as shown by 308'. In some examples, the mobile manipulator 310 may move along with the robot base, such as shown by 310'. Alternatively or additionally, the mobile manipulator 310 may move on the robot base while the robot base is moving. That is, either the TCP of the tool on the manipulator stays in the same place relative to the base (because the manipulator joints are not actuated), or the TCP of the tool on the manipulator moves with respect to the base (by actuating the manipulator joints). Once the robot base 308 has moved, the eye-to-hand camera 304 may also capture images of the moved robot base 308', and a pose measurement B' of the moved robot base 308' relative to the eye-to-hand camera 304 may be extracted from the captured images, as shown by dash-dot lines in FIG. 3. Herein, the measurements B and B' provide different position and orientation information regarding the motion of the robot base 308, that is, the original position and orientation of the robot base 308 and the changed position and orientation of the moved robot base 308', for the state estimation operation. Therefore, in a case where the robot base and/or the manipulator has moved while the manipulator end effector or a tool remains unmoved, or the pose of the manipulator end effector or the tool is unchanged, using the pose measurements of the base and of the moved base relative to the eye-to-hand camera, instead of the TCP, allows the base position and orientation and/or the configuration of the manipulator to be determined more accurately.
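  • Assuming the measurements B and B' are available as 4x4 transforms (as in the earlier sketches), the observed base motion follows directly from them; the helper names below are illustrative.

```python
import numpy as np

def base_displacement(z_b, z_b_prime):
    """Motion of the robot base between the two eye-to-hand observations,
    expressed in the frame of the base at the first observation."""
    return np.linalg.inv(z_b) @ z_b_prime

def moved_base_in_world(T_w_ethcam, z_b_prime):
    """Pose of the moved base 308' in world coordinates."""
    return T_w_ethcam @ z_b_prime
```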
  • The captured images may be fed into an estimation model, such as the estimation model 106 in FIG. 1 (not shown in FIG. 3), and pose information or pose measurements may be extracted from such images to estimate a system state of the object tracking system by fusing the pose information extracted from the images, such as the pose measurements A, B or B' and C. Herein, the system state may comprise at least one of an object pose of the object 302, a configuration of the manipulator 310 or 310', and a base pose of the robot base 308 or 308'. The estimated base position and orientation may be combined with the estimated configuration of the manipulator to generate a robot configuration estimate through an estimation model, for example an Unscented Kalman Filter.
  • FIG. 4 illustrates an exemplary estimation process 400 for an estimation model 406 according to an embodiment. In some embodiments, the estimation model 406 may be implemented, for example, by an Unscented Kalman Filter. The estimation process is implemented through a prediction step and a correction step, which is known as a predictor-corrector approach.
  • As shown in FIG. 4, the estimation model 406 receives images from an eye-in-hand camera in block 402 and images from an eye-to-hand camera in block 404. The estimation model 406 may comprise a prediction model 408 and a correction model 410.
  • In a prediction process performed in the prediction model 408, the system state is estimated by propagating a process model 412 for one time step with a previous system state estimate as the initial condition. In the process model 412, the process state vector at time instance k+1 may be represented as follows:
  • x_{k+1} = f(x_k, u_k) + v_k        Equation (1)
  • E[v_k, v_k] = R        Equation (2)
  • where x_k denotes the process state vector at time instance k, u_k denotes an input vector at time instance k (such as joint velocities), f() denotes the process model, with respect to the parameters x_k and u_k, that is propagated through time, v_k denotes a vector of model uncertainties, and E[v_k, v_k] = R denotes the variance of the model uncertainties, which will further be denoted as R. Herein, x_{k+1} represents the system state after one model iteration.
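  • A sketch of one possible process model f(x_k, u_k), assuming the state vector stacks the object pose, the base pose and the joint angles, and that the input vector carries the corresponding velocities (object, base and joint velocities). The sample time and noise levels are placeholder values, not taken from the patent.

```python
import numpy as np

DT = 0.02  # assumed sample time in seconds

def process_model(x, u):
    """f(x_k, u_k): propagate the state for one time step.

    State layout (illustrative): x = [object pose (6), base pose (6), joint angles (n)].
    Input layout (illustrative): u = [object velocity (6), base velocity (6),
                                      joint velocities (n)].
    Poses are treated as small-motion 6D vectors and integrated directly; a full
    implementation would compose the rotations properly on SE(3).
    """
    return x + DT * u

def process_noise(n_joints, sigma_pose=1e-3, sigma_joint=1e-4):
    """Model-uncertainty covariance R of Equation (2), diagonal for simplicity."""
    return np.diag(np.concatenate([np.full(12, sigma_pose**2),
                                   np.full(n_joints, sigma_joint**2)]))
```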
  • The predicted system state and covariance matrix in the prediction process may be calculated based on a previous estimate of the system state and the system inputs, for example as follows:
  • x̂_{k+1,k} = f(x̂_{k,k}, u_k)        Equation (3)
  • P_{k+1,k} = E[(x_{k+1} - x̂_{k+1,k}) · (x_{k+1} - x̂_{k+1,k})^T]        Equation (4)
  • where x̂_{k,k} denotes the estimate of the process state vector at time instance k, x̂_{k+1,k} is an estimate for the system state that is calculated by propagating the process model for one time step, P_{k,k} denotes the covariance matrix for the process state at time instance k with respect to the prediction at time instance k, and P_{k+1,k} denotes the covariance matrix for the process state at time instance k+1 with respect to prediction k.
  • In a correction process performed in the correction model 410, a real sensor output is compared with an estimated sensor output. A sensor model 414 for the sensor output with respect to the system state is used to calculate the estimated sensor output with the state estimation from the prediction process. In the sensor model 414, a vector of predicted sensor output is calculated as follows:
  • z_k = h(x_k) + w_k        Equation (5)
  • E[w_k, w_k] = Q        Equation (6)
  • where x_k denotes the process state vector at time instance k, h() denotes the sensor model or measurement model that uses the system state to calculate the estimated sensor output, w_k denotes a vector of measurement uncertainties, and E[w_k, w_k] = Q denotes the variance of the measurement uncertainties, which will further be denoted as Q.
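  • A sketch of the sensor model h(x_k) that maps the system state to the three predicted pose measurements. It reuses the pose_to_transform / transform_to_pose helpers from the earlier sketch and assumes a forward-kinematics function and a hand-eye calibration T_ee_eihcam; these helpers are assumptions introduced for illustration, not details given in the patent.

```python
import numpy as np

def measurement_model(x, T_w_ethcam, forward_kinematics, T_ee_eihcam, n_joints):
    """h(x_k): predict the stacked measurement vector [A, B, C] from the state x.

    x stacks [object pose in world (6), base pose in world (6), joint angles (n)].
    """
    inv = np.linalg.inv
    T_w_obj = pose_to_transform(x[0:6])
    T_w_base = pose_to_transform(x[6:12])
    q = x[12:12 + n_joints]
    # Eye-in-hand camera in world coordinates: base pose, forward kinematics to
    # the end-effector, then the hand-eye calibration.
    T_w_eihcam = T_w_base @ forward_kinematics(q) @ T_ee_eihcam
    z_a = transform_to_pose(inv(T_w_ethcam) @ T_w_obj)   # object w.r.t. eye-to-hand camera
    z_b = transform_to_pose(inv(T_w_ethcam) @ T_w_base)  # base w.r.t. eye-to-hand camera
    z_c = transform_to_pose(inv(T_w_eihcam) @ T_w_obj)   # object w.r.t. eye-in-hand camera
    return np.concatenate([z_a, z_b, z_c])
```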
  • A corrected system state and a covariance matrix in a correction process performed in the correction model 410 may be calculated based on the predicted measurements and the real measurements, for example as follows:
  • x̂_{k+1,k+1} = x̂_{k+1,k} + K · (z_meas - h(x̂_{k+1,k}))        Equation (7)
  • P_{k+1,k+1} = P_{k+1,k} - K · P_xz,k+1        Equation (8)
  • where x̂_{k+1,k} denotes the prediction of the process state vector at time instance k+1, x̂_{k+1,k+1} denotes the corrected process state vector at time instance k+1, K denotes the Kalman gain, which may be computed from the cross-covariance and the measurement covariance as K = P_xz,k+1 · P_zz,k+1,k^(-1), h() denotes the sensor model or measurement model that uses the system state to calculate the estimated sensor output, z_meas is the actual measurement data, P_{k+1,k+1} denotes the covariance matrix for the process state at time instance k+1 with respect to prediction k+1, P_xz,k+1 denotes the cross-covariance matrix between the process state and the measurement vectors at time instance k+1, and P_zz,k+1,k denotes the measurement covariance at time instance k+1 with respect to prediction k. Herein, the covariance matrices relate to the uncertainties in the process model 412 and the sensor model 414 and represent how uncertainties in one state affect another system state or how state uncertainties affect the sensor outputs. The covariance matrices are used to calculate the Kalman gain.
  • Through the estimation process of the estimation model 406, the input images from the eye-to-hand camera and the eye-in-hand camera may be processed to generate an estimated system state as an output 416.
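  • The following is a compact, generic Unscented Kalman Filter sketch of the predictor-corrector cycle described above (standard Merwe-style sigma points, additive noise). It is a minimal illustration of Equations (1) to (8) under the patent's notation (R for process noise, Q for measurement noise), not the patent's actual implementation; the process and measurement functions are the assumed sketches given earlier.

```python
import numpy as np

class UnscentedKalmanFilterSketch:
    """Minimal additive-noise UKF: predict with f(x, u), correct with h(x)."""

    def __init__(self, f, h, R, Q, x0, P0, alpha=1e-3, beta=2.0, kappa=0.0):
        self.f, self.h = f, h
        self.R, self.Q = R, Q                  # process / measurement noise covariances
        self.x, self.P = x0.copy(), P0.copy()
        n = x0.size
        self.lam = alpha**2 * (n + kappa) - n
        self.Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + self.lam)))
        self.Wc = self.Wm.copy()
        self.Wm[0] = self.lam / (n + self.lam)
        self.Wc[0] = self.Wm[0] + (1.0 - alpha**2 + beta)

    def _sigma_points(self):
        n = self.x.size
        S = np.linalg.cholesky((n + self.lam) * self.P)  # matrix square root
        pts = [self.x]
        pts += [self.x + S[:, i] for i in range(n)]
        pts += [self.x - S[:, i] for i in range(n)]
        return np.array(pts)

    def predict(self, u):
        """Prediction step: propagate sigma points through the process model."""
        X = np.array([self.f(s, u) for s in self._sigma_points()])
        self.x = self.Wm @ X                               # predicted state
        dX = X - self.x
        self.P = dX.T @ (self.Wc[:, None] * dX) + self.R   # predicted covariance
        self._X = X                                        # reuse points in correct()

    def correct(self, z_meas):
        """Correction step: compare real and predicted sensor outputs."""
        Z = np.array([self.h(s) for s in self._X])
        z_pred = self.Wm @ Z
        dZ = Z - z_pred
        dX = self._X - self.x
        P_zz = dZ.T @ (self.Wc[:, None] * dZ) + self.Q     # measurement covariance
        P_xz = dX.T @ (self.Wc[:, None] * dZ)              # cross-covariance
        K = P_xz @ np.linalg.inv(P_zz)                     # Kalman gain
        self.x = self.x + K @ (z_meas - z_pred)            # corrected state
        self.P = self.P - K @ P_zz @ K.T                   # corrected covariance
        return self.x
```

  • In this setting, f would be the process_model sketch, h a closure that binds the calibration data to the measurement_model sketch, and the filter would be stepped once per set of camera measurements with predict(u_k) followed by correct(z_meas).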
  • FIG. 5 illustrates a flowchart of an exemplary method 500 for estimating a system state in an object tracking system according to an embodiment.
  • At 510, multiple images may be received from multiple cameras in the object tracking system having at least an object and a robot, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera.
  • At 520, pose information may be extracted from the multiple images.
  • At 530, system state may be estimated based on the pose information, wherein the system state comprises a robot configuration and a target object pose.
  • In an implementation, the operation of estimating the system state may further comprise: fusing multiple pose measurements in the pose information; and generating the estimated system state based on the fused pose measurements.
  • In an implementation, the method is performed by a data-fusion based estimation model in the object tracking system, and the robot includes at least a base, a manipulator, and a manipulator end-effector, and wherein the eye-in-hand camera is mounted on the manipulator end-effector.
  • In an implementation, the robot configuration comprises base position and orientation of the base and configuration of the manipulator.
  • In an implementation, the pose information comprises at least one of a pose measurement of the object relative to the eye-to-hand camera, a pose measurement of the base relative to the eye-to-hand camera and a pose measurement of the object relative to the eye-in-hand camera.
  • In an implementation, the data-fusion based estimation model is configured for: receiving input images from multiple cameras; obtaining a predicted system state by performing prediction on the input images based on a previous estimated system state; and generating a current estimated system state by correcting the predicted system state based on system state measurement.
  • In an implementation, the base is a mobile base and the manipulator is mobile when it is mounted on the mobile base.
  • It should be appreciated that the method 500 may further comprise any steps/operations for estimating system state in an object tracking system according to  the embodiments of the present disclosure as mentioned above.
  • FIG. 6 illustrates an exemplary apparatus 600 for estimating system state in an object tracking system according to an embodiment.
  • The apparatus 600 may comprise: a receiving module 610, for receiving multiple images from multiple cameras in the object tracking system having at least an object and a robot, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera; an extraction module 620, for extracting pose information from the multiple images; and an estimation module 630, for estimating system state based on the pose information, wherein the system state comprises a robot configuration and an object pose.
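  • As one way to picture the module split of the apparatus 600, the sketch below wires a receiving module, an extraction module and an estimation module together; the class and method names are illustrative only, and the camera and extractor objects are assumed placeholders.

```python
import numpy as np

class ReceivingModule:
    """Receiving module 610: collects one set of frames from the cameras."""
    def __init__(self, cameras):
        self.cameras = cameras  # e.g. {"eye_to_hand": cam0, "eye_in_hand": cam1}

    def receive(self):
        return {name: cam.grab() for name, cam in self.cameras.items()}

class ExtractionModule:
    """Extraction module 620: turns the images into pose measurements."""
    def __init__(self, pose_extractor):
        # Callable mapping an image to a 6D pose measurement
        # (e.g. a wrapper around the pose_from_image sketch).
        self.pose_extractor = pose_extractor

    def extract(self, images):
        return np.concatenate([self.pose_extractor(img) for img in images.values()])

class EstimationModule:
    """Estimation module 630: fuses the pose measurements into a system state."""
    def __init__(self, filter_):
        self.filter = filter_  # e.g. the UnscentedKalmanFilterSketch above

    def estimate(self, pose_measurements, u):
        self.filter.predict(u)
        return self.filter.correct(pose_measurements)

class Apparatus600:
    """Wires the three modules into one estimation step per camera frame set."""
    def __init__(self, receiving, extraction, estimation):
        self.receiving, self.extraction, self.estimation = receiving, extraction, estimation

    def step(self, u):
        images = self.receiving.receive()
        pose_info = self.extraction.extract(images)
        return self.estimation.estimate(pose_info, u)
```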
  • In an implementation, the estimation module further comprises: a fusion module, for fusing multiple pose measurements in the pose information; and a generation module, for generating the estimated system state based on the fused pose measurements.
  • In an implementation, the apparatus is implemented by a data-fusion based estimation model in the object tracking system, and the robot includes at least a base, a manipulator, and a manipulator end-effector, and wherein the eye-in-hand camera is mounted on the manipulator end-effector.
  • In an implementation, the data-fusion based estimation model is configured for: receiving input images from multiple cameras; obtaining a predicted system state by performing prediction on the input images based on a previous estimated system state; and generating a current estimated system state by correcting the predicted system state based on system state measurement.
  • Moreover, the apparatus 600 may also comprise any other modules configured for estimating system state in an object tracking system according to the embodiments of the present disclosure as mentioned above.
  • FIG. 7 illustrates an exemplary controlling device 700 for estimating system state in an object tracking system according to an embodiment.
  • The controlling device 700 may comprise a processor 710. The controlling device 700 may further comprise a memory 720 that is connected with the processor 710. The memory 720 may store computer-executable instructions that, when executed, cause the processor 710 to perform any operations of the methods for estimating system state in an object tracking system according to the embodiments of the present  disclosure as mentioned above.
  • The embodiments of the present disclosure may be embodied in a non-transitory computer-readable medium. The non-transitory computer-readable medium may comprise instructions that, when executed, cause a computer to perform any operations of the methods for estimating system state in an object tracking system according to the embodiments of the present disclosure as mentioned above.
  • It should be appreciated that all the operations in the methods described above are merely exemplary, and the present disclosure is not limited to any operations in the methods or sequence orders of these operations, and should cover all other equivalents under the same or similar concepts.
  • It should also be appreciated that all the modules in the apparatuses described above may be implemented in various approaches. These modules may be implemented as hardware, software, or a combination thereof. Moreover, any of these modules may be further functionally divided into sub-modules or combined together.
  • Processors have been described in connection with various apparatuses and methods. These processors may be implemented using electronic hardware, computer software, or any combination thereof. Whether such processors are implemented as hardware or software will depend upon the particular application and overall design constraints imposed on the system. By way of example, a processor, any portion of a processor, or any combination of processors presented in the present disclosure may be implemented with a microprocessor, microcontroller, digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a state machine, gated logic, discrete hardware circuits, and other suitable processing components configured to perform the various functions described throughout the present disclosure. The functionality of a processor, any portion of a processor, or any combination of processors presented in the present disclosure may be implemented with software being executed by a microprocessor, microcontroller, DSP, or other suitable platform.
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, threads of execution, procedures, functions, etc. The software may reside on a computer-readable medium. A computer-readable medium may include, by way of example, memory such as a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk, a smart card, a flash memory device, random access memory (RAM), read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a register, or a removable disk. Although memory is shown separate from the processors in the various aspects presented throughout the present disclosure, the memory may be internal to the processors (e.g., cache or register).
  • The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein. All structural and functional equivalents to the elements of the various aspects described throughout the present disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims.
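  • By way of illustration only, the following is a minimal, non-limiting sketch in Python of the predict-correct cycle described above. It is not the claimed implementation: the system state is simplified to planar positions (base position, an end-effector offset standing in for the manipulator configuration, and object position), all measurement models are kept linear, the eye-to-hand camera is assumed to sit at the world origin, and names such as DataFusionEstimator are illustrative assumptions rather than terms used in this disclosure. The third measurement row encodes, in simplified form, the kinematic chain linking the eye-in-hand camera on the end-effector to the object.

```python
import numpy as np


class DataFusionEstimator:
    """Minimal predict-correct sketch (Kalman-filter style).

    Illustrative state x stacks: base position (2), end-effector offset
    relative to the base (2, a stand-in for the manipulator configuration),
    and object position (2).
    """

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # previous estimated system state
        self.P = np.asarray(P0, dtype=float)  # state covariance
        self.Q = Q                            # process noise covariance
        self.R = R                            # measurement noise covariance

    def predict(self, F=None):
        # Prediction based on the previous estimated system state.
        F = np.eye(len(self.x)) if F is None else F  # constant-state motion model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x

    def correct(self, z, H):
        # Correction of the predicted state with the stacked (fused) pose
        # measurements, modeled as z = H x + noise.
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + self.R         # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return self.x


# Toy usage: fuse the three pose measurements, with the eye-to-hand camera
# assumed at the world origin so its readings are world coordinates.
I2, Z2 = np.eye(2), np.zeros((2, 2))
H = np.block([
    [Z2, Z2, I2],    # object pose measured by the eye-to-hand camera
    [I2, Z2, Z2],    # base pose measured by the eye-to-hand camera
    [-I2, -I2, I2],  # object pose measured by the eye-in-hand camera on the end-effector
])
z = np.array([2.0, 1.0,   # object via eye-to-hand
              0.1, 0.0,   # base via eye-to-hand
              1.8, 1.1])  # object via eye-in-hand

estimator = DataFusionEstimator(x0=np.zeros(6), P0=np.eye(6),
                                Q=1e-3 * np.eye(6), R=1e-2 * np.eye(6))
estimator.predict()
current_estimate = estimator.correct(z, H)  # current estimated system state
```

  • In a full system the state would carry 6-DoF poses and joint angles, the eye-in-hand measurement model would come from the manipulator forward kinematics, and the prediction and correction steps would therefore be nonlinear (for example, an extended or unscented filter); the sketch only shows how the three camera measurements enter one common correction step.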

Claims (16)

  1. A method for estimating system state in an object tracking system, comprising:
    receiving multiple images from multiple cameras in the object tracking system having at least an object and a robot, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera;
    extracting pose information from the multiple images; and
    estimating system state based on the pose information, wherein the system state comprises a robot configuration and an object pose.
  2. The method of claim 1, wherein estimating the system state further comprises: fusing multiple pose measurements in the pose information; and generating the estimated system state based on the fused pose measurements.
  3. The method of claim 1, wherein the method is performed by a data-fusion based estimation model in the object tracking system, and the robot includes at least a base, a manipulator, and a manipulator end-effector, and wherein the eye-in-hand camera is mounted on the manipulator end-effector.
  4. The method of claim 3, wherein the robot configuration comprises a position and orientation of the base and a configuration of the manipulator.
  5. The method of claim 3, wherein the pose information comprises at least one of a pose measurement of the object relative to the eye-to-hand camera, a pose measurement of the base relative to the eye-to-hand camera and a pose measurement of the object relative to the eye-in-hand camera.
  6. The method of claim 3, wherein the data-fusion based estimation model is configured for:
    receiving input images from multiple cameras;
    obtaining a predicted system state by performing prediction on the input images based on a previous estimated system state; and generating a current estimated system state by correcting the predicted system state based on system state measurement.
  7. The method of claim 3, wherein the base is a mobile base and the manipulator is mobile when it is mounted on the mobile base.
  8. An apparatus for estimating system state in an object tracking system, comprising: 
    a receiving module, for receiving multiple images from multiple cameras in the object tracking system having at least an object and a robot, wherein the multiple cameras comprise at least one of an eye-to-hand camera and an eye-in-hand camera;
    an extraction module, for extracting pose information from the multiple images; and
    an estimation module, for estimating system state based on the pose information, wherein the system state comprises a robot configuration and an object pose.
  9. The apparatus of claim 8, wherein the estimation module further comprises: a fusion module, for fusing multiple pose measurements in the pose information; and a generation module, for generating the estimated system state based on the fused pose measurements.
  10. The apparatus of claim 8, wherein the apparatus is implemented by a data-fusion based estimation model in the object tracking system, and the robot includes at least a base, a manipulator, and a manipulator end-effector, and wherein the eye-in-hand camera is mounted on the manipulator end-effector.
  11. The apparatus of claim 10, wherein the robot configuration comprises a position and orientation of the base and a configuration of the manipulator.
  12. The apparatus of claim 10, wherein the pose information comprises at least one of a pose measurement of the object relative to the eye-to-hand camera, a pose measurement of the base relative to the eye-to-hand camera and a pose measurement of the object relative to the eye-in-hand camera.
  13. The apparatus of claim 10, wherein the data-fusion based estimation model is configured for:
    receiving input images from multiple cameras;
    obtaining a predicted system state by performing prediction on the input images based on a previous estimated system state; and generating a current estimated system state by correcting the predicted system state based on system state measurement.
  14. The apparatus of claim 10, wherein the base is a mobile base and the manipulator is mobile when it is mounted on the mobile base.
  15. A controlling device for estimating system state in an object tracking system, comprising:
    a processor; and
    a memory connected with the processor and storing computer-executable instructions that, when executed, cause the processor to perform the method of any one of claims 1-7.
  16. A non-transitory computer-readable medium having computer-executable instructions stored thereon that, when executed, cause a computer to perform the method of any one of claims 1-7.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073737 WO2021146989A1 (en) 2020-01-22 2020-01-22 Method and apparatus for estimating system state

Publications (2)

Publication Number Publication Date
EP4094043A1 2022-11-30
EP4094043A4 EP4094043A4 (en) 2023-10-18

Family

ID=76991963

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20915254.5A Pending EP4094043A4 (en) 2020-01-22 2020-01-22 Method and apparatus for estimating system state

Country Status (3)

Country Link
EP (1) EP4094043A4 (en)
CN (1) CN115023588A (en)
WO (1) WO2021146989A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113635311B (en) * 2021-10-18 2021-12-31 杭州灵西机器人智能科技有限公司 Method and system for out-of-hand calibration of eye for fixing calibration plate

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5733518B2 (en) * 2011-05-25 2015-06-10 株式会社Ihi Motion prediction control apparatus and method
CN205691126U (en) * 2016-03-22 2016-11-16 成都电科创品机器人科技有限公司 A kind of Indoor Robot alignment system
CN106041927A (en) * 2016-06-22 2016-10-26 西安交通大学 Hybrid vision servo system and method combining eye-to-hand and eye-in-hand structures
CN108297083A (en) * 2018-02-09 2018-07-20 中国科学院电子学研究所 Mechanical arm system
KR102006291B1 (en) * 2018-03-27 2019-08-01 한화시스템(주) Method for estimating pose of moving object of electronic apparatus
CN110335309A (en) * 2019-06-28 2019-10-15 北京云迹科技有限公司 Method and device based on camera positioning robot
CN110570474B (en) * 2019-09-16 2022-06-10 北京华捷艾米科技有限公司 Pose estimation method and system of depth camera

Also Published As

Publication number Publication date
WO2021146989A1 (en) 2021-07-29
EP4094043A4 (en) 2023-10-18
CN115023588A (en) 2022-09-06

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220822

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20230918

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/277 20170101ALI20230912BHEP

Ipc: G01C 21/00 20060101AFI20230912BHEP