WO2016193781A1 - Motion control system for a direct drive robot through visual servoing - Google Patents

Motion control system for a direct drive robot through visual servoing

Info

Publication number
WO2016193781A1
WO2016193781A1 (PCT/IB2015/054096)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
visual servoing
control
fixed
camera
Prior art date
Application number
PCT/IB2015/054096
Other languages
French (fr)
Inventor
Jaime Julián CID-MONJARAZ
José Fernando REYES-CORTÉS
Original Assignee
Benemérita Universidad Autónoma De Puebla
Priority date
Filing date
Publication date
Application filed by Benemérita Universidad Autónoma De Puebla filed Critical Benemérita Universidad Autónoma De Puebla
Priority to PCT/IB2015/054096 priority Critical patent/WO2016193781A1/en
Priority to MX2015009537A priority patent/MX2015009537A/en
Publication of WO2016193781A1 publication Critical patent/WO2016193781A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39391Visual servoing, track end effector with camera image feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes


Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The present invention describes a system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising: a three-joint robot manipulator arm; a fixed web camera whose panoramic view completely covers the working area of the robot manipulator arm, to locate the end-effector and the target; and a microprocessor coupled to the three-joint robot manipulator arm and the web camera; wherein the microprocessor is configured to: perform the visual servoing based on three marked reference images, each image corresponding to the position of one of the three joints of the robot; transmit the visual servoing information pertaining to the positioning images of each of the three joints of the robot arm output by the fixed web camera; store the visual servoing information output by the fixed web camera; and calculate the coordinates of the joints based on the centroid of each marked reference image.

Description

MOTION CONTROL SYSTEM FOR A DIRECT DRIVE ROBOT
THROUGH VISUAL SERVOING
BACKGROUND. 1. Technical Field of the invention.
The present invention relates to a system for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.
2. Particulars of the invention

The positioning of robot manipulators using visual information has been an area of research for the last 30 years, and in recent years attention to this subject has grown considerably. Visual information can solve many problems that limit the applications of current robots, such as long-range exploration, automatic driving, medical robotics and aerial robots. Visual servoing refers to closed-loop position control of a robot end-effector using visual feedback. The term was introduced by Hill and Park in 1979. It represents an attractive solution for positioning and moving autonomous robot manipulators evolving in unstructured environments.
On visual servoing, Weiss and William have categorized two classes of vision-based robot control: position-based visual servoing and image-based visual servoing. In the former, features are extracted from an image and the position of the target with respect to the camera is estimated; using these values, an error signal between the current and the desired position of the robot is defined in the workspace. In the latter, the error signal is defined directly in terms of image features to control the robot end-effector. In both classes of methods, object feature points are mapped onto the camera image plane, and from these points a particularly useful class of image features for robot control is the centroid.
In the configuration between camera and robot, either a fixed camera or a camera-in-hand can be used. Fixed-camera robotic systems are characterized in that a vision system fixed in the world coordinate frame captures images of both the robot and its environment; the control objective of this approach is to move the robot end-effector in such a way that it reaches a desired target. In the camera-in-hand configuration, often called eye-in-hand, a camera is mounted on the robot end-effector and provides visual information of the environment; here the control objective is to move the robot end-effector in such a way that the projection of the static target is always at a desired location in the image given by the camera.
Since the first visual servoing systems were reported in the early 1980s, the last few years have seen an increase in published research results. An excellent overview of the main issues in visual servoing for the control of robot manipulators is given by Corke. However, few rigorous results have been obtained that incorporate the nonlinear robot dynamics. The first explicit solution of the problem formulated here was due to Miyazaki and Masutani in 1990, who introduced a control scheme delivering bounded control actions through visual servoing, using the transpose-Jacobian philosophy introduced by Takegaki and Arimoto. Kelly addresses the visual servoing of planar robot manipulators under the fixed-camera configuration. Malis proposed a new approach to vision-based robot control, called 2-1/2-D visual servoing, in 1999. In the same year, Conticelli addressed the visual servoing problem by coupling the robot's nonlinear control theory with a convenient representation of the visual information used by the robot.
Park and Lee (2003) present a visual servoing control for a ball on a plate to track its desired trajectory. Kelly proposes a novel approach applying the velocity field control philosophy to visual servoing of a robot manipulator under the fixed-camera configuration. Schramm presents a novel visual servoing approach, a controller called extended-2D (E2D) for the coordinates of the points constituting a tracked target, and provides simulation results. Malis and Benhimane (2005) present a generic and flexible system for vision-based robot control, integrating visual tracking and visual servoing approaches in a unifying framework.
The present invention addresses the positioning problem in the fixed-camera configuration through position-based visual servoing of planar robot manipulators. The main contribution is the development of a new family of position-based visual controllers supported by a rigorous local asymptotic stability analysis that takes into account the full nonlinear robot dynamics and the vision model. The control objective is defined in terms of joint coordinates deduced from visual information. The general control problem is called positioning control; this approach is particularly relevant when a video signal or images are included in the servoing loop, which remains an open and interesting problem in the scientific community. This application focuses on the positioning control of robot manipulators with visual servoing in the planar fixed-camera configuration. The solution is a newly proposed control strategy with rigorous support from automatic control techniques and experimental validation.
An important component of a robotic system is the acquisition, processing and interpretation of the information provided by the sensors. This information is used to derive signals for controlling a robot. Information about the system and its environment can be obtained through a variety of sensors such as position, speed, strength, touch or vision.
The international patent application WO 2015/058297 A1 (VAKANSKI ET AL), published on April 30th, 2015, describes a method of programming at least one robot by demonstration, comprising: performing at least one demonstration of at least one task in the field of view of at least one fixed camera to obtain at least one observed task trajectory of a manipulated object, preferably at least one set of observed task trajectories; generating a generalized task trajectory from said at least one observed task trajectory, preferably from said at least one set of observed task trajectories; and executing said at least one task by said at least one robot in the field of view of said at least one fixed camera, preferably using image-based visual control to minimize the difference between the trajectory executed during said execution and the generalized task trajectory.
US 8,879,822 B2 (MATSUMOTO), published on November 4th, 2014, describes a robot control system including a processing unit that performs visual control based on a reference image and a captured image, a robot control unit that controls a robot based on a control signal, and a storage unit that stores the reference image and a marker. The storage unit stores, as a reference image, a reference image with the marker, where the marker is located in an area of a workpiece or a robot hand. The processing unit generates, based on the captured image, an image captured with the marker, where the marker is located in an area of the workpiece or a robot hand; performs visual control based on the reference image with the marker and the image captured with the marker; generates the control signal; and outputs the control signal to the robot control unit.
Although in recent years the amount of research on the control of robot manipulators has increased, most works only exhibit simulation results and very few include an experimental evaluation. Positioning controllers have been developed, but positioning controllers with visual servoing are practically nonexistent. This is a direct consequence of the lack of adequate experimental platforms, as well as of the difficulty of obtaining an accurate dynamic model. The simulation of a particular control algorithm can be very useful in the initial stages of design; however, simulation results are frequently incomplete because practical aspects are not taken into account, so the simulation has limited value. For example, in most simulations of robot manipulator controllers, sensor noise, friction phenomena and actuator dynamics are neglected. Validating a control algorithm experimentally provides the means to ensure its success in real-world applications. Thus, a proper testing system is a critical step towards validating new and existing control algorithms. It should be noted that it is easier to obtain experimental results than simulation results. The first explicit solution to the issue of positioning control by visual servoing was contributed by Miyazaki and Masutani in 1990, by modeling the vision system as a rotation matrix and following the design philosophy of the so-called transposed Jacobian controller proposed by Takegaki and Arimoto in 1981. More recently, a more realistic model of the vision system incorporates a perspective projection based on the geometrical optics of the lenses.
These developments are incipient, though. No controllers exist in the state of the art for the motion of a direct drive robot with visual servoing in a fixed-camera configuration that are based on Lyapunov stability analysis, as described in greater detail below.
SUMMARY OF THE INVENTION

One example of an object of the present invention is to provide a system for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.
Another example of an object of the invention is to model the fixed-camera configuration for the direct drive planar robot manipulator and CCD camera-type vision system.
Another example of an object of the invention is to propose new control algorithms using visual servoing, which can be implemented, configured and communicatively coupled to computing subsystems.
Another embodiment of the present invention provides a position-based visual-servoing control scheme that uses the robot's full dynamics and the vision model to show asymptotic stability of the equilibrium point of the closed-loop equation. Inverse kinematics is used to obtain the desired joint angles, and the joint angles themselves, from the calculated centroid positions.
The above objects are achieved by providing a system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising: a three-joint robot manipulator arm; a fixed web camera whose panoramic view completely covers the working area of the robot manipulator arm, to locate the end-effector and the target; and a microprocessor coupled to the three-joint robot manipulator arm and the web camera; wherein the microprocessor is configured to: perform the visual servoing based on three marked reference images, each image corresponding to the position of one of the three joints of the robot; transmit the visual servoing information pertaining to the positioning images of each of the three joints of the robot arm output by the fixed web camera; store the visual servoing information output by the fixed web camera; and calculate the coordinates of the joints based on the centroid of each marked reference image. Other features and advantages will become apparent from the following detailed description, taken together with the attached drawings, which illustrate by way of example the characteristics of various embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be completely understood from the detailed description given below and the attached drawings, which are given only by way of illustration and example and therefore do not limit the aspects of the present invention. In the drawings, identical reference numbers identify similar elements or actions. The sizes and relative positions of the elements in the drawings are not necessarily drawn to scale. For example, the shapes of the various elements and angles are not drawn to scale, and some of these elements are enlarged and located arbitrarily to improve the legibility of the drawing. In addition, the particular shapes of the elements as drawn are not intended to convey any information concerning the actual shape of the particular elements and have been selected only to facilitate their recognition in the drawings, wherein:
Figure 1 shows a block diagram of the experimental platform in accordance with the present invention.
Figure 2 shows a scheme of the fixed-camera configuration in accordance with the present invention.
Figure 3 shows a block diagram of the image acquisition and processing in accordance with the present invention.
Figure 4 shows a block diagram of the program in Simulink, in accordance with the present invention.
Figure 5 shows a display of the centroids in accordance with the present invention.
Figure 6 shows the robot manipulator and the vision system in accordance with the present invention.
Figure 7 graphically shows the visual joint errors with a tanh controller in accordance with the present invention.
Figure 8 graphically shows the torque applied with the tanh controller in accordance with the present invention.
Figure 9 graphically shows the visual joint errors with an arctan controller in accordance with the present invention.
Figure 10 graphically shows the torque applied with the arctan controller in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION
Several aspects of the present invention are described in more detail below, with reference to the attached drawings (figures, diagrams and graphs), in which variations and aspects of the present invention are shown. These aspects may, however, be realized in many different forms and should not be construed as limiting the variations of the present invention; rather, the variations are provided so that this description is complete in its illustrative embodiments and fully conveys their scope to those skilled in the art.
Unless otherwise defined, all the technical and scientific terms used in this document have the same meaning as generally understood by a person skilled in the art to which aspects of the present invention belong. The apparatuses, systems and examples provided in this document are for illustrative purposes only and are not intended to be limiting.
To the extent that mathematical models are capable of reproducing the magnitudes reported in experiments, they can be considered for modeling various natural processes. Accordingly, the present invention is considered a model for controlling the motion of a direct drive robot via controllers with visual servoing for a fixed-camera configuration.
Figure 1 shows the experimental platform implemented in the present invention, which comprises the following parts: a) the first part is related to the description of the experimental platform for experimental purposes in open architecture, i.e. the hardware or physical components; and b) the second part deals with the dynamic model of the experimental manipulator and the visual model focused on positioning control in joint coordinates, i.e. the software implemented, configured and communicatively coupled to computer subsystems.
1. Robotic system model
1.1 Robot Manipulator.
The robotic system described in the present invention comprises a direct drive robot with three degrees of freedom and a CCD camera placed in the working area of the robot in the fixed-camera configuration.
For purposes of the present invention, Robot shall mean a direct drive mechanical manipulator which is reprogrammable and constituted by a serial sequence of rigid links connected together by rotary joints, and wherein the device used to equip the robot with input and output capabilities is the MFIO-3A motion control board by the manufacturer Precision MicroDynamics.
1.2 Robot Dynamics.
In mechanical, electrical, thermal and mechatronic systems whose dynamic behavior is described mathematically, the Euler-Lagrange equations are commonly used; such systems are referred to as Euler-Lagrange systems. They are characterized by physical quantities inherent to the system, such as kinetic energy, potential energy, friction forces and applied external forces. One widely used subclass of Euler-Lagrange systems comprises those whose kinetic energy can be expressed mathematically as a quadratic form and whose potential energy depends only on position. Robot manipulators belong to this subclass.
Since the dynamic model of a robot manipulator depends on its geometry as well as on the type of joints used (rotary or linear), such a model may be useful in the manipulator's mechanical design stage. The model, which reveals the dynamic behavior of the manipulator, is the basis for the design of model-based controllers, since for purposes of scientific research into new control algorithms for robot manipulators the dynamic model has properties that are very helpful when analyzing stability and robustness. As in other problems of automatic control, in the case of robot manipulators the concept of stability is essential. The dynamic model of a robot manipulator, state-space theory and Lyapunov methods provide adequate means to design new model-based control laws with stability and robustness. The dynamic model of a robot manipulator plays an important role in simulating motion, analyzing the manipulator's structure and designing control algorithms. Furthermore, it can be shown that as the number of degrees of freedom of a manipulator increases, the complexity of these equations increases. The dynamic equation of a robot with n degrees of freedom is based on the Euler-Lagrange methodology and is given by:
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau \tag{1}$$
where $q$ is the $n \times 1$ vector of joint positions, $\dot{q}$ is the $n \times 1$ vector of joint velocities, $\tau$ is the $n \times 1$ vector of applied torques, $M(q)$ is the manipulator's $n \times n$ symmetric positive definite inertia matrix, $C(q,\dot{q})$ is the $n \times n$ matrix of centrifugal and Coriolis forces, and $g(q)$ is the $n \times 1$ vector of gravitational torques. It is assumed that the robot links are joined together with revolute joints. Although the equation of motion (1) is complex, it has several fundamental properties which can be exploited to facilitate the control system design. For the new control scheme, the following important property is used:
Property 1: The matrix $C(q,\dot{q})$ and the time derivative $\dot{M}(q)$ of the inertia matrix satisfy [12]:

$$\dot{q}^T \left[ \tfrac{1}{2}\dot{M}(q) - C(q,\dot{q}) \right] \dot{q} = 0 \quad \forall\, q, \dot{q} \in \mathbb{R}^n \tag{2}$$
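Property 1 can be checked numerically for a planar two-link arm. The following is a minimal C sketch under assumed, hypothetical link parameters (the patent does not list the inertial parameters of the experimental robot); it builds the standard two-link $M(q)$ and $C(q,\dot{q})$ and verifies that $\dot{q}^T[\tfrac{1}{2}\dot{M}(q) - C(q,\dot{q})]\dot{q}$ vanishes.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical link parameters (NOT the experimental robot's values). */
static const double m1 = 23.9, m2 = 3.88;   /* link masses [kg]       */
static const double l1 = 0.45;              /* link 1 length [m]      */
static const double lc1 = 0.09, lc2 = 0.05; /* centers of mass [m]    */
static const double I1 = 1.27, I2 = 0.09;   /* link inertias [kg m^2] */

/* Inertia matrix M(q) of the planar two-link arm, row-major 2x2. */
static void inertia(const double q[2], double M[4]) {
    double c2 = cos(q[1]);
    M[0] = m1*lc1*lc1 + m2*(l1*l1 + lc2*lc2 + 2.0*l1*lc2*c2) + I1 + I2;
    M[1] = m2*(lc2*lc2 + l1*lc2*c2) + I2;
    M[2] = M[1];
    M[3] = m2*lc2*lc2 + I2;
}

/* Centrifugal and Coriolis matrix C(q, qd). */
static void coriolis(const double q[2], const double qd[2], double C[4]) {
    double h = -m2*l1*lc2*sin(q[1]);
    C[0] = h*qd[1];   C[1] = h*(qd[0] + qd[1]);
    C[2] = -h*qd[0];  C[3] = 0.0;
}

int main(void) {
    double q[2]  = {0.4, -1.1};
    double qd[2] = {0.7,  0.3};
    double M[4], C[4], Md[4], A[4], r;
    int i;

    inertia(q, M);
    coriolis(q, qd, C);
    /* Only the cos(q2) terms of M(q) vary in time:
     * d/dt cos(q2) = -sin(q2)*qd2. */
    Md[0] = -2.0*m2*l1*lc2*sin(q[1])*qd[1];
    Md[1] = Md[2] = -m2*l1*lc2*sin(q[1])*qd[1];
    Md[3] = 0.0;

    /* Property 1: qd^T [ (1/2)Mdot(q) - C(q,qd) ] qd = 0. */
    for (i = 0; i < 4; i++) A[i] = 0.5*Md[i] - C[i];
    r = qd[0]*(A[0]*qd[0] + A[1]*qd[1]) + qd[1]*(A[2]*qd[0] + A[3]*qd[1]);
    printf("det M = %g (positive), residual = %g (zero up to rounding)\n",
           M[0]*M[3] - M[1]*M[2], r);
    return 0;
}
```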
1.3 Direct kinematic model.

Direct kinematics is a vectorial function $f: \mathbb{R}^n \to \mathbb{R}^m$ that relates the joint coordinates $q$ to Cartesian coordinates, where $n$ is the number of degrees of freedom and $m$ is the dimension of the Cartesian coordinate frame. The position $x_R \in \mathbb{R}^3$ of the end-effector of the robot with respect to the robot coordinate frame, in terms of the joint positions, is given by $x_R = f(q)$.
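As an illustration, here is a minimal C sketch of $x_R = f(q)$ for a planar two-link arm. The convention of measuring joint angles from the downward vertical is an assumption of the sketch (it is consistent with the $\sin$ terms of the gravity vector (16) reported later), not something the patent states explicitly.

```c
#include <math.h>

/* Direct kinematics x_R = f(q) of a planar two-link arm with link
 * lengths l1, l2, joint angles measured from the downward vertical
 * (assumed convention). */
void direct_kinematics(const double q[2], double l1, double l2,
                       double xR[2]) {
    xR[0] =  l1*sin(q[0]) + l2*sin(q[0] + q[1]);  /* horizontal component */
    xR[1] = -l1*cos(q[0]) - l2*cos(q[0] + q[1]);  /* vertical component   */
}
```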
2. Vision system.
The present invention is directed to the control of planar robot manipulators using visual information provided by a webcam. The camera technology most widely used is the CCD (charge-coupled device). These cameras are based on a single silicon chip built using standard lithographic processes; arrays of 4096x4096 image elements (pixels) of five to six microns have been built.
The goal of a vision system is to create a model of the real world from images. A vision system recovers useful information about a scene from its two-dimensional projections. Since images are two-dimensional projections of the three-dimensional world [32], the information not directly available must be recovered via mapping; to recover such information, knowledge about the objects in the scene and their geometric projection [33] is required. The subject of sensors comprises the image capture devices, which are the camera and the lens. This recovery requires many data on a plane; see Figure 2.
2.1 Centroid-based servoing
The position of an object along an image sequence can be given by the centroid of said object, which is obtained by a cluster analysis dependent on prior segmentation and recognition operations. For centroid-based object tracking to be acceptable, other general procedures must be performed first, such as thresholding, segmentation and recognition. While these contribute to improving the definition of the objects and therefore their tracking, they also substantially increase the computational cost. Because of this, the process turns out to be too heavy for a general-purpose processor, so these tasks are often performed on components specially created for this purpose. This centroid-based tracking method requires previous stages, commonly segmentation stages; once segmentation is performed, the centroid of the region representing the object to be followed is calculated by determining its moments over the thresholded pixels of the region, indexed by the rows and columns of the image. With this result available, it is possible to determine the centroid coordinates, as the sketch below illustrates.
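A minimal C sketch of the moment-based centroid computation follows. It assumes the image has already been binarized (nonzero pixels belong to the object); the function name and interface are illustrative, not taken from the patent.

```c
#include <stddef.h>

/* Centroid of a binarized image via zeroth and first order moments:
 * u_c = m10/m00, v_c = m01/m00. img is row-major; nonzero = object pixel.
 * Returns 0 on success, -1 if the image contains no object pixels. */
int centroid(const unsigned char *img, int rows, int cols,
             double *u_c, double *v_c) {
    long m00 = 0, m10 = 0, m01 = 0;
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            if (img[(size_t)r*cols + c]) {
                m00 += 1;   /* area (zeroth moment) */
                m10 += c;   /* sum of columns       */
                m01 += r;   /* sum of rows          */
            }
    if (m00 == 0) return -1;
    *u_c = (double)m10 / (double)m00;   /* column (u) coordinate */
    *v_c = (double)m01 / (double)m00;   /* row (v) coordinate    */
    return 0;
}
```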
2.2 Vision model.
The present invention employs a Cartesian coordinate system $\Sigma_R = \{R_1, R_2, R_3\}$ placed at the base of the robot, where axes $R_1$ and $R_2$ span the working plane of the robot. Another Cartesian coordinate system $\Sigma_E = \{E_1, E_2, E_3\}$ is placed at the robot end-effector, the origin of which is determined by the direct kinematics $x_R$. The CCD camera has an associated reference system, denoted by $\Sigma_C = \{C_1, C_2, C_3\}$, the origin of which is located at the intersection of the optical axis and the center of the lens. The coordinates of a point with respect to this reference system are expressed as $x_C$. The relative location between the robot reference system $\Sigma_R$ and the camera system $\Sigma_C$ is represented by the vector $O_C = [o_{C_1}, o_{C_2}, o_{C_3}]^T$.
The target has a point in the Cartesian system $\Sigma_T = \{T_1, T_2, T_3\}$, whose origin is referenced to its geometric center. The position of the object frame with respect to $\Sigma_R$ is denoted by $o_T = [o_{T_1}, o_{T_2}, o_{T_3}]^T$. The scene acquired by the camera is projected onto the CCD, which has an associated reference system described by $\Sigma_I = \{I_1, I_2\}$, the origin of which is found at the geometric center of the CCD. Axes $I_1$ and $I_2$ are parallel to, and point in the same direction as, axes $C_1$ and $C_2$ respectively. To obtain the coordinates of the image on the CCD plane a perspective transformation is required. It is considered that the camera has a perfectly focused optical system, free of optical aberrations, and that the optical axis intersects the geometric center of the CCD plane.
Finally, the image of the scene on the CCD is digitized and transferred to the computer screen, with a new two-dimensional coordinate system $\Sigma_D = \{U, V\}$ whose origin is located at the upper left corner of the monitor. The complete vision system for the fixed-camera configuration therefore expresses the coordinates of the image in pixels, and the vision system model is described as follows:

$$\begin{bmatrix} u \\ v \end{bmatrix} = \frac{\lambda}{\lambda - o_{C_3}} \begin{bmatrix} \alpha_u & 0 \\ 0 & \alpha_v \end{bmatrix} R(\theta) \begin{bmatrix} x_{R_1} - o_{C_1} \\ x_{R_2} - o_{C_2} \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \tag{3}$$

$$R(\theta) = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{bmatrix} \tag{4}$$

where $\alpha_u > 0$ and $\alpha_v > 0$ are the scale factors in pixels/m, $\lambda > 0$ is the focal length of the camera, $\theta$ is the rotation angle of the camera about its optical axis, and $[u_0\ v_0]^T$ is the image center in pixels.
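The following C sketch evaluates this vision model for a point in the robot plane. The structure and field names are illustrative assumptions; the arithmetic follows (3)-(4) as reconstructed above.

```c
#include <math.h>

/* Fixed-camera vision model (3)-(4): project a point xR (robot frame,
 * meters) to pixel coordinates [u v]. Field names are illustrative. */
typedef struct {
    double alpha_u, alpha_v;  /* scale factors [pixels/m]           */
    double lambda;            /* focal length [m]                   */
    double theta;             /* camera rotation about optical axis */
    double oC[3];             /* camera position in robot frame [m] */
    double u0, v0;            /* image center [pixels]              */
} CameraModel;

void project(const CameraModel *cam, const double xR[3],
             double *u, double *v) {
    /* perspective scaling along the optical axis */
    double s  = cam->lambda / (cam->lambda - cam->oC[2]);
    double dx = xR[0] - cam->oC[0];
    double dy = xR[1] - cam->oC[1];
    /* rotation R(theta) of the image plane, equation (4) */
    double rx =  cos(cam->theta)*dx + sin(cam->theta)*dy;
    double ry = -sin(cam->theta)*dx + cos(cam->theta)*dy;
    *u = s * cam->alpha_u * rx + cam->u0;
    *v = s * cam->alpha_v * ry + cam->v0;
}
```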
3. Positioning control.
The motion control objective is to determine the motor torques so that the joint position error vector tends to zero, and likewise the velocity error. This means that the joints of the robot manipulator asymptotically follow the desired motion trajectory.
3.1 Visual servoing scheme based on positioning, and stability for the fixed-camera configuration.

In this stage the stability analysis for position-based visual servoing is described. The robot task is specified in the image plane by way of values of the image features that correspond to the robot and to the positions of the object. It is assumed that the target resides in the plane $R_1$-$R_2$ shown in Figure 2. Its description with respect to the reference frame $\Sigma_D$ of the computer image (screen) is $[u_d\ v_d]^T$ and is called the desired image feature vector. The desired joint displacement $q_d$ is calculated from the inverse kinematics as a function of $[u_d\ v_d]^T$.
The problem of control by means of visual servoing for the fixed-camera configuration consists of designing a control law to calculate the applied torques $\tau$ in such a way that the image features $[u\ v]^T$, which are the coordinates of the centroid of a black disk placed at the robot's end-effector, reach the desired point in the image plane $[u_d\ v_d]^T$. The image error is defined as

$$\begin{bmatrix} \tilde{u} \\ \tilde{v} \end{bmatrix} = \begin{bmatrix} u_d - u \\ v_d - v \end{bmatrix} \tag{5}$$

The control objective is therefore to guarantee that the image error tends to zero as time advances, at least for sufficiently small initial conditions; that is, to ensure that the difference between the current position of the end-effector expressed in pixels, $[u\ v]^T$, and the desired position, also in pixels, $[u_d\ v_d]^T$, tends to zero.
To solve the visual servoing control problem, the following control scheme with gravity compensation is presented:
$$\tau = \nabla \mathcal{U}_a(K_p, \tilde{q}) - f_v(K_v, \dot{q}) + g(q) \tag{6}$$

where $\tilde{q} = q_d - q$ is the joint position error vector, $q_d$ is the vector of desired joint positions, $K_p \in \mathbb{R}^{n \times n}$ is a diagonal positive definite matrix known as the proportional gain, and $K_v \in \mathbb{R}^{n \times n}$ is a positive definite matrix known as the derivative gain. $\mathcal{U}_a(K_p, \tilde{q})$ represents the artificial potential energy and is a positive definite function, and $f_v(K_v, \dot{q})$ denotes a dissipation term, i.e.

$$\dot{q}^T f_v(K_v, \dot{q}) > 0 \quad \forall\, \dot{q} \neq 0$$
Proposition. Consider the robot dynamic model (1) together with the control law (6); then the closed-loop system is locally asymptotically stable and the targeted visual positioning is achieved.

Proof: The closed-loop system equation is obtained by combining the robot dynamic model (1) and the control scheme (6), and can be written as:

$$\frac{d}{dt}\begin{bmatrix} \tilde{q} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} -\dot{q} \\ M(q)^{-1}\left[ \nabla \mathcal{U}_a(K_p, \tilde{q}) - f_v(K_v, \dot{q}) - C(q, \dot{q})\dot{q} \right] \end{bmatrix} \tag{7}$$

which is an autonomous differential equation, and the origin of the state space is an equilibrium point. To carry out the stability analysis of equation (7), the following candidate Lyapunov function is proposed:
$$V(\tilde{q}, \dot{q}) = \frac{1}{2}\dot{q}^T M(q)\dot{q} + \mathcal{U}_a(K_p, \tilde{q}) \tag{8}$$
The first term of $V(\tilde{q}, \dot{q})$ is a positive definite function of $\dot{q}$ because $M(q)$ is a positive definite matrix. The second term of the candidate Lyapunov function (8) may be interpreted as a potential energy induced by the control law; it is also a positive definite function of the position error $\tilde{q}$, because $K_p$ is a positive definite matrix. Consequently $V(\tilde{q}, \dot{q})$ is positive definite and radially unbounded. The time derivative of the candidate Lyapunov function (8) along the trajectories of the closed-loop equation (7), using Property 1, can be written as:

$$\dot{V}(\tilde{q}, \dot{q}) = \dot{q}^T M(q)\ddot{q} + \frac{1}{2}\dot{q}^T \dot{M}(q)\dot{q} - \dot{q}^T \nabla \mathcal{U}_a(K_p, \tilde{q}) = -\dot{q}^T f_v(K_v, \dot{q}) \le 0 \tag{9}$$
which is a negative semidefinite function, and consequently stability of the equilibrium point can be concluded. To demonstrate local asymptotic stability, the autonomous nature of the closed-loop equation (7) is used to apply LaSalle's invariance principle [14] in the region $\Omega$:

$$\Omega = \left\{ \begin{bmatrix} \tilde{q} \\ \dot{q} \end{bmatrix} \in \mathbb{R}^{2n} : \dot{V}(\tilde{q}, \dot{q}) = 0 \right\} = \left\{ \tilde{q} = 0 \in \mathbb{R}^n,\ \dot{q} = 0 \in \mathbb{R}^n \right\} \tag{10}$$

Since $\dot{V}(\tilde{q}, \dot{q}) \le 0$, $V(\tilde{q}(t), \dot{q}(t))$ is a decreasing function of $t$. $V(\tilde{q}, \dot{q})$ is continuous on the compact set $\Omega$ and bounded from below; in particular, it satisfies $0 \le V(\tilde{q}(t), \dot{q}(t)) \le V(\tilde{q}(0), \dot{q}(0))$. Consequently, the trivial solution is the only solution of the closed-loop system (7) restricted to $\Omega$, and therefore it is concluded that the origin of the state space is locally asymptotically stable.
3.2 Specific case of the controller.
The purpose of this sub-phase is to take advantage of the methodology described above to derive new regulators. The following control scheme with gravity compensation, based on the hyperbolic tangent, is presented:

$$\tau = K_p \tanh(\tilde{q}) - K_v \tanh(\dot{q}) + g(q) \tag{11}$$

Proof: The closed-loop system equation is obtained by combining the robot dynamic model (1) and the control scheme (11), and can be written as:

$$\frac{d}{dt}\begin{bmatrix} \tilde{q} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} -\dot{q} \\ M(q)^{-1}\left[ K_p \tanh(\tilde{q}) - K_v \tanh(\dot{q}) - C(q, \dot{q})\dot{q} \right] \end{bmatrix} \tag{12}$$

which is an autonomous differential equation, and the origin of the state space is an equilibrium point. To carry out the stability analysis of equation (12), the following candidate Lyapunov function is proposed:

$$V(\tilde{q}, \dot{q}) = \frac{1}{2}\dot{q}^T M(q)\dot{q} + \sum_{i=1}^{n} k_{p_i} \ln\left(\cosh(\tilde{q}_i)\right) \tag{13}$$
The first term of $V(\tilde{q}, \dot{q})$ is a positive definite function of $\dot{q}$ because $M(q)$ is a positive definite matrix. The second term of the candidate Lyapunov function (13) is interpreted as a potential energy induced by the control law; it is also a positive definite function of the position error $\tilde{q}$, because $K_p$ is a positive definite matrix. Consequently $V(\tilde{q}, \dot{q})$ is positive definite and radially unbounded. The time derivative of the candidate Lyapunov function (13) along the trajectories of the closed-loop equation (12), using Property 1, can be written as:

$$\dot{V}(\tilde{q}, \dot{q}) = \dot{q}^T M(q)\ddot{q} + \frac{1}{2}\dot{q}^T \dot{M}(q)\dot{q} - \dot{q}^T K_p \tanh(\tilde{q}) = -\dot{q}^T K_v \tanh(\dot{q}) \le 0 \tag{14}$$

which is again negative semidefinite, so local asymptotic stability follows from LaSalle's invariance principle as before.
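As an illustration of how compactly the regulator (11) can be implemented, the following C sketch computes the torque for the two driven joints. The gravity terms use the values the patent reports later in equation (16), and the gains are those reported for the tanh experiments; the function names and the two-joint restriction are assumptions of this sketch.

```c
#include <math.h>

/* Sketch of the bounded tanh regulator (11) with gravity compensation,
 * tau = Kp*tanh(q_tilde) - Kv*tanh(q_dot) + g(q), for two driven joints.
 * Gains as reported in the experiments: Kp = diag{26.0, 1.8},
 * Kv = diag{12.0, 1.2}. */
static const double Kp[2] = {26.0, 1.8};
static const double Kv[2] = {12.0, 1.2};

/* Gravity torque vector, values from equation (16) of this document. */
static void gravity(const double q[2], double g[2]) {
    g[0] = 38.46*sin(q[0]) + 1.82*sin(q[0] + q[1]);
    g[1] = 1.82*sin(q[0] + q[1]);
}

/* qd: desired joint positions (from inverse kinematics of [ud vd]^T),
 * q: measured joint positions, qdot: measured joint velocities. */
void tanh_control(const double qd[2], const double q[2],
                  const double qdot[2], double tau[2]) {
    double g[2];
    gravity(q, g);
    for (int i = 0; i < 2; i++) {
        double e = qd[i] - q[i];                        /* q_tilde */
        tau[i] = Kp[i]*tanh(e) - Kv[i]*tanh(qdot[i]) + g[i];
    }
}
```

Because tanh saturates, the first two terms of the torque are bounded regardless of how large the error is, which is the practical appeal of this family of regulators.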
A second example for the purposes of this methodology according to the present invention replaces the hyperbolic tangent with the arctangent:

$$\tau = K_p \arctan(\tilde{q}) - K_v \arctan(\dot{q}) + g(q) \tag{15}$$

4. Example: Implementation and experimental evaluation according to one embodiment of the present invention.
A new computer-implemented group of vision algorithms was designed, whose particular objective was to extract the visual information necessary for the vision model. This information consists of the spatial position of the centroids, that is to say the location of the visual marks, as well as their size, in order to correlate these black circles with the robot joints (base, elbow and end-effector).
Figure 3 shows a block diagram of the steps into which the vision-system stage of the process has been divided.
It is important to point out that two forms of programming in Matlab were used to perform the acquisition and processing of images: block diagrams in the Simulink working environment, and customized blocks called S-functions. These tools use device adapters to connect different image acquisition equipment to their drivers, and also include an adapter for generic Windows video devices.
Figure 4 shows a block diagram of the program sequence in Simulink, in accordance with the present invention.
The webcam is connected to the computer via the USB port, from which the images obtained from the video are acquired and characterized through the Simulink module, which performs the detection and characterization of the images sent by the camera. The color images are acquired in 8-bit RGB format at a 320 x 240 pixel resolution; the color images are transformed into gray-scale tones; the gray tones of the images are binarized; the centroid is identified to determine the coordinates of the end-effector; and these coordinates are kept in a file that is transferred to the computer that controls the robot motors. The algorithm employed in this program consists of the following steps (a sketch of the core image-processing steps follows the list):
1. Acquisition of the color images (Video Input).
2. Transformation of the color images to gray tones (Color Space Conversion).
3. Binarization of the gray-tone images (Vision Operating).
4. Identification of the centroid of the circles in each image (Vision Operating).
5. Calculation of the joint positions (Vision Operating).
6. Sending the information to a file (Vision Operating).
7. Displaying the angle information of the joints (Display).
8. Visualizing the image of the three centroids (Video Out Robot).
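A minimal C sketch of steps 2-4 of the list above (gray-scale conversion, binarization and centroid extraction) for one 320 x 240 frame follows. The threshold value and the single-mark simplification are assumptions of this sketch; the actual system distinguishes the three disks by their size, and centroid() is the moment-based routine sketched in section 2.1.

```c
#include <stddef.h>

enum { W = 320, H = 240 };   /* frame size used by the vision stage */

/* Moment-based centroid routine, sketched in section 2.1. */
int centroid(const unsigned char *img, int rows, int cols,
             double *u_c, double *v_c);

/* Steps 2-4: RGB -> gray -> binary -> centroid, for a single mark. */
int locate_mark(const unsigned char rgb[H*W*3], double *u, double *v) {
    static unsigned char bin[H*W];
    const unsigned char threshold = 64;   /* placeholder value */
    for (size_t i = 0; i < (size_t)H*W; i++) {
        /* step 2: luminance from the RGB triplet */
        unsigned gray = (299u*rgb[3*i] + 587u*rgb[3*i+1]
                         + 114u*rgb[3*i+2]) / 1000u;
        /* step 3: binarize; the marks are black disks, keep dark pixels */
        bin[i] = (gray < threshold) ? 1 : 0;
    }
    /* step 4: centroid of the mark in pixel coordinates */
    return centroid(bin, H, W, u, v);
}
```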
Figure 5 shows the information obtained and kept in the file that is transferred to the computer that controls the robot motors.
The transfer of information from the vision stage to the control stage takes place via a parallel port: once the data from the robot are obtained, the numeric values are converted from a real floating-point format to a 32-bit format grouped in 8 levels; through interrupts this information is transmitted to and received at the control stage, where it is applied directly to the control law. This program was made with the Visual C++ 6.0 compiler. The control scheme works with sampling periods of 41 ms for the vision system and 2.5 ms for the control stage. Figure 5 shows some windows of the proposed visual control system, wherein said windows are displayed through a visual device controlled by a computer according to the present invention.
The experimental robot consists of a base and two aluminum 6061 links driven by direct drive servos by Parker Compumotor. The advantage of this type of direct drive actuator is that it reduces friction compared with other types of actuators. The motors used in the robot are listed in Table 1. The servos work in torque mode, so that the motors perform as a torque source: they accept an analog voltage as a reference for the torque signal. The working area of the manipulator is a circle with a 0.7 m radius. In addition to the positioning sensors and the motor drivers, the robot also includes a motion control board manufactured by Precision MicroDynamics, which is used to obtain the positions of the joints. The control algorithms run on a central computer with a Pentium microprocessor.
Table 1 Servo actuators of the experimental robot
With reference to the direct drive robot, the gravitational torque vector g(q) required to implement the new control scheme (6) is available in [8]:

g(q) = [38.46 sin(q1) + 1.82 sin(q1 + q2);
        1.82 sin(q1 + q2)] [Nm]   (16)
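Equation (16) transcribes directly into Matlab as an anonymous function (angles in radians); this is only the published gravity vector, not the complete control law:

% Gravity torque vector g(q) of eq. (16); q = [q1; q2] in radians, output in Nm
g = @(q) [38.46*sin(q(1)) + 1.82*sin(q(1) + q(2));
          1.82*sin(q(1) + q(2))];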
4.1 Experimental results.
In this stage, the experimental results obtained with the proposed controllers on the planar robot in the fixed-camera configuration are presented.
Three black disks are mounted on the joints: one large black disk on the shoulder, a medium-sized black disk on the elbow, and a small one on the end-effector. The coordinates of the joints were calculated from the centroid of each disk by using the inverse kinematics, as shown in Figure 6, from which the following is obtained:
c2 = (u^2 + v^2 - l1^2 - l2^2) / (2 l1 l2)   (17)

s2 = sqrt(1 - c2^2)   (18)

q2 = atan2(s2, c2)   (19)

q1 = atan2(v, u) - atan2(l2 s2, l1 + l2 c2)   (20)

where l1 and l2 represent the lengths of links 1 and 2, respectively, and u and v are the axes of the visual information. To find equations 19 and 20, the design shown in figure 3 is used.
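The following is a minimal Matlab sketch of equations (17)-(20); the link lengths and the example coordinates below are assumed placeholder values, not the robot's published parameters:

% Inverse kinematics of the two-link planar arm, eqs. (17)-(20)
l1 = 0.35;  l2 = 0.35;                         % assumed link lengths [m]
u  = 0.30;  v  = 0.40;                         % example end-effector coordinates
c2 = (u^2 + v^2 - l1^2 - l2^2) / (2*l1*l2);    % (17)
s2 = sqrt(1 - c2^2);                           % (18), elbow-down branch
q2 = atan2(s2, c2);                            % (19)
q1 = atan2(v, u) - atan2(l2*s2, l1 + l2*c2);   % (20)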
The centroids of each disk were selected as the characteristic points of the object. In all the controllers, the desired position in the image plane was selected as [ud, vd]^T = [198, 107]^T [pixels] and the initial position as [u(0), v(0)]^T = [50, 210]^T [pixels]; that is, [q1(0), q2(0)]^T = [0, 0]^T and the initial velocity was 0 [degrees/sec]. The evaluated controllers were written in the C language. The sampling time was executed at 2.5 ms, while the visual servoing ran at 43 ms. The CCD camera was placed in front of the robot, and its position with regard to the robot frame ΣR was Oc = [-0.5, -0.6, -1]^T meters, with rotation angle θ = 0 degrees. MATLAB ver. 2007a was used along with the SIMULINK module to process the image. The video signal of the CCD camera has a resolution of 320 x 240 pixels in the RGB format.
Figures 7 and 8 graphically show the experimental results obtained with controller (11); the proportional and derivative gains were selected as Kp = diag{26.0, 1.8} [N] and Kv = diag{12.0, 1.2} [Nm], respectively, with ud = 198 and vd = 107. The transitory response is fast, at around 3 sec. The positioning error is small and tends to zero asymptotically.
Finally, Figures 9 and 10 show the experimental results obtained with controller (15); the proportional and derivative gains were selected as Kp = diag{17.3, 1.2} [N] and Kv = diag{6.6, 1.2} [Nm], respectively, with ud = 198 and vd = 107. The transitory response is fast, at around 1 sec. The positioning error is small and tends to zero asymptotically.
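For illustration only, a minimal Matlab sketch of a tanh-saturated PD law with gravity compensation, the form suggested by the closed-loop equation (14); the exact expressions of controllers (11) and (15) are given as images in the original, so this form is an assumption rather than the published control law:

% Assumed tanh-saturated PD control with gravity compensation (cf. eq. (14))
Kp = diag([26.0, 1.8]);              % proportional gains reported for controller (11)
Kv = diag([12.0, 1.2]);              % derivative gains reported for controller (11)
g  = @(q) [38.46*sin(q(1)) + 1.82*sin(q(1) + q(2));
           1.82*sin(q(1) + q(2))];   % gravity vector of eq. (16)
tau = @(qtilde, q, qdot) Kp*tanh(qtilde) - Kv*tanh(qdot) + g(q);   % torque command [Nm]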
Although the invention has been described with reference to diverse aspects of the present invention and to examples regarding a system to control the motion of a direct drive robot through controllers with visual servoing for a fixed-camera configuration, its incorporation into or use with any suitable system and/or mechanical device is within the scope and spirit of the invention. Therefore, it must be understood that numerous and varied modifications can be made without departing from the spirit of the invention.
Claims
1. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration, comprising:
a three-joint robot manipulator arm; and
a fixed web camera, whose panoramic view completely covers the working area of the robot manipulator arm to locate the end-effector and the objective; and
a microprocessor coupled to the three-joint robot manipulator arm and to the web camera; wherein
the microprocessor is configured to:
perform the visual servoing based on three marked reference images, each image corresponding to the positioning of one of the three joints of the robot;
transmit the visual servoing information pertaining to the positioning images of each one of the three joints of the robot arm output by the fixed web camera;
store the visual servoing information output by the fixed web camera; and
calculate the coordinates of the joints based on the centroid of each marked reference image.
2. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the working area of the robot is a circle with a radius of 0.7 meters.
3. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the web camera is a CCD (charge-coupled device) camera placed in front of the robot.
4. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the three marked reference images correspond to three black disks of different sizes, mounted on the shoulder, elbow and end-effector of the robot manipulator arm, respectively.
5. - A system to control the motion of a direct drive robot through visual servoing for a fixed-camera configuration according to claim 1, wherein the transmission and reception of information is performed by wire or wirelessly.
PCT/IB2015/054096 2015-05-29 2015-05-29 Motion control system for a direct drive robot through visual servoing WO2016193781A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/IB2015/054096 WO2016193781A1 (en) 2015-05-29 2015-05-29 Motion control system for a direct drive robot through visual servoing
MX2015009537A MX2015009537A (en) 2015-05-29 2015-05-29 Motion control system for a direct drive robot through visual servoing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2015/054096 WO2016193781A1 (en) 2015-05-29 2015-05-29 Motion control system for a direct drive robot through visual servoing

Publications (1)

Publication Number Publication Date
WO2016193781A1 (en) 2016-12-08

Family

ID=57440690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/054096 WO2016193781A1 (en) 2015-05-29 2015-05-29 Motion control system for a direct drive robot through visual servoing

Country Status (2)

Country Link
MX (1) MX2015009537A (en)
WO (1) WO2016193781A1 (en)
Cited By (18)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108181879A (en) * 2018-03-21 2018-06-19 佛山世科智能技术有限公司 A kind of intelligent vision numerical control cutting system
RU188585U1 (en) * 2018-06-25 2019-04-17 Федеральное государственное бюджетное образовательное учреждение высшего образования "ОРЛОВСКИЙ ГОСУДАРСТВЕННЫЙ УНИВЕРСИТЕТ имени И.С. ТУРГЕНЕВА" (ОГУ им. И.С. Тургенева) MANIPULATOR
CN109976188A (en) * 2019-03-12 2019-07-05 广东省智能制造研究所 A kind of cricket control method and system based on Timed Automata
WO2019138111A1 (en) * 2018-01-15 2019-07-18 Technische Universität München Vision-based sensor system and control method for robot arms
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN111086004A (en) * 2020-01-08 2020-05-01 山东理工大学 Human-simulated flexible joint arm electromechanical coupling modeling method
CN111510581A (en) * 2020-04-27 2020-08-07 辽宁科技大学 STM 32-based cricket control system
CN111553239A (en) * 2020-04-23 2020-08-18 厦门理工学院 Robot joint visual servo control method, terminal device and storage medium
CN111624875A (en) * 2019-02-27 2020-09-04 北京京东尚科信息技术有限公司 Visual servo control method and device and unmanned equipment
CN112291509A (en) * 2020-09-10 2021-01-29 扬州工业职业技术学院 Visual monitoring system based on mobile robot
JP2021045797A (en) * 2019-09-17 2021-03-25 オムロン株式会社 Simulation device, simulation program, and simulation method
CN112720449A (en) * 2019-10-14 2021-04-30 防灾科技学院 Robot positioning device and control system thereof
CN112947569A (en) * 2021-03-09 2021-06-11 中南大学 Visual servo target tracking control method for quad-rotor unmanned aerial vehicle based on preset performance
CN113093549A (en) * 2021-04-07 2021-07-09 中国科学院宁波材料技术与工程研究所 Compound control method of multi-axis numerical control equipment
CN114378827A (en) * 2022-01-26 2022-04-22 北京航空航天大学 Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN114946403A (en) * 2022-07-06 2022-08-30 青岛科技大学 Tea picking robot based on calibration-free visual servo and tea picking control method thereof
CN115493513A (en) * 2022-08-15 2022-12-20 北京空间飞行器总体设计部 Visual system applied to space station mechanical arm
CN116872216A (en) * 2023-08-28 2023-10-13 安徽工业大学 Robot vision servo operation method based on finite time control
Non-Patent Citations (5)
* Cited by examiner, † Cited by third party
Title
CID, J.; ET AL.: "Controlador con retroalimentación visual para brazo robot" ("Controller with visual feedback for a robot arm"), Revista Iberoamericana de Sistemas, Cibernética e Informática, vol. 8, no. 2, 2011, pages 20-25, XP055332179, ISSN: 1690-8627. Retrieved from the Internet: <URL:http://www.iiisci.org/journal/CV$/risci/pdfs/HCA688HA.pdf> [retrieved on 20160114] *
CID, J.M.; ET AL.: "Visual Servoing Controller for Robot Manipulators", INTERNATIONAL CONFERENCE ON ELECTRICAL, COMMUNICATIONS, AND COMPUTERS (CONIELECOMP 2009), February 2009 (2009-02-01), pages 153 - 158, XP031489676, ISBN: 978-0-7695-3587-6 *
MORENO-ARMENDARIZ, M.A.; ET AL.: "A new fuzzy visual servoing with application to robot manipulator", PROCEEDINGS OF THE 2005 AMERICAN CONTROL CONFERENCE, vol. 5, June 2005 (2005-06-01), pages 3688 - 3693, XP010820370, ISBN: 978-0-7803-9098-0 *
NADI, F.; ET AL.: "Visual servoing control of robot manipulator with Jacobian matrix estimation", 2014 SECOND RSI/ISM INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM), October 2014 (2014-10-01), pages 405 - 409, XP032709481 *
YANTAO SHEN ET AL.: "Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback", IEEE /ASME TRANSACTIONS ON MECHATRONICS, vol. 8, no. 1, March 2003 (2003-03-01), pages 87 - 98, XP011076243, ISSN: 1083-4435 *
Cited By (29)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11833696B2 (en) 2018-01-15 2023-12-05 Technische Universität München Vision-based sensor system and control method for robot arms
WO2019138111A1 (en) * 2018-01-15 2019-07-18 Technische Universität München Vision-based sensor system and control method for robot arms
US20210023719A1 (en) * 2018-01-15 2021-01-28 Technische Universität München Vision-based sensor system and control method for robot arms
CN108181879A (en) * 2018-03-21 2018-06-19 佛山世科智能技术有限公司 A kind of intelligent vision numerical control cutting system
RU188585U1 (en) * 2018-06-25 2019-04-17 Федеральное государственное бюджетное образовательное учреждение высшего образования "ОРЛОВСКИЙ ГОСУДАРСТВЕННЫЙ УНИВЕРСИТЕТ имени И.С. ТУРГЕНЕВА" (ОГУ им. И.С. Тургенева) MANIPULATOR
CN111624875A (en) * 2019-02-27 2020-09-04 北京京东尚科信息技术有限公司 Visual servo control method and device and unmanned equipment
CN109976188A (en) * 2019-03-12 2019-07-05 广东省智能制造研究所 A kind of cricket control method and system based on Timed Automata
CN109976188B (en) * 2019-03-12 2022-01-07 广东省智能制造研究所 Cricket control method and system based on time automaton
JP7388074B2 (en) 2019-09-17 2023-11-29 オムロン株式会社 Simulation device, simulation program and simulation method
JP2021045797A (en) * 2019-09-17 2021-03-25 オムロン株式会社 Simulation device, simulation program, and simulation method
CN112720449A (en) * 2019-10-14 2021-04-30 防灾科技学院 Robot positioning device and control system thereof
CN110900581A (en) * 2019-12-27 2020-03-24 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN110900581B (en) * 2019-12-27 2023-12-22 福州大学 Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
CN111086004A (en) * 2020-01-08 2020-05-01 山东理工大学 Human-simulated flexible joint arm electromechanical coupling modeling method
CN111086004B (en) * 2020-01-08 2022-09-13 山东理工大学 Human-simulated flexible joint arm electromechanical coupling modeling method
CN111553239A (en) * 2020-04-23 2020-08-18 厦门理工学院 Robot joint visual servo control method, terminal device and storage medium
CN111553239B (en) * 2020-04-23 2023-04-28 厦门理工学院 Robot joint vision servo control method, terminal equipment and storage medium
CN111510581A (en) * 2020-04-27 2020-08-07 辽宁科技大学 STM 32-based cricket control system
CN112291509A (en) * 2020-09-10 2021-01-29 扬州工业职业技术学院 Visual monitoring system based on mobile robot
CN112947569A (en) * 2021-03-09 2021-06-11 中南大学 Visual servo target tracking control method for quad-rotor unmanned aerial vehicle based on preset performance
CN112947569B (en) * 2021-03-09 2022-08-12 中南大学 Visual servo target tracking control method for quad-rotor unmanned aerial vehicle based on preset performance
CN113093549A (en) * 2021-04-07 2021-07-09 中国科学院宁波材料技术与工程研究所 Compound control method of multi-axis numerical control equipment
CN113093549B (en) * 2021-04-07 2022-10-28 中国科学院宁波材料技术与工程研究所 Compound control method of multi-axis numerical control equipment
CN114378827B (en) * 2022-01-26 2023-08-25 北京航空航天大学 Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN114378827A (en) * 2022-01-26 2022-04-22 北京航空航天大学 Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN114946403A (en) * 2022-07-06 2022-08-30 青岛科技大学 Tea picking robot based on calibration-free visual servo and tea picking control method thereof
CN115493513A (en) * 2022-08-15 2022-12-20 北京空间飞行器总体设计部 Visual system applied to space station mechanical arm
CN116872216A (en) * 2023-08-28 2023-10-13 安徽工业大学 Robot vision servo operation method based on finite time control
CN116872216B (en) * 2023-08-28 2023-12-08 安徽工业大学 Robot vision servo operation method based on finite time control
Also Published As

Publication number Publication date
MX2015009537A (en) 2017-02-23

Similar Documents
Publication Publication Date Title
WO2016193781A1 (en) Motion control system for a direct drive robot through visual servoing
EP3753685A1 (en) Control system and control method
Asfour et al. Toward humanoid manipulation in human-centred environments
JP6826069B2 (en) Robot motion teaching device, robot system and robot control device
CN104570731A (en) Uncalibrated human-computer interaction control system and method based on Kinect
Seo et al. A comparative study of in-field motion capture approaches for body kinematics measurement in construction
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
CN113751981B (en) Space high-precision assembling method and system based on binocular vision servo
Zhao et al. Image-based visual servoing using improved image moments in 6-DOF robot systems
Gratal et al. Visual servoing on unknown objects
Xu et al. Vision-based simultaneous measurement of manipulator configuration and target pose for an intelligent cable-driven robot
Li et al. Development of kinect based teleoperation of nao robot
Zhang et al. A handheld master controller for robot-assisted microsurgery
Al-Shanoon et al. Robotic manipulation based on 3-D visual servoing and deep neural networks
Li et al. Neural learning and kalman filtering enhanced teaching by demonstration for a baxter robot
CN113618367B (en) Multi-vision space assembly system based on seven-degree-of-freedom parallel double-module robot
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
Quesada et al. Holo-SpoK: Affordance-aware augmented reality control of legged manipulators
CN116872216B (en) Robot vision servo operation method based on finite time control
Gratal et al. Scene representation and object grasping using active vision
Liefhebber et al. Vision-based control of the Manus using SIFT
Bürkle et al. Computer vision based control system of a piezoelectric microrobot
Gu et al. Automated assembly skill acquisition through human demonstration
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Song et al. Global visual servoing of miniature mobile robot inside a micro-assembly station
Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: MX/A/2015/009537
Country of ref document: MX
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15894044
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/04/2018)
122 Ep: pct application non-entry in european phase
Ref document number: 15894044
Country of ref document: EP
Kind code of ref document: A1