CN110026987B - Method, device and equipment for generating grabbing track of mechanical arm and storage medium - Google Patents

Method, device and equipment for generating grabbing track of mechanical arm and storage medium

Info

Publication number
CN110026987B
CN110026987B CN201910451779.1A CN201910451779A
Authority
CN
China
Prior art keywords
track
target
determining
mechanical arm
grabbing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910451779.1A
Other languages
Chinese (zh)
Other versions
CN110026987A (en
Inventor
刘文印
叶子涵
陈俊洪
梁达勇
周小静
张启翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910451779.1A priority Critical patent/CN110026987B/en
Publication of CN110026987A publication Critical patent/CN110026987A/en
Application granted granted Critical
Publication of CN110026987B publication Critical patent/CN110026987B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method, a device and equipment for generating a grabbing track of a mechanical arm, and a computer-readable storage medium. The scheme includes: determining the basis-function weights of a motion model according to teaching track information; acquiring a target image through a depth camera; inputting the image into a Mask R-CNN network to determine a starting point and a target point; and generating the grabbing track through the motion model. Because the basis-function weights in the motion model are the same as those learned from the teaching track, the grabbing track is similar in shape to the teaching track, fulfilling the aim of trajectory learning. The scheme effectively identifies and locates targets in the image through the deep-learning-based detection method Mask R-CNN, so that the mechanical arm grasps objects more intelligently. By combining visual perception with the dynamic motion primitive framework, the motion model is given vision, and the mechanical arm interacts better with the external environment in teaching-track learning and grabbing-track planning.

Description

Method, device and equipment for generating grabbing track of mechanical arm and storage medium
Technical Field
The invention relates to the technical field of track generation, in particular to a method, a device and equipment for generating a grabbing track of a mechanical arm and a computer-readable storage medium.
Background
Teaching learning (learning from demonstration) is a form of human-robot interaction in which the robot reproduces demonstrated actions in a new environment. Teaching can be divided into kinesthetic teaching and remote teaching according to whether the demonstrator is in direct physical contact with the robot. In kinesthetic teaching, an operator directly guides the robot through the corresponding actions, and the robot's own state information is collected as the learning object; this mode is not well suited to a mechanical arm with many degrees of freedom. Remote teaching generally controls robot motion through vision, wearable sensors or other remote-control tools. Traditional trajectory-planning methods represented by teaching learning emphasize the robot's ability to cooperate with users by learning and imitating human demonstrations. It is therefore of great significance to find a suitable and more versatile dynamical-system model for a given behavior or action.
Dynamic Motion Primitives (DMPs) are a trajectory learning and planning method that makes it easy to plan a motion trajectory when the target position changes and to generate, by imitating a teaching trajectory, a motion trajectory with the same motion trend. The mechanical-arm teaching-learning method based on dynamic motion primitives can effectively perform single- and multi-degree-of-freedom trajectory learning, but in practical applications, for example when the mechanical arm grabs an object, the lack of visual perception of the environment leaves the technique overly simple and unable to realize good human-computer interaction. The artificial-potential-field trajectory-planning method suffers from local optima: sometimes the mechanical arm cannot move to the target position, which is a significant limitation.
Disclosure of Invention
The invention aims to provide a method, a device and equipment for generating a grabbing track of a mechanical arm and a computer-readable storage medium, so that the mechanical arm can automatically adjust a motion strategy according to different environments by learning a teaching track, and the reproduction and generalization of an original track are realized.
In order to achieve the above object, the present invention provides a method for generating a grabbing trajectory of a robot arm, including:
determining the basis function weight of the motion model according to the teaching track information; the motion model is a model based on a dynamic motion primitive algorithm framework;
acquiring a target image through a depth camera;
inputting the target image into a Mask R-CNN network, and determining the position of the object to be grabbed and the position where it is to be placed;
converting the position of the object to be grabbed into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as the starting point position of the grabbing track; converting the position where the object is to be placed into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as the target point position of the grabbing track;
and inputting the starting point position and the target point position into the motion model to generate the grabbing track.
Optionally, the determining the basis function weight of the motion model according to the teaching trajectory information includes:
acquiring an action track, starting point information and end point information of a teaching track;
determining speed information and acceleration information according to the action track;
determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
determining basis function weights for the motion model using the target forcing function.
Optionally, converting the position of the object to be grabbed into a corresponding seven-dimensional joint angle of the mechanical arm as the starting point position of the grabbing track, and converting the position where the object is to be placed into a corresponding seven-dimensional joint angle as the target point position of the grabbing track, includes:
converting the position of the object to be grabbed into a first three-dimensional coordinate under a robot base coordinate;
converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as an initial point position of a grabbing track;
converting the position where the object to be grabbed is to be placed into a second three-dimensional coordinate under the robot base coordinate;
and converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as the position of a target point of the grabbing track.
Optionally, inputting the starting point position and the target point position into the motion model to generate a grabbing track includes:
inputting the starting point position and the target point position into the motion model to generate the grabbing track;
the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
In order to achieve the above object, the present invention further provides a device for generating a grabbing trajectory of a robot arm, including:
the parameter determining module is used for determining the basis function weight of the motion model according to the teaching track information; the motion model is a model based on a dynamic motion primitive algorithm framework;
the acquisition module is used for acquiring a target image through the depth camera;
the position determining module is used for inputting the target image into a Mask R-CNN network and determining the position of the object to be grabbed and the position where it is to be placed;
the position conversion module is used for converting the position of the object to be grabbed into a seven-dimensional joint angle of the mechanical arm as the starting point position, and for converting the position where the object is to be placed into a seven-dimensional joint angle as the target point position;
and the track generation module is used for inputting the starting point position and the target point position into the motion model to generate the grabbing track.
Optionally, the parameter determining module includes:
the information acquisition unit is used for acquiring the action track, the starting point information and the end point information of the teaching track;
the speed determining module is used for determining speed information according to the action track;
the acceleration determining module is used for determining acceleration information according to the action track;
the function determining unit is used for determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
a weight determination unit for determining basis function weights of the motion model using the target forcing function.
Optionally, the position conversion module includes:
the first three-dimensional coordinate conversion unit is used for converting the position of the object to be grabbed into a first three-dimensional coordinate under a robot base coordinate;
the first joint angle conversion unit is used for converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and taking the seven-dimensional joint angle as the starting point position of the grabbing track;
the second three-dimensional coordinate conversion unit is used for converting the position where the object to be grabbed is to be placed into a second three-dimensional coordinate under the robot base coordinate;
and the second joint angle conversion unit is used for converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and using the seven-dimensional joint angle as a target point position of the grabbing track.
Optionally, the trajectory generating module is specifically configured to:
inputting the starting point position and the target point position into the motion model to generate the grabbing track; the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
In order to achieve the above object, the present invention further provides a device for generating a grabbing trajectory of a robot arm, including:
a memory for storing a computer program;
and the processor is used for implementing the steps of the method for generating the grabbing track of the mechanical arm when the computer program is executed.
To achieve the above object, the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method for generating a grabbing trajectory of a robot arm as described above.
According to the scheme, the method for generating the grabbing track of the mechanical arm provided by the embodiment of the invention includes: determining the basis-function weights of the motion model according to the teaching track information, the motion model being based on the dynamic motion primitive algorithm framework; acquiring a target image through a depth camera; inputting the target image into a Mask R-CNN network and determining the position of the object to be grabbed and the position where it is to be placed; converting the position of the object to be grabbed into a corresponding seven-dimensional joint angle of the mechanical arm as the starting point position of the grabbing track; converting the position where the object is to be placed into a corresponding seven-dimensional joint angle as the target point position; and inputting the starting point position and the target point position into the motion model to generate the grabbing track.
Therefore, in this scheme, the basis-function weights in the motion model are the same as those of the teaching track, so the newly generated grabbing track is similar in shape to the teaching track, fulfilling the aim of trajectory learning. In addition, the scheme effectively identifies and locates targets in the image through the deep-learning-based detection method Mask R-CNN, so that the mechanical arm grabs objects more intelligently and is better suited to the service-robot market. Furthermore, the scheme combines visual perception with the dynamic motion primitive framework, giving the motion model vision, so that the mechanical arm interacts better with the external environment in teaching-track learning and grabbing-track planning and is more intelligent.
The invention also discloses a device and equipment for generating the grabbing track of the mechanical arm and a computer-readable storage medium, which can achieve the same technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for generating a grabbing track of a robot arm according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network structure of a detection algorithm disclosed in the embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for generating a grabbing track of a robot arm according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a device for generating a grabbing track of a robot arm according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that most systems in nature are nonlinear, so establishing models of them is a very important line of research. However, these systems are sensitive to their parameters, subtle parameter changes produce complex variations in behavior, and analysis is difficult, so modeling goal-directed behavior with nonlinear systems is quite hard; unintuitive and time-consuming parameter adjustment plays a large role.
On this basis, Ijspeert et al. proposed the concept of dynamic motion primitives, a method for modeling the attractor behavior of autonomous nonlinear dynamical systems using statistical learning techniques. It essentially starts from a simple dynamical system, such as a one-dimensional linear differential equation, and converts it into a weakly nonlinear system with a specified attractor through a learnable autonomous forcing term, so that almost any complex point attractor or limit-cycle attractor can be generated. A dynamic motion primitive is a trajectory learning and planning method that makes it convenient to plan a motion trajectory when the target position changes and to imitate a teaching trajectory to generate a motion trajectory with the same motion trend. The core of the dynamic-motion-primitive framework is a point-attractor system that can generate arbitrarily shaped trajectories through policy parameters, which can be learned by locally weighted regression for dynamic adjustment or by reinforcement learning. Given the parameters of the dynamic motion primitive algorithm, such as motion time, end point and start point, a corresponding planned trajectory can be obtained with the DMP.
Currently, learning of teaching trajectories includes two ways:
the first method is as follows: the mechanical arm teaching learning method based on the dynamic motion primitives is researched on the basis of the DMP theory, an algorithm flow of track learning is designed, and the effectiveness of the algorithm is proved through the reproduction and generalization of one-dimensional teaching tracks. The dynamic motion primitive multi-degree-of-freedom coupling is realized in a mode of sharing a standard system, and the multi-degree-of-freedom motion trail learning is realized. Parameters of dynamic motion elements are analyzed and summarized through single-degree-of-freedom track reproduction and generalized simulation experiments.
The second method is as follows: trajectory planning of the mechanical arm can be done in joint space or in Cartesian space. Trajectory planning in Cartesian space can intuitively represent the motion trajectory of the end of the mechanical arm or the manipulator, but converting the planned motion trajectory into joint angles requires a large amount of computation. The artificial potential field method realizes trajectory planning in Cartesian coordinates: it abstracts the motion of the robot as pose changes under the action of a virtual force field, i.e. a target point and an obstacle respectively exert attractive and repulsive forces on the robot, and the robot's motion is controlled by applying these two virtual forces to the mechanical arm.
Of the two modes, the former lacks visual perception of the environment, so the resulting technique is simplistic and good human-computer interaction cannot be realized; the latter has the problem of local optimum points (sometimes the mechanical arm cannot be moved to the target position) and is quite limited.
Therefore, the embodiment of the invention discloses a method, a device, equipment and a computer-readable storage medium for generating a grabbing track of a mechanical arm, which enable the mechanical arm to automatically adjust its motion strategy in different environments by learning a teaching track, realizing the reproduction and generalization of the original track. A deep-learning-based detection technique is integrated to perform effective visual perception of the external environment, so that trajectory planning for the mechanical arm grabbing an object is realized and the service robot adapts better to a changing environment.
Referring to fig. 1, a method for generating a grabbing trajectory of a robot arm according to an embodiment of the present invention includes:
s101, determining a basis function weight of the motion model according to the teaching track information; the motion model is a model based on a dynamic motion primitive algorithm framework;
wherein, the determining the basis function weight of the motion model according to the teaching track information comprises:
acquiring an action track, starting point information and end point information of a teaching track;
determining speed information and acceleration information according to the action track;
determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
determining basis function weights for the motion model using the target forcing function.
In the scheme, in order to learn the teaching track, the basis function weight of the teaching track needs to be determined, and the basis function weight is used as the basis function weight in the motion model for generating the current grabbing track; in the scheme, the motion model is a model based on a dynamic motion primitive algorithm framework; the model of the dynamic motion primitive algorithm framework is initially derived from a second order spring-damped system, the characteristics of the overall system being convergence towards the target position. The basic idea of DMP is to use a dynamic system with good stability characteristics and modulate it with a non-linear term, i.e. to introduce a non-linear function into a simple and stable dynamic system, and to control the motion process of the system by the non-linear function. The DMP abstracts the spring-mass-damping model as a point attraction subsystem and introduces a forcing term f:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f        (1)

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)        (2)

ψ_i = exp(-h_i(x - c_i)²)        (3)

τẋ = -α_x x        (4)
where y is the motion state, i.e. the displacement, of the single-degree-of-freedom system, and ẏ and ÿ are the corresponding velocity and acceleration. g is the target value, also called the attractor, i.e. a desired motion state such as a joint position of the robot arm or the position of a point in a Cartesian coordinate system. α_y and β_y are system parameters; by setting suitable parameter values, e.g. β_y = α_y/4, the system reaches critical damping, stability is guaranteed, and the system state y changes gradually over time and finally converges to the target value g. Equation (1) is referred to as the transformation system.
The forcing function f is the core of the DMP, where the ψ_i are called basis functions and N is the number of basis functions. y_0 is the initial state of the system, which may be the initial value of a given teaching trajectory or a designated start-point coordinate. ω_i is the weight of the basis function ψ_i, a Gaussian-like bump centered at c_i with width h_i; the forcing function f is therefore a combination of a series of nonlinear functions, so the whole DMP system is also nonlinear. Equation (4) is called the canonical system, where τ is the time scaling factor, used to adjust the decay rate of the canonical system, α_x is the canonical-system parameter, x is the time (phase) parameter, and ẋ is its first derivative.
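To make the canonical system and basis functions concrete, the following minimal Python sketch integrates equation (4) with Euler steps and evaluates the Gaussian basis functions of equation (3). The parameter values (α_x, τ, N, dt) and the width heuristic h = N/c are illustrative assumptions, not values specified by this scheme:

    import numpy as np

    # Canonical system (4): tau * dx/dt = -alpha_x * x, integrated with Euler steps.
    alpha_x, tau, N, dt = 1.0, 1.0, 20, 0.01      # illustrative values
    steps = int(1.0 / dt)

    x = 1.0
    xs = np.empty(steps)
    for t in range(steps):
        xs[t] = x
        x += (-alpha_x * x / tau) * dt            # phase decays from 1 toward 0

    # Basis functions (3): Gaussian-like bumps in the phase variable x.
    c = np.exp(-alpha_x * np.linspace(0, 1, N))   # centers spread along the decaying phase
    h = N / c                                     # a common width heuristic (assumption)
    psi = np.exp(-h * (xs[:, None] - c) ** 2)     # psi has shape (steps, N)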
Further, in order to learn the motion from the teaching track, the weights must be determined from the teaching track information, which may include the motion track, the start-point information and the end-point information. The basis-function weights are determined as follows:
First, the motion trajectory y_demo(t) of the teaching track is recorded, where t ∈ [0, …, T]; the velocity ẏ_demo and the acceleration ÿ_demo are then obtained by differentiation. The start point y_0 in the formula is set to the start-point information of the teaching track, and the target value g to its end-point information:

y_0 = y_demo(t = 0)        (5)

g = y_demo(t = T)        (6)
Rearranging formula (1) and substituting the initial conditions gives:

f_target = τ²ÿ_demo - α_y(β_y(g - y_demo) - τẏ_demo)        (7)

Having obtained the target forcing function f_target, the problem of finding the weights ω can be converted into minimizing the error

J = Σ_x (f_target(x) - f(x))²

so that the forcing function f approximates f_target. This is a linear regression problem that can be solved by various methods, such as Locally Weighted Regression (LWR), to obtain the basis-function weights. In this way, by adjusting the weight parameters of the forcing term through a learning algorithm, a trajectory of almost any complex shape can be generated from an initial point to a target point.
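The weight-learning step can be sketched in a few lines, reusing xs, psi, c, N, dt and tau from the previous snippet and assuming y_demo is a one-degree-of-freedom demonstration sampled at the same steps; the closed-form locally weighted regression solution is used. This is a sketch of the procedure under those assumptions, not the exact implementation of the scheme:

    import numpy as np

    alpha_y = 25.0
    beta_y = alpha_y / 4.0                    # critical damping, as noted above

    yd = np.gradient(y_demo, dt)              # velocity of the teaching track
    ydd = np.gradient(yd, dt)                 # acceleration of the teaching track
    y0, g = y_demo[0], y_demo[-1]             # formulas (5) and (6)

    # Target forcing term from formula (7)
    f_target = tau**2 * ydd - alpha_y * (beta_y * (g - y_demo) - tau * yd)

    # LWR: each w_i minimizes sum_t psi_i(x_t) * (f_target_t - w_i * s_t)^2
    s = xs * (g - y0)                         # the x * (g - y_0) factor from formula (2)
    w = np.array([(psi[:, i] * s) @ f_target / ((psi[:, i] * s**2).sum() + 1e-10)
                  for i in range(N)])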
S102, acquiring a target image through a depth camera;
In this scheme, the depth camera may be the color camera of a Kinect V2 depth camera. A color image read by this camera serves as the target image; it contains the object to be grabbed and the position where that object is to be placed. For example, if a spoon must be placed at a fixed position, the spoon is the object to be grabbed, and the grabbing track finally generated moves the spoon to that fixed position. The target image acquired in S102 is used to determine the starting point position and the target point position of the grabbing track.
S103, inputting the target image into a Mask R-CNN network, and determining the position of the object to be grabbed and the position where it is to be placed;
At present, with the arrival of Industry 4.0, machine vision plays an increasingly important role in intelligent manufacturing, and target detection methods are ever more widely applied in industry and service robotics. Deep learning has developed rapidly in recent years, and deep-learning-based target detection can effectively solve the problems of object recognition and classification. R-CNN was proposed by Ross Girshick et al. in 2014; by using a convolutional neural network (CNN) for target detection it raised the detection rate on PASCAL VOC from 35.1% to 53.7%, but its training and testing times were too long. The Ross Girshick team then proposed Fast R-CNN in 2015, with an elegant design and a more compact pipeline, which greatly increased the speed of target detection and addressed R-CNN's slow training and testing and its large training-space requirements. Faster R-CNN followed, replacing the earlier selective search with a region proposal network and saving significant time in generating candidate boxes. At this point, the four basic steps of target detection (candidate-region generation, feature extraction, classification, and location refinement) were unified into one deep neural network framework. Mask R-CNN inherits Faster R-CNN, adding a mask prediction branch on top of it and improving ROI Pooling by proposing ROI Align, which gives good results on instance segmentation.
Therefore, the detection algorithm adopted in the present application is Mask R-CNN; fig. 2 is a schematic diagram of its network structure. As the figure shows, Mask R-CNN first extracts feature maps of the input image using a set of basic convolutional layers, activation functions and pooling layers, and then feeds the feature maps to a Region Proposal Network (RPN) to generate candidate regions after cropping and correction. Mask R-CNN then uses ROI Align instead of an ROI Pooling layer to produce fixed-size feature maps; ROI Align eliminates ROI Pooling's rounding of each feature patch and uses bilinear interpolation to locate the features corresponding to each patch accurately. Finally, the fixed-size feature map is fed into fully connected layers, Softmax performs classification and recognition, an L1-loss regression localizes the bounding box, and a mask branch is added for instance segmentation.
Before positions can be recognized by this algorithm, the Mask R-CNN network must be trained: a training data set in which the objects to be detected have been labelled in advance is input into the network for training. After training, a color image read by the color camera of the Kinect V2 depth camera serves as the target image and is input into the Mask R-CNN network to obtain the position of the object to be grabbed and the position where it is to be placed. A position can be represented by a quadruple (x, y, w, h) describing the bounding box of the object in the target image, where the four parameters are the two-dimensional coordinates of the box's upper-left corner and its width and height.
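As a hedged illustration of this detection step, the snippet below runs torchvision's off-the-shelf Mask R-CNN and converts its corner-format boxes into the (x, y, w, h) quadruples described above. The scheme trains its own network on a labelled data set, so the pretrained weights and the 0.5 score threshold here are assumptions made only for illustration:

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor

    # torchvision >= 0.13; earlier versions use pretrained=True instead of weights=
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_boxes(rgb_image, score_thresh=0.5):
        """Return (x, y, w, h) quadruples for objects detected in the target image."""
        img = to_tensor(rgb_image)                 # HxWx3 uint8 -> 3xHxW float in [0, 1]
        with torch.no_grad():
            out = model([img])[0]                  # dict with boxes, labels, scores, masks
        quads = []
        for (x1, y1, x2, y2), score in zip(out["boxes"].tolist(), out["scores"].tolist()):
            if score >= score_thresh:
                quads.append((x1, y1, x2 - x1, y2 - y1))   # upper-left corner, width, height
        return quads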
S104, converting the position of the object to be grabbed into a corresponding seven-dimensional joint angle of the mechanical arm as the starting point position of the grabbing track, and converting the position where the object is to be placed into a corresponding seven-dimensional joint angle as the target point position of the grabbing track;
this step specifically includes:
converting the position of the object to be grabbed into a first three-dimensional coordinate under the robot base coordinate, then converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm as the starting point position of the grabbing track;
converting the position where the object is to be placed into a second three-dimensional coordinate under the robot base coordinate, then converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm as the target point position of the grabbing track.
In this scheme, visual perception is performed with a Kinect V2 depth camera whose intrinsic and extrinsic parameters are calibrated in advance: the intrinsic calibration corrects imaging distortion, and the extrinsic calibration realizes the mapping from three-dimensional coordinates in the Kinect coordinate system to the robot base coordinate system.
S103 yields the position of the object to be grabbed and the position where it is to be placed; these are target positions in the two-dimensional image, i.e. two-dimensional coordinates in the Kinect vision system. The MapColorFrameToCameraSpace function provided in the Kinect SDK converts a pixel in the RGB image into the corresponding three-dimensional coordinate in the Kinect coordinate system, and the transformation matrix obtained by the extrinsic calibration converts that coordinate into a three-dimensional coordinate in the robot base coordinate system; the results are the first and second three-dimensional coordinates of the present scheme.
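The extrinsic mapping itself is a single homogeneous transform. A minimal sketch, assuming T_base_cam is the 4x4 matrix recovered by the extrinsic calibration (the identity below is only a placeholder):

    import numpy as np

    T_base_cam = np.eye(4)    # placeholder; use the calibrated camera-to-base matrix

    def camera_to_base(p_cam):
        """Map an (x, y, z) point in the Kinect coordinate system to the robot base frame."""
        p = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
        return (T_base_cam @ p)[:3]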
It is understood that inverse kinematics (IK) can translate the robot-arm end-effector position in Cartesian space into the corresponding joint angles. In this scheme, the Kinematics and Dynamics Library (KDL) kinematics plug-in integrated in MoveIt! is used to solve the inverse kinematics for the Baxter mechanical arm: the first and second three-dimensional coordinates are converted into 7-dimensional joint angles of the Baxter arm, which serve as the starting point position and the target point position respectively.
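A hedged sketch of this conversion with MoveIt's Python interface, which invokes the configured KDL solver internally: the group name "right_arm", the fixed gripper orientation, and reading the joint angles from the end of the planned path are assumptions for illustration, and the return type of plan() differs between MoveIt versions:

    import moveit_commander
    from geometry_msgs.msg import Pose

    # (ROS node and moveit_commander initialization omitted for brevity)
    group = moveit_commander.MoveGroupCommander("right_arm")   # assumed Baxter group name

    def position_to_joint_angles(p_base):
        """Convert a base-frame position into a 7-dimensional joint angle via IK."""
        pose = Pose()
        pose.position.x, pose.position.y, pose.position.z = p_base
        pose.orientation.w = 1.0               # assumed grasp orientation
        group.set_pose_target(pose)
        plan = group.plan()                    # KDL IK runs inside the planner
        # on older MoveIt versions plan() returns the trajectory directly
        return plan.joint_trajectory.points[-1].positions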
S105, inputting the starting point position and the target point position into the motion model to generate the grabbing track.
Inputting the starting point position and the target point position into the motion model to generate the grabbing track includes:
inputting the starting point position and the target point position into the motion model to generate the grabbing track;
the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
In step S101 the basis-function weights of the motion model were determined. The starting point position and the target point position obtained in the preceding steps are input into the motion model, which computes the system state y and its derivatives ẏ and ÿ toward the new target. Because the forcing-function weights of the new motion are the same as those of the teaching track, the new motion track is similar in shape to the teaching track, fulfilling the aim of trajectory learning, as the rollout sketch below illustrates.
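A minimal rollout sketch of this generation step for one degree of freedom: with the weights w learned from the demonstration and the detected start y0 and goal g, formulas (1), (2) and (4) are integrated with Euler steps. Run once per joint for the 7-dimensional starting and target points; the parameter values are the same illustrative assumptions as in the earlier snippets:

    import numpy as np

    def rollout(w, y0, g, c, h, alpha_x=1.0, alpha_y=25.0, tau=1.0, dt=0.01, steps=100):
        """Generate a grabbing trajectory that keeps the shape of the teaching track."""
        beta_y = alpha_y / 4.0
        x, y, yd = 1.0, float(y0), 0.0
        traj = np.empty(steps)
        for t in range(steps):
            psi = np.exp(-h * (x - c) ** 2)                        # formula (3)
            f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)     # formula (2)
            ydd = (alpha_y * (beta_y * (g - y) - tau * yd) + f) / tau**2   # formula (1)
            yd += ydd * dt
            y += yd * dt
            x += (-alpha_x * x / tau) * dt                         # formula (4)
            traj[t] = y
        return traj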
Compared with the dynamic-motion-primitive teaching-learning method of the first way, this method for generating a mechanical-arm grabbing trajectory extends the experimental setup with a depth camera such as the Kinect V2: on top of learning the teaching trajectory with dynamic motion primitives, a deep-learning-based target detection method visually perceives the external environment, so that trajectory planning for grabbing an object is no longer a single fixed behavior and interaction between the robot and its environment is better realized. Compared with the artificial potential field method of the second way, this trajectory-planning method obtains its weights by extracting features of the teaching trajectory; the weights do not change when the environment changes, the motion trend stays the same, and there is no local-optimum problem, so the method is more robust.
In conclusion, the planning method for the mechanical-arm grabbing trajectory provided by the invention not only effectively realizes the reproduction and generalization of the teaching trajectory by the mechanical arm, but also interacts better with the external environment on the basis of visual perception. Regulating a stable dynamical system through a nonlinear function keeps teaching-learning trajectories from being oversimplified, automatically realizes different trajectory-planning strategies for different environmental parameters, and achieves better generalization; the deep-learning-based target detection method Mask R-CNN effectively identifies and locates targets in the image, so that the mechanical arm grabs objects more intelligently and is better adapted to the service-robot market.
The following describes a generating apparatus provided in an embodiment of the present invention, and the generating apparatus described below and the generating method described above may be referred to each other.
Referring to fig. 3, an apparatus for generating a grabbing trajectory of a robot provided in an embodiment of the present invention includes:
the parameter determining module 100 is configured to determine a basis function weight of the motion model according to the teaching trajectory information; the motion model is a model based on a dynamic motion primitive algorithm framework;
an obtaining module 200, configured to obtain a target image through a depth camera;
the position determining module 300 is configured to input the target image into a Mask R-CNN network and determine the position of the object to be grabbed and the position where it is to be placed;
the position conversion module 400 is configured to convert the position of the object to be grabbed into a seven-dimensional joint angle of the mechanical arm as the starting point position, and to convert the position where the object is to be placed into a seven-dimensional joint angle as the target point position;
and the track generating module 500 is configured to input the starting point position and the target point position into the motion model to generate the grabbing track.
Wherein the parameter determination module comprises:
the information acquisition unit is used for acquiring the action track, the starting point information and the end point information of the teaching track;
the speed determining module is used for determining speed information according to the action track;
the acceleration determining module is used for determining acceleration information according to the action track;
the function determining unit is used for determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
a weight determination unit for determining basis function weights of the motion model using the target forcing function.
Wherein the position conversion module includes:
the first three-dimensional coordinate conversion unit is used for converting the position of the object to be grabbed into a first three-dimensional coordinate under a robot base coordinate;
the first joint angle conversion unit is used for converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and taking the seven-dimensional joint angle as the starting point position of the grabbing track;
the second three-dimensional coordinate conversion unit is used for converting the position where the object to be grabbed is to be placed into a second three-dimensional coordinate under the robot base coordinate;
and the second joint angle conversion unit is used for converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and using the seven-dimensional joint angle as a target point position of the grabbing track.
Wherein the trajectory generation module is specifically configured to:
inputting the starting point position and the target point position into the motion model to generate the grabbing track; the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
The embodiment of the invention also discloses equipment for generating the grabbing track of the mechanical arm, which comprises:
a memory for storing a computer program;
and a processor, configured to implement the steps of the method for generating a robot arm grabbing trajectory according to the above method embodiment when executing the computer program.
Referring to fig. 4, the device 1 may include a memory 11, a processor 12, and a bus 13.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the device 1, for example a hard disk of the device 1. The memory 11 may also be an external storage device of the device 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the device 1. Further, the memory 11 may also comprise both internal memory units of the device 1 and external memory devices. The memory 11 may be used not only to store application software installed in the device 1 and various types of data such as codes for executing a method of generating a grab trajectory, but also to temporarily store data that has been output or is to be output.
The processor 12 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 11 or Processing data, such as codes for executing a method for generating a capture track.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Further, the device may further comprise a network interface 14, and the network interface 14 may optionally comprise a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the device 1 and other electronic devices.
Optionally, the device 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the device 1 and for displaying a visual user interface.
Fig. 4 only shows the device 1 with the components 11-14, and it will be understood by a person skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the steps of the method for generating the grabbing track of the mechanical arm are realized.
Wherein the storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for generating a grabbing track of a mechanical arm is characterized by comprising the following steps:
determining the basis function weight of the motion model according to the teaching track information; the motion model is a model based on a dynamic motion primitive algorithm framework;
acquiring a target image through a depth camera;
inputting the target image into a Mask R-CNN network, and determining the position of the object to be grabbed and the position where it is to be placed;
converting the position of the object to be grabbed into a first three-dimensional coordinate under a robot base coordinate; converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as an initial point position of a grabbing track;
converting the position where the object to be grabbed is to be placed into a second three-dimensional coordinate under the robot base coordinate; converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm, and using the seven-dimensional joint angle as a target point position of the grabbing track;
and inputting the position of the starting point and the position of the target point into the motion model to generate a grabbing track.
2. The generation method according to claim 1, wherein the determining the basis function weight of the motion model according to the teaching track information includes:
acquiring an action track, starting point information and end point information of a teaching track;
determining speed information and acceleration information according to the action track;
determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
determining basis function weights for the motion model using the target forcing function.
3. The generation method according to claim 1 or 2, wherein inputting the starting point position and the target point position into the motion model to generate a grabbing track comprises:
inputting the starting point position and the target point position into the motion model to generate the grabbing track;
the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
4. A device for generating a grabbing track of a mechanical arm is characterized by comprising:
the parameter determining module is used for determining the basis function weight of the motion model according to the teaching track information; the motion model is a model based on a dynamic motion primitive algorithm framework;
the acquisition module is used for acquiring a target image through the depth camera;
the position determining module is used for inputting the target image into a Mask R-CNN network and determining the position of the object to be grabbed and the position where it is to be placed;
the position conversion module is used for converting the position of the object to be grabbed into a seven-dimensional joint angle of the mechanical arm as the starting point position, and for converting the position where the object is to be placed into a seven-dimensional joint angle as the target point position;
the track generation module is used for inputting the starting point position and the target point position into the motion model to generate a grabbing track;
wherein the position conversion module includes:
the first three-dimensional coordinate conversion unit is used for converting the position of the object to be grabbed into a first three-dimensional coordinate under a robot base coordinate;
the first joint angle conversion unit is used for converting the first three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and taking the seven-dimensional joint angle as the starting point position of the grabbing track;
the second three-dimensional coordinate conversion unit is used for converting the position where the object to be grabbed is to be placed into a second three-dimensional coordinate under the robot base coordinate;
and the second joint angle conversion unit is used for converting the second three-dimensional coordinate into a corresponding seven-dimensional joint angle of the mechanical arm and using the seven-dimensional joint angle as a target point position of the grabbing track.
5. The generation apparatus of claim 4, wherein the parameter determination module comprises:
the information acquisition unit is used for acquiring the action track, the starting point information and the end point information of the teaching track;
the speed determining module is used for determining speed information according to the action track;
the acceleration determining module is used for determining acceleration information according to the action track;
the function determining unit is used for determining a target forcing function of the motion model according to the starting point information, the end point information, the speed information and the acceleration information;
a weight determination unit for determining basis function weights of the motion model using the target forcing function.
6. The generation apparatus according to claim 4 or 5, wherein the trajectory generation module is specifically configured to:
inputting the starting point position and the target point position into the motion model to generate the grabbing track; the motion model includes:
τ²ÿ = α_y(β_y(g - y) - τẏ) + f

f(x) = (Σ_{i=1}^{N} ψ_i(x) ω_i / Σ_{i=1}^{N} ψ_i(x)) · x(g - y_0)

where τ is the time scaling factor, y is the system displacement, ẏ is the velocity, ÿ is the acceleration, α_y is a first system parameter, β_y is a second system parameter, g is the target point position, f is the target forcing function, x is a time parameter, ψ_i(x) is a basis function, N is the number of basis functions, ω_i is the weight of a basis function, and y_0 is the starting point position.
7. A generation device of a grabbing track of a mechanical arm is characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for generating a robot arm gripping trajectory according to any one of claims 1 to 3 when executing the computer program.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for generating a gripping trajectory for a robot arm according to any one of claims 1 to 3.
CN201910451779.1A 2019-05-28 2019-05-28 Method, device and equipment for generating grabbing track of mechanical arm and storage medium Active CN110026987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910451779.1A CN110026987B (en) 2019-05-28 2019-05-28 Method, device and equipment for generating grabbing track of mechanical arm and storage medium

Publications (2)

Publication Number Publication Date
CN110026987A CN110026987A (en) 2019-07-19
CN110026987B true CN110026987B (en) 2022-04-19

Family

ID=67243671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910451779.1A Active CN110026987B (en) 2019-05-28 2019-05-28 Method, device and equipment for generating grabbing track of mechanical arm and storage medium

Country Status (1)

Country Link
CN (1) CN110026987B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287728A (en) * 2019-07-24 2021-01-29 鲁班嫡系机器人(深圳)有限公司 Intelligent agent trajectory planning method, device, system, storage medium and equipment
CN110480637B (en) * 2019-08-12 2020-10-20 浙江大学 Mechanical arm part image recognition and grabbing method based on Kinect sensor
CN111002302B (en) * 2019-09-09 2021-10-22 浙江瀚镪自动化设备股份有限公司 Mechanical arm grabbing track planning method combining Gaussian mixture model and dynamic system
CN110640736B (en) * 2019-10-23 2023-04-07 南京工业大学 Mechanical arm motion planning method for intelligent manufacturing
CN110895810B (en) * 2019-10-24 2022-07-05 中科院广州电子技术有限公司 Chromosome image example segmentation method and device based on improved Mask RCNN
CN110861083B (en) * 2019-10-25 2020-11-24 广东省智能制造研究所 Robot teaching method and device, storage medium and robot
CN110666804B (en) * 2019-10-31 2021-07-13 福州大学 Motion planning method and system for cooperation of double robots
CN110926852B (en) * 2019-11-18 2021-10-22 迪普派斯医疗科技(山东)有限公司 Automatic film changing system and method for digital pathological section
CN111015655B (en) * 2019-12-18 2022-02-22 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
JP7463777B2 (en) * 2020-03-13 2024-04-09 オムロン株式会社 CONTROL DEVICE, LEARNING DEVICE, ROBOT SYSTEM, AND METHOD
CN111890353B (en) * 2020-06-24 2022-01-11 深圳市越疆科技有限公司 Robot teaching track reproduction method and device and computer readable storage medium
CN112183188B (en) * 2020-08-18 2022-10-04 北京航空航天大学 Method for simulating learning of mechanical arm based on task embedded network
CN112207835B (en) * 2020-09-18 2021-11-16 浙江大学 Method for realizing double-arm cooperative work task based on teaching learning
CN112605974A (en) * 2020-11-27 2021-04-06 广东省科学院智能制造研究所 Robot complex operation skill characterization method and system
CN113552871B (en) * 2021-01-08 2022-11-29 腾讯科技(深圳)有限公司 Robot control method and device based on artificial intelligence and electronic equipment
CN113043251B (en) * 2021-04-23 2023-07-07 江苏理工学院 Robot teaching reproduction track learning method
CN113125463B (en) * 2021-04-25 2023-03-10 济南大学 Teaching method and device for detecting weld defects of automobile hub
CN113344967B (en) * 2021-06-07 2023-04-07 哈尔滨理工大学 Dynamic target identification tracking method under complex background
CN113479405B (en) * 2021-07-19 2023-03-28 合肥哈工龙延智能装备有限公司 Control method for stably opening paper box by high-speed box packing machine
CN113547521A (en) * 2021-07-29 2021-10-26 中国科学技术大学 Method and system for autonomous grabbing and accurate moving of mobile robot guided by vision
CN113657551B (en) * 2021-09-01 2023-10-20 陕西工业职业技术学院 Robot grabbing gesture task planning method for sorting and stacking multiple targets
CN114227187B (en) * 2021-11-30 2023-03-28 浪潮(山东)计算机科技有限公司 Plug-in component mounting method and system and related assembly
CN114310918A (en) * 2022-03-14 2022-04-12 珞石(北京)科技有限公司 Mechanical arm track generation and correction method under man-machine cooperation
CN116061187B (en) * 2023-03-07 2023-06-16 睿尔曼智能科技(江苏)有限公司 Method for identifying, positioning and grabbing goods on goods shelves by composite robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2957397A2 (en) * 2014-06-20 2015-12-23 Ricoh Company, Ltd. Measurement system, object pickup system, measurement method, and carrier means
US9486918B1 (en) * 2013-03-13 2016-11-08 Hrl Laboratories, Llc System and method for quick scripting of tasks for autonomous robotic manipulation
CN109108978A (en) * 2018-09-11 2019-01-01 武汉科技大学 Three-dimensional space manipulator motion planning method based on study Generalization Mechanism
CN109108942A (en) * 2018-09-11 2019-01-01 武汉科技大学 The mechanical arm motion control method and system of the real-time teaching of view-based access control model and adaptive DMPS
CN109397285A (en) * 2018-09-17 2019-03-01 鲁班嫡系机器人(深圳)有限公司 A kind of assembly method, assembly device and assembly equipment
CN109711325A (en) * 2018-12-25 2019-05-03 华南农业大学 A kind of mango picking point recognition methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Teaching-Learning Methods for Robotic Arms Based on Dynamic Movement Primitives; Liu Chongde; China Excellent Doctoral and Master's Theses Full-text Database (Master's), Information Science and Technology Series; 2018-06-15; Abstract, pp. 34-37 and 43-62 *

Also Published As

Publication number Publication date
CN110026987A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN110026987B (en) Method, device and equipment for generating grabbing track of mechanical arm and storage medium
US11694432B2 (en) System and method for augmenting a visual output from a robotic device
CN109483534B (en) Object grabbing method, device and system
CN108284436B (en) Remote mechanical double-arm system with simulation learning mechanism and method
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
CN110385694A (en) Action teaching device, robot system and the robot controller of robot
Raessa et al. Teaching a robot to use electric tools with regrasp planning
WO2020180697A1 (en) Robotic manipulation using domain-invariant 3d representations predicted from 2.5d vision data
Zhou et al. Advanced robot programming: A review
Tian et al. Object grasping of humanoid robot based on YOLO
Lee et al. Sample-efficient learning of deformable linear object manipulation in the real world through self-supervision
Zhang et al. Robot programming by demonstration: A novel system for robot trajectory programming based on robot operating system
Liu et al. Understanding multi-modal perception using behavioral cloning for peg-in-a-hole insertion tasks
CN117103277A (en) Mechanical arm sensing method based on multi-mode data fusion
CN115338856A (en) Method for controlling a robotic device
Deng et al. A learning framework for semantic reach-to-grasp tasks integrating machine learning and optimization
Gao et al. Kinect-based motion recognition tracking robotic arm platform
Kim et al. Digital twin for autonomous collaborative robot by using synthetic data and reinforcement learning
Tang et al. A convenient method for tracking color-based object in living video based on ROS and MATLAB/Simulink
CN109934155B (en) Depth vision-based collaborative robot gesture recognition method and device
Fang et al. Learning from wearable-based teleoperation demonstration
Xin et al. Visual servoing of unknown objects for family service robots
WO2023100282A1 (en) Data generation system, model generation system, estimation system, trained model production method, robot control system, data generation method, and data generation program
Wang et al. DOREP 2.0: An Upgraded Version of Robot Control Teaching Experimental Platform with Reinforcement Learning and Visual Analysis
Joshi Antipodal Robotic Grasping using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant