CN109571487B - Robot demonstration learning method based on vision - Google Patents
- Publication number
- CN109571487B, CN201811064626.3A
- Authority
- CN
- China
- Prior art keywords
- robot
- coordinate system
- teaching tool
- pose
- demonstration
- Prior art date
- Legal status: Active (assumed, not a legal conclusion)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0081—Programme-controlled manipulators with master teach-in means
Abstract
The invention discloses a vision-based robot demonstration learning method that uses a teaching tool and a vision sensor to learn demonstrated tasks. First, a demonstrator holds the teaching tool and demonstrates an operation task; the vision sensor then acquires image features of the teaching tool, from which, together with the sensor's intrinsic parameters, the teaching trajectory of the tool during the demonstration is obtained. The robot is controlled to follow this motion, yielding the robot's end-effector trajectory for the demonstration, and finally Kalman filtering is applied to that trajectory to obtain the robot's learning trajectory, completing the learning of the demonstrated task. With a simple visual tool, the invention easily extracts the six-dimensional pose of the teaching tool and offers good real-time performance in both demonstration and learning. It also lowers the difficulty of teaching, so that even inexperienced operators can perform demonstration teaching of the robot.
Description
Technical Field
The invention belongs to the field of robot control, and particularly relates to a robot demonstration learning method based on vision.
Background
A robot is a mechanical device that performs repetitive work under program control; its main task is to replace humans in manual operations that are repetitive, hazardous, or carried out in poor environments. With the continuous development of technology, robots can now perform heavy, complex and dangerous activities in place of humans in many fields, improving operating efficiency and reducing risk, and they are widely used in industrial production such as welding and assembly.
However, most robots today work in a space separated from humans, who can only make the robot follow a specific trajectory through a teach pendant or programming. This approach requires the operator to be familiar with the robot's operating system in advance and to have some programming ability. Moreover, since the operator and the robot occupy different spaces, operation is difficult, accuracy is low, and the process is time-consuming and inefficient.
Demonstration learning, also called teaching by demonstration, has emerged to improve the autonomy of robot behavior and to lower the barrier for non-professionals to participate in robot control. In demonstration learning, the robot learns a motion control strategy by observing the motion behavior of a demonstrator (a human or another robot), thereby acquiring the motion skill and generating human-like autonomous behavior. Rozo et al. used a demonstration learning method to enable a six-degree-of-freedom industrial robot to autonomously complete a ball-in-box force-control task (Rozo L., Jiménez P., Torras C. A robot learning from demonstration framework to perform force-based manipulation tasks[J]. Intelligent Service Robotics, 2013, 6(1): 33-51). In the demonstration stage, the six-dimensional force vectors at the end of the manipulator and the corresponding joint velocities are collected; in the reproduction stage, the learned model outputs joint angular velocities for the current end-torque input, so that the manipulator drives a small ball in the box to move and drop into a hole. This method needs a hidden Markov model to model the action sequence, which is computationally expensive and has poor real-time performance. Liu Kun et al. took the Universal Robot as the research object, sensed the operator's teaching force through a force/torque sensor, collected the voltage analog signals of the force/torque with a data acquisition card, converted them into force/torque on the host computer, and then performed force-to-position conversion to let the robot learn the operator's actions (Liu Kun, Li Shi, Wang Bao Xiang. Research on a direct teaching system based on the UR robot. Science Technology and Engineering, 2015, 15(28): 22-26).
In this method, the signals collected by the force sensor are neither filtered nor temperature-compensated, and human teaching force fluctuates greatly, so the teaching accuracy is low and the robot's learning accuracy is hard to guarantee. Another approach uses a Kinect camera to obtain human motion information, establishes a mapping model between the human arm and the robot, and thereby learns human arm motions (Research on Kinect-based demonstration learning for a humanoid manipulator [D]. Master's thesis, Heilongjiang: Harbin University, 2017). This method tracks arm movement with the Kinect's human motion capture function, but the data collected by the motion-sensing device is noisy, so the learned motion trajectory is easily unstable.
Disclosure of Invention
Based on the above background, the present invention provides a robot demonstration learning method based on vision. The method comprises the following steps:
step S0: a demonstrator holds a teaching tool to demonstrate an operation task to be learned by the robot;
step S1: a visual sensor is used for collecting teaching tool images in the demonstration process, and the characteristic information of the teaching tool is extracted from the collected visual images;
step S2: obtaining pose information of the teaching tool in a camera coordinate system according to the image characteristics of S1 and the internal parameters of the vision sensor;
step S3: obtaining the pose information of the teaching tool in the robot coordinate system according to the relation between the camera coordinate system and the robot coordinate system and the pose information of the teaching tool in the camera coordinate system in S2;
step S4: obtaining the next motion adjustment amount of the robot according to the pose of the teaching tool of S3 in the robot coordinate system, controlling the robot to move, and recording the terminal pose of the robot;
step S5: repeating the steps S0 to S4 until the demonstration of the operation task is finished, and obtaining the motion track of the robot in the whole demonstration process;
step S6: and performing Kalman filtering on the robot tail end track of the S5 to obtain a learning track of the robot, and sending the learning track to the robot to realize reproduction of the demonstration content.
Further, the vision sensor is an RGB-D camera, and the teaching tool is a cross with a small ball fixed at its upper end, left end, right end and center; the four balls have different colors.
Further, the image features of the teaching tool described in step S1 are as follows:
based on the collected visual image, obtaining image areas of the four small balls by utilizing color segmentation, then respectively extracting pixel points of the small balls in each area, and further obtaining characteristic information of the teaching tool, wherein the characteristic information comprises the sphere center image coordinates (u) of the four small ballsi,vi) (i ═ 1,2,3,4), and four smallDepth z of center of spherei(i=1,2,3,4)。
Further, the pose information of the teaching tool in the camera coordinate system in step S2 is calculated as follows:
and establishing a coordinate system of the teaching tool by taking the center of a sphere of the small sphere in the center of the teaching tool as an origin of coordinates, taking the right end of the cross as the positive direction of an X axis and taking the upper end of the cross as the positive direction of a Y axis. From the characteristic information of S1, the position [ p ] of the teaching tool coordinate system in the camera coordinate system is obtainedx,py,pz]TThe following were used:
wherein, TinIs an intrinsic parameter of the vision sensor (u)0,v0) Is the image coordinate of the central sphere, z0Is the depth of the central sphere.
Using formula (1) and the characteristic information of S1, the coordinates of the three balls at the upper, left and right ends in the camera coordinate system are obtained. From the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y and Z axes in the camera coordinate system can be obtained; combined with the position vector [p_x, p_y, p_z]^T, the pose matrix T_c of the teaching tool in the camera coordinate system is obtained as follows:

T_c = [ n  o  a  p ; 0  0  0  1 ],  with p = [p_x, p_y, p_z]^T    (2)
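Formula (1) backprojects each ball center through the camera intrinsics; the sketch below builds T_c from three backprojected centers. The function name and the dictionary keys ('center', 'right', 'up') are invented for illustration:

```python
import numpy as np

def tool_pose_in_camera(T_in, centers):
    """Build the 4x4 pose matrix T_c of the teaching tool in the camera frame.

    T_in:    3x3 camera intrinsic matrix.
    centers: dict mapping 'center', 'right', 'up' to image features (u, v, z).
    """
    def backproject(u, v, z):
        # formula (1): p = z * T_in^{-1} [u, v, 1]^T
        return z * np.linalg.inv(T_in) @ np.array([u, v, 1.0])

    p  = backproject(*centers['center'])
    pr = backproject(*centers['right'])
    pu = backproject(*centers['up'])

    n = (pr - p) / np.linalg.norm(pr - p)    # X axis: toward the right ball
    o = (pu - p) / np.linalg.norm(pu - p)    # Y axis: toward the upper ball
    a = np.cross(n, o)                       # Z axis completes the frame
    T_c = np.eye(4)
    T_c[:3, 0], T_c[:3, 1], T_c[:3, 2], T_c[:3, 3] = n, o, a, p
    return T_c
```

In practice the measured X and Y directions are not exactly orthogonal, so a production version would re-orthogonalize (e.g. o = cross(a, n)); the sketch omits that step.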
further, the pose information of the teaching tool in the robot coordinate system in step S3 is as follows:
According to the pose matrix T_c of the teaching tool in the camera coordinate system from S2 and the relation matrix T_m between the vision sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is obtained as follows:
T = T_c · T_m    (3)
According to the general rotation transformation, the pose matrix T of formula (3) can be equivalently transformed into a six-dimensional pose vector [dx, dy, dz, r_x, r_y, r_z]^T.
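The patent does not state which rotation parameterization yields [r_x, r_y, r_z]; one common choice is the axis-angle (rotation-vector) form, e.g. as used by UR controllers. A sketch under that assumption:

```python
import numpy as np

def pose_to_vector(T):
    """Convert a 4x4 pose matrix into [dx, dy, dz, r_x, r_y, r_z],
    using the axis-angle (rotation-vector) form for the rotation part.
    (Assumed parameterization; the patent leaves this choice open.)"""
    R, t = T[:3, :3], T[:3, 3]
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        r = np.zeros(3)                      # no rotation
    else:
        # axis from the skew-symmetric part of R (not valid at angle = pi)
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2 * np.sin(angle))
        r = angle * axis
    return np.concatenate([t, r])
```

The angle = π case needs special handling (the skew-symmetric part vanishes); a library routine such as SciPy's Rotation class covers it robustly.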
Further, the next adjustment amount of the robot motion in step S4 is as follows:
The current pose [dx, dy, dz, r_x, r_y, r_z]^T of the teaching tool in the robot coordinate system is obtained with formula (3). Taking the features at the demonstration start moment as initial features, the initial pose [dx_0, dy_0, dz_0, r_x0, r_y0, r_z0]^T of the teaching tool in the robot coordinate system is obtained. From the current pose and the initial pose, the pose variation of the teaching tool in the robot coordinate system is:

[Δx, Δy, Δz, Δr_x, Δr_y, Δr_z]^T = [dx − dx_0, dy − dy_0, dz − dz_0, r_x − r_x0, r_y − r_y0, r_z − r_z0]^T    (4)

Therefore, the next motion adjustment amount [x, y, z, θ_x, θ_y, θ_z]^T of the robot is obtained as:

[x, y, z, θ_x, θ_y, θ_z]^T = λ_p · [Δx, Δy, Δz, Δr_x, Δr_y, Δr_z]^T    (5)
where λ_p is the adjustment coefficient.
The motion adjustment amount of formula (5) is sent to the robot, the robot is controlled to move, and the end-effector pose J after the motion is recorded.
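Formulas (4) and (5) amount to a scaled pose difference relative to the demonstration start. A minimal sketch, with an illustrative value for the adjustment coefficient λ_p (the patent does not give one):

```python
import numpy as np

def motion_adjustment(current, initial, lam=0.5):
    """Formulas (4)-(5): pose change of the teaching tool relative to the
    demonstration start, scaled by the adjustment coefficient lambda_p.
    `current` and `initial` are [dx, dy, dz, r_x, r_y, r_z] vectors;
    lam (lambda_p) = 0.5 is an illustrative value, not from the patent."""
    delta = np.asarray(current, float) - np.asarray(initial, float)  # formula (4)
    return lam * delta                                               # formula (5)
```

A smaller λ_p makes the robot track the teaching tool more conservatively; a value near 1 follows the demonstrated motion almost one-to-one.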
Further, the robot motion track of the whole demonstration process described in step S5 is as follows:
and repeating the steps S0 to S4 every control cycle, and recording the terminal pose of the robot. After the operation task demonstration is finished, the motion trail of the robot is obtained as follows:
W = (J_0, J_1, …, J_m)    (6)
where m is the number of control cycles of the demonstration process.
Further, the trajectory learned by the robot in step S6 is as follows:
A prediction model of Kalman filtering is established:

Ĵ_{i+1} = Ĵ_i + K_{i+1} · (J_{i+1} − Ĵ_i)    (7)

where Ĵ_{i+1} is the pose estimate of the robot at step i+1, K_{i+1} is the (i+1)-th Kalman gain coefficient, and J_{i+1} is the (i+1)-th true robot pose.
The kalman gain coefficient is updated as follows:
K_{i+1} = (P_i + Q) / (P_i + Q + R)    (8)
where P_i is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true values.
The variance of the estimates is calculated as follows:
P_{i+1} = (1 − K_{i+1}) · P_i    (9)
Using formulas (7) to (9), Kalman filtering is applied to the robot motion trajectory W of S5, yielding the learning trajectory L of the robot:

L = (Ĵ_0, Ĵ_1, …, Ĵ_m)    (10)
and sending the learning track L to the robot, so that the reproduction of the demonstration task can be realized.
Based on the above technical scheme, the invention has the following beneficial effects. Traditional teaching techniques such as teach pendants and programming place high demands on operators, and the teaching process is cumbersome, time-consuming and inefficient. Most current demonstration learning methods use force/torque sensors, which are expensive, have a complex acquisition process, and require temperature compensation of the collected data. Methods based on motion-sensing cameras acquire human motion information more easily, but their learning effect is limited by the cameras' motion capture accuracy.
Aiming at the demonstration learning of the robot, a demonstrator holds a teaching tool by hand to demonstrate an operation task to be learned by the robot, a vision sensor is used for collecting images of the teaching tool in the demonstration process, and motion information of the teaching tool is extracted to realize the learning of the robot to the demonstration task.
The visual sensor and the teaching tool used by the invention have low price and low cost. The invention takes the state of the demonstration starting moment as the initial state, and the demonstration learning can be started in any pose state, thereby greatly improving the demonstration learning efficiency. The invention can easily extract the six-dimensional pose information of the teaching tool by using a simple visual tool, and has good real-time performance of demonstration and learning. The invention reduces the teaching difficulty of the operator, and the inexperienced operator can also perform the demonstration teaching of the robot.
Drawings
Fig. 1 is a flow chart of a vision-based robot demonstration learning method of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the invention, with detailed implementation and specific operation processes given, but the protection scope of the invention is not limited to the following embodiments.
The invention discloses a robot demonstration learning method based on vision, which is characterized in that a demonstrator holds a teaching tool by hand to demonstrate an operation task to be learned by a robot, a vision sensor is utilized to collect images of the teaching tool in the demonstration process, and the motion information of the teaching tool is extracted to realize the learning of the robot to the demonstration task.
More specifically, as a preferred embodiment of the present invention, fig. 1 shows a flow chart of the vision-based robot demonstration learning method of the present invention. In the demonstration learning process, firstly a demonstrator holds a teaching tool to demonstrate an operation task, then a visual sensor acquires image characteristics of the teaching tool in the demonstration process, a teaching track of the teaching tool under a camera coordinate system is obtained according to internal parameters of the visual sensor, finally the teaching track is converted into a robot terminal track, Kalman filtering is carried out on the robot terminal track, a learning track of the robot is obtained, and therefore the robot learns the demonstration task. The method comprises the following steps:
the first step is as follows: a demonstrator holds a teaching tool by hand to demonstrate an operation task to be learned by a robot, a visual sensor is used for collecting an image of the teaching tool in the demonstration process, and characteristic information of the teaching tool is extracted from the collected visual image;
the second step is that: obtaining pose information of the teaching tool in a camera coordinate system according to the image characteristics of the first step and the internal parameters of the visual sensor;
the third step: according to the relation between the camera coordinate system and the robot coordinate system and the pose information of the teaching tool in the camera coordinate system in the second step, obtaining the pose information of the teaching tool in the robot coordinate system;
the fourth step: according to the pose of the teaching tool in the third step in the robot coordinate system, the next motion adjustment amount of the robot is obtained, the robot is controlled to move, and the terminal pose of the robot is recorded;
the fifth step: repeating the first step to the fourth step until the operation task demonstration is finished to obtain the motion track of the robot in the whole demonstration process;
and a sixth step: and performing Kalman filtering on the robot tail end track in the fifth step to obtain a learning track of the robot, and sending the learning track to the robot to realize reproduction of the demonstration content.
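The six steps above can be sketched end to end as follows. The teaching-tool poses are taken as given (standing in for the first to third steps), and the robot response is idealized, so this is a toy model of the loop under stated assumptions, not the patented controller:

```python
import numpy as np

def demonstration_learning(poses, lam=0.5, Q=1e-4, R=1e-2):
    """End-to-end sketch: given a sequence of teaching-tool poses in the
    robot frame, compute motion adjustments (fourth step), accumulate the
    end-effector trajectory W (fifth step), and Kalman-filter it into the
    learning trajectory L (sixth step). lam, Q, R are illustrative."""
    poses = np.asarray(poses, float)
    initial = poses[0]
    W = [initial.copy()]                     # J_0: start pose
    for p in poses[1:]:
        adj = lam * (p - initial)            # formulas (4)-(5)
        W.append(initial + adj)              # idealized robot response
    # Kalman filtering, formulas (7)-(9), elementwise
    L, P = [W[0]], 1.0
    for Ji in W[1:]:
        K = (P + Q) / (P + Q + R)
        L.append(L[-1] + K * (Ji - L[-1]))
        P = (1 - K) * P
    return np.array(W), np.array(L)
```

A real system would close the loop through the robot controller and re-observe the tool each cycle instead of assuming the commanded adjustment is executed exactly.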
The first step is as follows:
based on the teaching tool image collected by the visual sensor, the image areas of the four small balls are obtained by color segmentation, then the pixel points of the small balls are respectively extracted in each area, and further the characteristic information of the teaching tool is obtained, wherein the characteristic information comprises the sphere center image coordinates (u) of the four small ballsi,vi) (i ═ 1,2,3,4), and the sphere center depth z of the four spheresi(i=1,2,3,4)。
The second step is as follows:
Using formula (1) and the characteristic information of the first step, the position [p_x, p_y, p_z]^T of the teaching tool coordinate system in the camera coordinate system is obtained. According to the definition of the teaching tool coordinate system, the attitude of the teaching tool coordinate system in the camera coordinate system is obtained, and then its pose matrix in the camera coordinate system, as shown in formula (2).
Wherein the formula (1) and the formula (2) are obtained by the following specific steps:
and establishing a coordinate system of the teaching tool by taking the center of a sphere of the small sphere in the center of the teaching tool as an origin of coordinates, taking the right end of the cross as the positive direction of an X axis and taking the upper end of the cross as the positive direction of a Y axis. From the characteristic information of S1, the position [ p ] of the teaching tool coordinate system in the camera coordinate system is obtainedx,py,pz]TThe following were used:
wherein, TinIs an intrinsic parameter of the vision sensor (u)0,v0) Is the image coordinate of the central sphere, z0Is the depth of the central sphere.
Using formula (1) and the characteristic information of the first step, the coordinates of the three balls at the upper, left and right ends in the camera coordinate system are obtained. From the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y and Z axes in the camera coordinate system can be obtained; combined with the position vector [p_x, p_y, p_z]^T, the pose matrix T_c of the teaching tool in the camera coordinate system is obtained as follows:

T_c = [ n  o  a  p ; 0  0  0  1 ],  with p = [p_x, p_y, p_z]^T    (2)
the third step is as follows:
and (3) obtaining the pose of the teaching tool in the robot coordinate system by using a formula (3) according to the pose matrix of the teaching tool in the camera coordinate system and the relation matrix of the vision sensor and the robot coordinate system.
Wherein the formula (3) is obtained by the following specific steps:
According to the pose matrix T_c of the teaching tool in the camera coordinate system from the second step and the relation matrix T_m between the vision sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is obtained as follows:
T = T_c · T_m    (3)
the pose matrix T of equation (3) can be equivalently transformed into a six-dimensional pose vector [ dx, dy, dz, r, according to a common rotational transformationx,ry,rz]T。
The fourth step is as follows:
and (4) obtaining the pose variation of the teaching tool in the robot coordinate system according to the pose of the teaching tool in the third step in the robot coordinate system by using a formula (4). And (5) obtaining the next motion adjustment amount of the robot by using a formula (5), controlling the robot to move, and recording the terminal pose of the robot after the robot moves.
Wherein the formula (4) and the formula (5) are obtained by the following specific steps:
by taking the characteristics of the demonstration starting moment as initial characteristics, the initial pose [ dx ] of the teaching tool in the robot coordinate system can be obtained0,dy0,dz0,rx0,ry0,rz0]T. According to the current pose and the initial pose, obtaining the pose variation of the teaching tool in the robot coordinate system as follows:
therefore, the next motion adjustment amount [ x, y, z, theta ] of the robot is obtainedx,θy,θz]TThe following were used:
where λ_p is the adjustment coefficient.
The fifth step is as follows:
and repeating the first step to the fourth step in each control period, and recording the terminal pose of the robot. And (5) after the operation task demonstration is finished, obtaining the motion trail of the robot shown in the formula (6).
W = (J_0, J_1, …, J_m)    (6)
Where m is the number of control cycles of the demonstration process.
The sixth step is as follows:
and (3) establishing a Kalman filtering prediction model according to a formula (7) based on the robot motion track obtained in the fifth step, updating Kalman gain coefficients according to formulas (8) and (9), performing Kalman filtering on the robot motion track to obtain a robot learning track shown in a formula (10), and sending the learning track to the robot to realize reproduction of the demonstration task.
A prediction model of Kalman filtering is established:

Ĵ_{i+1} = Ĵ_i + K_{i+1} · (J_{i+1} − Ĵ_i)    (7)

where Ĵ_{i+1} is the pose estimate of the robot at step i+1, K_{i+1} is the (i+1)-th Kalman gain coefficient, and J_{i+1} is the (i+1)-th true robot pose.
The kalman gain coefficient is updated as follows:
K_{i+1} = (P_i + Q) / (P_i + Q + R)    (8)
where P_i is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true values.
The variance of the estimates is calculated as follows:
P_{i+1} = (1 − K_{i+1}) · P_i    (9)
Using formulas (7) to (9), Kalman filtering is applied to the robot motion trajectory W of the fifth step, yielding the learning trajectory L of the robot:

L = (Ĵ_0, Ĵ_1, …, Ĵ_m)    (10)
the above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (3)
1. A robot demonstration learning method based on vision comprises the following steps:
step S0: a demonstrator holds a teaching tool to demonstrate an operation task to be learned by the robot;
step S1: a visual sensor is used for collecting teaching tool images in the demonstration process, and the characteristic information of the teaching tool is extracted from the collected visual images;
step S2: obtaining pose information of the teaching tool in a camera coordinate system according to the image characteristics of S1 and the internal parameters of the vision sensor;
step S3: obtaining the pose information of the teaching tool in the robot coordinate system according to the relation between the camera coordinate system and the robot coordinate system and the pose information of the teaching tool in the camera coordinate system in S2;
step S4: obtaining the next motion adjustment amount of the robot according to the pose of the teaching tool of S3 in the robot coordinate system, controlling the robot to move, and recording the terminal pose of the robot;
step S5: repeating the steps S0 to S4 until the demonstration of the operation task is finished, and obtaining the motion track of the robot in the whole demonstration process;
step S6: performing Kalman filtering on the robot tail end track of the S5 to obtain a learning track of the robot, and sending the learning track to the robot to realize reproduction of the demonstration content;
the vision sensor is an RGB-D camera, the teaching tool is a cross, a small ball is fixed at the upper end, the left end, the right end and the center of the cross respectively, and the four small balls are different in color;
the image characteristics of the teaching tool described in step S1 are as follows:
based on the collected visual image, the image areas where the four balls are respectively located are obtained by color segmentation, and the pixel points of each ball are then extracted in each area to obtain the characteristic information of the teaching tool, comprising the sphere-center image coordinates (u_i, v_i) (i = 1, 2, 3, 4) and the sphere-center depths z_i (i = 1, 2, 3, 4) of the four balls;
The pose information of the teaching tool in the camera coordinate system described in step S2 is calculated as follows:
establishing a coordinate system of the teaching tool with the sphere center of the central ball as the coordinate origin, the right end of the cross as the positive X-axis direction and the upper end of the cross as the positive Y-axis direction; from the characteristic information of S1, the position [p_x, p_y, p_z]^T of the teaching tool coordinate system in the camera coordinate system is obtained as follows:

[p_x, p_y, p_z]^T = z_0 · T_in^(-1) · [u_0, v_0, 1]^T    (1)

wherein T_in is the intrinsic parameter matrix of the vision sensor, (u_0, v_0) is the image coordinate of the central sphere, and z_0 is the depth of the central sphere;
obtaining the coordinates of the three balls at the upper, left and right ends in the camera coordinate system with formula (1) and the characteristic information of S1; according to the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y and Z axes in the camera coordinate system can be obtained, and combined with the position vector [p_x, p_y, p_z]^T, the pose matrix T_c of the teaching tool in the camera coordinate system is obtained as follows:

T_c = [ n  o  a  p ; 0  0  0  1 ],  with p = [p_x, p_y, p_z]^T    (2)
the pose information of the teaching tool in the robot coordinate system described in step S3 is as follows:
according to the pose matrix T_c of the teaching tool in the camera coordinate system from S2 and the relation matrix T_m between the vision sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is obtained as follows:
T = T_c · T_m    (3)
according to the general rotation transformation, the pose matrix T of formula (3) can be equivalently transformed into a six-dimensional pose vector [dx, dy, dz, r_x, r_y, r_z]^T;
The next motion adjustment amount of the robot described in step S4 is as follows:
obtaining the current pose [dx, dy, dz, r_x, r_y, r_z]^T of the teaching tool in the robot coordinate system with formula (3); taking the features at the demonstration start moment as initial features, the initial pose [dx_0, dy_0, dz_0, r_x0, r_y0, r_z0]^T of the teaching tool in the robot coordinate system is obtained; from the current pose and the initial pose, the pose variation of the teaching tool in the robot coordinate system is obtained as:

[Δx, Δy, Δz, Δr_x, Δr_y, Δr_z]^T = [dx − dx_0, dy − dy_0, dz − dz_0, r_x − r_x0, r_y − r_y0, r_z − r_z0]^T    (4)

therefore, the next motion adjustment amount [x, y, z, θ_x, θ_y, θ_z]^T of the robot is obtained as:

[x, y, z, θ_x, θ_y, θ_z]^T = λ_p · [Δx, Δy, Δz, Δr_x, Δr_y, Δr_z]^T    (5)
wherein λ_p is the adjustment coefficient;
and sending the motion adjustment amount shown in formula (5) to the robot, controlling the robot to move, and recording the end-effector pose J after the robot moves.
2. The vision-based robot demonstration learning method of claim 1, wherein the robot motion trajectory of the whole demonstration process in step S5 is as follows:
repeating steps S0 to S4 every control cycle, and recording the end-effector pose of the robot; after the demonstration of the operation task is finished, the motion trajectory of the robot is obtained as follows:
W = (J_0, J_1, …, J_m)    (6)
where m is the number of control cycles of the demonstration process.
3. The vision-based robot demonstration learning method of claim 1, wherein the robot learning trajectory in step S6 is as follows:
establishing a prediction model of Kalman filtering:

Ĵ_{i+1} = Ĵ_i + K_{i+1} · (J_{i+1} − Ĵ_i)    (7)

wherein Ĵ_{i+1} is the pose estimate of the robot at step i+1, K_{i+1} is the (i+1)-th Kalman gain coefficient, and J_{i+1} is the (i+1)-th true robot pose;
the kalman gain coefficient is updated as follows:
K_{i+1} = (P_i + Q) / (P_i + Q + R)    (8)
wherein P_i is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true values;
the variance of the estimates is calculated as follows:
P_{i+1} = (1 − K_{i+1}) · P_i    (9)
performing Kalman filtering on the robot motion trajectory W of S5 using formulas (7) to (9), so as to obtain the learning trajectory L of the robot:

L = (Ĵ_0, Ĵ_1, …, Ĵ_m)    (10)
and sending the learning track L to the robot, so that the reproduction of the demonstration task can be realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811064626.3A CN109571487B (en) | 2018-09-12 | 2018-09-12 | Robot demonstration learning method based on vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811064626.3A CN109571487B (en) | 2018-09-12 | 2018-09-12 | Robot demonstration learning method based on vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109571487A CN109571487A (en) | 2019-04-05 |
CN109571487B true CN109571487B (en) | 2020-08-28 |
Family
ID=65919729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811064626.3A Active CN109571487B (en) | 2018-09-12 | 2018-09-12 | Robot demonstration learning method based on vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109571487B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110065068B (en) * | 2019-04-08 | 2021-04-16 | 浙江大学 | Robot assembly operation demonstration programming method and device based on reverse engineering |
CN110170995B (en) * | 2019-05-09 | 2022-09-23 | 广西安博特智能科技有限公司 | Robot rapid teaching method based on stereoscopic vision |
CN110919626B (en) * | 2019-05-16 | 2023-03-14 | 广西大学 | Robot handheld teaching device and method based on stereoscopic vision |
JP2020196060A (en) * | 2019-05-31 | 2020-12-10 | セイコーエプソン株式会社 | Teaching method |
CN110315544B (en) * | 2019-06-24 | 2022-10-14 | 南京邮电大学 | Robot operation learning method based on video image demonstration |
CN110561430B (en) * | 2019-08-30 | 2021-08-10 | 哈尔滨工业大学(深圳) | Robot assembly track optimization method and device for offline example learning |
CN110587579A (en) * | 2019-09-30 | 2019-12-20 | 厦门大学嘉庚学院 | Kinect-based robot teaching programming guiding method |
CN110480642A (en) * | 2019-10-16 | 2019-11-22 | 遨博(江苏)机器人有限公司 | Industrial robot and its method for utilizing vision calibration user coordinate system |
CN111002289B (en) * | 2019-11-25 | 2021-08-17 | 华中科技大学 | Robot online teaching method and device, terminal device and storage medium |
CN110900609A (en) * | 2019-12-11 | 2020-03-24 | 浙江钱江机器人有限公司 | Robot teaching device and method thereof |
CN111152230B (en) * | 2020-04-08 | 2020-09-04 | 季华实验室 | Robot teaching method, system, teaching robot and storage medium |
CN112509392B (en) * | 2020-12-16 | 2022-11-29 | 复旦大学 | Robot behavior teaching method based on meta-learning |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102135776A (en) * | 2011-01-25 | 2011-07-27 | 解则晓 | Industrial robot control system based on visual positioning and control method thereof |
CN102581445A (en) * | 2012-02-08 | 2012-07-18 | 中国科学院自动化研究所 | Visual real-time deviation rectifying system and visual real-time deviation rectifying method for robot |
CN105196292A (en) * | 2015-10-09 | 2015-12-30 | 浙江大学 | Visual servo control method based on iterative duration variation |
CN106142092A (en) * | 2016-07-26 | 2016-11-23 | 张扬 | A kind of method robot being carried out teaching based on stereovision technique |
CN106553195A (en) * | 2016-11-25 | 2017-04-05 | 中国科学技术大学 | Object 6DOF localization method and system during industrial robot crawl |
CN107160364A (en) * | 2017-06-07 | 2017-09-15 | 华南理工大学 | A kind of industrial robot teaching system and method based on machine vision |
CN108161882A (en) * | 2017-12-08 | 2018-06-15 | 华南理工大学 | A kind of robot teaching reproducting method and device based on augmented reality |
EP3366433A1 (en) * | 2017-02-09 | 2018-08-29 | Canon Kabushiki Kaisha | Method of controlling robot, method of teaching robot, and robot system |
Non-Patent Citations (1)
Title |
---|
Vision-guided teaching and programming system for industrial robots; Ni Ziqiang; Journal of Beijing University of Aeronautics and Astronautics; 2016-03-31; Vol. 42, No. 3; pp. 562-568 * |
Also Published As
Publication number | Publication date |
---|---|
CN109571487A (en) | 2019-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109571487B (en) | Robot demonstration learning method based on vision | |
Corke et al. | Real-time vision, tracking and control | |
Sun et al. | A review of robot control with visual servoing | |
CN106041927A (en) | Hybrid vision servo system and method combining eye-to-hand and eye-in-hand structures | |
WO2016193781A1 (en) | Motion control system for a direct drive robot through visual servoing | |
CN109079794B (en) | Robot control and teaching method based on human body posture following | |
CN111300384B (en) | Registration system and method for robot augmented reality teaching based on identification card movement | |
CN114912287A (en) | Robot autonomous grabbing simulation system and method based on target 6D pose estimation | |
Kimura et al. | Task-model based human robot cooperation using vision | |
CN113103230A (en) | Human-computer interaction system and method based on remote operation of treatment robot | |
CN112109074A (en) | Robot target image capturing method | |
CN107671838B (en) | Robot teaching recording system, teaching process steps and algorithm flow thereof | |
CN113858217B (en) | Multi-robot interaction three-dimensional visual pose perception method and system | |
CN109636856B (en) | Object six-dimensional pose information joint measurement method based on HOG feature fusion operator | |
Han et al. | Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning | |
Song et al. | On-line stable evolutionary recognition based on unit quaternion representation by motion-feedforward compensation | |
Gao et al. | Kinect-based motion recognition tracking robotic arm platform | |
Cai et al. | 6D image-based visual servoing for robot manipulators with uncalibrated stereo cameras | |
CN111283664A (en) | Registration system and method for robot augmented reality teaching | |
Lang et al. | Visual servoing with LQR control for mobile robots | |
CN211890823U (en) | Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera | |
Sanches et al. | Scalable, Intuitive Human to Robot Skill Transfer with Wearable Human Machine Interfaces: On Complex, Dexterous Tasks | |
CN113492404B (en) | Humanoid robot action mapping control method based on machine vision | |
Lei et al. | Multi-stage 3d pose estimation method of robot arm based on RGB image | |
Wang et al. | Visual servoing control of video tracking system for tracking a flying target |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||