CN109571487A - Vision-based robot demonstration learning method - Google Patents

Vision-based robot demonstration learning method

Info

Publication number
CN109571487A
CN109571487A
Authority
CN
China
Prior art keywords
robot
tool
coordinate system
teaching tool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811064626.3A
Other languages
Chinese (zh)
Other versions
CN109571487B (en)
Inventor
卢金燕
郭壮志
李小魁
黄全振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Institute of Engineering
Original Assignee
Henan Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Institute of Engineering filed Critical Henan Institute of Engineering
Priority to CN201811064626.3A priority Critical patent/CN109571487B/en
Publication of CN109571487A publication Critical patent/CN109571487A/en
Application granted granted Critical
Publication of CN109571487B publication Critical patent/CN109571487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0081 Programme-controlled manipulators with master teach-in means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The invention discloses a vision-based robot demonstration learning method that uses a teaching tool and a visual sensor to learn a demonstrated task. A demonstrator first holds the teaching tool and demonstrates the operation task; the visual sensor then acquires the image features of the teaching tool and, using the intrinsic parameters of the visual sensor, recovers the teaching trajectory of the tool during the demonstration. The robot is controlled to move accordingly, yielding the robot's end-effector trajectory over the demonstration; finally, Kalman filtering is applied to the end-effector trajectory to obtain the robot's learned trajectory, completing the learning of the demonstrated task. With a simple visual tool, the six-dimensional pose of the teaching tool is easy to extract, and demonstration learning runs in real time. The invention reduces the teaching difficulty for operators, so that even operators with no experience can perform robot demonstration teaching.

Description

Vision-based robot demonstration learning method
Technical field
The invention belongs to the field of robot control, and more particularly relates to a vision-based robot demonstration learning method.
Background technique
A robot is a mechanical device that performs repetitive operations under programmed control; its main task is to replace humans in repetitive manual operations carried out in poor or hazardous environments. With the continuous development of technology, robots can already replace humans in many fields for heavy, complex, and dangerous work. They improve operating efficiency, reduce operating risk, and are widely used in industrial production such as welding and assembly.
However, most current robots work in spaces separated from humans, and humans can only obtain a specific trajectory through a teach pendant or programming. This approach requires the operator to be familiar with the robot's operating system in advance and to have some programming ability. Moreover, with the operator and the robot in different spaces, operation is difficult, accuracy is low, and the process is time-consuming and inefficient.
To improve the autonomy of robot behavior and lower the barrier for laymen to take part in robot control, demonstration learning (also known as learning from demonstration) has emerged. In demonstration learning, the robot learns a motion control strategy by "observing" the motion of a demonstrator (a person or another robot), thereby acquiring motor skills and generating autonomous behavior like a human. Rozo et al. used a demonstration learning method to enable a six-degree-of-freedom industrial robot to autonomously complete a force-controlled ball-in-box task (Rozo L., Jiménez P., Torras C. A robot learning from demonstration framework to perform force-based manipulation tasks [J]. Intelligent Service Robotics, 2013, 6(1): 33-51). During the demonstration phase, the six-dimensional force vector at the manipulator end and the corresponding joint angular velocities are collected; during the reproduction phase, the learned model outputs joint angular velocities from the current end-effector information, driving the manipulator to control the motion of a small ball inside a box until it drops into a hole. This method requires modeling the action sequence with a hidden Markov model, which is computationally expensive and has limited real-time performance. Liu Kun et al., taking the Universal Robot as the research object, sensed the operator's teaching force through a force/torque sensor, collected the force/torque analog voltage signals with a data acquisition card, converted them to force/torque values on a host computer, and then performed force-to-position conversion, so that the robot learned the operator's motion (Liu Kun, Li Shiqi, Wang Baoxiang. Research on a direct teaching system based on the UR robot. Science Technology and Engineering, 2015, 15(28): 22-26). This method neither filters the force sensor signals nor compensates for temperature, and human teaching motion fluctuates considerably, so the teaching accuracy is not high and the robot's learning accuracy is hard to guarantee. Wang Zhaoyang proposed a Kinect-based demonstration learning method for a humanoid manipulator: human motion information is captured with a Kinect camera, a mapping model between the human arm and the robot is established, and the robot learns the human arm motion (Wang Zhaoyang. Research on Kinect-based demonstration learning for a humanoid manipulator [D]. Master's thesis, Heilongjiang: Harbin Institute of Technology, 2017). This method tracks arm motion through the Kinect's human motion capture function, but the data collected by the somatosensory device is rather noisy, which easily makes the learned motion trajectory unstable.
Summary of the invention
In view of the above background, the present invention provides a vision-based robot demonstration learning method. The method comprises the following steps:
Step S0: a demonstrator holds the teaching tool and demonstrates the operation task that the robot is to learn;
Step S1: acquire images of the teaching tool during the demonstration with a visual sensor, and extract the feature information of the teaching tool from the acquired visual images;
Step S2: from the image features of S1 and the intrinsic parameters of the visual sensor, obtain the pose information of the teaching tool in the camera coordinate system;
Step S3: from the relationship between the camera coordinate system and the robot coordinate system, and the pose of the teaching tool in the camera coordinate system from S2, obtain the pose information of the teaching tool in the robot coordinate system;
Step S4: from the pose of the teaching tool in the robot coordinate system from S3, obtain the motion adjustment for the robot's next step, control the robot's motion, and record the robot's end-effector pose;
Step S5: repeat steps S0 to S4 until the demonstration of the operation task ends, obtaining the robot motion trajectory over the entire demonstration;
Step S6: apply Kalman filtering to the robot end-effector trajectory of S5 to obtain the robot's learned trajectory, and send the learned trajectory to the robot to reproduce the demonstrated content.
Further, the visual sensor is an RGB-D camera, and the teaching tool is a cross frame; one small ball is fixed at each of the upper end, left end, right end, and center of the cross, and the four balls differ in color.
Further, the image features of the teaching tool in step S1 are obtained as follows:
From the acquired visual image, color segmentation is used to obtain the image region of each of the four balls; the pixels of each ball are then extracted within its region, yielding the feature information of the teaching tool: the image coordinates of the ball centers (ui, vi) (i = 1, 2, 3, 4) and the depths of the ball centers zi (i = 1, 2, 3, 4).
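The patent gives no code for this step; the following is a minimal sketch of the feature extraction, assuming OpenCV, a depth image aligned to the color image, and illustrative HSV thresholds for the four ball colors (the ranges and dictionary keys are assumptions, not from the source):

```python
import cv2
import numpy as np

# Illustrative HSV ranges for the four differently colored balls (assumed values).
BALL_HSV_RANGES = {
    "center": ((0, 120, 70), (10, 255, 255)),     # e.g. a red center ball
    "top":    ((50, 100, 70), (70, 255, 255)),    # e.g. green
    "left":   ((100, 120, 70), (130, 255, 255)),  # e.g. blue
    "right":  ((20, 120, 70), (35, 255, 255)),    # e.g. yellow
}

def extract_tool_features(bgr, depth):
    """Return {name: (u_i, v_i, z_i)}: ball-center image coordinates and depths."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    features = {}
    for name, (lo, hi) in BALL_HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        m = cv2.moments(mask)
        if m["m00"] < 1e-6:
            continue                      # this ball is not visible in the frame
        u = m["m10"] / m["m00"]           # centroid column of the segmented region
        v = m["m01"] / m["m00"]           # centroid row
        z = float(depth[int(v), int(u)])  # depth of the ball center (aligned map)
        features[name] = (u, v, z)
    return features
```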
Further, the pose information of the teaching tool in the camera coordinate system in step S2 is computed as follows:
Take the center of the ball at the middle of the teaching tool as the coordinate origin, the right end of the cross as the positive X axis, and the upper end of the cross as the positive Y axis to establish the teaching tool coordinate system. From the feature information of S1, the position [px, py, pz]^T of the teaching tool coordinate system in the camera coordinate system is

[px, py, pz]^T = z0 · Tin^{-1} · [u0, v0, 1]^T (1)

where Tin is the intrinsic parameter matrix of the visual sensor, (u0, v0) is the image coordinate of the center ball, and z0 is the depth of the center ball.
From the feature information of S1 and formula (1), the coordinates of the three balls at the top, left end, and right end in the camera coordinate system are obtained. From the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y, and Z axes in the camera coordinate system follow; combined with the position vector [px, py, pz]^T, the pose matrix Tc of the teaching tool in the camera coordinate system is

Tc = [n o a p; 0 0 0 1], p = [px, py, pz]^T (2)
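As a sketch of formulas (1) and (2) (one plausible reading of the construction, not code from the patent), each ball is back-projected with the pinhole model and the tool frame is assembled from the ball positions; taking the Z axis to complete a right-handed frame is an assumption:

```python
import numpy as np

def pixel_to_cam(u, v, z, K):
    """Formula (1): back-project pixel (u, v) at depth z with intrinsic matrix K."""
    return z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

def tool_pose_in_camera(center, right, top, K):
    """Formula (2): pose matrix Tc of the teaching tool frame in the camera frame.

    center, right, top are the (u, v, z) features of the corresponding balls.
    """
    p = pixel_to_cam(*center, K)
    n = pixel_to_cam(*right, K) - p   # X axis: center ball -> right-end ball
    o = pixel_to_cam(*top, K) - p     # Y axis: center ball -> upper-end ball
    n /= np.linalg.norm(n)
    o /= np.linalg.norm(o)
    a = np.cross(n, o)                # Z axis completes a right-handed frame
    a /= np.linalg.norm(a)
    o = np.cross(a, n)                # re-orthogonalize Y against measurement noise
    Tc = np.eye(4)
    Tc[:3, 0], Tc[:3, 1], Tc[:3, 2], Tc[:3, 3] = n, o, a, p
    return Tc
```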
Further, the pose information of the teaching tool in the robot coordinate system in step S3 is obtained as follows:
From the pose matrix Tc of the teaching tool in the camera coordinate system from S2, and the relational matrix Tm between the visual sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is
T = Tc Tm (3)
By a general rotation transformation, the pose matrix T of formula (3) can be equivalently transformed into the six-dimensional pose vector [dx, dy, dz, rx, ry, rz]^T.
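The patent does not name the "general rotation transformation"; a common convention that yields a six-dimensional vector [dx, dy, dz, rx, ry, rz]^T is the axis-angle (rotation vector) form. A sketch under that assumption:

```python
import numpy as np

def pose_matrix_to_vector(T):
    """Convert a 4x4 pose matrix to [dx, dy, dz, rx, ry, rz]^T, with (rx, ry, rz)
    an axis-angle rotation vector (assumed convention; the angle-pi singularity
    is ignored in this sketch)."""
    R, t = T[:3, :3], T[:3, 3]
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        r = np.zeros(3)               # no rotation
    else:
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
        r = angle * axis
    return np.concatenate([t, r])
```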
Further, the motion adjustment for the robot's next step in step S4 is obtained as follows:
Using formula (3), the current pose [dx, dy, dz, rx, ry, rz]^T of the teaching tool in the robot coordinate system is obtained. Taking the features at the start of the demonstration as the initial features, the initial pose [dx0, dy0, dz0, rx0, ry0, rz0]^T of the teaching tool in the robot coordinate system is obtained. From the current pose and the initial pose, the pose change of the teaching tool in the robot coordinate system is

[dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (4)

Accordingly, the motion adjustment [x, y, z, θx, θy, θz]^T for the robot's next step is

[x, y, z, θx, θy, θz]^T = λp [dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (5)

where λp is a regulation coefficient.
The motion adjustment of formula (5) is sent to the robot to control its motion, and the robot's end-effector pose J after the motion is recorded.
Further, the robot motion trajectory over the entire demonstration in step S5 is obtained as follows:
Steps S0 to S4 are repeated in each control cycle, and the robot's end-effector pose is recorded. When the demonstration of the operation task ends, the robot motion trajectory is
W=(J0,J1,…,Jm) (6)
where m is the number of control cycles in the demonstration process.
Further, the robot's learned trajectory in step S6 is obtained as follows:
The prediction model of the Kalman filter is established:

Ĵi+1 = Ĵi + Ki+1(Ji+1 - Ĵi) (7)

where Ĵi+1 is the robot pose estimate at step i+1, Ki+1 is the Kalman gain at step i+1, and Ji+1 is the true robot pose at step i+1.
The Kalman gain is updated as follows:
Ki+1=(Pi+Q)/(Pi+Q+R) (8)
where Pi is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true (measured) values.
The variance of the estimate is updated as follows:
Pi+1=(1-Ki+1)Pi (9)
From the robot motion trajectory W of S5 and formulas (7)-(9), Kalman filtering is applied to the trajectory, giving the robot's learned trajectory

L = (Ĵ0, Ĵ1, …, Ĵm) (10)
The learned trajectory L is sent to the robot, realizing the reproduction of the demonstrated task.
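A minimal sketch of the scalar Kalman filter of formulas (7)-(9), applied independently to each pose dimension; the per-dimension treatment and the noise variances Q, R, P0 are assumptions, not values from the patent:

```python
import numpy as np

def kalman_smooth(W, Q=1e-4, R=1e-2, P0=1.0):
    """Filter a trajectory W of shape (m+1, 6) with formulas (7)-(9).

    Q: variance of the Gaussian (process) noise; R: variance of the measured
    poses; P0: initial estimate variance. All three values are illustrative.
    """
    W = np.asarray(W, dtype=float)
    L = np.empty_like(W)
    L[0] = W[0]                                   # the estimate starts at J_0
    P = P0
    for i in range(len(W) - 1):
        K = (P + Q) / (P + Q + R)                 # formula (8): Kalman gain
        L[i + 1] = L[i] + K * (W[i + 1] - L[i])   # formula (7): pose estimate
        P = (1.0 - K) * P                         # formula (9): estimate variance
    return L                                      # formula (10): learned trajectory
```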
From the above technical solution, the invention has the following advantages: traditional teaching techniques such as teach pendants and programming place high demands on the operator, and the teaching process is cumbersome, time-consuming, and inefficient. Most current demonstration learning methods use force/torque sensors, which are costly; their acquisition process is complicated, and the collected data requires temperature compensation. Methods that perform demonstration learning with a somatosensory camera obtain human motion information more easily, but the learning effect is limited by the camera's motion capture accuracy.
To improve the autonomy of robot behavior and lower the barrier for laymen to take part in robot control, the present invention addresses robot demonstration learning: a demonstrator holds the teaching tool and demonstrates the operation task to be learned, a visual sensor acquires images of the teaching tool during the demonstration, the motion information of the teaching tool is extracted, and the robot thus learns the demonstrated task.
The visual sensor and teaching tool used by the invention are inexpensive. Since the state at the start of the demonstration is taken as the initial state, demonstration learning can start from any position and posture, which markedly improves demonstration learning efficiency. With a simple visual tool, the six-dimensional pose of the teaching tool is easy to extract, and demonstration learning runs in real time. The invention reduces the teaching difficulty for operators, so that even operators with no experience can perform robot demonstration teaching.
Detailed description of the invention
Fig. 1 is a flow chart of the vision-based robot demonstration learning method of the present invention.
Specific embodiment
The embodiments of the present invention are described in detail below with reference to the accompanying drawings: this embodiment is implemented on the premise of the technical solution of the present invention, with a detailed implementation and a specific operating process given, but the protection scope of the present invention is not limited to the following embodiment.
The invention discloses a vision-based robot demonstration learning method: a demonstrator holds the teaching tool and demonstrates the operation task the robot is to learn, a visual sensor acquires images of the teaching tool during the demonstration, the motion information of the teaching tool is extracted, and the robot thereby learns the demonstrated task.
More specifically, as a preferred embodiment of the present invention, Fig. 1 shows the flow chart of the vision-based robot demonstration learning method of the invention. During demonstration learning, the demonstrator first holds the teaching tool and demonstrates the operation task; the visual sensor then obtains the image features of the teaching tool during the demonstration and, from its intrinsic parameters, recovers the teaching trajectory of the tool in the camera coordinate system. The teaching trajectory is finally converted into a robot end-effector trajectory, which is Kalman-filtered to obtain the robot's learned trajectory, so that the robot learns the demonstrated task. The method comprises the following steps:
Step 1: a demonstrator holds the teaching tool and demonstrates the operation task the robot is to learn; a visual sensor acquires images of the teaching tool during the demonstration, and the feature information of the teaching tool is extracted from the acquired visual images;
Step 2: from the image features of the first step and the intrinsic parameters of the visual sensor, the pose information of the teaching tool in the camera coordinate system is obtained;
Step 3: from the relationship between the camera coordinate system and the robot coordinate system, and the pose of the teaching tool in the camera coordinate system from the second step, the pose information of the teaching tool in the robot coordinate system is obtained;
Step 4: from the pose of the teaching tool in the robot coordinate system from the third step, the motion adjustment for the robot's next step is obtained; the robot's motion is controlled and its end-effector pose is recorded;
Step 5: the first through fourth steps are repeated until the demonstration of the operation task ends, giving the robot motion trajectory over the entire demonstration;
Step 6: Kalman filtering is applied to the robot end-effector trajectory of the fifth step to obtain the robot's learned trajectory, which is sent to the robot to reproduce the demonstrated content (a combined sketch of these six steps is given below).
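Putting the six steps together, a demonstration-learning loop might look like the following sketch; it reuses the helper functions sketched above, and the camera and robot interfaces (camera.read, robot.move_relative, robot.end_pose, robot.execute_trajectory, robot.demo_finished) are hypothetical placeholders, not APIs from the patent:

```python
import numpy as np

def demonstration_learning(camera, robot, K, Tm, lambda_p=0.5):
    """One demonstration session, steps 1-6 of the method (sketch).

    K is the camera intrinsic matrix; Tm is the camera-to-robot relational
    matrix of formula (3); lambda_p is the regulation coefficient.
    """
    W, initial = [], None
    while not robot.demo_finished():              # step 5: loop per control cycle
        bgr, depth = camera.read()
        f = extract_tool_features(bgr, depth)     # step 1: ball features
        Tc = tool_pose_in_camera(f["center"], f["right"], f["top"], K)  # step 2
        T = Tc @ Tm                               # step 3: formula (3)
        pose = pose_matrix_to_vector(T)
        if initial is None:
            initial = pose                        # demonstration start = initial pose
        robot.move_relative(lambda_p * (pose - initial))  # step 4: formulas (4)-(5)
        W.append(robot.end_pose())
    L = kalman_smooth(np.array(W))                # step 6: formulas (7)-(10)
    robot.execute_trajectory(L)                   # reproduce the demonstration
    return L
```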
The first step is specifically as follows:
From the teaching tool image acquired by the visual sensor, color segmentation is used to obtain the image region of each of the four balls; the pixels of each ball are then extracted within its region, yielding the feature information of the teaching tool: the image coordinates of the ball centers (ui, vi) (i = 1, 2, 3, 4) and the depths of the ball centers zi (i = 1, 2, 3, 4).
The second step is specifically as follows:
From the feature information of the first step, formula (1) gives the position [px, py, pz]^T of the teaching tool coordinate system in the camera coordinate system. From the definition of the teaching tool coordinate system, its orientation in the camera coordinate system is obtained, and hence the pose matrix of the teaching tool in the camera coordinate system shown in formula (2).
Formulas (1) and (2) are obtained by the following steps:
Take the center of the ball at the middle of the teaching tool as the coordinate origin, the right end of the cross as the positive X axis, and the upper end of the cross as the positive Y axis to establish the teaching tool coordinate system. From the feature information of the first step, the position [px, py, pz]^T of the teaching tool coordinate system in the camera coordinate system is

[px, py, pz]^T = z0 · Tin^{-1} · [u0, v0, 1]^T (1)

where Tin is the intrinsic parameter matrix of the visual sensor, (u0, v0) is the image coordinate of the center ball, and z0 is the depth of the center ball.
From the feature information of the first step and formula (1), the coordinates of the three balls at the top, left end, and right end in the camera coordinate system are obtained. From the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y, and Z axes in the camera coordinate system follow; combined with the position vector [px, py, pz]^T, the pose matrix Tc of the teaching tool in the camera coordinate system is

Tc = [n o a p; 0 0 0 1], p = [px, py, pz]^T (2)
The third step is specifically as follows:
From the pose matrix of the teaching tool in the camera coordinate system from the second step and the relational matrix between the visual sensor and the robot coordinate system, formula (3) gives the pose of the teaching tool in the robot coordinate system.
Formula (3) is obtained by the following steps:
From the pose matrix Tc of the teaching tool in the camera coordinate system from the second step, and the relational matrix Tm between the visual sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is
T = Tc Tm (3)
By a general rotation transformation, the pose matrix T of formula (3) can be equivalently transformed into the six-dimensional pose vector [dx, dy, dz, rx, ry, rz]^T.
The fourth step is specifically as follows:
From the pose of the teaching tool in the robot coordinate system from the third step, formula (4) gives the pose change of the teaching tool in the robot coordinate system, and formula (5) gives the motion adjustment for the robot's next step; the robot's motion is controlled and its end-effector pose after the motion is recorded.
Formulas (4) and (5) are obtained by the following steps:
Taking the features at the start of the demonstration as the initial features, the initial pose [dx0, dy0, dz0, rx0, ry0, rz0]^T of the teaching tool in the robot coordinate system is obtained. From the current pose [dx, dy, dz, rx, ry, rz]^T and the initial pose, the pose change of the teaching tool in the robot coordinate system is

[dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (4)

Accordingly, the motion adjustment [x, y, z, θx, θy, θz]^T for the robot's next step is

[x, y, z, θx, θy, θz]^T = λp [dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (5)

where λp is a regulation coefficient.
The fifth step is specifically as follows:
The first through fourth steps are repeated in each control cycle, and the robot's end-effector pose is recorded. When the demonstration of the operation task ends, the robot motion trajectory shown in formula (6) is obtained.
W=(J0,J1,…,Jm) (6)
where m is the number of control cycles in the demonstration process.
The sixth step is specifically as follows:
Based on the robot motion trajectory obtained in the fifth step, the Kalman prediction model is established according to formula (7) and the Kalman gain is updated according to formulas (8) and (9); Kalman filtering is applied to the robot motion trajectory, giving the robot's learned trajectory shown in formula (10), which is sent to the robot to reproduce the demonstrated task.
The prediction model of the Kalman filter is established:

Ĵi+1 = Ĵi + Ki+1(Ji+1 - Ĵi) (7)

where Ĵi+1 is the robot pose estimate at step i+1, Ki+1 is the Kalman gain at step i+1, and Ji+1 is the true robot pose at step i+1.
The Kalman gain is updated as follows:
Ki+1=(Pi+Q)/(Pi+Q+R) (8)
where Pi is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true (measured) values.
The variance of the estimate is updated as follows:
Pi+1=(1-Ki+1)Pi (9)
From the robot motion trajectory W of the fifth step and formulas (7)-(9), Kalman filtering is applied to the trajectory, giving the robot's learned trajectory

L = (Ĵ0, Ĵ1, …, Ĵm) (10)
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A vision-based robot demonstration learning method, comprising the following steps:
Step S0: a demonstrator holds a teaching tool and demonstrates the operation task that the robot is to learn;
Step S1: acquire images of the teaching tool during the demonstration with a visual sensor, and extract the feature information of the teaching tool from the acquired visual images;
Step S2: from the image features of S1 and the intrinsic parameters of the visual sensor, obtain the pose information of the teaching tool in the camera coordinate system;
Step S3: from the relationship between the camera coordinate system and the robot coordinate system, and the pose of the teaching tool in the camera coordinate system from S2, obtain the pose information of the teaching tool in the robot coordinate system;
Step S4: from the pose of the teaching tool in the robot coordinate system from S3, obtain the motion adjustment for the robot's next step, control the robot's motion, and record the robot's end-effector pose;
Step S5: repeat steps S0 to S4 until the demonstration of the operation task ends, obtaining the robot motion trajectory over the entire demonstration;
Step S6: apply Kalman filtering to the robot end-effector trajectory of S5 to obtain the robot's learned trajectory, and send the learned trajectory to the robot to reproduce the demonstrated content.
2. The vision-based robot demonstration learning method according to claim 1, wherein the visual sensor is an RGB-D camera, the teaching tool is a cross frame, one small ball is fixed at each of the upper end, left end, right end, and center of the cross, and the four balls differ in color.
3. The vision-based robot demonstration learning method according to claim 1, wherein the image features of the teaching tool in step S1 are obtained as follows:
From the acquired visual image, color segmentation is used to obtain the image region of each of the four balls; the pixels of each ball are then extracted within its region, yielding the feature information of the teaching tool: the image coordinates of the ball centers (ui, vi) (i = 1, 2, 3, 4) and the depths of the ball centers zi (i = 1, 2, 3, 4).
4. The vision-based robot demonstration learning method according to claim 1, wherein the pose information of the teaching tool in the camera coordinate system in step S2 is computed as follows:
Take the center of the ball at the middle of the teaching tool as the coordinate origin, the right end of the cross as the positive X axis, and the upper end of the cross as the positive Y axis to establish the teaching tool coordinate system. From the feature information of S1, the position [px, py, pz]^T of the teaching tool coordinate system in the camera coordinate system is

[px, py, pz]^T = z0 · Tin^{-1} · [u0, v0, 1]^T (1)

where Tin is the intrinsic parameter matrix of the visual sensor, (u0, v0) is the image coordinate of the center ball, and z0 is the depth of the center ball.
From the feature information of S1 and formula (1), the coordinates of the three balls at the top, left end, and right end in the camera coordinate system are obtained. From the definition of the teaching tool coordinate system, the normalized direction vectors n, o, a of its X, Y, and Z axes in the camera coordinate system follow; combined with the position vector [px, py, pz]^T, the pose matrix Tc of the teaching tool in the camera coordinate system is

Tc = [n o a p; 0 0 0 1], p = [px, py, pz]^T (2)
5. The vision-based robot demonstration learning method according to claim 1, wherein the pose information of the teaching tool in the robot coordinate system in step S3 is obtained as follows:
From the pose matrix Tc of the teaching tool in the camera coordinate system from S2, and the relational matrix Tm between the visual sensor and the robot coordinate system, the pose matrix T of the teaching tool in the robot coordinate system is
T = Tc Tm (3)
By a general rotation transformation, the pose matrix T of formula (3) can be equivalently transformed into the six-dimensional pose vector [dx, dy, dz, rx, ry, rz]^T.
6. The vision-based robot demonstration learning method according to claim 1, wherein the motion adjustment for the robot's next step in step S4 is obtained as follows:
Using formula (3), the current pose [dx, dy, dz, rx, ry, rz]^T of the teaching tool in the robot coordinate system is obtained. Taking the features at the start of the demonstration as the initial features, the initial pose [dx0, dy0, dz0, rx0, ry0, rz0]^T of the teaching tool in the robot coordinate system is obtained. From the current pose and the initial pose, the pose change of the teaching tool in the robot coordinate system is

[dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (4)

Accordingly, the motion adjustment [x, y, z, θx, θy, θz]^T for the robot's next step is

[x, y, z, θx, θy, θz]^T = λp [dx - dx0, dy - dy0, dz - dz0, rx - rx0, ry - ry0, rz - rz0]^T (5)

where λp is a regulation coefficient.
The motion adjustment of formula (5) is sent to the robot to control its motion, and the robot's end-effector pose J after the motion is recorded.
7. The vision-based robot demonstration learning method according to claim 1, wherein the robot motion trajectory over the entire demonstration in step S5 is obtained as follows:
Steps S0 to S4 are repeated in each control cycle, and the robot's end-effector pose is recorded. When the demonstration of the operation task ends, the robot motion trajectory is
W=(J0,J1,…,Jm) (6)
where m is the number of control cycles in the demonstration process.
8. The vision-based robot demonstration learning method according to claim 1, wherein the robot's learned trajectory in step S6 is obtained as follows:
The prediction model of the Kalman filter is established:

Ĵi+1 = Ĵi + Ki+1(Ji+1 - Ĵi) (7)

where Ĵi+1 is the robot pose estimate at step i+1, Ki+1 is the Kalman gain at step i+1, and Ji+1 is the true robot pose at step i+1.
The Kalman gain is updated as follows:
Ki+1=(Pi+Q)/(Pi+Q+R) (8)
where Pi is the variance of the previous estimate, Q is the variance of the Gaussian noise, and R is the variance of the true (measured) values.
The variance of the estimate is updated as follows:
Pi+1=(1-Ki+1)Pi (9)
From the robot motion trajectory W of S5 and formulas (7)-(9), Kalman filtering is applied to the trajectory, giving the robot's learned trajectory

L = (Ĵ0, Ĵ1, …, Ĵm) (10)

The learned trajectory L is sent to the robot, realizing the reproduction of the demonstrated task.
CN201811064626.3A 2018-09-12 2018-09-12 Robot demonstration learning method based on vision Active CN109571487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811064626.3A CN109571487B (en) 2018-09-12 2018-09-12 Robot demonstration learning method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811064626.3A CN109571487B (en) 2018-09-12 2018-09-12 Robot demonstration learning method based on vision

Publications (2)

Publication Number Publication Date
CN109571487A (en) 2019-04-05
CN109571487B CN109571487B (en) 2020-08-28

Family

ID=65919729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811064626.3A Active CN109571487B (en) 2018-09-12 2018-09-12 Robot demonstration learning method based on vision

Country Status (1)

Country Link
CN (1) CN109571487B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135776A (en) * 2011-01-25 2011-07-27 解则晓 Industrial robot control system based on visual positioning and control method thereof
CN102581445A (en) * 2012-02-08 2012-07-18 中国科学院自动化研究所 Visual real-time deviation rectifying system and visual real-time deviation rectifying method for robot
CN105196292A (en) * 2015-10-09 2015-12-30 浙江大学 Visual servo control method based on iterative duration variation
CN106142092A (en) * 2016-07-26 2016-11-23 张扬 Method for teaching a robot based on stereo vision technology
CN106553195A (en) * 2016-11-25 2017-04-05 中国科学技术大学 Object 6-DOF localization method and system during industrial robot grasping
EP3366433A1 (en) * 2017-02-09 2018-08-29 Canon Kabushiki Kaisha Method of controlling robot, method of teaching robot, and robot system
CN107160364A (en) * 2017-06-07 2017-09-15 华南理工大学 Industrial robot teaching system and method based on machine vision
CN108161882A (en) * 2017-12-08 2018-06-15 华南理工大学 Robot teaching reproduction method and device based on augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ni Ziqiang: "Vision-guided industrial robot teaching and programming system", Journal of Beijing University of Aeronautics and Astronautics *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110065068A (en) * 2019-04-08 2019-07-30 浙江大学 Robot assembly operation demonstration programming method and device based on reverse engineering
CN110170995A (en) * 2019-05-09 2019-08-27 广西安博特智能科技有限公司 Rapid robot teaching method based on stereoscopic vision
CN110919626B (en) * 2019-05-16 2023-03-14 广西大学 Robot handheld teaching device and method based on stereoscopic vision
CN110919626A (en) * 2019-05-16 2020-03-27 广西大学 Robot handheld teaching device and method based on stereoscopic vision
CN112008692A (en) * 2019-05-31 2020-12-01 精工爱普生株式会社 Teaching method
CN110315544A (en) * 2019-06-24 2019-10-11 南京邮电大学 Robot manipulation learning method based on video image demonstration
CN110561430A (en) * 2019-08-30 2019-12-13 哈尔滨工业大学(深圳) robot assembly track optimization method and device for offline example learning
CN110561430B (en) * 2019-08-30 2021-08-10 哈尔滨工业大学(深圳) Robot assembly track optimization method and device for offline example learning
CN110587579A (en) * 2019-09-30 2019-12-20 厦门大学嘉庚学院 Kinect-based robot teaching programming guiding method
CN110480642A (en) * 2019-10-16 2019-11-22 遨博(江苏)机器人有限公司 Industrial robot and method for calibrating a user coordinate system using vision
CN111002289A (en) * 2019-11-25 2020-04-14 华中科技大学 Robot online teaching method and device, terminal device and storage medium
CN111002289B (en) * 2019-11-25 2021-08-17 华中科技大学 Robot online teaching method and device, terminal device and storage medium
CN110900609A (en) * 2019-12-11 2020-03-24 浙江钱江机器人有限公司 Robot teaching device and method thereof
CN111152230B (en) * 2020-04-08 2020-09-04 季华实验室 Robot teaching method, system, teaching robot and storage medium
CN111152230A (en) * 2020-04-08 2020-05-15 季华实验室 Robot teaching method, system, teaching robot and storage medium
CN112509392A (en) * 2020-12-16 2021-03-16 复旦大学 Robot behavior teaching method based on meta-learning

Also Published As

Publication number Publication date
CN109571487B (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN109571487A (en) A kind of robotic presentation learning method of view-based access control model
Qi et al. Contour moments based manipulation of composite rigid-deformable objects with finite time model estimation and shape/position control
CN108284436B (en) Remote mechanical double-arm system with simulation learning mechanism and method
CN109102525A Mobile robot following control method based on adaptive pose estimation
CN105512621A (en) Kinect-based badminton motion guidance system
CN106371442B Mobile robot control method based on tensor product model transformation
CN109108942A Manipulator motion control method and system based on visual real-time teaching and adaptive DMPs
CN105760894A Robot navigation method based on machine vision and machine learning
Li et al. Visual servoing of wheeled mobile robots without desired images
Li et al. Development of kinect based teleoperation of nao robot
Chen et al. Transferable active grasping and real embodied dataset
CN109636856B (en) Object six-dimensional pose information joint measurement method based on HOG feature fusion operator
Ikeuchi et al. Applying learning-from-observation to household service robots: three common-sense formulation
Gao et al. Kinect-based motion recognition tracking robotic arm platform
CN108621164A Tai chi push-hands robot based on depth camera
Wang et al. Design and implementation of humanoid robot behavior imitation system based on skeleton tracking
Yin et al. Monitoring-based visual servoing of wheeled mobile robots
CN113134839A (en) Robot precision flexible assembly method based on vision and force position image learning
Lang et al. Visual servoing with LQR control for mobile robots
CN114434441A (en) Mobile robot visual servo tracking control method based on self-adaptive dynamic programming
CN113492404B (en) Humanoid robot action mapping control method based on machine vision
CN112257655B (en) Method for robot to recognize human body sewing action
Lepora et al. Pose-based servo control with soft tactile sensing
Jayasurya et al. Gesture controlled AI-robot using Kinect
CN205870565U Remote somatosensory robot control system based on Kinect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant