CN108161882A - Augmented-reality-based robot teaching and playback method and device - Google Patents

Augmented-reality-based robot teaching and playback method and device

Info

Publication number
CN108161882A
Authority
CN
China
Prior art keywords
teaching
robot
augmented reality
data
demonstrator
Prior art date
Legal status
Granted
Application number
CN201711291805.6A
Other languages
Chinese (zh)
Other versions
CN108161882B (en)
Inventor
张平
陈偕权
杜广龙
陈晓丹
李方
陈明轩
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201711291805.6A priority Critical patent/CN108161882B/en
Publication of CN108161882A publication Critical patent/CN108161882A/en
Application granted granted Critical
Publication of CN108161882B publication Critical patent/CN108161882B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J3/00 Manipulators of master-slave type, i.e. both controlling unit and controlled unit perform corresponding spatial movements
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40519 Motion, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an augmented-reality-based robot teaching method and device. The method comprises the following steps: the hand pose acquired by a somatosensory device is filtered and the data optimized; the optimized data are sent over a wireless network to an augmented reality device, where they drive a virtual robot to reproduce the operator's taught trajectory; watching the virtual robot's reproduced trajectory through the augmented reality device, the demonstrator corrects taught points with larger errors by voice, modifying the earlier taught points and repeating the operation until the error of every taught point lies within the satisfactory limit, at which point teaching is complete; the final taught trajectory is converted into the joint angles needed to drive the real robot, so that the real robot reproduces the operator's taught path. The device of the invention comprises a somatosensory device, a PC, a microphone, an optical see-through augmented reality device, a WiFi router and a robot. The invention has the advantages of demanding little expertise from the demonstrator, avoiding damage to the working robot, and repeating tasks efficiently.

Description

An augmented-reality-based robot teaching and playback method and device
Technical field
The present invention relates to the field of industrial robot teaching, and more particularly to an augmented-reality-based online robot teaching and playback method and device.
Background technology
Since their birth in the 1960s, robots have gone through three generations: teach-and-playback robots, robots with perception, and intelligent robots; teach-and-playback robots nevertheless remain the robots most widely applied in industry.
The purpose of robot teaching is to obtain the poses a robot needs to complete its task; teaching methods can be divided into online teaching and offline teaching. The drawback of online teaching is that the process is cumbersome and time-consuming: the robot's pose must be adjusted repeatedly according to the task, timeliness is poor, and complicated operating paths are difficult to plan. Offline teaching uses computer graphics techniques to build geometric models of the robot and its working environment, performs trajectory planning and programming for the robot's task, runs a dynamic simulation of the result, and finally sends a taught path that meets the requirements to the robot controller. The drawback of offline teaching is that the modelling process is rather complicated, deviations between the model and the real environment must be corrected before playback, and more is demanded of the programmer.
With the development of computer hardware, virtual reality technology has been widely applied and has formed a new virtual teaching method in the field of robot teaching. Virtual teaching requires a CAD model that fully describes the robot and its working environment. Such CAD models do not simulate the real scene with high fidelity, and teaching a robot task in a loosely structured environment requires a great deal of calibration and workpiece-positioning work; these are the shortcomings of the virtual teaching mode.
In recent years, methods and devices that use augmented reality to improve the operability of robot teaching have been disclosed.
Chinese patent CN106363637A, "A rapid robot teaching method and device", discloses a rapid robot teaching method and device in an augmented reality environment: the robot teaching process is completed with gestures, the taught actions are named by voice, and the augmented reality device captures environmental information to form a three-dimensional scene shown on its display. In that invention, however, voice is used only to name taught actions; the accuracy of voice commands is not used to correct the errors of the gesture-teaching process, and the virtual-real superposition of augmented reality is used merely as a display, not exploited to reduce the damage mis-operations cause to the real robot and workpiece.
Chinese patent CN106863295A, "Robot system", discloses a robot system with an image display that uses augmented reality processing to superimpose the image of a virtual object on the image of the robot. The device can complete robot teaching without a robot end effector; however, achieving virtual-real superposition through image recognition in this way requires a large amount of computation.
In summary, robot teaching is currently in urgent need of a natural, efficient, safe and intuitive teaching programming method.
Summary of the invention
To overcome the above-mentioned deficiencies of the prior art, the present invention provides an augmented-reality-based robot teaching and playback method and device.
To achieve the object of the present invention, the following technical solution is adopted.
An augmented-reality-based robot teaching and playback method comprises the following steps:
S1: start the teaching mode; a somatosensory device acquires the position and posture of the hand;
S2: the hand pose obtained in S1 is filtered and the data are optimized;
S3: the optimized data are sent over a wireless network to an optical see-through augmented reality device, driving the virtual robot in the AR device to reproduce the operator's taught trajectory;
S4: while the virtual robot reproduces the trajectory, the demonstrator observes it through the augmented reality device; taught points whose error exceeds the anticipated error are adjusted by voice commands, the earlier taught points are modified, and the process repeats until every taught point's error lies within the anticipated error, at which point teaching is complete;
S5: the final taught trajectory is converted into the joint angles needed to drive the real robot and sent to the robot controller, so that the real working robot reproduces the operator's taught path.
Further, in step S1 the operator teaches the working pose along the real working path, and the somatosensory device (e.g. a depth camera such as Kinect or Leap Motion) acquires the position and attitude data of the demonstrator's hand at a fixed frequency as the pose at each taught point.
Specifically, the position of the tip of the demonstrator's right index finger serves as the taught point's position, and the angles between the reference frame (i.e. the world frame) and the normal vector of the plane formed by the palm centre, the index-finger root and the ring-finger root serve as the Euler angles of the taught point's posture. The calculation is as follows.
The world frame is a right-handed frame, with the x-axis horizontal to the right, the y-axis vertically upward, and the z-axis out of the page. Let the palm centre be point $A(x_1,y_1,z_1)$, the index-finger root point $B(x_2,y_2,z_2)$ and the ring-finger root point $C(x_3,y_3,z_3)$, and let the normal vector of the plane formed by these three non-collinear points be $\vec{n}=(n_x,n_y,n_z)$. Since the normal vector is perpendicular to every vector in the plane,

$$\vec{n}\cdot\overrightarrow{AB}=0,\qquad \vec{n}\cdot\overrightarrow{AC}=0 \tag{1}$$

Expanding,

$$\begin{cases} n_x(x_2-x_1)+n_y(y_2-y_1)+n_z(z_2-z_1)=0\\ n_x(x_3-x_1)+n_y(y_3-y_1)+n_z(z_3-z_1)=0 \end{cases} \tag{2}$$

Solving (2) yields the normal vector $\vec{n}=\overrightarrow{AB}\times\overrightarrow{AC}$.
The robot's posture is expressed in RPY (Roll, Pitch, Yaw) angles: the rotation about the world z-axis is Roll, denoted $\varphi_a$; the rotation about the world y-axis is Pitch, denoted $\varphi_o$; the rotation about the world x-axis is Yaw, denoted $\varphi_n$. The RPY angles are then obtained from the normal vector as

$$\varphi_a=\arctan\frac{n_y}{n_x},\qquad \varphi_o=\arctan\frac{n_z}{\sqrt{n_x^2+n_y^2}},\qquad \varphi_n=\arctan\frac{n_z}{n_y} \tag{3}$$
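The pose extraction above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the normal vector is the cross product of Eq. (2)'s solution, and the `atan2`-based RPY mapping is an assumed quadrant-safe reading of Eq. (3), since the original formula image is not recoverable.

```python
import math

def taught_point_pose(A, B, C):
    """Taught-point posture from three hand landmarks: palm centre A,
    index-finger root B, ring-finger root C (each an (x, y, z) tuple)."""
    ab = [B[i] - A[i] for i in range(3)]
    ac = [C[i] - A[i] for i in range(3)]
    # Normal vector n = AB x AC, which satisfies n.AB = 0 and n.AC = 0 (Eqs. 1-2)
    n = [ab[1] * ac[2] - ab[2] * ac[1],
         ab[2] * ac[0] - ab[0] * ac[2],
         ab[0] * ac[1] - ab[1] * ac[0]]
    nx, ny, nz = n
    roll = math.atan2(ny, nx)                    # phi_a, about world z (assumed)
    pitch = math.atan2(nz, math.hypot(nx, ny))   # phi_o, about world y (assumed)
    yaw = math.atan2(nz, ny)                     # phi_n, about world x (assumed)
    return n, (roll, pitch, yaw)

# A palm lying flat in the xy-plane gives a normal along +z and pitch of 90 deg
n, (roll, pitch, yaw) = taught_point_pose((0, 0, 0), (1, 0, 0), (0, 1, 0))
```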
Further, in step S2: the accuracy of gesture teaching depends entirely on the hand poses acquired by the gesture-capture system, the human hand has an intrinsic tremor for physiological reasons, and the measuring device also has measurement noise; the hand pose data obtained in step S1 are therefore filtered and optimized to obtain stable taught-point poses.
(1) Data filtering
The data obtained from the Kinect are filtered with an autoregressive moving-average filter. The autoregressive moving-average (ARMA) filter is a linear filter whose output is a weighted average of the current input, the previous $N$ inputs and the previous $M$ filter outputs:

$$y(k)=\sum_{i=0}^{N} b_i\,u(k-i)+\sum_{i=1}^{M} a_i\,y(k-i) \tag{4}$$

where the coefficients $a_i$ and $b_i$ are the $i$-th filter parameters. The first part on the right of (4) is the moving-average (MA) term, a low-pass filter that passes the DC component; the second part is the autoregressive (AR) term.
The acquired pose data are filtered according to (4) to obtain a smooth, stable taught trajectory.
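A single ARMA step per Eq. (4) can be sketched as follows; the coefficients and window length below are illustrative, not values from the patent.

```python
def arma_filter(u_hist, y_hist, b, a):
    """One step of the ARMA filter of Eq. (4).

    u_hist: current and previous inputs u(k), u(k-1), ..., newest first
    y_hist: previous outputs y(k-1), y(k-2), ..., newest first
    b, a:   MA coefficients b_0..b_N and AR coefficients a_1..a_M
    """
    ma = sum(bi * ui for bi, ui in zip(b, u_hist))  # moving-average (low-pass) term
    ar = sum(ai * yi for ai, yi in zip(a, y_hist))  # autoregressive term
    return ma + ar

# Smooth a noisy step signal with a pure 4-tap moving average (a = [])
data = [0.0, 0.0, 1.2, 0.9, 1.1, 1.0]
out = []
for k in range(len(data)):
    u_hist = data[k::-1][:4]  # u(k), u(k-1), ... limited to the window
    out.append(arma_filter(u_hist, out[::-1], [0.25] * 4, []))
```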
(2) Data optimization
The recursive least-squares (RLS) algorithm is used to eliminate the intrinsic tremor caused by the muscles of the human hand. The mathematical model of the data-optimization system is

$$A(z^{-1})\,y(k)=B(z^{-1})\,u(k)+w(k)$$

where

$$A(z^{-1})=1+a_1z^{-1}+\cdots+a_{n_a}z^{-n_a},\qquad B(z^{-1})=b_1z^{-1}+\cdots+b_{n_b}z^{-n_b}$$

$n_a$, $n_b$ are the orders of the data-optimization system model, $z^{-i}$ is the $i$-th power of the z-transform variable and represents a delay of $i$ steps, $a_i$, $b_i$ are the parameters to be estimated, $y(k)$ is the observed output of the system at time $k$, $u(k)$ is the sampled input at time $k$, and $w(k)$ is the random noise of the system at time $k$.
We define the observation data vector $\varphi(k-1)$ at time $k-1$, composed of the previous system outputs $y(k-1),\ldots,y(k-n_a)$ and system inputs $u(k-1),\ldots,u(k-n_b)$:

$$\varphi(k-1)=[-y(k-1),\ldots,-y(k-n_a),\,u(k-1),\ldots,u(k-n_b)]^{T} \tag{5}$$

($T$ denotes transposition; the negated outputs follow from moving the coefficients of $A(z^{-1})$ to the right-hand side of the model.)
The regression parameter vector $\theta$ composed of the system model parameters to be estimated is

$$\theta=[a_1,\ldots,a_{n_a},\,b_1,\ldots,b_{n_b}]^{T}$$

Let $\hat\theta(k)$ denote the estimate of $\theta$ at time $k$; the recursion is

$$\hat\theta(k)=\hat\theta(k-1)+K(k)\left[y(k)-\varphi^{T}(k-1)\,\hat\theta(k-1)\right] \tag{6}$$

$K(k)$ is the error gain at time $k$, computed as

$$K(k)=\frac{P(k-1)\,\varphi(k-1)}{1+\varphi^{T}(k-1)\,P(k-1)\,\varphi(k-1)} \tag{7}$$

$P(k)$ is the covariance update matrix at time $k$, computed as

$$P(k)=\left[I-K(k)\,\varphi^{T}(k-1)\right]P(k-1) \tag{8}$$

The basic calculation steps of the recursive least-squares algorithm are then:
1. determine the orders $n_a$, $n_b$ of the polynomials $A(z^{-1})$ and $B(z^{-1})$;
2. set the initial values $\hat\theta(0)$ and $P(0)$ of the recursion;
3. sample new observation data $y(k)$ and $u(k)$, and form the observation data vector $\varphi(k-1)$;
4. compute the current recursive parameter estimate $\hat\theta(k)$ with the RLS equations (6)-(8);
5. increment the sample count $k$ and return to step 3, continuing the loop until the following stopping criterion is met:

$$\max_i \left|\frac{\hat\theta_i^{(N+1)}-\hat\theta_i^{(N)}}{\hat\theta_i^{(N)}}\right|<\varepsilon_{RLS}$$

where $\hat\theta_i^{(N+1)}$ is the result of the $(N+1)$-th recursion for the $i$-th element of the parameter vector $\theta$, and $\varepsilon_{RLS}$ is a given positive number expressing the required precision.
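The RLS recursion of Eqs. (6)-(8) can be sketched as below. The first-order test system, its coefficients, and the initial covariance `1e6 * I` are illustrative assumptions, not values from the patent; the point is only that the recursion recovers the model parameters from input/output samples.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One RLS recursion: gain (Eq. 7), parameter update (Eq. 6), covariance (Eq. 8)."""
    K = P @ phi / (1.0 + phi @ P @ phi)          # Eq. (7)
    theta = theta + K * (y - phi @ theta)        # Eq. (6), driven by prediction error
    P = (np.eye(len(theta)) - np.outer(K, phi)) @ P  # Eq. (8)
    return theta, P

# Identify a noise-free first-order system y(k) = -a1*y(k-1) + b1*u(k-1)
# with true a1 = -0.5, b1 = 1.0 (n_a = n_b = 1).
rng = np.random.default_rng(0)
theta = np.zeros(2)       # [a1, b1], initial guess
P = 1e6 * np.eye(2)       # large initial covariance: little trust in the guess
y_prev, u_prev = 0.0, 0.0
for k in range(50):
    u = rng.standard_normal()
    y = 0.5 * y_prev + 1.0 * u_prev
    phi = np.array([-y_prev, u_prev])  # regressor of Eq. (5)
    theta, P = rls_step(theta, P, phi, y)
    y_prev, u_prev = y, u
# theta approaches [-0.5, 1.0]
```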
Further, in step S3 the data optimized in the previous step must be sent over a wireless network to an optical see-through augmented reality device (e.g. HoloLens); these data drive the end of the virtual robot in the optical see-through augmented reality device to move, completing the reproduction of the teaching task in the augmented reality environment.
Further, in step S3 the virtual robot in the augmented reality device must be modelled offline before the teaching mode is started.
Further, in step S4 the taught points with larger errors from gesture teaching are adjusted by voice commands, achieving accurate teaching. This is necessary because in step S1 the gesture-capture system acquires the demonstrator's hand pose at a fixed frequency and a human finger has a finite size, so the taught trajectory itself contains errors, which must be corrected by voice in step S4.
To analyse the teaching error quantitatively and provide a basis for accurate voice teaching, we consider only the position of the taught path and ignore the posture, so the error can be defined as

$$\varepsilon=\frac{1}{Num}\sum_{i=1}^{Num}\sqrt{(x_{ri}-x_{Ki})^2+(y_{ri}-y_{Ki})^2}$$

where $(x_{ri},y_{ri})$ and $(x_{Ki},y_{Ki})$ are, respectively, the two-dimensional coordinates of the $i$-th robot end-effector position and of the $i$-th point of the Kinect-captured taught path in the same coordinate system, and $Num$ is the number of taught points.
When the position error is within the anticipated error, i.e. $\varepsilon\le\varepsilon_0$, where $\varepsilon_0$ is the anticipated error, the teaching meets the requirement and the next taught point is processed.
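The error check can be sketched directly from the definition above; the error bound and its units are illustrative assumptions.

```python
import math

def teaching_error(robot_pts, kinect_pts):
    """Mean 2-D distance between the reproduced robot path and the
    Kinect-captured taught path (positions only, posture ignored)."""
    num = len(robot_pts)
    return sum(math.dist(r, k) for r, k in zip(robot_pts, kinect_pts)) / num

eps0 = 0.5  # anticipated error bound epsilon_0 (assumed units: cm)
eps = teaching_error([(0, 0), (1, 1)], [(0, 0.3), (1, 1.3)])
accepted = eps <= eps0  # teaching point accepted, move to the next one
```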
An augmented-reality-based robot teaching device comprises a somatosensory device, a PC, a microphone, an optical see-through augmented reality device, a WiFi router and a robot, wherein:
the somatosensory device is connected to the PC by a data cable and captures the demonstrator's hand pose in real time;
the PC, the core of the data processing, is connected by wire to the somatosensory device and the robot controller; it filters the taught hand-pose data captured by the somatosensory device and removes hand tremor, and it is connected over the wireless LAN to the optical see-through augmented reality device, to which it sends the processed data so as to drive the virtual robot in the augmented reality device and reproduce the demonstrator's taught trajectory;
the microphone is connected to the PC and detects the demonstrator's voice commands, which take the form "move [left] {5} centimetres", where the content in square brackets may be up, down, left, right, forward or backward and indicates the direction of movement, and the content in braces is a number from 1 to 5 indicating the step length;
the optical see-through augmented reality device is a pair of Microsoft HoloLens augmented reality glasses; the virtual robot is modelled in Unity3D and given motion properties, so its movement can be controlled in the same way as the real robot's, and the demonstrator completes the virtual teaching task by observing the virtual robot through the glasses;
the WiFi router provides a wireless LAN environment for connecting the augmented reality device;
the robot is connected to the PC and is the executing body after teaching is completed; it can reproduce the teaching process according to the demonstrator's taught trajectory.
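The voice-command grammar above can be sketched as a small parser. The exact grammar, the English direction words, and the axis mapping are all assumptions inferred from the text (the patent's commands are Chinese); this is only an illustration of the bracketed-direction / braced-step format.

```python
import re

# Hypothetical mapping from direction word to a unit displacement (x, y, z)
_DIRS = {"left": (-1, 0, 0), "right": (1, 0, 0),
         "up": (0, 0, 1), "down": (0, 0, -1),
         "forward": (0, 1, 0), "backward": (0, -1, 0)}

def parse_command(cmd):
    """Parse 'move [<direction>] {<step>} cm' into a displacement in cm."""
    m = re.fullmatch(r"move \[(\w+)\] \{([1-5])\} cm", cmd)
    if not m or m.group(1) not in _DIRS:
        raise ValueError(f"unrecognised command: {cmd!r}")
    step = int(m.group(2))  # step length is restricted to 1..5 by the grammar
    return tuple(step * c for c in _DIRS[m.group(1)])

delta = parse_command("move [left] {5} cm")
```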
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The invention uses a natural human-machine interaction mode in which teaching can be completed with gestures and voice, so the operator needs neither professional knowledge nor skilled working experience. During gesture teaching, the gesture-capture system samples the demonstrator's motion trajectory at a fixed frequency as the working trajectory the robot needs to complete the task.
2. Errors are inevitable during robot teaching; by reproducing the trajectory with the virtual robot before the real robot does so, damage to the real robot when teaching fails can be avoided.
3. Compared with other online teaching methods, augmented-reality-based online teaching does not occupy the robot's operating time: the robot need not stop working, which greatly increases its effective operating time.
4. Switching between different teaching tasks is easy: only the working robot is virtual and the environment is entirely real, so the environment need not be remodelled as in virtual teaching, and repetition is efficient.
Description of the drawings
Fig. 1 is the augmented reality teaching flow chart.
Fig. 2 is a schematic diagram of taught-point pose acquisition.
Fig. 3 is the voice-teaching workflow diagram.
Fig. 4 is the composition diagram of the augmented reality teaching device.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments and drawings, but the embodiments of the present invention are not limited thereto.
Existing online teach-and-playback methods for industrial robots require on-site teaching of the working robot in the real working environment, and a failed teaching attempt can damage the working robot and workpiece. By using augmented reality, a virtual robot superimposed on the real environment first performs the teaching playback, avoiding damage to the working robot and workpiece when teaching fails; and the teaching mode combining gesture and voice lets demonstrators without professional knowledge complete the teaching. On this basis the present invention proposes an augmented-reality-based robot teaching method and device, described in detail below.
An augmented-reality-based robot teaching device, as shown in Fig. 4, comprises a somatosensory device, a PC, a microphone, an optical see-through augmented reality device, a WiFi router and a robot, wherein:
the somatosensory device is connected to the PC by a data cable and captures the demonstrator's hand pose in real time;
the PC, the core of the data processing, is connected by wire to the somatosensory device and the robot controller; it filters the taught hand-pose data captured by the somatosensory device and removes hand tremor, and it is connected over the wireless LAN to the optical see-through augmented reality device, to which it sends the processed data so as to drive the virtual robot in the augmented reality device and reproduce the demonstrator's taught trajectory;
the microphone is connected to the PC and detects the demonstrator's voice commands, which take the form "move [left] {5} centimetres", where the content in square brackets may be up, down, left, right, forward or backward and indicates the direction of movement, and the content in braces is a number from 1 to 5 indicating the step length;
the optical see-through augmented reality device is a pair of Microsoft HoloLens augmented reality glasses; the virtual robot is modelled in Unity3D and given motion properties, so its movement can be controlled in the same way as the real robot's, and the demonstrator completes the virtual teaching task by observing the virtual robot through the glasses;
the WiFi router provides a wireless LAN environment for connecting the augmented reality device;
the robot is connected to the PC and is the executing body after teaching is completed; it can reproduce the teaching process according to the demonstrator's taught trajectory.
The robot teaching method based on the above device has the overall workflow shown in Fig. 1 and comprises the following steps.
S1: start the teaching mode; the somatosensory device acquires the position and posture of the hand.
In step S1, the somatosensory device is a device with an infrared camera, such as Kinect or Leap Motion, which can obtain the three-dimensional coordinates of the skeleton. The extraction of hand position and posture is shown in the figure: the three-dimensional position of the fingertip serves as the position of the gesture, and the angles between the reference frame and the normal vector of the plane formed by the palm centre and two finger roots serve as the posture of the gesture; the reference frame of the normal vector is a horizontally placed right-handed coordinate system.
Specifically, the position of the tip of the demonstrator's right index finger serves as the taught point's position, and the angles between the reference frame (i.e. the world frame) and the normal vector of the plane formed by the palm centre, the index-finger root and the ring-finger root serve as the Euler angles of the taught point's posture. The calculation is as follows.
The world frame is a right-handed frame, with the x-axis horizontal to the right, the y-axis vertically upward, and the z-axis out of the page. Let the palm centre be point $A(x_1,y_1,z_1)$, the index-finger root point $B(x_2,y_2,z_2)$ and the ring-finger root point $C(x_3,y_3,z_3)$, and let the normal vector of the plane formed by these three non-collinear points be $\vec{n}=(n_x,n_y,n_z)$. Since the normal vector is perpendicular to every vector in the plane,

$$\vec{n}\cdot\overrightarrow{AB}=0,\qquad \vec{n}\cdot\overrightarrow{AC}=0 \tag{1}$$

Expanding,

$$\begin{cases} n_x(x_2-x_1)+n_y(y_2-y_1)+n_z(z_2-z_1)=0\\ n_x(x_3-x_1)+n_y(y_3-y_1)+n_z(z_3-z_1)=0 \end{cases} \tag{2}$$

Solving (2) yields the normal vector $\vec{n}=\overrightarrow{AB}\times\overrightarrow{AC}$.
The robot's posture is expressed in RPY (Roll, Pitch, Yaw) angles: the rotation about the world z-axis is Roll, denoted $\varphi_a$; the rotation about the world y-axis is Pitch, denoted $\varphi_o$; the rotation about the world x-axis is Yaw, denoted $\varphi_n$. The RPY angles are then obtained from the normal vector as

$$\varphi_a=\arctan\frac{n_y}{n_x},\qquad \varphi_o=\arctan\frac{n_z}{\sqrt{n_x^2+n_y^2}},\qquad \varphi_n=\arctan\frac{n_z}{n_y} \tag{3}$$
S2: the hand pose obtained in S1 is filtered and the data are optimized.
In step S2, the raw three-dimensional coordinate data contain noise; an averaging filter with small time delay and good performance is chosen for the filtering, and the recursive least-squares (RLS) algorithm is then used to eliminate the physiological tremor of the human hand, optimizing the filtered data.
(1) Data filtering
The data obtained from the Kinect are filtered with an autoregressive moving-average filter. The autoregressive moving-average (ARMA) filter is a linear filter whose output is a weighted average of the current input, the previous $N$ inputs and the previous $M$ filter outputs:

$$y(k)=\sum_{i=0}^{N} b_i\,u(k-i)+\sum_{i=1}^{M} a_i\,y(k-i) \tag{4}$$

where the coefficients $a_i$ and $b_i$ are the $i$-th filter parameters. The first part on the right of (4) is the moving-average (MA) term, a low-pass filter that passes the DC component; the second part is the autoregressive (AR) term.
The acquired pose data are filtered according to (4) to obtain a smooth, stable taught trajectory.
(2) Data optimization
The recursive least-squares (RLS) algorithm is used to eliminate the intrinsic tremor caused by the muscles of the human hand. The mathematical model of the data-optimization system is

$$A(z^{-1})\,y(k)=B(z^{-1})\,u(k)+w(k)$$

where

$$A(z^{-1})=1+a_1z^{-1}+\cdots+a_{n_a}z^{-n_a},\qquad B(z^{-1})=b_1z^{-1}+\cdots+b_{n_b}z^{-n_b}$$

$n_a$, $n_b$ are the orders of the data-optimization system model, $z^{-i}$ is the $i$-th power of the z-transform variable and represents a delay of $i$ steps, $a_i$, $b_i$ are the parameters to be estimated, $y(k)$ is the observed output of the system at time $k$, $u(k)$ is the sampled input at time $k$, and $w(k)$ is the random noise of the system at time $k$.
We define the observation data vector $\varphi(k-1)$ at time $k-1$, composed of the previous system outputs $y(k-1),\ldots,y(k-n_a)$ and system inputs $u(k-1),\ldots,u(k-n_b)$:

$$\varphi(k-1)=[-y(k-1),\ldots,-y(k-n_a),\,u(k-1),\ldots,u(k-n_b)]^{T} \tag{5}$$

($T$ denotes transposition.)
The regression parameter vector $\theta$ composed of the system model parameters to be estimated is

$$\theta=[a_1,\ldots,a_{n_a},\,b_1,\ldots,b_{n_b}]^{T}$$

Let $\hat\theta(k)$ denote the estimate of $\theta$ at time $k$; the recursion is

$$\hat\theta(k)=\hat\theta(k-1)+K(k)\left[y(k)-\varphi^{T}(k-1)\,\hat\theta(k-1)\right] \tag{6}$$

$K(k)$ is the error gain at time $k$, computed as

$$K(k)=\frac{P(k-1)\,\varphi(k-1)}{1+\varphi^{T}(k-1)\,P(k-1)\,\varphi(k-1)} \tag{7}$$

$P(k)$ is the covariance update matrix at time $k$, computed as

$$P(k)=\left[I-K(k)\,\varphi^{T}(k-1)\right]P(k-1) \tag{8}$$

The basic calculation steps of the recursive least-squares algorithm are then:
1. determine the orders $n_a$, $n_b$ of the polynomials $A(z^{-1})$ and $B(z^{-1})$;
2. set the initial values $\hat\theta(0)$ and $P(0)$ of the recursion;
3. sample new observation data $y(k)$ and $u(k)$, and form the observation data vector $\varphi(k-1)$;
4. compute the current recursive parameter estimate $\hat\theta(k)$ with the RLS equations (6)-(8);
5. increment the sample count $k$ and return to step 3, continuing the loop until the following stopping criterion is met:

$$\max_i \left|\frac{\hat\theta_i^{(N+1)}-\hat\theta_i^{(N)}}{\hat\theta_i^{(N)}}\right|<\varepsilon_{RLS}$$

where $\hat\theta_i^{(N+1)}$ is the result of the $(N+1)$-th recursion for the $i$-th element of the parameter vector $\theta$, and $\varepsilon_{RLS}$ is a given positive number expressing the required precision.
S3: the optimized data are sent over a wireless network to the optical see-through augmented reality device, driving the virtual robot in the AR device to reproduce the operator's taught trajectory.
In step S3, the optical see-through augmented reality device is the Microsoft HoloLens. HoloLens contains no robot model for teaching, so the working robot must be modelled with 3ds Max three-dimensional modelling software and then imported into Unity3D, where motion properties are added; afterwards the virtual robot's movement can be controlled in the same way the working robot is controlled. Through the HoloLens glasses the demonstrator can then observe the virtual working robot reproducing the taught trajectory; compared with letting the real working robot reproduce it, this avoids damage to the real working robot's end effector and the workpiece when teaching fails.
S4: while the virtual robot reproduces the trajectory, the operator observes the virtual robot's reproduced trajectory through the augmented reality device; taught points with larger errors are adjusted by voice commands and the earlier taught points are modified, repeating this process until the error of every taught point lies within the satisfactory limit, at which point teaching is complete.
In step S4, the taught trajectory from gesture teaching is adjusted by voice. Gesture teaching is a coarse, fast teaching method, because a person's fingertip has a finite size and is not a particle; step S4 therefore adjusts the operating trajectory produced by gesture teaching. The flow of accurate voice teaching is shown in the figure: as the virtual robot in the AR device reproduces each gesture-taught point, it is judged whether adjustment is needed. If not, the next taught point is processed directly; if adjustment is needed, the microphone captures the demonstrator's voice command, features are extracted and matched against the model file, and the demonstrator's command (e.g. "move left 1 centimetre") is extracted and converted into pose data that drive the virtual robot's movement. By observing the virtual robot's end, the demonstrator judges whether the precision requirement is met; if it is, the modified taught-point data are saved and the next taught point is processed; if not, voice adjustment continues until the precision requirement is met. The error is defined as

$$\varepsilon=\frac{1}{Num}\sum_{i=1}^{Num}\sqrt{(x_{ri}-x_{Ki})^2+(y_{ri}-y_{Ki})^2}$$

where $(x_{ri},y_{ri})$ and $(x_{Ki},y_{Ki})$ are, respectively, the two-dimensional coordinates of the $i$-th robot end-effector position and of the $i$-th point of the Kinect-captured taught path in the same coordinate system, and $Num$ is the number of taught points.
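The per-point adjustment loop of step S4 can be sketched as follows. The microphone input and the operator's visual precision check are stood in for by placeholder callbacks (`get_voice_offset`, `error_fn`); only the control flow mirrors the description above.

```python
from collections import deque

def review_taught_points(points, get_voice_offset, error_fn, eps0):
    """Review each taught 2-D point; while its error exceeds eps0, apply
    voice-commanded offsets, then save the (possibly modified) point."""
    adjusted = []
    for p in points:
        while error_fn(p) > eps0:          # operator deems the point inaccurate
            dx, dy = get_voice_offset()    # e.g. parsed "move left 1 cm"
            p = (p[0] + dx, p[1] + dy)
        adjusted.append(p)                 # save and move to the next taught point
    return adjusted

# Simulated session: the intended path lies on y = 0, one point is off by 0.3
offsets = deque([(0.0, -0.1), (0.0, -0.1)])
pts = review_taught_points(
    [(0.0, 0.3)],
    get_voice_offset=offsets.popleft,
    error_fn=lambda p: abs(p[1]),
    eps0=0.1,
)
```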
S5: the final taught trajectory is converted into the joint angles needed to drive the real robot and sent to the robot controller, so that the real robot reproduces the operator's taught path.
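Step S5 amounts to inverse kinematics over the taught points. The patent does not specify the robot's kinematic model, so the sketch below uses a generic planar two-link arm (elbow solution from the law of cosines) purely as a stand-in for the joint-angle conversion.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Joint angles (q1, q2) placing the tip of a planar 2-link arm at (x, y).
    Link lengths l1, l2 are assumed, not taken from the patent."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines for the elbow
    q2 = math.acos(max(-1.0, min(1.0, c2)))          # clamp against rounding
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

q1, q2 = two_link_ik(1.0, 1.0)
# Forward kinematics should recover the taught point
fx = math.cos(q1) + math.cos(q1 + q2)
fy = math.sin(q1) + math.sin(q1 + q2)
```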
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (7)

1. An augmented-reality-based robot teaching and playback method, characterised by comprising the following steps:
S1: start the teaching mode; a somatosensory device acquires the position and posture of the hand;
S2: the hand pose obtained in S1 is filtered and the data are optimized;
S3: the optimized data are sent over a wireless network to an optical see-through augmented reality device, driving the virtual robot in the AR device to reproduce the operator's taught trajectory;
S4: while the virtual robot reproduces the trajectory, the demonstrator observes its reproduced trajectory through the augmented reality device; taught points whose error exceeds the anticipated error are adjusted by voice commands, and the earlier taught points are modified, until every taught point's error lies within the anticipated error, at which point teaching is complete;
S5: the final taught trajectory is converted into the joint angles needed to drive the real robot and sent to the robot controller, so that the real working robot reproduces the operator's taught path.
2. the robot teaching reproducting method according to claim 1 based on augmented reality, which is characterized in that in step S1 Demonstrator is along time work pose of real work path teaching, and somatosensory device is with the position of the gesture of fixed frequency acquisition demonstrator It puts with attitude data as the pose at taught point;The computational methods of pose are:
Position of the position of demonstrator's right hand index finger tip as taught point, the centre of the palm and forefinger root, these three point institutes of nameless root The normal vector of formation plane and Eulerian angles of the angle of reference frame, that is, world coordinate system as taught point posture;It is specific to calculate Process is as follows:
World coordinate system is right-handed helix coordinate system, and to the right, vertically upward, z-axis is outside perpendicular to paper for y-axis for x-axis level.If the palm The heart is point A (x1,y1,z1), forefinger root is point B (x2,y2,z2), nameless root be point C (x3,y3,z3), by not conllinear three-point shape Into the normal vector of plane beIt is vertical with vector arbitrary in plane according to normal vector, then have
Expanding yields

$$\begin{cases} a(x_2 - x_1) + b(y_2 - y_1) + c(z_2 - z_1) = 0 \\ a(x_3 - x_1) + b(y_3 - y_1) + c(z_3 - z_1) = 0 \end{cases} \tag{2}$$
Solving formula (2) gives the normal vector, equivalently the cross product $\overrightarrow{AB} \times \overrightarrow{AC}$:

$$\vec{n} = \bigl( (y_2 - y_1)(z_3 - z_1) - (y_3 - y_1)(z_2 - z_1),\; (z_2 - z_1)(x_3 - x_1) - (z_3 - z_1)(x_2 - x_1),\; (x_2 - x_1)(y_3 - y_1) - (x_3 - x_1)(y_2 - y_1) \bigr)$$
The robot attitude is represented in RPY (Roll, Pitch, Yaw) form: the rotation angle about the z-axis of the world coordinate system is called Roll, denoted $\varphi_a$; the rotation angle about the y-axis is called Pitch, denoted $\varphi_o$; the rotation angle about the x-axis is called Yaw, denoted $\varphi_n$. The RPY angles are then obtained from the normal vector by formula (3).
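The normal-vector computation of formulas (1)–(2) can be sketched in Python as follows; since formula (3) is not reproduced in this text, the atan2-based RPY mapping is an assumed convention, not the patent's own:

```python
import math

def hand_pose(A, B, C):
    """Taught-point attitude from three hand landmarks (a sketch of claim 2).

    A: palm centre, B: index-finger root, C: ring-finger root, each (x, y, z)
    in the world frame. Returns the plane normal and assumed RPY angles."""
    ab = [B[i] - A[i] for i in range(3)]
    ac = [C[i] - A[i] for i in range(3)]
    # The cross product AB x AC satisfies n.AB = 0 and n.AC = 0 (formulas (1)-(2)).
    n = [ab[1] * ac[2] - ab[2] * ac[1],
         ab[2] * ac[0] - ab[0] * ac[2],
         ab[0] * ac[1] - ab[1] * ac[0]]
    # Assumed RPY convention (the patent's formula (3) is not shown here):
    roll = math.atan2(n[1], n[0])                      # about world z
    pitch = math.atan2(-n[2], math.hypot(n[0], n[1]))  # about world y
    yaw = 0.0  # a single direction vector leaves the rotation about n free
    return n, (roll, pitch, yaw)
```

For a hand lying flat in the xy-plane the computed normal is the world z-axis, as expected.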
3. The augmented-reality-based robot teaching reproduction method according to claim 1, characterized in that in step S2 the gesture pose data obtained in step S1 are filtered and optimized to obtain stable and accurate taught-point poses; the computation steps are:
(1) data filtering
The data acquired by the Kinect are filtered with an autoregressive moving-average (ARMA) filter. An ARMA filter is a linear filter whose output is a weighted sum of the current input, the $N$ previous inputs and the $M$ previous filter outputs:

$$y(k) = \sum_{i=0}^{N} b_i\, x(k-i) + \sum_{i=1}^{M} a_i\, y(k-i) \tag{4}$$

where the coefficients $a_i$ and $b_i$ are the $i$-th filter parameters. The first part on the right-hand side of formula (4) is the moving-average (MA) term, a low-pass filter that passes the DC component; the second part is the autoregressive (AR) term.
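A minimal sketch of the ARMA filtering of formula (4); the function name and coefficient values are illustrative, since the patent specifies no numeric parameters:

```python
def arma_filter(x, b, a):
    """Causal ARMA filter per formula (4): each output sample is a weighted
    sum of the current and previous inputs (MA term, coefficients b, indexed
    from 0) and the previous filter outputs (AR term, coefficients a, indexed
    from 1)."""
    y = []
    for k in range(len(x)):
        ma = sum(bi * x[k - i] for i, bi in enumerate(b) if k - i >= 0)
        ar = sum(aj * y[k - j] for j, aj in enumerate(a, start=1) if k - j >= 0)
        y.append(ma + ar)
    return y
```

With b = [0.5, 0.5] and no AR part this reduces to a two-tap moving average that passes a constant (DC) signal unchanged after the first sample.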
The acquired pose data are filtered according to formula (4) to obtain a smooth and stable teaching trajectory;
(2) Data optimization
Recursive least squares (RLS) is used to eliminate the inherent tremor caused by the muscles of the human hand. The mathematical model of the data-optimization system is $A(z^{-1})\, y(k) = B(z^{-1})\, u(k) + w(k)$, where

$$A(z^{-1}) = 1 + a_1 z^{-1} + \cdots + a_{n_a} z^{-n_a}, \qquad B(z^{-1}) = b_1 z^{-1} + \cdots + b_{n_b} z^{-n_b}$$

$n_a$ and $n_b$ are the orders of the system model; $z^{-i}$ is the $i$-th power of the z-transform delay operator, representing a delay of $i$ time steps; $a_i$ and $b_i$ are the parameters to be estimated; $y(k)$ is the observed output data of the system at time $k$; $u(k)$ is the sampled input data of the system at time $k$; and $w(k)$ is the random noise of the system at time $k$;
Definition: the observation data vector $\varphi(k-1)$ at time $k-1$, composed of the previous system outputs $y(k-1), \dots, y(k-n_a)$ and system inputs $u(k-1), \dots, u(k-n_b)$, is

$$\varphi(k-1) = [\, y(k-1), \dots, y(k-n_a),\; u(k-1), \dots, u(k-n_b) \,]^{T}$$

where $T$ denotes transposition.
The regression parameter vector $\theta$, composed of the system model parameters to be estimated, is

$$\theta = [\, a_1, \dots, a_{n_a},\; b_1, \dots, b_{n_b} \,]^{T}$$

Let $\hat{\theta}(k)$ denote the estimate of $\theta$ at time $k$; the recursive calculation formula is

$$\hat{\theta}(k) = \hat{\theta}(k-1) + K(k)\, \bigl[\, y(k) - \varphi^{T}(k-1)\, \hat{\theta}(k-1) \,\bigr] \tag{6}$$

where $K(k)$ is the error gain at time $k$, calculated as

$$K(k) = \frac{P(k-1)\, \varphi(k-1)}{1 + \varphi^{T}(k-1)\, P(k-1)\, \varphi(k-1)} \tag{7}$$

and $P(k)$ is the variance update matrix at time $k$, calculated as

$$P(k) = \bigl[\, I - K(k)\, \varphi^{T}(k-1) \,\bigr]\, P(k-1) \tag{8}$$
The basic computation steps of the recursive least-squares algorithm are then as follows:
1. Determine the orders $n_a$ and $n_b$ of the polynomials $A(z^{-1})$ and $B(z^{-1})$;
2. Set the initial recursion values $\hat{\theta}(0)$ and $P(0)$;
3. Sample new observation data $y(k)$ and $u(k)$, and form the observation data vector $\varphi(k-1)$;
4. Compute the current recursive parameter estimate $\hat{\theta}(k)$ with the RLS formulas (6)–(8);
5. Increment the sampling index $k$ by 1 and return to step 3, looping until the following stopping criterion is met:

$$\max_i \frac{\bigl|\hat{\theta}_i^{(N+1)} - \hat{\theta}_i^{(N)}\bigr|}{\bigl|\hat{\theta}_i^{(N)}\bigr|} < \varepsilon_{RLS}$$

where $\hat{\theta}_i^{(N+1)}$ is the $i$-th element of the parameter vector $\theta$ at the $(N+1)$-th recursion, and $\varepsilon_{RLS}$ is a given positive number expressing the precision requirement.
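The recursion of formulas (6)–(8) can be sketched in plain Python with list-based matrix arithmetic; the function name and the toy identification in the test are assumptions:

```python
def rls_update(theta, P, phi, y_k):
    """One step of the RLS recursion, formulas (6)-(8).
    theta: current parameter estimate; P: covariance matrix; phi: the
    observation vector phi(k-1); y_k: the newly sampled output y(k)."""
    n = len(theta)
    P_phi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(phi[i] * P_phi[i] for i in range(n))
    K = [v / denom for v in P_phi]                        # error gain, (7)
    err = y_k - sum(phi[i] * theta[i] for i in range(n))  # innovation
    theta = [theta[i] + K[i] * err for i in range(n)]     # estimate update, (6)
    P = [[P[i][j] - K[i] * sum(phi[m] * P[m][j] for m in range(n))
          for j in range(n)] for i in range(n)]           # covariance update, (8)
    return theta, P
```

Initialized with a small estimate and a large P(0), repeated calls converge to the model parameters when the input is sufficiently exciting.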
4. The augmented-reality-based robot teaching reproduction method according to claim 1, characterized in that in step S3 the data optimized in the previous step are transmitted over a wireless network to the optical see-through augmented reality device; these data drive the end-effector motion of the virtual robot in the optical see-through augmented reality device, completing the reproduction of the teaching task in the augmented reality environment.
5. The augmented-reality-based robot teaching reproduction method according to claim 1, characterized in that in step S4 taught points with larger errors from the gesture teaching are adjusted by voice instruction to achieve accurate teaching; the gesture teaching acquisition system of step S1 samples the demonstrator's gesture pose at a fixed frequency, and because a human finger has a finite size, the teaching trajectory itself contains errors, which must be corrected by voice in step S4; the error corrected by voice is defined as

$$\varepsilon = \frac{1}{Num} \sum_{i=1}^{Num} \sqrt{(x_{ri} - x_{Ki})^{2} + (y_{ri} - y_{Ki})^{2}}$$

where $(x_{ri}, y_{ri})$ and $(x_{Ki}, y_{Ki})$ are, in the same coordinate system, the two-dimensional coordinates of the $i$-th robot end position and of the $i$-th teaching path point acquired by the Kinect, respectively, and $Num$ is the number of all taught points;
When the position error is within the expected error, i.e. $\varepsilon \le \varepsilon_0$, where $\varepsilon_0$ is the expected error, the teaching meets the requirement and the next taught point is processed.
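Taking the claim-5 error as the mean planar deviation between the reproduced robot end positions and the Kinect-acquired taught points, a sketch (function names are illustrative):

```python
import math

def teaching_error(robot_pts, kinect_pts):
    """Mean Euclidean deviation between the i-th robot end positions
    (x_ri, y_ri) and the i-th Kinect taught points (x_Ki, y_Ki);
    Num = number of taught points."""
    assert len(robot_pts) == len(kinect_pts)
    num = len(robot_pts)
    return sum(math.dist(p, q) for p, q in zip(robot_pts, kinect_pts)) / num

def taught_point_ok(robot_pts, kinect_pts, eps0):
    """True when epsilon <= epsilon_0, i.e. teaching meets the requirement."""
    return teaching_error(robot_pts, kinect_pts) <= eps0
```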
6. An augmented-reality-based robot teaching reproduction device, characterized by comprising a somatosensory device, a PC, a microphone, an augmented reality device, a WiFi router and a robot, wherein:
the somatosensory device is connected to the PC by a data cable and captures the demonstrator's gesture pose in real time;
the PC, the core of the data processing, is connected by wire to the somatosensory device, the microphone and the robot controller; it filters the teaching gesture pose data captured by the somatosensory device and removes hand tremor, and receives the voice instructions acquired by the microphone; it is connected to the optical see-through augmented reality device over a wireless LAN and sends the processed data to the augmented reality device to drive the virtual robot in the augmented reality device, reproducing the demonstrator's teaching trajectory;
the microphone is connected to the PC and detects the demonstrator's voice instructions, which take the form: move [left] {5} centimeters, where the content in square brackets may be up, down, left, right, forward or backward, indicating the direction of movement, and the content in braces is a number from 1 to 5 indicating the step length of the movement;
the augmented reality device is an optical see-through head-mounted display that generates, superimposed on the environment within the demonstrator's field of view, a virtual working robot that completes the teaching reproduction process in place of the real working robot;
the WiFi router provides a wireless LAN environment for connecting the augmented reality device;
the robot is connected to the PC and is the executing body after teaching is complete; the robot can reproduce the teaching process according to the demonstrator's teaching trajectory.
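The claimed voice-instruction format, "move [direction] {step} centimeters", suggests a small parser; the function name and regular expression below are assumptions for illustration only:

```python
import re

# Directions allowed by the claim: up, down, left, right, forward, backward.
_DIRECTIONS = {"up", "down", "left", "right", "forward", "backward"}

def parse_voice_command(text):
    """Parse 'move [<direction>] {<1-5>} centimeters' into (direction, step),
    or return None when the instruction does not match the claimed format."""
    m = re.fullmatch(r"move \[(\w+)\] \{([1-5])\} centimeters",
                     text.strip().lower())
    if m is None or m.group(1) not in _DIRECTIONS:
        return None
    return m.group(1), int(m.group(2))
```

Unknown directions or step lengths outside 1–5 are rejected rather than guessed, matching the closed vocabulary of the claim.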
7. The augmented-reality-based robot teaching device according to claim 6, characterized in that Microsoft HoloLens augmented reality glasses are used; the virtual robot is modelled with Unity3D and given motion properties, and its movement is controlled in the same manner as the real robot is controlled; the demonstrator completes the virtual teaching task by observing the virtual robot through the worn augmented reality glasses.
CN201711291805.6A 2017-12-08 2017-12-08 Robot teaching reproduction method and device based on augmented reality Active CN108161882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711291805.6A CN108161882B (en) 2017-12-08 2017-12-08 Robot teaching reproduction method and device based on augmented reality


Publications (2)

Publication Number Publication Date
CN108161882A true CN108161882A (en) 2018-06-15
CN108161882B CN108161882B (en) 2021-06-08

Family

ID=62524772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711291805.6A Active CN108161882B (en) 2017-12-08 2017-12-08 Robot teaching reproduction method and device based on augmented reality

Country Status (1)

Country Link
CN (1) CN108161882B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103921265A (en) * 2013-01-16 2014-07-16 株式会社安川电机 Robot Teaching System And Robot Teaching Method
US20170036344A1 (en) * 2014-06-12 2017-02-09 Play-i, Inc. System and method for reinforcing programming education through robotic feedback
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAO Zeyan: "Research on augmented reality teaching of a robotic arm based on Kinect", Computing Technology and Automation (《计算技术与自动化》) *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109571487A (en) * 2018-09-12 2019-04-05 河南工程学院 A kind of robotic presentation learning method of view-based access control model
CN109648570A (en) * 2018-09-12 2019-04-19 河南工程学院 Robotic presentation learning method based on HTCVIVE wearable device
CN109571487B (en) * 2018-09-12 2020-08-28 河南工程学院 Robot demonstration learning method based on vision
CN109940611A (en) * 2019-02-26 2019-06-28 深圳市越疆科技有限公司 Trajectory reproducing method, system and terminal device
CN110142770A (en) * 2019-05-07 2019-08-20 中国地质大学(武汉) A kind of robot teaching system and method based on head-wearing display device
CN110238853A (en) * 2019-06-18 2019-09-17 广州市威控机器人有限公司 A kind of joint series Mobile Robot Control System, remote control system and method
WO2021017203A1 (en) * 2019-07-30 2021-02-04 南京埃斯顿机器人工程有限公司 Tool orientation adjusting method applied to robotic punching
CN110473535A (en) * 2019-08-15 2019-11-19 网易(杭州)网络有限公司 Teaching playback method and device, storage medium and electronic equipment
CN110815210A (en) * 2019-08-26 2020-02-21 华南理工大学 Novel remote control method based on natural human-computer interface and augmented reality
CN110599823A (en) * 2019-09-05 2019-12-20 北京科技大学 Service robot teaching method based on fusion of teaching video and spoken voice
CN110599823B (en) * 2019-09-05 2021-08-13 北京科技大学 Service robot teaching method based on fusion of teaching video and spoken voice
CN110587579A (en) * 2019-09-30 2019-12-20 厦门大学嘉庚学院 Kinect-based robot teaching programming guiding method
CN110815258A (en) * 2019-10-30 2020-02-21 华南理工大学 Robot teleoperation system and method based on electromagnetic force feedback and augmented reality
CN112752119A (en) * 2019-10-31 2021-05-04 中兴通讯股份有限公司 Time delay error correction method, terminal equipment, server and storage medium
CN112752119B (en) * 2019-10-31 2023-12-01 中兴通讯股份有限公司 Delay error correction method, terminal equipment, server and storage medium
CN110788860A (en) * 2019-11-11 2020-02-14 路邦科技授权有限公司 Bionic robot action control method based on voice control
CN110815189A (en) * 2019-11-20 2020-02-21 福州大学 Robot rapid teaching system and method based on mixed reality
CN110815189B (en) * 2019-11-20 2022-07-05 福州大学 Robot rapid teaching system and method based on mixed reality
CN111198365A (en) * 2020-01-16 2020-05-26 东方红卫星移动通信有限公司 Indoor positioning method based on radio frequency signal
CN112454363A (en) * 2020-11-25 2021-03-09 马鞍山学院 Control method of AR auxiliary robot for welding operation
CN112847301A (en) * 2020-12-21 2021-05-28 山东华数智能科技有限公司 Robot augmented reality teaching programming method based on portable terminal

Also Published As

Publication number Publication date
CN108161882B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN108161882A (en) A kind of robot teaching reproducting method and device based on augmented reality
AU2020201554B2 (en) System and method for robot teaching based on RGB-D images and teach pendant
CN107225573A (en) The method of controlling operation and device of robot
WO2019041900A1 (en) Method and device for recognizing assembly operation/simulating assembly in augmented reality environment
DE102016125811A1 (en) Two-handed object manipulations in virtual reality
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
CN107656505A (en) Use the methods, devices and systems of augmented reality equipment control man-machine collaboration
CN104858852B (en) Humanoid robot imitates optimization and the constrained procedure of human upper limb action in real time
CN110142770B (en) Robot teaching system and method based on head-mounted display device
CN115390677B (en) Assembly simulation man-machine work efficiency evaluation system and method based on virtual reality
Dajles et al. Teleoperation of a humanoid robot using an optical motion capture system
CN115346413B (en) Assembly guidance method and system based on virtual-real fusion
CN106774028A (en) A kind of robot control method and device based on time shaft
Zaldívar-Colado et al. A mixed reality for virtual assembly
CN116386414A (en) Digital mirror image-based ergonomic adjustment line training system and method
US6307563B2 (en) System for controlling and editing motion of computer graphics model
CN110310537B (en) Gantry crane virtual hoisting training system and training method
JPH10302085A (en) Operation recording system for cg model
CN115268646B (en) Human-computer collaborative building process sensing system, device, analysis method and medium
Liu et al. COMTIS: Customizable touchless interaction system for large screen visualization
CN114299205A (en) Expression animation production method and device, storage medium and computer equipment
Kirakosian et al. Immersive simulation and training of person-to-3d character dance in real-time
CN112757273A (en) Method, system and device for editing and visualizing track of mechanical arm and storage medium
Camporesi et al. Interactive motion modeling and parameterization by direct demonstration
CN110070777B (en) Huchizhui fish skin painting simulation training system and implementation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant