CN106363637A - Fast teaching method and device for robot - Google Patents


Info

Publication number
CN106363637A
Authority
CN
China
Prior art keywords
robot
user
teaching
action
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610890684.6A
Other languages
Chinese (zh)
Other versions
CN106363637B (en)
Inventor
杨辰光 (Chenguang Yang)
梁聪垣 (Congyuan Liang)
曾超 (Chao Zeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN201610890684.6A
Publication of CN106363637A
Application granted
Publication of CN106363637B
Legal status: Active
Anticipated expiration


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1658: Programme controls characterised by programming, planning systems for manipulators, characterised by programming language
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems, electric
    • G05B19/42: Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
    • G05B19/427: Teaching successive positions by tracking the position of a joystick or handle to control the positioning servo of the tool head, master-slave control
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a fast teaching method for a robot. The method comprises the following steps: starting a teaching mode, in which cameras capture image information of the robot and its environment; synthesizing the captured images into a panoramic video signal that is output to an augmented reality device; capturing the user's body motion data as the input signal of a teaching action; projecting the input signal onto the robot and judging whether the robot performs the corresponding action according to the user's teaching motion, and if so, naming the teaching action with a voice command; and converting the voice command into a text keyword that is stored in a primitive database, which completes the teaching. The invention further discloses a fast teaching device for a robot, comprising the robot, cameras, an augmented reality device, a motion capture module, a microphone and a loudspeaker. By adopting voice programming in a mixed reality environment, the method and device improve the efficiency of programming the robot.

Description

Fast teaching method and device for a robot
Technical field
The invention belongs to the field of robot applications, and in particular relates to a fast teaching method and device for a robot.
Background technology
Today, with the continuous development of robotics, robots play a very important role in the industrial production of human society. Their large-scale application has raised the degree of automation in factories and improved the manufacturing efficiency of production lines. In the practical application of robots, however, a great deal of time is still required for teaching programming. At present, most existing robot teaching technologies suffer from low accuracy of the simulated scene and difficulty in faithfully reproducing the environment in which the robot operates; at the same time, the accuracy of speech recognition is low, and the user is required to issue commands using prescribed sentences at a prescribed speaking rate.
Chinese patent CN105679315A, "Voice-activated and voice-programmed control method and system", discloses a control method and system that supports voice control and voice programming. In that method, a set of special keywords is reserved as system voice commands; the system prompts the user to issue the next voice command in a specific way and analyzes the user's voice command. If the command exists in the database it is executed directly; if not, the user is prompted to demonstrate it, and after the demonstration the system stores the voice command and the corresponding operation in the database for later use. That invention reduces the workload of designing voice commands in advance, lowers the difficulty of later adjustment, and widens the range of users to whom voice control applies. However, it does not first convert the user's spoken instruction into text before comparing it with the command keywords in the system database, and the accuracy of direct speech recognition is relatively low; the invention therefore cannot be fully applied in real industrial manufacturing.
Chinese patent CN104772754A, "Robot demonstrator and teaching method", discloses a robot teach pendant and teaching method intended to improve teaching efficiency and realize automatic teaching of the robot. In the described teaching process, the pendant moves the manipulator along different directions; a first sensor mounted on a manipulator finger determines the teaching coordinate in a first direction, a second sensor mounted at the finger root determines the teaching coordinate in a second direction, and a third sensor mounted in the manipulator base determines the teaching coordinate in a third direction. The coordinates in the three directions then determine the coordinate of the teaching position. The operator does not need to adjust the manipulator during the whole teaching process, the teaching process can be automated, and teaching precision can be greatly improved while the teaching time is shortened. However, that invention requires the user to teach the robot within visual range, so the user cannot teach the robot accurately from multiple viewing angles; the user's view is easily blocked, which affects both the efficiency and the precision of teaching.
In summary, in practical industrial applications, generating a robot's motion trajectory by computer involves complicated calculations and the need to avoid singularities, which lengthens the teaching time and lowers teaching efficiency. An efficient teaching and programming method is therefore urgently needed.
Summary of the invention
It is an object of the present invention to overcome the shortcomings and deficiencies of the prior art and to provide a fast teaching method for a robot that uses voice programming in a mixed reality environment to improve the efficiency of programming the robot.
A further object of the invention is to provide a fast teaching device for a robot that can carry out efficient teaching of the robot in a mixed reality environment using voice programming.
The purpose of the present invention is realized by the following technical scheme. A fast teaching method for a robot comprises the following steps:
S1: start the teaching mode; cameras capture image information of the robot and its environment.
S2: synthesize the captured images into a panoramic video signal and output it to the augmented reality device.
S3: capture the user's body motion data as the input signal of the teaching action.
S4: project the input signal onto the robot and judge whether the robot performs the corresponding action according to the user's teaching motion; if so, name the teaching action with a voice command.
S5: convert the voice command into a text keyword and store it in the primitive database; teaching ends. The primitive database is a database that stores the different teaching actions and their corresponding text keywords.
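The patent does not prescribe a storage format for the primitive database. As a minimal sketch, assuming a taught action can be represented as a sequence of joint-angle vectors, the keyword-to-action store of step S5 could look like this (all names and the data format are illustrative):

```python
# Minimal sketch of the primitive database in step S5, assuming a taught
# action is a sequence of joint-angle vectors. Names and the trajectory
# representation are illustrative; the patent does not prescribe either.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

Trajectory = List[List[float]]  # one joint-angle vector per time step


@dataclass
class PrimitiveDatabase:
    primitives: Dict[str, Trajectory] = field(default_factory=dict)

    def store(self, keyword: str, trajectory: Trajectory) -> None:
        """Store a taught action under the text keyword converted from speech."""
        self.primitives[keyword] = trajectory

    def lookup(self, keyword: str) -> Optional[Trajectory]:
        """Return the taught action for a keyword, or None if never taught."""
        return self.primitives.get(keyword)


if __name__ == "__main__":
    db = PrimitiveDatabase()
    db.store("pick", [[0.0, 0.5, 1.2], [0.1, 0.6, 1.1]])  # name given by voice
    print(db.lookup("pick"))
```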
Preferably, in step S3, the user's hand is simulated from the user's body motion data, and an animation of the simulated hand is displayed on the augmented reality device for the user to observe.
Preferably, in step S3, the user's body motion data is captured by a wearable exoskeleton or a vision camera (such as a Kinect).
Further, in step S3, when the user's body motion data is captured by a wearable exoskeleton, the user drags the end of the robot arm with one hand while the other hand guides the motion of the robot's arm joint, so that the robot's motion trajectory can follow the user's teaching action with a unique solution.
Further, during teaching, the wearable exoskeleton provides force feedback information to the user for adjusting the force of the user's teaching action.
Preferably, in step S4, after the input signal is projected onto the robot, the robot's corresponding course of action is displayed on the augmented reality device in real time.
Preferably, image information of the robot and its environment is captured by cameras in multiple positions, and the observation angle is adjustable during teaching. The user can therefore freely select the viewing angle and adjust the teaching operation in time according to changes in the environment around the robot.
Preferably, after the teaching process ends, the system receives a voice command input by the user, converts the voice command into a text keyword, compares this keyword with the text keywords in the primitive database, extracts the teaching action combination corresponding to the voice command, plays the action combination back to the user by voice, and formally executes the voice command only after the user confirms it. This avoids the robot performing a wrong action because of an error in recognizing the user's voice command.
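A hedged sketch of this confirm-before-execute flow is given below; the speech engine, loudspeaker playback and robot driver are hypothetical stand-ins, since the patent does not name concrete components:

```python
# Sketch of the flow above: speech -> text keyword -> database lookup ->
# voice playback -> user confirmation -> execution. The helper functions
# are placeholders for a real speech engine, speaker and robot controller.
def speech_to_text(audio: bytes) -> str:
    """Placeholder: a real system would call a speech-recognition engine."""
    return "pick"


def play_to_user(message: str) -> None:
    """Placeholder for voice playback through the loudspeaker."""
    print(f"[speaker] {message}")


def execute_on_robot(trajectory) -> None:
    """Placeholder for sending the taught trajectory to the robot."""
    print(f"[robot] executing {len(trajectory)} waypoints")


def handle_voice_command(audio: bytes, primitives: dict) -> None:
    keyword = speech_to_text(audio)        # convert speech to a text keyword
    trajectory = primitives.get(keyword)   # compare with the primitive database
    if trajectory is None:
        play_to_user(f"no taught action named '{keyword}'")
        return
    play_to_user(f"about to execute '{keyword}'")  # read the action back first
    if input("confirm? [y/n] ").strip().lower() == "y":
        execute_on_robot(trajectory)       # execute only after confirmation


handle_voice_command(b"", {"pick": [[0.0, 0.5], [0.1, 0.6]]})
```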
A fast teaching device for a robot includes a robot, cameras, an augmented reality device, a motion capture module, a microphone and a loudspeaker, wherein:
the robot is connected to the motion capture module, receives the user's teaching action, and performs the corresponding action according to the teaching action;
the cameras, of which there are several arranged at different positions around the robot's workspace, are each connected to the augmented reality device by a data cable, capture image information of the robot's workspace in real time and transfer it to the augmented reality device;
the augmented reality device includes a display screen and a built-in virtual scene construction program module that stitches the images captured by the cameras into a panoramic dynamic 3D environment in real time and presents the real-time video of the robot and its workspace on the display screen as a panoramic video (a stitching sketch follows this list);
the motion capture module is connected to the augmented reality device and collects the user's body motion data;
the microphone is connected to the robot, detects the user's voice from multiple directions and collects the user's voice commands;
the loudspeaker is connected to the augmented reality device and announces the name of the action the robot is about to execute, so that the user can confirm it.
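The patent does not name a stitching algorithm or library. As one workable assumption, the panorama step of the virtual scene construction module could be sketched with OpenCV's high-level Stitcher; a real-time system would need a calibrated, optimized pipeline rather than per-frame stitching:

```python
# Sketch of the panorama step: frames from the cameras around the robot are
# stitched into one wide view for the augmented reality display. OpenCV's
# Stitcher is an assumption; the patent names no library.
import cv2

captures = [cv2.VideoCapture(i) for i in range(4)]  # e.g. four cameras

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)

frames = []
for cap in captures:
    ok, frame = cap.read()
    if ok:
        frames.append(frame)

if len(frames) >= 2:
    status, panorama = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.png", panorama)  # stand-in for the AR display feed

for cap in captures:
    cap.release()
```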
Preferably, there are at least four cameras, arranged at four positions around the robot's workspace.
Preferably, the augmented reality device may be a head-mounted display, virtual reality glasses and/or a holographic 3D projector.
Preferably, the motion capture module includes two sets of wearable exoskeletons: one set pulls the end of the robot arm, and the other guides the robot's arm joint. The user's motion information collected by the wearable exoskeletons can be transmitted directly to the robot.
Specifically, the wearable exoskeleton covers all ten of the user's fingers and captures the user's hand motion. The augmented reality device can therefore display a simulated hand animation, generated from the user's body motion data collected by the motion capture module.
Further, the wearable exoskeleton is provided with a force feedback module connected to the augmented reality device by a data cable. During teaching, this module outputs in real time the force feedback information generated when the user touches an object, so that the user can adjust the applied force.
Preferably, the motion capture module includes a vision camera that collects the user's body motion information; the user's actions can be captured by a machine learning algorithm and transmitted to the robot.
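The patent says only that the user's actions are captured "by a machine learning algorithm". As an illustrative assumption, the joint angles such a pipeline would forward to the robot could be computed from the 3D body keypoints a Kinect-style pose estimator supplies:

```python
# Illustrative sketch: compute the elbow angle from 3D arm keypoints so it
# can drive the matching robot joint. The keypoint source and the joint
# mapping are assumptions, not details from the patent.
import numpy as np


def angle_at(b, a, c):
    """Angle (radians) at keypoint b between segments b->a and b->c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))


shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.25, 1.2, 0.1], [0.45, 1.0, 0.3]
elbow_angle = angle_at(elbow, shoulder, wrist)
print(f"elbow joint command: {np.degrees(elbow_angle):.1f} deg")
```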
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The invention combines augmented reality. On the one hand, multi-angle cameras arranged around the robot's workspace capture image information of the robot's motion; this information is fused into a virtual three-dimensional scene and output to the operator through the augmented reality device, so that the operator accurately perceives the motion of the robot arm in the virtual environment. On the other hand, the operator's teaching actions in the virtual scene are accurately recorded by the system and passed to the robot in the real scene dynamically and in real time, improving the efficiency of programming the robot.
2. The invention provides a force feedback mechanism and allows the robot and changes in its surroundings to be observed from multiple angles and distances, which strengthens the sense of presence during teaching and improves the user's teaching speed and precision.
3. After the robot is taught in the virtual environment, the user can name each action with a voice keyword. The invention first converts the user's voice command into text and then compares it with the primitive database in the system, which improves the accuracy with which the system recognizes voice commands.
Brief description of the drawings
Fig. 1 is the flow chart of the teaching method of this embodiment;
Fig. 2 is the device connection diagram of this embodiment;
Fig. 3 is a schematic layout of the multi-directional cameras of this embodiment;
Fig. 4 is the flow chart of motion data capture in this embodiment;
Fig. 5 is the flow chart of the actual use of the robot in this embodiment.
Detailed description of the embodiments
The present invention is described in further detail below with reference to embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Mixed reality technology can connect real scenes and virtual scenes; it offers the advantages of virtual-real fusion, real-time interaction and three-dimensional registration. With this technology, motion in a virtual scene can be accurately "transferred" into the real world, which makes fast teaching programming of robots possible. The present invention combines the advantages of mixed reality and voice programming and proposes a new, efficient robot teaching device and teaching method, described in detail below.
A fast teaching device for a robot, as shown in Fig. 2, includes a robot, cameras, an augmented reality device, a motion capture module, a microphone and a loudspeaker.
Robot: connected to the motion capture module; receives the user's teaching action and performs the corresponding action according to the teaching action.
Cameras: capture image information of the robot's workspace in real time and transfer it to the virtual scene construction program module in the augmented reality device. There are at least four cameras, arranged at four positions around the robot's workspace, each connected to the augmented reality device by a data cable; see Fig. 3 for the layout.
Augmented reality device: includes a display screen; an internal virtual scene construction program module stitches the images captured by the cameras into a panoramic dynamic 3D environment in real time and presents the real-time video of the robot and its workspace on the display screen as a panoramic video. The augmented reality device may be a head-mounted display, virtual reality glasses and/or a holographic 3D projector.
Motion capture module: connected to the augmented reality device; collects the user's body motion data.
Microphone: connected to the robot; detects the user's voice from multiple directions and collects the user's voice commands.
Loudspeaker: connected to the augmented reality device; announces the name of the action the robot is about to execute so that the user can confirm it.
The motion capture module includes two sets of wearable exoskeletons: one set pulls the end of the robot arm, and the other guides the robot's arm joint. The user's motion information collected by the wearable exoskeletons can be transmitted directly to the robot.
The wearable exoskeleton covers all ten of the user's fingers and captures the user's hand motion. The augmented reality device can therefore display a simulated hand animation, generated from the user's body motion data collected by the motion capture module.
The wearable exoskeleton is provided with a force feedback module connected to the augmented reality device by a data cable. During teaching, this module outputs in real time the force feedback information generated when the user touches an object, so that the user can adjust the applied force; a sketch of this feedback path follows.
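In this minimal sketch the sensor readout, the AR display call and the warning threshold are all hypothetical placeholders, since the patent specifies only that force information is output to the augmented reality device:

```python
# Sketch of the force-feedback path: the exoskeleton's contact force is
# streamed to the AR device so the user can moderate the teaching force.
# Readout, display call and the 20 N threshold are illustrative assumptions.
import random
import time

FORCE_WARNING_N = 20.0


def read_contact_force() -> float:
    """Placeholder for the exoskeleton force sensor readout."""
    return random.uniform(0.0, 30.0)


def show_on_ar_display(message: str) -> None:
    """Placeholder for pushing feedback to the augmented reality display."""
    print(message)


for _ in range(5):                  # a real system would loop continuously
    force = read_contact_force()
    show_on_ar_display(f"contact force: {force:.1f} N")
    if force > FORCE_WARNING_N:
        show_on_ar_display("force high - ease off the teaching motion")
    time.sleep(0.02)                # ~50 Hz feedback rate
```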
The motion capture module may also be a vision camera that collects the user's body motion information and transmits it to the system. When the user's motion information is collected by a vision camera, the user's actions can be captured by a machine learning algorithm and transmitted to the robot.
The robot receives the user's teaching action, performs the corresponding action according to the teaching action, and the action is displayed on the mixed reality device.
The fast teaching method for a robot based on the above device, as shown in Fig. 1, comprises the following steps.
S1: start the teaching mode; cameras capture image information of the robot and its environment.
S2: synthesize the captured images into a panoramic video signal and output it to the augmented reality device.
S3: capture the user's body motion data as the input signal of the teaching action.
In step S3, as shown in Fig. 4, the user's body motion data can be captured by a wearable exoskeleton: the user drags the end of the robot arm with one hand while the other hand guides the motion of the robot's arm joint, so that the robot's motion trajectory can follow the user's teaching action with a unique solution (illustrated in the sketch below).
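The uniqueness point can be made concrete with a planar two-link example: the wrist target alone admits two elbow configurations (elbow-up and elbow-down), and the position of the user's second hand selects between them. The link lengths and the planar simplification are illustrative assumptions, not parameters from the patent:

```python
# Why guiding both the arm end and the arm joint gives a unique solution:
# a two-link planar arm reaching a wrist target has two IK branches; the
# position of the user's second hand on the elbow selects the branch.
import numpy as np

L1, L2 = 0.4, 0.3  # upper-arm and forearm lengths in metres (illustrative)


def ik_two_link(x, y, elbow_hint_y):
    """Joint angles (q1, q2) reaching (x, y), on the branch nearest the hint."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)
    candidates = []
    for q2 in (np.arccos(c2), -np.arccos(c2)):   # the two elbow branches
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        elbow_y = L1 * np.sin(q1)                # elbow height on this branch
        candidates.append((abs(elbow_y - elbow_hint_y), q1, q2))
    _, q1, q2 = min(candidates)                  # branch matching the user's hand
    return q1, q2


print(ik_two_link(0.5, 0.2, elbow_hint_y=0.30))  # selects the elbow-up branch
```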
When the wearable exoskeleton is used, the system simulates the user's hand in the augmented reality environment from the data collected by the exoskeleton, so that the user can perceive the motion of his or her own hand in that environment. The virtual environment observed by the user then contains not only the panoramic scene built in real time from the video signals returned by the multi-directional cameras, but also the hand animation that the system simulates from the hand motion information captured by the wearable exoskeleton. The user can thus teach more intuitively in the virtual environment, which improves the human-machine interaction capability of the system. Meanwhile, the wearable exoskeleton supplies force feedback to the user during teaching, so that the user can apply a more suitable force in the teaching action.
In step S3, the user's body motion data can also be captured visually, using a vision camera (such as a Kinect). In this mode the user can teach the robot without wearing any external equipment, which saves the time spent donning a wearable device, reduces the physical burden on the user, and makes longer teaching sessions practical. The drawback of this mode is that the user receives no immediate force feedback.
After the user's body motion data has been captured, the input signal is projected onto the robot, so that the robot in the real scene performs the corresponding action according to the user's teaching motion, and the action is displayed on the mixed reality device.
The user can freely select the viewing angle, observe changes in the environment around the robot, and adjust the teaching operation in time. Because the invention incorporates a mixed reality display, the user is not constrained by the actual safe distance and can teach the robot from different viewing angles as the situation requires. In some cases the motion of the robot arm blocks the user's frontal view, so that during teaching the user cannot notice the parts occluded by the robot body and easily overlooks relevant environmental factors. The user can then observe from an overhead view of the robot, see the robot's environment without blind spots, and thus give more correct and accurate teaching.
S4: project the input signal onto the robot and judge whether the robot performs the corresponding action according to the user's teaching motion; if so, name the teaching action with a voice command.
S5: convert the voice command into a text keyword and store it in the primitive database; teaching ends. The primitive database is a database that stores the different teaching actions and their corresponding text keywords.
After each simple teaching process ends, the system reminds the user to name the teaching action just performed with a voice keyword. The different teaching actions and their corresponding text keyword commands constitute the primitive database. Once the robot is formally put into use, the user only needs to issue one complete voice command composed of the individual primitive commands to make the robot smoothly perform a complete set of operations; the actual use of the robot is shown in Fig. 5.
After the teaching process ends, the user commands the robot by voice to complete a corresponding set of actions. The system analyzes the user's command, compares the keywords in it with the keywords in the primitive database, derives the combination of actions to be executed, and then plays back to the user, by voice, the actions about to be carried out; the command is formally executed only after the user confirms it, which avoids the robot performing a wrong action because of an error in recognizing the user's voice command.
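The deployment flow of Fig. 5 can be sketched as follows, under the assumption that the recognized sentence can be tokenized into the taught keywords (the patent says only that the command is combined from primitive commands):

```python
# Sketch of the Fig. 5 flow: one spoken sentence is split into primitive
# keywords, matched against the primitive database, read back to the user,
# and executed in order after confirmation. Whitespace tokenization and the
# example primitives are illustrative assumptions.
primitives = {
    "pick": [[0.0, 0.5], [0.1, 0.6]],
    "move": [[0.1, 0.6], [0.3, 0.6]],
    "place": [[0.3, 0.6], [0.3, 0.4]],
}


def run_compound_command(recognized_text: str) -> None:
    keywords = recognized_text.split()
    actions = [primitives.get(k) for k in keywords]
    if None in actions:
        print(f"no primitive taught for: {keywords[actions.index(None)]}")
        return
    print("will execute:", " -> ".join(keywords))  # stand-in for voice playback
    if input("confirm? [y/n] ").strip().lower() == "y":
        for keyword, action in zip(keywords, actions):
            print(f"executing '{keyword}' ({len(action)} waypoints)")


run_compound_command("pick move place")
```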
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited to it. Any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall be included within the scope of protection of the present invention.

Claims (10)

1. A fast teaching method for a robot, characterized by comprising the following steps:
S1: start the teaching mode; cameras capture image information of the robot and its environment;
S2: synthesize the captured images into a panoramic video signal and output it to the augmented reality device;
S3: capture the user's body motion data as the input signal of the teaching action;
S4: project the input signal onto the robot and judge whether the robot performs the corresponding action according to the user's teaching motion; if so, name the teaching action with a voice command;
S5: convert the voice command into a text keyword and store it in the primitive database; teaching ends. The primitive database is a database that stores the different teaching actions and their corresponding text keywords.
2. The fast teaching method for a robot according to claim 1, characterized in that in step S3 the user's hand is simulated from the user's body motion data and an animation of the simulated hand is displayed on the augmented reality device for the user to observe; and in step S4, after the input signal is projected onto the robot, the robot's corresponding course of action is displayed on the augmented reality device in real time.
3. The fast teaching method for a robot according to claim 1, characterized in that in step S3 the user's body motion data is captured by a wearable exoskeleton or a vision camera.
4. The fast teaching method for a robot according to claim 3, characterized in that when the user's body motion data is captured by a wearable exoskeleton, the user drags the end of the robot arm with one hand while the other hand guides the motion of the robot's arm joint, so that the robot's motion trajectory can follow the user's teaching action with a unique solution; and during teaching, the wearable exoskeleton provides force feedback information to the user for adjusting the force of the user's teaching action.
5. The fast teaching method for a robot according to claim 1, characterized in that image information of the robot and its environment is captured by cameras in multiple positions and the observation angle is adjustable during teaching; the user can freely select the viewing angle and adjust the teaching operation in time according to changes in the environment around the robot.
6. The fast teaching method for a robot according to claim 1, characterized in that after the teaching process ends, a voice command input by the user is received and converted into a text keyword; the keyword is compared with the text keywords in the primitive database; the teaching action combination corresponding to the voice command is extracted and played back to the user by voice; and the voice command is formally executed only after the user confirms it, which avoids the robot performing a wrong action because of an error in recognizing the user's voice command.
7. A fast teaching device for a robot, characterized by comprising a robot, cameras, an augmented reality device, a motion capture module, a microphone and a loudspeaker, wherein:
the robot is connected to the motion capture module, receives the user's teaching action and performs the corresponding action according to the teaching action;
the cameras, of which there are several arranged at different positions around the robot's workspace, are each connected to the augmented reality device by a data cable, capture image information of the robot's workspace in real time and transfer it to the augmented reality device;
the augmented reality device includes a display screen and a built-in virtual scene construction program module that stitches the images captured by the cameras into a panoramic dynamic 3D environment in real time and presents the real-time video of the robot and its workspace on the display screen as a panoramic video;
the motion capture module is connected to the augmented reality device and collects the user's body motion data; the microphone is connected to the robot, detects the user's voice from multiple directions and collects the user's voice commands;
the loudspeaker is connected to the augmented reality device and announces the name of the action the robot is about to execute so that the user can confirm it.
8. The fast teaching device for a robot according to claim 7, characterized in that there are at least four cameras, arranged at four positions around the robot's workspace; and the augmented reality device may be a head-mounted display, virtual reality glasses and/or a holographic 3D projector.
9. The fast teaching device for a robot according to claim 7, characterized in that the motion capture module includes two sets of wearable exoskeletons, one of which pulls the end of the robot arm while the other guides the robot's arm joint; the user's motion information collected by the wearable exoskeletons can be transmitted directly to the robot; the wearable exoskeleton covers all ten of the user's fingers and captures the user's hand motion, so that the augmented reality device can display a simulated hand animation generated from the user's body motion data collected by the motion capture module; and the wearable exoskeleton is provided with a force feedback module connected to the augmented reality device by a data cable, which during teaching outputs in real time the force feedback information generated when the user touches an object, so that the user can adjust the applied force.
10. The fast teaching device for a robot according to claim 7, characterized in that the motion capture module includes a vision camera that collects the user's body motion information, and the user's actions can be captured by a machine learning algorithm and transmitted to the robot.
CN201610890684.6A 2016-10-12 2016-10-12 Fast teaching method and device for robot Active CN106363637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610890684.6A CN106363637B (en) 2016-10-12 2016-10-12 Fast teaching method and device for robot

Publications (2)

Publication Number Publication Date
CN106363637A (en) 2017-02-01
CN106363637B (en) 2018-10-30

Family

ID=57894944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610890684.6A Active CN106363637B (en) Fast teaching method and device for robot

Country Status (1)

Country Link
CN (1) CN106363637B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050256611A1 (en) * 2003-11-24 2005-11-17 Abb Research Ltd Method and a system for programming an industrial robot
CN1843712A (en) * 2006-05-12 2006-10-11 上海大学 Flexible and remote-controlled operation platform based on virtual robot
US20130225305A1 (en) * 2012-02-28 2013-08-29 Electronics And Telecommunications Research Institute Expanded 3d space-based virtual sports simulation system
EP2741171A1 (en) * 2012-12-06 2014-06-11 AIRBUS HELICOPTERS DEUTSCHLAND GmbH Method, human-machine interface and vehicle
CN104057450A (en) * 2014-06-20 2014-09-24 哈尔滨工业大学深圳研究生院 Teleoperation method of high-dimensional motion arm aiming at service robot
CN204366968U (en) * 2015-01-04 2015-06-03 广东工业大学 Based on the multiple degrees of freedom anthropomorphic robot of said three-dimensional body sense video camera
CN105291138A (en) * 2015-11-26 2016-02-03 华南理工大学 Visual feedback platform improving virtual reality immersion degree
CN105679315A (en) * 2016-03-22 2016-06-15 谢奇 Voice-activated and voice-programmed control method and control system

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107225561A (en) * 2017-06-02 2017-10-03 成都福莫斯智能系统集成服务有限公司 Robot arm control method based on MR technologies
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality
CN107656505A (en) * 2017-08-21 2018-02-02 杭州太若科技有限公司 Method, device and system for controlling human-robot collaboration using an augmented reality device
CN109426860A (en) * 2017-08-23 2019-03-05 幻视互动(北京)科技有限公司 A kind of MR mixed reality information processing method neural network based and device
CN111867789A (en) * 2018-01-25 2020-10-30 川崎重工业株式会社 Robot teaching device
CN108127669A (en) * 2018-02-08 2018-06-08 华南理工大学 A kind of robot teaching system and implementation based on action fusion
CN109318232A (en) * 2018-10-22 2019-02-12 佛山智能装备技术研究院 A kind of polynary sensory perceptual system of industrial robot
CN109903393A (en) * 2019-02-22 2019-06-18 清华大学 New Century Planned Textbook Scene Composition methods and device based on deep learning
CN109903393B (en) * 2019-02-22 2021-03-16 清华大学 New visual angle scene synthesis method and device based on deep learning
CN109807870A (en) * 2019-03-20 2019-05-28 昆山艾派科技有限公司 Robot demonstrator
CN110084890A (en) * 2019-04-08 2019-08-02 中科云创(北京)科技有限公司 Mixed reality-based robot arm text copying method and device
CN111843986A (en) * 2019-04-26 2020-10-30 发那科株式会社 Robot teaching device
CN111843983A (en) * 2019-04-26 2020-10-30 发那科株式会社 Robot teaching device
CN110599823B (en) * 2019-09-05 2021-08-13 北京科技大学 Service robot teaching method based on fusion of teaching video and spoken voice
CN110599823A (en) * 2019-09-05 2019-12-20 北京科技大学 Service robot teaching method based on fusion of teaching video and spoken voice
CN110788860A (en) * 2019-11-11 2020-02-14 路邦科技授权有限公司 Bionic robot action control method based on voice control
CN110757461A (en) * 2019-11-13 2020-02-07 江苏方时远略科技咨询有限公司 Control system and control method of industrial mobile robot
CN111223337A (en) * 2020-03-12 2020-06-02 燕山大学 Calligraphy teaching machine based on machine vision and augmented reality
CN111526083A (en) * 2020-04-15 2020-08-11 上海幂方电子科技有限公司 Method, device, system and storage medium for instant messaging through head action
CN111526083B (en) * 2020-04-15 2022-04-15 上海幂方电子科技有限公司 Method, device, system and storage medium for instant messaging through head action
CN111860213A (en) * 2020-06-29 2020-10-30 广州幻境科技有限公司 Augmented reality system and control method thereof
CN112846737A (en) * 2021-01-07 2021-05-28 深圳市驰速自动化设备有限公司 Software control system for dragging demonstration automatic screw locking machine
CN113172602A (en) * 2021-01-28 2021-07-27 朱少强 Wearable bionic manipulator based on VR technology

Also Published As

Publication number Publication date
CN106363637B (en) 2018-10-30

Similar Documents

Publication Publication Date Title
CN106363637B (en) Fast teaching method and device for robot
CN206105869U (en) Fast robot teaching device
JP7095602B2 (en) Information processing equipment, information processing method and recording medium
TWI437875B (en) Instant Interactive 3D stereo imitation music device
JP2022000640A (en) Information processing device, information processing method, and information processing program
CN105374251A (en) Mine virtual reality training system based on immersion type input and output equipment
WO2015180497A1 (en) Motion collection and feedback method and system based on stereoscopic vision
CN110047104A (en) Object detection and tracking, head-mounted display apparatus and storage medium
EP2919093A1 (en) Method, system, and computer for identifying object in augmented reality
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
CN107656505A (en) Use the methods, devices and systems of augmented reality equipment control man-machine collaboration
CN107408314A (en) Mixed reality system
CN110969905A (en) Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
CN109799900A (en) The wireless wrist connected for three-dimensional imaging, mapping, networking and interface calculates and controls device and method
CN111240490A (en) Equipment insulation test training system based on VR virtual immersion and circular screen interaction
CN106652590A (en) Teaching method, teaching recognizer and teaching system
JP2016126042A (en) Image display system, image display device, image display method and program
CN204406327U (en) Based on the limb rehabilitating analog simulation training system of said three-dimensional body sense video camera
CN108830944B (en) Optical perspective three-dimensional near-to-eye display system and display method
WO2017042070A1 (en) A gazed virtual object identification module, a system for implementing gaze translucency, and a related method
Zaldívar-Colado et al. A mixed reality for virtual assembly
US20230256297A1 (en) Virtual evaluation tools for augmented reality exercise experiences
CN106325480A (en) Line-of-sight tracing-based mouse control device and method
CN115984437A (en) Interactive three-dimensional stage simulation system and method
WO2022240829A1 (en) Virtual guided fitness routines for augmented reality experiences

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant