CN110405730A - Human-robot-object interaction robotic arm teaching system based on RGB-D images - Google Patents

Human-robot-object interaction robotic arm teaching system based on RGB-D images

Info

Publication number
CN110405730A
CN110405730A (application CN201910490338.2A)
Authority
CN
China
Prior art keywords
teaching
image
rgb
robot
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910490338.2A
Other languages
Chinese (zh)
Other versions
CN110405730B (en)
Inventor
刘冬
丛明
卢彬鹏
邹强
于洪华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201910490338.2A priority Critical patent/CN110405730B/en
Publication of CN110405730A publication Critical patent/CN110405730A/en
Application granted granted Critical
Publication of CN110405730B publication Critical patent/CN110405730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/0081 - Programme-controlled manipulators with master teach-in means

Abstract

The invention belongs to the field of robot technology and applications, and relates to a human-robot-object interaction robotic arm teaching system based on RGB-D images. The teaching system performs object recognition on RGB-D images, and unifies the object and the robotic arm under the same coordinate system through the tf tree of the ROS system and the Kinect V2 point cloud information, thereby realizing object localization. According to human behavioral habits, high-level action planning for the robotic arm is carried out based on MoveIt!. During teaching, the operator selects an object in the operation interface, obtaining the class and pose of the object, and then selects an action from the high-level action set; the robotic arm is controlled to operate the corresponding object in real space, and the multi-step interaction constitutes the teaching trajectory. The teaching system of the invention can realize task-oriented intelligent human-robot-object interaction and substitute for the actual robot system in learning from demonstration, with the characteristics of high efficiency, convenience, and safety.

Description

Human-robot-object interaction robotic arm teaching system based on RGB-D images
Technical field
The invention belongs to the field of robot technology and applications, and relates to a human-robot-object interaction robotic arm teaching system based on RGB-D images.
Background art
Human-robot interaction is usually an effective mode in robot learning. Interaction-related work on robot skill acquisition mainly covers virtual reality (VR) teaching and offline programming. Both methods require three-dimensional scene models to be built in advance, and their real-time perception and adaptation to changing environments are poor. An RGB-D camera solves this problem well: it directly acquires the depth information of three-dimensional space and presents it in the form of a point cloud. It is commonly used to observe and capture the demonstrator's action sequences, which are then fitted to the robot joints through learning, and it is the main way for a robot to obtain environment information.
Existing teaching methods are oriented toward joint-level teaching, have a low level of intelligence, and are suited to specific, simple tasks on industrial production lines. Teaching daily-life skills with them requires a heavy workload. Moreover, joint action data are high-dimensional, so learning models take a long time to train and easily run into the "curse of dimensionality". At present, research that uses RGB-D images directly as the demonstration platform is still scarce. A teaching system based on RGB-D image interaction works from the camera viewpoint: the demonstrator interacts with objects with the mouse in the image, the robot is controlled to operate the objects in the actual environment and learns specific state-action sequences, realizing a simple and intuitive robot teaching method.
Summary of the invention
In view of the problems in the prior art, the present invention proposes a human-robot-object interaction robotic arm teaching system based on RGB-D images. The purpose of the invention is to take the RGB-D image as the demonstration platform and construct an RGB-D image interaction demonstration (RGBD-ID) robotic arm teaching system. The actions are organized with a layering idea: the high-dimensional low-level joint actions are combined into a smaller number of robot actions. By observing the demonstrator interact with objects in the RGB image, the robot is controlled to move in the actual environment and learns specific state-action sequences, realizing a simple and intuitive robot teaching method.
In order to achieve the above object, the technical solution adopted by the present invention is as follows:
Visual image information is acquired by the Kinect camera for object recognition. Through the coordinate transformation relations and the Kinect V2 image information, the object and the robotic arm are unified under the same coordinate system to localize the object. By studying human behavioral habits, high-level action planning for the robotic arm is carried out. With the RGB-D image space as the robot teaching space, an action is selected from the high-level action set, the robotic arm is controlled to operate the corresponding object in real space, and the multi-step interaction constitutes the teaching trajectory.
A task-oriented, convenient human-robot-object interaction robotic arm teaching system based on RGB-D images: the system first builds the robot teaching platform with an RGB-D camera and the action layering idea, and then takes the RGB-D image space as the robot teaching space, realizing task-oriented, convenient robotic arm teaching. It comprises two parts, specifically:
The steps for building the robot teaching platform are as follows:
Step (1): register the point cloud image obtained by the Kinect V2 with the original RGB-D image and eliminate the offset between the two image coordinate systems. Specifically: obtain the camera image data; calibrate the RGB camera, the depth camera, and their relative pose according to the RGB image and the point cloud image; and eliminate the offset of the two image coordinate systems, so that the three-dimensional coordinate of each point under the color image pixel coordinate system is determined by the point cloud data.
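As a concrete illustration of this registered lookup, the following minimal Python sketch reads the three-dimensional point behind a color pixel from an organized point cloud in ROS. It is a sketch only: the topic name assumes the iai_kinect2 bridge and is not part of the invention.

    import rospy
    import sensor_msgs.point_cloud2 as pc2
    from sensor_msgs.msg import PointCloud2

    def point_at_pixel(cloud_msg, u, v):
        # In an organized cloud registered to the color image, pixel (u, v)
        # indexes its 3D point directly; uvs takes (column, row) pairs.
        gen = pc2.read_points(cloud_msg, field_names=("x", "y", "z"),
                              skip_nans=False, uvs=[(u, v)])
        return next(gen)  # (x, y, z) in the camera frame; NaN if no depth

    def callback(msg):
        x, y, z = point_at_pixel(msg, 480, 270)  # centre of a 960 x 540 image
        rospy.loginfo("3D point: %.3f %.3f %.3f", x, y, z)

    if __name__ == "__main__":
        rospy.init_node("pixel_to_point")
        rospy.Subscriber("/kinect2/qhd/points", PointCloud2, callback,
                         queue_size=1)
        rospy.spin()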
Step (2): perform object recognition with an image recognition method, extract the target object region, and determine the spatial position of the object. Specific methods include color-based recognition, the SIFT algorithm, the ORB algorithm, and deep learning (convolutional neural networks).
The color-based object recognition proceeds as follows: first convert the RGB image into an HSV image, set the threshold values of the three channels H, S, and V according to the color, then apply an AND operation to the pixels satisfying all three channel thresholds to obtain the region that meets the color requirement, and determine the center position from the object region.
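A minimal OpenCV sketch of this step is given below; the HSV thresholds are placeholder values that would be tuned for each color, not values from the invention.

    import cv2
    import numpy as np

    def locate_by_color(bgr, lower=(0, 120, 70), upper=(10, 255, 255)):
        # Convert the camera image (BGR in OpenCV's channel order) to HSV.
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # inRange combines the H, S and V threshold tests, i.e. the AND
        # operation over pixels that satisfy all three channels.
        mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None  # no region satisfied the color requirement
        # Centre of the matching region, as pixel coordinates (u, v).
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])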
Step (3): locate the teaching object.
Build a robot model that has the same attributes as the actual robot (link geometric parameters, kinematic parameters, coordinate transformation relations, collision relations). Using the coordinate value of the acquired object in the RGB image and the coordinate transformation relations of the robot model, the robot and the object are unified under the same world coordinate system, realizing the localization of the object. The coordinate transformations are shown in Fig. 1.
First, according to the robot model, the homogeneous transformation matrix from the world coordinate system to the robotic arm base coordinate system and on to the end-effector coordinate system, \({}^{W}T_{E}\), is calculated. The transformation matrix \({}^{E}T_{C}\) from the robot end effector to the Kinect V2 camera coordinate system is obtained by hand-eye parameter calibration. The transformation matrix from the Kinect V2 camera coordinate system to the world coordinate system is then

\[ {}^{W}T_{C} = {}^{W}T_{E}\,{}^{E}T_{C} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \tag{1} \]

where R is the rotation matrix and T is the translation vector.
Second, suppose object A has pixel coordinate \((u_A, v_A)\) in the RGB image. According to the point cloud data, the three-dimensional coordinate of object A in the camera coordinate system is determined as \({}^{C}P_{A} = (x_A, y_A, z_A, 1)^{T}\) (in homogeneous form); this is the mapping between the two. Then

\[ {}^{W}P_{A} = {}^{W}T_{C}\,{}^{C}P_{A} \tag{2} \]

is the three-dimensional coordinate of object A in the world coordinate system.
Finally, with the actual robot and object A unified under the same world coordinate system, we select the object in the RGB image, and the robot can directly operate the object in three-dimensional space.
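The chain of equations (1) and (2) amounts to a few lines of matrix algebra; the NumPy sketch below shows the composition, with function and frame names chosen here for illustration.

    import numpy as np

    def homogeneous(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(t).ravel()
        return T

    def camera_point_to_world(T_world_ee, T_ee_cam, p_cam):
        """Map a camera-frame point into the world frame.

        T_world_ee: pose of the end-effector frame in the world frame
                    (from the robot model's forward kinematics).
        T_ee_cam:   pose of the camera frame in the end-effector frame
                    (from hand-eye calibration).
        """
        T_world_cam = T_world_ee @ T_ee_cam            # equation (1)
        p = np.append(np.asarray(p_cam, float), 1.0)   # homogeneous point
        return (T_world_cam @ p)[:3]                   # equation (2)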
Hand-eye calibration is carried out with conventional methods to determine the transformation from the camera to the world coordinate system; the specific methods include Eye-in-Hand and Eye-to-Hand. The pose of the object under the world coordinate system is then determined in combination with the point cloud data.
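For the Eye-in-Hand configuration, OpenCV offers calibrateHandEye; the sketch below wraps it, under the assumption that the pose lists have been collected from the robot's forward kinematics and from a calibration target observed by the camera.

    import cv2

    def hand_eye_calibrate(R_g2b, t_g2b, R_t2c, t_t2c):
        """Eye-in-Hand calibration: camera pose in the gripper frame.

        R_g2b, t_g2b: gripper (end-effector) poses in the robot base frame;
        R_t2c, t_t2c: calibration-target poses in the camera frame, one per
        robot station. At least three distinct robot poses are required.
        """
        R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
            R_gripper2base=R_g2b, t_gripper2base=t_g2b,
            R_target2cam=R_t2c, t_target2cam=t_t2c,
            method=cv2.CALIB_HAND_EYE_TSAI)
        # This is the end-effector-to-camera transform used in equation (1).
        return R_cam2gripper, t_cam2gripper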
Step (4): perform action layering planning based on MoveIt!. The robot actions are abstracted hierarchically: the low-level joint motions are combined into higher-level motion units, and the motion units are combined in a certain order to construct task units. The planning content is divided into three parts: the bottom-level path planning, i.e., trajectory planning in joint space; the middle-layer action planning, i.e., realizing human-like actions through path combination; and the high-level task planning, i.e., combining actions for a task.
The bottom-level path planning is specifically:
In conjunction with the urdf file, create a function package for motion planning for the robot. Given the end-effector start pose A and goal pose F, perform trajectory interpolation between the two points with the KDL library to obtain a series of path points. MoveIt! takes the path points as constraints on the end-effector trajectory and carries out motion planning for the other joint values, so that the robotic arm is controlled to move from one position to another.
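A minimal moveit_commander sketch of this bottom layer follows. The planning group name "manipulator" and the straight-line goal are assumptions for illustration; plain linear interpolation stands in here for the KDL interpolation used by the invention.

    import sys
    import copy
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("bottom_level_path_planning")
    group = moveit_commander.MoveGroupCommander("manipulator")  # assumed name

    start = group.get_current_pose().pose   # start pose A
    goal = copy.deepcopy(start)             # goal pose F: 20 cm along x
    goal.position.x += 0.20

    # Interpolate a series of path points between A and F.
    waypoints = []
    for k in range(1, 11):
        p = copy.deepcopy(start)
        p.position.x = start.position.x + \
            (goal.position.x - start.position.x) * k / 10.0
        waypoints.append(p)

    # MoveIt! takes the waypoints as end-effector constraints and plans the
    # remaining joint values (eef_step = 1 cm, jump threshold disabled).
    plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
    if fraction > 0.99:
        group.execute(plan, wait=True)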
The middle-layer action planning is specifically:
Middle-layer high-order action planning is carried out on the basis of the bottom-level path planning. For some common actions, according to the key waypoint poses of the action path, the path is simplified into one formed by connecting several line segments, thereby obtaining the middle-layer high-order action planning.
The high-level task planning is specifically:
The elemental actions can complete some common tasks, and more advanced actions can be formed by combining or redesigning the actions, as sketched below.
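The three layers can be pictured as follows: middle-layer actions wrap bottom-level paths, and a higher-level task is an ordered combination of actions. The action names below mirror the embodiment's action set; the bodies are deliberately left as stubs.

    from typing import Callable, Dict

    def pose_init():           # middle layer: return to the initial pose
        pass

    def move(target):          # middle layer: polyline path to `target`
        pass

    def grasp(obj):            # middle layer: approach, close gripper, lift
        pass

    def place(obj, target):    # middle layer: approach, open gripper, retreat
        pass

    ACTIONS: Dict[str, Callable] = {
        "pose_init": pose_init, "move": move, "grasp": grasp, "place": place,
    }

    def pick_and_place(obj, target):
        """Higher-level task assembled from middle-layer actions."""
        grasp(obj)
        place(obj, target)
        pose_init()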
The steps of the RGB-D image interaction teaching method are as follows:
Combining the RGB-D image with the action layering planning idea, the RGB-D image interaction teaching method proposed by the present invention takes the RGB-D image space as the robot teaching space and controls the robot to operate objects in the actual three-dimensional space. In each step, the operator selects an object in the RGB image to tell the robot which object we need to operate; the object pose information is computed through object recognition and localization, telling the robot where the object to be operated is; then an action is selected to control the robot, telling the robot how to operate the object. A series of such operations forms the teaching trajectory of a skill.
Step (1): use the object set {obj_1, obj_2, obj_3, …, obj_n} to denote the n objects in the teaching task. Select object obj_i in the RGB image, with pixel coordinate (x_i, y_i). After selecting an action a from the high-level action set {a_1, a_2, a_3, …, a_n}, record the state of object obj_i.
Step (2): denote by D the sample set of teaching trajectories; the teaching trajectory of one task is d ~ D. At each time step an object obj_t is selected and then an action a_t is selected, so the single-step teaching can be described as d(t) = ((s_objt, s_r), a_t), where s_objt is the state of object obj_t, namely the object pose, obtained from the pixel coordinate (x_i, y_i) by coordinate transformation, and s_r is the robotic arm state, namely the end-effector pose and the gripper open/closed state, obtained by calling the move_group interface functions. For a task k, a complete teaching trajectory can be expressed as:
d = {((s_obj1, s_r1), a_1), ((s_obj2, s_r2), a_2), …, ((s_objt, s_rt), a_t), …, ((s_objT, s_rT), a_T)}   (3)
This is like colloquially describing a task step by step in daily life: a verb followed by a noun, indicating that one object is operated with one action.
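One way to hold the record of equation (3) in code is the following sketch; the field layout is an assumption consistent with the state definitions above, not a structure prescribed by the invention.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Step:
        obj_id: str                  # which object obj_t was selected
        obj_pose: Tuple[float, ...]  # s_objt: pose of the selected object
        ee_pose: Tuple[float, ...]   # part of s_r: end-effector pose
        gripper_open: bool           # part of s_r: gripper open/closed state
        action: str                  # a_t: ID from the high-level action set

    @dataclass
    class TeachingTrajectory:
        task: str
        steps: List[Step] = field(default_factory=list)

        def record(self, obj_id, obj_pose, ee_pose, gripper_open, action):
            self.steps.append(
                Step(obj_id, obj_pose, ee_pose, gripper_open, action))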
Beneficial effects of the present invention:
Based on the property that an RGB-D image can be mapped to three-dimensional spatial information, a robotic arm teaching method that interacts with the RGB-D image is proposed. The method combines the intelligent interaction idea with task-level teaching and is a task-oriented, high-level, simple teaching method. After an object and an action are selected in the RGB-D image, the robotic arm is controlled to operate the corresponding object in the real workspace; each step of interaction with an object and an action is itself a labeling process, which eliminates the extra step of manually labeling key features. The layering idea is used in the action design: the specific values of each joint are ignored during teaching and abstract high-order actions are used instead, which effectively avoids the curse of dimensionality, and new actions can be added according to task requirements. Remote operation can also be achieved, i.e., the teaching space of the RGB-D image and the workspace of the robotic arm are separable, breaking the distance limitation.
Detailed description of the invention
Fig. 1 is the coordinate transformation diagram of the present invention; the whole coordinate system tree in the figure can be tracked and maintained at any time, and through the coordinate transformation relations of multiple reference frames, the coordinates of a point can be transformed between any two coordinate systems at any time.
Fig. 2 is the teaching flow chart constructed by the present invention.
Fig. 3 shows the elemental action paths designed in the embodiment.
Fig. 4 shows the RGB-D image interaction teaching carried out in the embodiment.
Fig. 5 shows the user interface designed in the embodiment for the convenience of teaching jobs.
Specific embodiment
Compared with traditional teaching methods, the human-robot-object interaction robotic arm teaching system and method based on RGB-D images proposed here is a task-oriented, high-level, simple teaching method. It ignores the specific values of each joint and directly outputs abstract high-level actions; the operator interacts with objects in the RGB image space and selects actions, and the robotic arm is controlled to operate the objects in real space. Teaching can be carried out in a simulated environment, which is more efficient, safe, low-loss, and allows data sharing.
The embodiment uses the following implementation:
The steps for building the robot teaching platform are as follows:
Step (1): call the libfreenect2 driver to obtain the camera image data, and subscribe to the 960 × 540 RGB image and the point cloud image. According to the two, calibrate the RGB camera, the depth camera, and their relative pose, and eliminate the offset of the two image coordinate systems, so that the three-dimensional coordinate of each point under the color image pixel coordinate system is determined by the point cloud data; after image registration, the image color region and depth region coincide effectively.
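A sketch of this acquisition step in ROS follows; the topic names assume the iai_kinect2 bridge's "qhd" (960 × 540) layout on top of libfreenect2 and are illustrative.

    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def synced(rgb_msg, cloud_msg):
        # After registration, pixel (u, v) of the color image and index
        # (u, v) of the organized cloud refer to the same scene point.
        rospy.loginfo_throttle(1.0, "rgb %dx%d, cloud %dx%d",
                               rgb_msg.width, rgb_msg.height,
                               cloud_msg.width, cloud_msg.height)

    rospy.init_node("rgbd_acquisition")
    rgb_sub = message_filters.Subscriber("/kinect2/qhd/image_color_rect", Image)
    cloud_sub = message_filters.Subscriber("/kinect2/qhd/points", PointCloud2)
    sync = message_filters.ApproximateTimeSynchronizer(
        [rgb_sub, cloud_sub], queue_size=5, slop=0.05)
    sync.registerCallback(synced)
    rospy.spin()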
Step (2): perform object recognition based on color.
First convert the RGB image into an HSV image, set the threshold values of the three channels H, S, and V according to the color, then apply an AND operation to the pixels satisfying all three channel thresholds, obtain the region that meets the color requirement, and determine the center position from the object region.
Step (3): locate the teaching object.
Build a robot model that has the same attributes as the actual robot (link geometric parameters, kinematic parameters, coordinate transformation relations, collision model). Using the coordinate value of the acquired object in the RGB image and the coordinate transformation relations of the robot, the robot and the object are unified under the same world coordinate system, realizing the localization of the object.
First, according to the robot model built, calculate the homogeneous transformation matrix from the world coordinate system to the robotic arm base coordinate system and on to the end-effector coordinate system, \({}^{W}T_{E}\). The transformation matrix \({}^{E}T_{C}\) from the robot end effector to the Kinect V2 camera coordinate system is obtained by hand-eye parameter calibration. The transformation matrix from the Kinect V2 camera coordinate system to the world coordinate system is then

\[ {}^{W}T_{C} = {}^{W}T_{E}\,{}^{E}T_{C} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \tag{1} \]

where R is the rotation matrix and T is the translation vector.
Second, suppose object A has pixel coordinate \((u_A, v_A)\) in the RGB image. According to the point cloud data, the three-dimensional coordinate of object A in the camera coordinate system is determined as \({}^{C}P_{A} = (x_A, y_A, z_A, 1)^{T}\) (in homogeneous form); this is the mapping between the two. Then

\[ {}^{W}P_{A} = {}^{W}T_{C}\,{}^{C}P_{A} \tag{2} \]

is the three-dimensional coordinate of object A in the world coordinate system.
Finally, with the actual robot and object A unified under the same world coordinate system, the object is selected in the RGB image, and the robot can directly operate the object in three-dimensional space.
Step (4): perform action layering planning based on MoveIt!. The robot actions are abstracted hierarchically: the low-level joint motions are combined into higher-level motion units, and the motion units are combined in a certain order to construct task units. The planning content is divided into three parts: the bottom-level path planning, i.e., trajectory planning in joint space; the middle-layer action planning, i.e., realizing human-like actions through path combination; and the high-level task planning, i.e., combining actions for a task.
The present invention imitates human high-order skills by designing the motion path of the end effector; the joint values of the robotic arm are obtained by motion planning. The specific values of each joint are ignored during teaching, and the learning model directly outputs abstract high-level actions.
The bottom-level path planning is specifically:
In conjunction with the urdf file, create a function package for motion planning for the robot. Given the end-effector start pose A and goal pose F, perform trajectory interpolation between the two points with the KDL library to obtain a series of path points. MoveIt! takes the path points as constraints on the end-effector trajectory and carries out motion planning for the other joint values, so that the robotic arm is controlled to move from one position to another.
The middle-layer action planning is specifically:
Middle-layer high-order action planning is carried out on the basis of the bottom-level path planning. For some common actions, according to the key waypoint poses of the action path, the path is simplified into one formed by connecting several line segments. For everyday tasks, four common actions are designed in total: {pose initialization, move, grasp, place}. These four elemental actions can complete some common tasks, and more advanced actions can be formed by combining or redesigning the actions. The path of each action is shown in Fig. 3, and a sketch of the four actions as polyline paths follows.
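The sketch below writes the four actions down as polyline paths of end-effector key poses; the home pose and the approach offset are illustrative values, not parameters from the invention.

    HOME = (0.3, 0.0, 0.4)  # assumed initial end-effector position (m)
    LIFT = 0.10             # assumed vertical approach/retreat offset (m)

    def above(p, dz=LIFT):
        return (p[0], p[1], p[2] + dz)

    def path_pose_init(current):
        # pose initialization: one segment back to the home pose
        return [current, HOME]

    def path_move(current, target):
        # move: lift, translate, descend - three connected segments
        return [current, above(current), above(target), target]

    def path_grasp(obj):
        # grasp: approach from above, close the gripper at obj, retreat
        return [above(obj), obj, above(obj)]

    def path_place(target):
        # place: approach from above, open the gripper at target, retreat
        return [above(target), target, above(target)]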
The steps of the RGB-D image interaction teaching are as follows:
The RGB-D image interaction teaching method proposed by the present invention takes the RGB-D image space as the robot teaching space and controls the robot to operate objects in the actual three-dimensional space. In each step, the operator selects an object in the RGB image to tell the robot which object we need to operate; the object pose information is computed through object recognition and localization, telling the robot where the object to be operated is; then an action is selected to control the robot, telling the robot how to operate the object. The operation flow is shown in Fig. 2. A series of such operations forms the teaching trajectory of a skill.
Step (1): use the object set {obj_1, obj_2, obj_3, …, obj_n} to denote the n objects in the teaching task. Select object obj_i in the RGB image, with pixel coordinate (x_i, y_i). After selecting an action a from the high-level action set {a_1, a_2, a_3, …, a_n}, record the state of object obj_i.
Step (2): denote by D the sample set of teaching trajectories; the teaching trajectory of one task is d ~ D. At each time step an object obj_t is selected and then an action a_t is selected, so the single-step teaching can be described as d(t) = ((s_objt, s_r), a_t), where s_objt is the state of object obj_t, namely the object pose, obtained from the pixel coordinate (x_i, y_i) by coordinate transformation, and s_r is the robotic arm state, namely the end-effector pose and the gripper open/closed state, obtained by calling the move_group interface functions. For a task k, a complete teaching trajectory can be expressed as
d = {((s_obj1, s_r1), a_1), ((s_obj2, s_r2), a_2), …, ((s_objt, s_rt), a_t), …, ((s_objT, s_rT), a_T)}
This is like colloquially describing a task step by step in daily life: a verb followed by a noun, indicating that one object is operated with one action.
In the embodiment, the robotic arm is taught the block-stacking task by the RGBD-ID method. Wooden blocks of four colors (red, yellow, blue, white) are used as task objects simultaneously in the simulated environment and the actual robot environment, and the task is set as stacking blocks of three colors in the order blue, red, yellow from bottom to top. The red block is selected in the interaction image, and the ID and pose of the object are recorded. Then the grasp action is selected from the action set to grasp the red block, and the ID of the action is recorded. In the real workspace of the robotic arm, the gripper moves above the red block and grasps it; the pose initialization action is then selected, and the robotic arm moves to the initial pose so that it does not block the camera view and affect information acquisition. The blue block is selected next together with the place action, and the robotic arm places the red block on top of the blue block; finally the pose initialization action is selected again, completing the task of placing the red block on the blue block. The process of placing the yellow block on the red block is similar, only the selected objects change. The block-stacking teaching process is shown in Fig. 4, and an illustrative teaching trace follows.
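Using the TeachingTrajectory sketch from the summary above, the 8-step trace of this task could be recorded as below; the poses and gripper states are placeholders rather than measured data.

    d = TeachingTrajectory(task="stack_blocks")
    steps = [("red", "grasp"), ("red", "pose_init"),
             ("blue", "place"), ("blue", "pose_init"),
             ("yellow", "grasp"), ("yellow", "pose_init"),
             ("red", "place"), ("red", "pose_init")]
    for obj, act in steps:
        d.record(obj_id=obj, obj_pose=(0.0, 0.0, 0.0),   # placeholder pose
                 ee_pose=(0.0, 0.0, 0.0), gripper_open=True,
                 action=act)
    print(len(d.steps))  # 8 teaching steps, matching the embodiment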
A user interface was also designed with MATLAB GUI, as shown in Fig. 5. The interface includes the RGB image interaction window, selection buttons, the action-set list, and a data visualization region; other function modules can be added successively in later work.
For the stacking task as set, the teaching totals 8 steps. In each experiment the initial poses of the blocks in the environment are placed randomly, and raw training data are collected during teaching for subsequent imitation learning model training. Both the operation and the data structure of the teaching experience are identical in the real and simulated environments, so subsequent robot learning work such as imitation learning and reinforcement learning can be carried out in the simulated environment, which is an efficient, safe, low-loss method that allows data sharing.

Claims (4)

1. A human-robot-object interaction robotic arm teaching system based on RGB-D images, characterized in that the teaching system first builds a robot teaching platform with an RGB-D camera and the action layering idea, and carries out high-level action planning for the robotic arm by studying human behavioral habits; it then takes the RGB-D image space as the robot teaching space, selects an action from the high-level action set, controls the robotic arm to operate the corresponding object in real space, and the multi-step interaction constitutes the teaching trajectory; the system comprises two parts, building the robot teaching platform and the RGB-D image interaction teaching, specifically:
The steps for building the robot teaching platform are as follows:
Step (1): register the point cloud image obtained by the Kinect V2 with the original RGB-D image, and eliminate the offset between the two image coordinate systems;
Step (2): perform object recognition with an image recognition method, extract the target object region, and determine the spatial position of the object; the color-based object recognition is specifically: first convert the RGB image into an HSV image, set the threshold values of the three channels H, S, and V according to the color, then apply an AND operation to the pixels satisfying all three channel thresholds, obtain the region that meets the color requirement, and determine the center position from the object region;
Step (3): locate the teaching object;
Build a robot model that has the same attributes as the actual robot; using the coordinate value of the acquired object in the RGB image and the coordinate transformation relations of the robot model, unify the robot and the object under the same world coordinate system and realize the localization of the object; the attributes include link geometric parameters, kinematic parameters, coordinate transformation relations, and collision relations;
First, according to the robot model, calculate the homogeneous transformation matrix from the world coordinate system to the robotic arm base coordinate system and on to the end-effector coordinate system, \({}^{W}T_{E}\); carry out hand-eye calibration with a traditional hand-eye parameter calibration method to determine the transformation from the camera to the world coordinate system, i.e., obtain the transformation matrix \({}^{E}T_{C}\) from the robot end effector to the Kinect V2 camera coordinate system; the transformation matrix from the Kinect V2 camera coordinate system to the world coordinate system is then

\[ {}^{W}T_{C} = {}^{W}T_{E}\,{}^{E}T_{C} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \tag{1} \]

where R is the rotation matrix and T is the translation vector;
Second, suppose object A has pixel coordinate \((u_A, v_A)\) in the RGB image; according to the point cloud data, the three-dimensional coordinate of object A in the camera coordinate system is determined as \({}^{C}P_{A} = (x_A, y_A, z_A, 1)^{T}\), which is the mapping between the two; then

\[ {}^{W}P_{A} = {}^{W}T_{C}\,{}^{C}P_{A} \tag{2} \]

is the three-dimensional coordinate of object A in the world coordinate system;
Finally, with the actual robot and object A unified under the same world coordinate system, we select the object in the RGB image, and the robot can directly operate the object in three-dimensional space;
Step (4): perform action layering planning based on MoveIt!; abstract the robot actions hierarchically, combine the low-level joint motions into higher-level motion units, combine the motion units in a certain order, and construct task units; the planning content is divided into three parts: the bottom-level path planning, i.e., trajectory planning in joint space; the middle-layer action planning, i.e., realizing human-like actions through path combination; and the high-level task planning, i.e., combining actions for a task;
The steps of the RGB-D image interaction teaching method are as follows:
Combining the RGB-D image with the action layering planning, take the RGB-D image space as the robot teaching space and control the robot to operate objects in the actual three-dimensional space, specifically:
Step (1): use the object set {obj_1, obj_2, obj_3, …, obj_n} to denote the n objects in the teaching task, and select object obj_i in the RGB image, with pixel coordinate (x_i, y_i); after selecting an action a from the high-level action set {a_1, a_2, a_3, …, a_n}, record the state of object obj_i;
Step (2): denote by D the sample set of teaching trajectories; the teaching trajectory of one task is d ~ D; at each time step an object obj_t is selected and then an action a_t is selected; the single-step teaching can be described as d(t) = ((s_objt, s_r), a_t), where s_objt is the state of object obj_t, namely the object pose, obtained from the pixel coordinate (x_i, y_i) by coordinate transformation, and s_r is the robotic arm state, namely the end-effector pose and the gripper open/closed state; for a task k, a complete teaching trajectory is expressed as:
d = {((s_obj1, s_r1), a_1), ((s_obj2, s_r2), a_2), …, ((s_objt, s_rt), a_t), …, ((s_objT, s_rT), a_T)}   (3).
2. The human-robot-object interaction robotic arm teaching system based on RGB-D images according to claim 1, characterized in that, in building the robot teaching platform, the image recognition methods in step (2) include color-based recognition, the SIFT algorithm, the ORB algorithm, and deep learning.
3. The human-robot-object interaction robotic arm teaching system based on RGB-D images according to claim 1, characterized in that, in building the robot teaching platform, the traditional hand-eye parameter calibration methods in step (3) include Eye-in-Hand and Eye-to-Hand, and the pose of the object under the world coordinate system is then determined in combination with the point cloud data.
4. The human-robot-object interaction robotic arm teaching system based on RGB-D images according to claim 1, characterized in that, in step (4) of building the robot teaching platform:
the bottom-level path planning is: create a function package for motion planning for the robot; given the end-effector start pose A and goal pose F, perform trajectory interpolation between the two points with the KDL library to obtain a series of path points; MoveIt! takes the path points as constraints on the end-effector trajectory and carries out motion planning for the other joint values, so as to control the robotic arm to move from one position to another;
the middle-layer action planning is: carry out middle-layer high-order action planning on the basis of the bottom-level path planning; according to the key waypoint poses of the motion path, simplify common actions into paths formed by connecting several line segments, obtaining the middle-layer high-order action planning.
CN201910490338.2A 2019-06-06 2019-06-06 Human-computer interaction mechanical arm teaching system based on RGB-D image Active CN110405730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910490338.2A CN110405730B (en) 2019-06-06 2019-06-06 Human-computer interaction mechanical arm teaching system based on RGB-D image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910490338.2A CN110405730B (en) 2019-06-06 2019-06-06 Human-computer interaction mechanical arm teaching system based on RGB-D image

Publications (2)

Publication Number Publication Date
CN110405730A true CN110405730A (en) 2019-11-05
CN110405730B CN110405730B (en) 2022-07-08

Family

ID=68358240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910490338.2A Active CN110405730B (en) 2019-06-06 2019-06-06 Human-computer interaction mechanical arm teaching system based on RGB-D image

Country Status (1)

Country Link
CN (1) CN110405730B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030078694A1 (en) * 2000-12-07 2003-04-24 Fanuc Ltd. Robot teaching apparatus
CN106313072A (en) * 2016-10-12 2017-01-11 南昌大学 Humanoid robot based on leap motion of Kinect
CN206326605U (en) * 2016-12-19 2017-07-14 广州大学 A kind of intelligent teaching system based on machine vision
CN208914129U (en) * 2018-05-08 2019-05-31 南京航空航天大学金城学院 Body-sensing robot controller based on FPGA
CN108858199A (en) * 2018-07-27 2018-11-23 中国科学院自动化研究所 The method of the service robot grasp target object of view-based access control model

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110900606B (en) * 2019-12-03 2022-08-09 江苏创能智能科技有限公司 Hand-eye linkage system based on small mechanical arm and control method thereof
CN110900606A (en) * 2019-12-03 2020-03-24 江苏创能智能科技有限公司 Hand-eye linkage system based on small mechanical arm and control method thereof
CN111179341A (en) * 2019-12-09 2020-05-19 西安交通大学 Registration method of augmented reality equipment and mobile robot
CN111179341B (en) * 2019-12-09 2022-05-20 西安交通大学 Registration method of augmented reality equipment and mobile robot
CN111251277B (en) * 2020-01-31 2021-09-03 武汉大学 Human-computer collaboration tool submission system and method based on teaching learning
CN111251277A (en) * 2020-01-31 2020-06-09 武汉大学 Human-computer collaboration tool submission system and method based on teaching learning
CN111843997A (en) * 2020-07-29 2020-10-30 上海大学 Handheld general teaching system for mechanical arm and operation method thereof
US20220055216A1 (en) * 2020-08-20 2022-02-24 Smart Building Tech Co., Ltd. Cloud based computer-implemented system and method for grouping action items on visual programming panel in robot simulator
CN112071144A (en) * 2020-09-01 2020-12-11 芜湖固高自动化技术有限公司 Teaching simulation platform
CN112573223B (en) * 2020-11-27 2021-11-09 中国科学院自动化研究所 Method, system and device for loading simulated human intelligence into steamer
CN112573223A (en) * 2020-11-27 2021-03-30 中国科学院自动化研究所 Method, system and device for loading simulated human intelligence into steamer
CN112720504A (en) * 2021-01-20 2021-04-30 清华大学 Method and device for controlling learning of hand and object interactive motion from RGBD video
CN112958974A (en) * 2021-02-08 2021-06-15 西安知象光电科技有限公司 Interactive automatic welding system based on three-dimensional vision
CN113223048A (en) * 2021-04-20 2021-08-06 深圳瀚维智能医疗科技有限公司 Hand-eye calibration precision determination method and device, terminal equipment and storage medium
CN113223048B (en) * 2021-04-20 2024-02-27 深圳瀚维智能医疗科技有限公司 Method and device for determining hand-eye calibration precision, terminal equipment and storage medium
CN113125463A (en) * 2021-04-25 2021-07-16 济南大学 Teaching method and device for detecting weld defects of automobile hub
CN113125463B (en) * 2021-04-25 2023-03-10 济南大学 Teaching method and device for detecting weld defects of automobile hub
CN113319859A (en) * 2021-05-31 2021-08-31 上海节卡机器人科技有限公司 Robot teaching method, system and device and electronic equipment
CN113319859B (en) * 2021-05-31 2022-06-28 上海节卡机器人科技有限公司 Robot teaching method, system and device and electronic equipment
CN113765999B (en) * 2021-07-20 2023-06-27 上海卓昕医疗科技有限公司 Multi-multi-joint mechanical arm compatible method and system
CN113765999A (en) * 2021-07-20 2021-12-07 上海卓昕医疗科技有限公司 Compatible method and system for multiple multi-joint mechanical arms
CN114043497A (en) * 2021-11-19 2022-02-15 济南大学 Method and system for intelligently interacting with intelligence-developing game of old people and robot
CN114043497B (en) * 2021-11-19 2023-06-30 济南大学 Intelligent interaction method, system and robot for intelligent game with old people
CN114332985A (en) * 2021-12-06 2022-04-12 上海大学 Portrait profile intelligent drawing method based on double mechanical arm cooperation
CN114227688B (en) * 2021-12-29 2023-08-04 同济大学 Teaching track learning method based on curve registration
CN114227688A (en) * 2021-12-29 2022-03-25 同济大学 Teaching trajectory learning method based on curve registration
CN115018876B (en) * 2022-06-08 2023-09-26 哈尔滨理工大学 ROS-based non-cooperative target grabbing control method
CN115018876A (en) * 2022-06-08 2022-09-06 哈尔滨理工大学 Non-cooperative target grabbing control system based on ROS
CN115364494A (en) * 2022-07-26 2022-11-22 福州市鹭羽智能科技有限公司 Automatic stacking device and method for building blocks based on patterns
CN115364494B (en) * 2022-07-26 2024-02-23 福州市鹭羽智能科技有限公司 Automatic stacking device and method for building blocks based on patterns
CN117086866A (en) * 2023-08-07 2023-11-21 广州中鸣数码科技有限公司 Task planning training method and device based on programming robot
CN117086866B (en) * 2023-08-07 2024-04-12 广州中鸣数码科技有限公司 Task planning training method and device based on programming robot
CN117260681A (en) * 2023-09-28 2023-12-22 广州市腾龙信息科技有限公司 Control system of mechanical arm robot

Also Published As

Publication number Publication date
CN110405730B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN110405730A (en) A kind of man-machine object interaction mechanical arm teaching system based on RGB-D image
CN104589356B (en) The Dextrous Hand remote operating control method caught based on Kinect human hand movement
Shridhar et al. Cliport: What and where pathways for robotic manipulation
Ganapathi et al. Learning dense visual correspondences in simulation to smooth and fold real fabrics
Marín et al. A multimodal interface to control a robot arm via the web: a case study on remote programming
CN110026987A (en) Generation method, device, equipment and the storage medium of a kind of mechanical arm crawl track
CN104002296B (en) Simulator robot, robot teaching apparatus and robot teaching method
CN107193371A (en) A kind of real time human-machine interaction system and method based on virtual reality
CN105291138B (en) It is a kind of to strengthen the visual feedback platform of virtual reality immersion sense
CN108776773A (en) A kind of three-dimensional gesture recognition method and interactive system based on depth image
CN109483534A (en) A kind of grasping body methods, devices and systems
CN110298886A (en) A kind of Dextrous Hand Grasp Planning method based on level Four convolutional neural networks
CN109108942A (en) The mechanical arm motion control method and system of the real-time teaching of view-based access control model and adaptive DMPS
CN106346485A (en) Non-contact control method of bionic manipulator based on learning of hand motion gestures
Ganapathi et al. Learning to smooth and fold real fabric using dense object descriptors trained on synthetic color images
de Rengervé et al. Emergent imitative behavior on a robotic arm based on visuo-motor associative memories
Chen et al. A multichannel human-swarm robot interaction system in augmented reality
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
Wu et al. Learning affordance space in physical world for vision-based robotic object manipulation
Xia et al. Gibson env v2: Embodied simulation environments for interactive navigation
Takizawa et al. Learning from observation of tabletop knotting using a simple task model
Tan Implementation of a framework for imitation learning on a humanoid robot using a cognitive architecture
CN113927593B (en) Mechanical arm operation skill learning method based on task decomposition
Bohg et al. Towards grasp-oriented visual perception for humanoid robots
Mi et al. Robotable: an infrastructure for intuitive interaction with mobile robots in a mixed-reality environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant