CN109129474A - Manipulator active grabbing device and method based on multi-modal fusion - Google Patents

Manipulator active grabbing device and method based on multi-modal fusion

Info

Publication number
CN109129474A
Authority
CN
China
Prior art keywords
grabbed
manipulator
modal fusion
information
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810911069.8A
Other languages
Chinese (zh)
Other versions
CN109129474B (en)
Inventor
王伟明
马进
薛腾
韩鸣朔
刘文海
潘震宇
邵全全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201810911069.8A
Publication of CN109129474A
Application granted
Publication of CN109129474B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

The present invention provides a multi-modal-fusion-based manipulator active grabbing device and method. The device comprises a pedestal (1), a mechanical arm (2), a laser radar (3), a binocular vision system (4), and a manipulator (5); one end of the mechanical arm (2) and the laser radar (3) are each fixedly mounted on the pedestal (1), and the binocular vision system (4) and the manipulator (5) are each fixedly mounted at the other end of the mechanical arm. The method comprises the following steps: step 1: perceive the object to be grasped and obtain perception information; step 2: locate the object to be grasped according to the perception information and obtain positioning information; step 3: grasp the object to be grasped according to the positioning information. The present invention fully considers the complex environment of space operations, effectively improves the ability to grasp moving objects, and has wide application prospects.

Description

Manipulator active grabbing device and method based on multi-modal fusion
Technical field
The present invention relates to the technical field of space-robot positioning and grasping, in particular to a multi-modal-fusion-based manipulator active grabbing device and method, and especially to a robot positioning and active grasping technology for microgravity environments that fuses CMOS-camera binocular vision, laser radar, and tactile perception.
Background technique
The space industries of the world's major countries are developing at an accelerating pace, and the life-science experiments and industrial operations carried out to explore space are increasing. Traditional space activities rely on preset equipment instructions, direct operation by space-station staff, or teleoperation by ground staff, and lack automatic real-time interaction with, and learning from, the environment; as a result, complex tasks such as grasping moving objects are difficult to accomplish in a microgravity environment. Existing research on automatic grasping of moving objects under microgravity mainly focuses on combining tactile perception with passive compliant mechanisms to overcome the impact force of the grasping process and thereby improve grasp success rate and reliability; active manipulator grasping that comprehensively fuses multi-modal information such as touch and vision has received little study. The difficulties lie in the inaccurate sensor information caused by harsh space-environment disturbances and in predicting the object's running trajectory from such information. Exploiting the correlation and complementarity among multi-modal sensor signals is therefore of great significance for improving grasping efficiency and robustness.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a multi-modal-fusion-based manipulator active grabbing device and method.
According to one aspect of the present invention, a multi-modal-fusion-based manipulator active grabbing device is provided, comprising a pedestal, a mechanical arm, a laser radar, a binocular vision system, and a manipulator; wherein one end of the mechanical arm and the laser radar are each fixedly mounted on the pedestal, and the binocular vision system and the manipulator are each fixedly mounted at the other end of the mechanical arm.
Preferably, a deep-learning image-processor chip for multi-modal information fusion, manipulator motion planning, and grasp-control tasks is packaged inside the pedestal.
Preferably, the binocular vision system is mounted at the other end of the mechanical arm, arranged symmetrically about the axis of the mechanical arm.
Preferably, a touch sensor for feeding back the grasp state in real time, predicting the object pose, and controlling the clamping force is mounted inside the manipulator.
According to another aspect of the present invention, a multi-modal-fusion-based manipulator active grasping method is provided, comprising the following steps:
Step 1: perceive the object to be grasped and obtain perception information;
Step 2: locate the object to be grasped according to the perception information and obtain positioning information;
Step 3: grasp the object to be grasped according to the positioning information.
Preferably, the perception information includes a radar image and a visual image, and step 1 comprises the following steps:
Step 1.1: obtain the radar image of the object to be grasped by the laser radar;
Step 1.2: obtain the visual image of the object to be grasped by the binocular vision system.
Preferably, step 2 comprises the following steps:
Step 2.1: fuse the information of the radar image and the visual image to obtain state information of the object to be grasped;
Step 2.2: from the state information obtained by fusing the radar image and the visual image, predict the running attitude and/or position information of the object to be grasped;
Step 2.3: according to the predicted running attitude and/or position information, judge whether the object to be grasped has entered the grasp range: if it has, take the predicted running attitude and/or position information as the positioning information and proceed to step 3; if it has not, return to step 2.1.
Preferably, step 3 comprises the following steps:
Step 3.1: according to the positioning information of the object that has entered the grasp range, adjust the grasp attitude of the manipulator and execute the grasping operation;
Step 3.2: perceive tactile information through the touch sensor and judge whether the grasp succeeded: if it succeeded, end the process; if it failed, return to step 3.1.
Preferably, the binocular vision system is used to identify the type of the object to be grasped and to judge the spatial position relationship between the object to be grasped and the manipulator.
Preferably, the laser radar is used to identify the contour of the object to be grasped and to mark the object out in the visual image.
Compared with the prior art, the present invention has the following advantages:
1. The present invention fully considers the complex environment of space operations; by fusing binocular vision, laser radar, and tactile perception, it effectively improves the robot's ability to locate and actively grasp moving objects in a microgravity environment.
2. The present invention uses the information collected by the laser radar to mark the moving object to be grasped out of the image obtained from the binocular vision system, avoiding the vulnerability of conventional vision methods to strong-light interference, improving the recognition accuracy for the object to be grasped, and reducing the image-recognition difficulty of the computer vision system.
3. The present invention performs multi-modal information fusion on the images acquired by the binocular vision system and the laser radar using the RNN-LSTM algorithm, overcoming the incompleteness of environment perception from any single modality.
4. Based on the fused multi-modal information, the present invention predicts the trajectory of the object to be grasped with a spatio-temporal relationship reasoning algorithm and judges the object's attitude and its spatial position relative to the manipulator in real time, improving the probability of a successful grasp.
5. The present invention uses the touch sensor to feed back object pose information in real time and to control and optimize the grasping force in real time, improving the grasp success rate.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent upon reading the detailed description of the non-limiting embodiments with reference to the following drawings:
Fig. 1 is a schematic diagram of the overall structure of the multi-modal-fusion-based manipulator active grabbing device of the present invention.
Fig. 2 is a diagram of the relative positions of the binocular vision system, the mechanical arm, and the manipulator in Fig. 1.
Fig. 3 is a flow chart of the multi-modal-fusion-based manipulator active grasping method of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but they do not limit the invention in any way. It should be pointed out that those of ordinary skill in the art can make several changes and improvements without departing from the inventive concept; these all belong to the protection scope of the present invention.
Aiming at the problem that environmental factors such as harsh space illumination and electromagnetic fields make it difficult for a binocular vision system to accurately acquire information about a moving object to be grasped, the present invention introduces a laser radar to monitor surrounding objects in real time under the microgravity environment, fuses the radar image and the visual image by a recurrent neural network with long short-term memory, i.e. the RNN-LSTM algorithm, and corrects the binocular vision system on this basis to obtain accurate state information of the object to be grasped. The running trajectory of the object to be grasped is then predicted by a spatio-temporal relationship reasoning algorithm based on deep-learning theory, and finally the grasping operation is executed by a manipulator whose end is equipped with a touch sensor, improving the probability of a successful grasp.
According to one aspect of the present invention, a multi-modal-fusion-based manipulator active grabbing device is provided. As shown in Fig. 1, it comprises a pedestal 1, a mechanical arm 2, a laser radar 3, a binocular vision system 4, and a manipulator 5; one end of the mechanical arm 2 and the laser radar 3 are each fixedly mounted on the pedestal 1, and the binocular vision system 4 and the manipulator 5 are each fixedly mounted at the other end of the mechanical arm. A deep-learning image-processor chip for multi-modal information fusion, manipulator motion planning, and grasp-control tasks is packaged inside the pedestal 1. As shown in Fig. 2, the binocular vision system 4 is mounted at the other end of the mechanical arm 2, symmetrically about the axis of the mechanical arm 2. A touch sensor for feeding back the grasp state in real time, predicting the object pose, and controlling the clamping force is mounted inside the manipulator 5. The binocular vision system 4 is used to identify the type of the object to be grasped and to judge the spatial position relationship between the object to be grasped and the manipulator 5. The laser radar 3 is used to identify the contour of the object to be grasped and to mark the object out in the visual image.
According to another aspect of the present invention, a multi-modal-fusion-based manipulator active grasping method is provided, in particular one that uses the above multi-modal-fusion-based manipulator active grabbing device. As shown in Fig. 3, it comprises the following steps:
Step 1: perceive the object to be grasped and obtain perception information;
Step 2: locate the object to be grasped according to the perception information and obtain positioning information;
Step 3: grasp the object to be grasped according to the positioning information.
The perception information includes a radar image and a visual image, and step 1 comprises the following steps:
Step 1.1: obtain the radar image of the object to be grasped by the laser radar 3;
Step 1.2: obtain the visual image of the object to be grasped by the binocular vision system 4.
Step 2 comprises the following steps:
Step 2.1: fuse the information of the radar image and the visual image by the RNN-LSTM algorithm to obtain state information of the object to be grasped;
Step 2.2: from the state information obtained by fusing the radar image and the visual image, predict the running attitude and/or position information of the object to be grasped by the spatio-temporal relationship reasoning algorithm;
Step 2.3: according to the predicted running attitude and/or position information, judge whether the object to be grasped has entered the grasp range: if it has, take the predicted running attitude and/or position information as the positioning information and proceed to step 3; if it has not, return to step 2.1.
Step 3 comprises the following steps:
Step 3.1: according to the positioning information of the object that has entered the grasp range, adjust the grasp attitude of the manipulator 5 and execute the grasping operation;
Step 3.2: perceive tactile information through the touch sensor and judge whether the grasp succeeded: if it succeeded, end the process; if it failed, return to step 3.1.
Specific embodiments of the present invention have been described above. It is to be understood that the present invention is not limited to the particular embodiments above; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substantive content of the present invention. Provided there is no conflict, the embodiments of the present application and the features within the embodiments may be combined with one another arbitrarily.

Claims (10)

1. A multi-modal-fusion-based manipulator active grabbing device, characterized by comprising a pedestal (1), a mechanical arm (2), a laser radar (3), a binocular vision system (4), and a manipulator (5); wherein one end of the mechanical arm (2) and the laser radar (3) are each fixedly mounted on the pedestal (1), and the binocular vision system (4) and the manipulator (5) are each fixedly mounted at the other end of the mechanical arm.
2. The multi-modal-fusion-based manipulator active grabbing device according to claim 1, characterized in that a deep-learning image-processor chip for multi-modal information fusion, manipulator motion planning, and grasp-control tasks is packaged inside the pedestal (1).
3. The multi-modal-fusion-based manipulator active grabbing device according to claim 1, characterized in that the binocular vision system (4) is mounted at the other end of the mechanical arm (2), symmetrically about the axis of the mechanical arm (2).
4. The multi-modal-fusion-based manipulator active grabbing device according to claim 1, characterized in that a touch sensor for feeding back the grasp state in real time, predicting the object pose, and controlling the clamping force is mounted inside the manipulator (5).
5. A multi-modal-fusion-based manipulator active grasping method, characterized by comprising the following steps:
Step 1: perceive the object to be grasped and obtain perception information;
Step 2: locate the object to be grasped according to the perception information and obtain positioning information;
Step 3: grasp the object to be grasped according to the positioning information.
6. The multi-modal-fusion-based manipulator active grasping method according to claim 5, characterized in that the perception information includes a radar image and a visual image, and step 1 comprises the following steps:
Step 1.1: obtain the radar image of the object to be grasped by the laser radar (3);
Step 1.2: obtain the visual image of the object to be grasped by the binocular vision system (4).
7. The multi-modal-fusion-based manipulator active grasping method according to claim 5, characterized in that step 2 comprises the following steps:
Step 2.1: fuse the information of the radar image and the visual image to obtain state information of the object to be grasped;
Step 2.2: from the state information obtained by fusing the radar image and the visual image, predict the running attitude and/or position information of the object to be grasped;
Step 2.3: according to the predicted running attitude and/or position information, judge whether the object to be grasped has entered the grasp range: if it has, take the predicted running attitude and/or position information as the positioning information and proceed to step 3; if it has not, return to step 2.1.
8. The multi-modal-fusion-based manipulator active grasping method according to claim 5, characterized in that step 3 comprises the following steps:
Step 3.1: according to the positioning information of the object that has entered the grasp range, adjust the grasp attitude of the manipulator (5) and execute the grasping operation;
Step 3.2: perceive tactile information through the touch sensor and judge whether the grasp succeeded: if it succeeded, end the process; if it failed, return to step 3.1.
9. The multi-modal-fusion-based manipulator active grabbing device according to claim 1 or the multi-modal-fusion-based manipulator active grasping method according to claim 6, characterized in that the binocular vision system (4) is used to identify the type of the object to be grasped and to judge the spatial position relationship between the object to be grasped and the manipulator (5).
10. The multi-modal-fusion-based manipulator active grabbing device according to claim 1 or the multi-modal-fusion-based manipulator active grasping method according to claim 6, characterized in that the laser radar (3) is used to identify the contour of the object to be grasped and to mark the object out in the visual image.
CN201810911069.8A 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method Active CN109129474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810911069.8A CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810911069.8A CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Publications (2)

Publication Number Publication Date
CN109129474A (en) 2019-01-04
CN109129474B CN109129474B (en) 2020-07-14

Family

ID=64792860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810911069.8A Active CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Country Status (1)

Country Link
CN (1) CN109129474B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0263952A2 (en) * 1986-10-15 1988-04-20 Mercedes-Benz Ag Robot unit with moving manipulators
CN1343551A (en) * 2000-09-21 2002-04-10 上海大学 Hierarchical modular model for robot's visual sense
CN107576960A * 2017-09-04 2018-01-12 苏州驾驶宝智能科技有限公司 Object detection method and system based on vision-radar spatio-temporal information fusion
CN107838932A * 2017-12-14 2018-03-27 昆山市工研院智能制造技术有限公司 Accompanying care robot with a multi-degree-of-freedom mechanical arm
CN108214487A * 2017-12-16 2018-06-29 广西电网有限责任公司电力科学研究院 Robot target positioning and grasping method based on binocular vision and laser radar

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993763A * 2019-03-28 2019-07-09 北京理工大学 Probe positioning method and system based on fusion of image recognition and force feedback
CN110666792A (en) * 2019-09-04 2020-01-10 南京富尔登科技发展有限公司 Multi-point-position cooperative control manufacturing and assembling device and method based on information fusion
CN111168685A (en) * 2020-02-17 2020-05-19 上海高仙自动化科技发展有限公司 Robot control method, robot, and readable storage medium
CN111730606A (en) * 2020-08-13 2020-10-02 深圳国信泰富科技有限公司 Grabbing action control method and system of high-intelligence robot
CN111958596A (en) * 2020-08-13 2020-11-20 深圳国信泰富科技有限公司 Action planning system and method for high-intelligence robot
CN111958596B (en) * 2020-08-13 2022-03-04 深圳国信泰富科技有限公司 Action planning system and method for high-intelligence robot
CN111730606B (en) * 2020-08-13 2022-03-04 深圳国信泰富科技有限公司 Grabbing action control method and system of high-intelligence robot
CN112060085B (en) * 2020-08-24 2021-10-08 清华大学 Robot operation pose control method based on visual-touch multi-scale positioning
CN112060085A (en) * 2020-08-24 2020-12-11 清华大学 Robot operation pose control method based on visual-touch multi-scale positioning
CN112207804A (en) * 2020-12-07 2021-01-12 国网瑞嘉(天津)智能机器人有限公司 Live working robot and multi-sensor identification and positioning method
CN112777555A (en) * 2021-03-23 2021-05-11 江苏华谊广告设备科技有限公司 Intelligent oiling device and method
CN113433941A (en) * 2021-06-29 2021-09-24 之江实验室 Multi-modal knowledge graph-based low-level robot task planning method
CN113954076A (en) * 2021-11-12 2022-01-21 哈尔滨工业大学(深圳) Robot precision assembling method based on cross-modal prediction assembling scene
CN113954076B (en) * 2021-11-12 2023-01-13 哈尔滨工业大学(深圳) Robot precision assembling method based on cross-modal prediction assembling scene
CN115431279A (en) * 2022-11-07 2022-12-06 佛山科学技术学院 Mechanical arm autonomous grabbing method based on visual-touch fusion under weak rigidity characteristic condition

Also Published As

Publication number Publication date
CN109129474B (en) 2020-07-14

Similar Documents

Publication Publication Date Title
CN109129474A (en) Manipulator active grabbing device and method based on multi-modal fusion
CN110785268B (en) Machine learning method and device for semantic robot grabbing
CN106826822A Vision positioning and mechanical arm grasping implementation method based on ROS system
CN106256512A (en) Robot device including machine vision
JP2016522089A (en) Controlled autonomous robot system for complex surface inspection and processing
WO2014089316A1 (en) Human augmentation of robotic work
US11014243B1 (en) System and method for instructing a device
CN110421556A Trajectory planning and smooth running method for real-time collision avoidance of a redundant dual-arm service robot
CN110497405B (en) Force feedback man-machine cooperation anti-collision detection method and module for driving and controlling integrated control system
CN107984474A Half-body humanoid intelligent robot and control system thereof
JPH0830327A (en) Active environment recognition system
CN116755474A (en) Electric power line inspection method and system for unmanned aerial vehicle
Zhang et al. Multi‐target detection and grasping control for humanoid robot NAO
CN116494201A (en) Monitoring integrated power machine room inspection robot and unmanned inspection method
Xue et al. Gesture- and vision-based automatic grasping and flexible placement in teleoperation
US11052541B1 (en) Autonomous robot telerobotic interface
CN114905508A (en) Robot grabbing method based on heterogeneous feature fusion
Ranjan et al. Identification and control of NAO humanoid robot to grasp an object using monocular vision
Formica et al. Neural networks based human intent prediction for collaborative robotics applications
Du et al. A novel natural mobile human-machine interaction method with augmented reality
Kuan et al. Challenges in VR-based robot teleoperation
Schnaubelt et al. Autonomous assistance for versatile grasping with rescue robots
CN112959342B (en) Remote operation method for grabbing operation of aircraft mechanical arm based on operator intention identification
WO2022170279A1 (en) Systems, apparatuses, and methods for robotic learning and execution of skills including navigation and manipulation functions
Cheng-Jun et al. Design of mobile robot teleoperation system based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant