CN113478478B - Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method - Google Patents


Info

Publication number
CN113478478B
CN113478478B (application CN202110667497.2A)
Authority
CN
China
Prior art keywords
pushing
dialing
network
intelligent network
designing
Prior art date
Legal status
Active
Application number
CN202110667497.2A
Other languages
Chinese (zh)
Other versions
CN113478478A (en)
Inventor
汤亮
刘昊
高锡珍
谢心如
田林睿
刘乃龙
黄煌
Current Assignee
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering
Priority to CN202110667497.2A
Publication of CN113478478A
Application granted
Publication of CN113478478B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an uncertain object-oriented perception execution interaction natural compliance human-like manipulation method, which comprises: designing the application scene and obtaining a top-down heightmap of the working area with a camera; designing a dial-push agent network; designing the state-space input and action-space output of the network; designing the reward function of the network; designing knowledge guidance, including dial-push manipulation knowledge guidance, action-space selection guidance and dial-push force-feedback guidance; building a virtual simulation scene and setting dynamics parameters consistent with the physical world; building a simulation learning and training project and generating the dial-push agent network; and building the corresponding real physical scene and carrying out a transfer experiment on the dial-push agent network. The invention achieves intelligent, autonomous dial-push homing manipulation of samples and a higher packing rate.

Description

Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method
Technical Field
The invention relates to the field of machine-learning applications, and in particular to a perception-execution interaction, naturally compliant, human-like manipulation method for uncertain objects. It is suitable for robotic-arm tasks such as extraterrestrial-body sampling operations and on-line packing on the ground.
Background
The application of robotic arms has greatly improved the degree of freedom and the flexibility of unmanned systems. As task complexity increases, however, scenes contain many uncertain constraints, and neither conventionally programmed arms nor remotely teleoperated arms can satisfy the manipulation requirements. This patent is explained against the background of robotic-arm use in extraterrestrial-body sampling tasks, but the method applies equally to other applications with similar dial-push manipulation requirements, such as on-line packing on the ground and on-orbit maintenance and servicing in space.
Extraterrestrial-body sampling is an important means of studying the origin and evolution of extraterrestrial bodies. Because of launch-vehicle capacity, cost and other constraints, large analysis equipment cannot be installed on the probe. To study the sampled objects thoroughly, a robotic arm is typically needed to pick up each sample and place it in a sampling container for stowage, and the container is then returned to the ground for comprehensive analysis, yielding more accurate research data. Sample return is technically difficult and economically costly, so scientists want each return mission to bring back as many samples as possible. The sampling container has a limited volume, and errors during sampling can leave samples jammed or loosely fitted inside the container, wasting space. To make full use of the container volume, human-like dial-push manipulation is required in the final stage of packing: the push position and angle must be continuously and actively adapted so that free space is squeezed out, the samples are stowed in tight contact, and the packing rate is maximised.
The main goal of all human extraterrestrial-body sampling work is to learn more about the sampled objects, so before sampling many of their characteristics are insufficiently known and the mass and shape of the next object are hard to predict. Owing to communication delay and similar problems, it is also difficult to perform the dial-push homing operation by teleoperation. A human-like manipulation learning method with perception-execution interaction is therefore needed: one that uses artificial intelligence to perceive the scene with on-board sensors (mainly a camera and a force sensor) and autonomously executes human-like dial-push manipulation, achieving maximum packing of the samples.
Disclosure of Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, the invention provides an uncertain object-oriented perception execution interaction natural compliance human-like manipulation method that achieves intelligent, autonomous dial-push homing manipulation of samples and a higher packing rate.
The technical solution of the invention is as follows: an uncertain object-oriented perception execution interaction natural compliance human-like manipulation method comprises the following steps:
Step 1: The box used to collect the samples is assumed to be a rectangular parallelepiped with internal dimensions L, W and H, so the boundary of the working area can be predefined for imaging. In the experiment, the samples to be collected are assumed to be irregular stones; one stone at a time is placed into the sample box at a random position, and the dial-push operation is performed with the tip of the closed two-finger gripper at the end of a six-degree-of-freedom robotic arm. The arm's hand-eye camera photographs the initial environment inside the sample box; assuming the camera intrinsics are known, the viewing angle is rectified to a top-down view, the sample-box edge is cropped out according to the predefined working-area boundary, and a heightmap is output.
Step 2: Design the agent training network. A deep Q-network (DQN) is used to design the dial-push agent. The network structure adopts a multi-layer DenseNet model: through feature reuse and bypass (skip) connections, the number of network parameters is greatly reduced and the vanishing-gradient problem is alleviated to some extent, so that a manipulation policy can be learned directly from raw pixel data.
Step 3: On the basis of step 2, design the state-space input of the network. The heightmap from step 1 is rotated in 22.5-degree increments to generate new images, 16 rotations in total, and the 16 images are used as the state input of the agent network. The action-space output of the network is the dial-push probability of every pixel of every image; at each step the pixel with the highest probability is selected as the dial-push point, and the rotation angle of the corresponding image gives the dial-push direction.
Step 4: On the basis of step 3, design the reward function of the network. The reward design is simple. By examining the change in depth values between two consecutive frames, a single dial-push is considered successful when the change exceeds a threshold, and a small reward is given. At the same time, image-connectivity detection is performed on the heightmap from step 1 (excluding the sample-box edge): a breadth-first search algorithm judges the connectivity of the map, and when the map is connected and its pixel difference from the box edge is small, the dial-push is considered complete, a large reward is given, and the episode ends.
Step 5: Apply knowledge-guidance design to the action output of step 3, specifically as follows:
(1) dilate the image with the box-edge depth values removed to form a mask contour surrounding the stones; only pixels inside this mask are evaluated as candidate dial-push points;
(2) because the reward in step 4 only judges whether a stone has moved, the network does not know whether the push direction is correct, so knowledge guidance is added: the sample box is divided into a grid according to the average stone size and the four corner cells are checked for stones; if a corner is empty, the robotic arm is guided to push the stone toward that corner; if a stone is detected at the corner, the arm is preferentially guided to push the stone into the gap between two stones;
(3) if a stone already occupies the guidance target, a greedy strategy is used to select the push point from the candidate points of (1);
(4) combining the knowledge guidance of (1), (2) and (3) yields the desired push point and direction; meanwhile, because visual measurement has errors, to improve pushing efficiency each push distance is increased and force feedback is then used: the robotic arm stops when the contact force exceeds a threshold, at which point the push is considered complete.
Step 6: Build the virtual simulation scene with the pybullet physics engine, including the robotic arm, the two-finger gripper, stones of various shapes, the sampling box and so on, and set dynamics parameters consistent with the physical world.
Step 7: Combining steps 2 to 6, build the simulation learning and training project, convert the desired push pixel position output by the network into the robotic-arm base frame, perform learning and training, and generate the dial-push agent network.
Step 8: Build the corresponding real physical scene according to the virtual scene of step 6, and carry out a transfer experiment on the dial-push agent network of step 7.
Compared with the prior art, the invention has the following advantages:
(1) Through an iterative knowledge-guidance, exploration and re-guidance improvement scheme, the invention continuously and actively adapts and adjusts the dial-push position and angle to squeeze out free space, achieving autonomous, intelligent, tightly fitted stowage and homing.
(2) The invention does not need to know the shape or size of the objects to be placed in advance, and has strong adaptability and high working efficiency.
(3) Based on vision and force sensors, the invention realises autonomous dial-push homing manipulation; it is low in cost, widely applicable, and can be extended to other fields.
Drawings
FIG. 1 is a design flow chart of the present invention;
FIG. 2 is the heightmap used as network input in the present invention;
FIG. 3 is the per-pixel dial-push probability map output by the network in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
As shown in Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1: The box used to collect the samples is assumed to be a rectangular parallelepiped with internal dimensions L, W and H, so the boundary of the working area can be predefined for imaging. In the experiment, the samples to be collected are assumed to be irregular stones; one stone at a time is placed into the sample box at a random position, and the dial-push operation is performed with the tip of the closed two-finger gripper at the end of a six-degree-of-freedom robotic arm. The arm's hand-eye camera photographs the initial environment inside the sample box; assuming the camera intrinsics are known, the viewing angle is rectified to a top-down view, the sample-box edge is cropped out according to the predefined working-area boundary, and a heightmap is output, as shown in Fig. 2.
Step 2: Design the agent training network. A deep Q-network (DQN) is used to design the dial-push agent. The network structure adopts a multi-layer DenseNet model: through feature reuse and bypass (skip) connections, the number of network parameters is greatly reduced and the vanishing-gradient problem is alleviated to some extent, so that a manipulation policy can be learned directly from raw pixel data.
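As an illustration of this step (a DQN whose multi-layer DenseNet trunk produces per-pixel push values), a minimal PyTorch sketch follows. The exact backbone (densenet121), head layout and channel counts are assumptions; the patent only specifies a multi-layer DenseNet model with feature reuse and bypass connections.
```python
import torch
import torch.nn as nn
import torchvision

class PushQNetwork(nn.Module):
    """Fully convolutional Q-network: one heightmap in, one per-pixel push score map out."""
    def __init__(self):
        super().__init__()
        # DenseNet trunk: dense connectivity reuses features, which keeps the
        # parameter count low and eases gradient flow through the network.
        self.trunk = torchvision.models.densenet121().features   # 1024-channel output
        self.head = nn.Sequential(
            nn.BatchNorm2d(1024),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, 64, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, heightmap):
        # heightmap: (B, 1, H, W); replicate to 3 channels for the DenseNet trunk.
        x = heightmap.repeat(1, 3, 1, 1)
        q = self.head(self.trunk(x))
        # Upsample back to the input resolution so every pixel gets a Q-value.
        return nn.functional.interpolate(q, size=heightmap.shape[-2:],
                                         mode="bilinear", align_corners=False)
```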
Step 3: On the basis of step 2, design the state-space input of the network. The heightmap from step 1 is rotated in 22.5-degree increments to generate new images, 16 rotations in total, and the 16 images are used as the state input of the agent network. The action-space output of the network is the dial-push probability of every pixel of every image, as shown in Fig. 3. At each step the pixel with the highest probability is selected as the dial-push point, and the rotation angle of the corresponding image gives the dial-push direction.
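A minimal sketch of this state construction and action selection, assuming PyTorch and torchvision: the heightmap is rotated 16 times in 22.5-degree steps, each rotated copy is scored by the Q-network, and the arg-max pixel and rotation index give the push point and direction. The returned pixel is expressed in the rotated image frame and would still have to be rotated back to the original frame before execution; that detail is omitted here.
```python
import numpy as np
import torch
import torchvision.transforms.functional as TF

NUM_ROTATIONS = 16                      # 16 x 22.5 degrees covers the full circle
ANGLE_STEP = 360.0 / NUM_ROTATIONS

def select_push(q_net, heightmap):
    """Score 16 rotated copies of the heightmap and return the best push pixel
    and the push direction (the rotation angle of the winning copy)."""
    hm = torch.as_tensor(heightmap, dtype=torch.float32)[None, None]   # (1, 1, H, W)
    q_maps = []
    with torch.no_grad():
        for k in range(NUM_ROTATIONS):
            rotated = TF.rotate(hm, angle=k * ANGLE_STEP)
            q_maps.append(q_net(rotated)[0, 0])
    q_stack = torch.stack(q_maps)                                      # (16, H, W)
    k, row, col = np.unravel_index(int(q_stack.argmax()), q_stack.shape)
    # (row, col) is expressed in the rotated image frame and must still be
    # mapped back to the un-rotated heightmap before the push is executed.
    return (row, col), k * ANGLE_STEP
```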
Step 4: On the basis of step 3, design the reward function of the network. The reward design is simple. By examining the change in depth values between two consecutive frames, a single dial-push is considered successful when the change exceeds a threshold, and a small reward is given. At the same time, image-connectivity detection is performed on the heightmap from step 1 (excluding the sample-box edge): a breadth-first search algorithm judges the connectivity of the map, and when the map is connected and its pixel difference from the box edge is small, the dial-push is considered complete, a large reward is given, and the episode ends.
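An illustrative sketch of this reward, interpreting "change of depth values greater than a threshold" as a count of noticeably changed pixels and using a breadth-first search for the connectivity test; the additional check of the pixel difference against the box edge is omitted for brevity, and the thresholds and reward values are assumptions.
```python
import collections
import numpy as np

def push_reward(prev_depth, curr_depth, heightmap, moved_px_thresh=300, height_thresh=0.01):
    """Sparse reward: a small value when the push visibly moved something, a
    large terminal value when the occupied pixels form a single connected blob."""
    # 1. Did anything move? Count pixels whose depth changed noticeably.
    moved = np.sum(np.abs(curr_depth - prev_depth) > height_thresh) > moved_px_thresh

    # 2. Breadth-first search over the occupancy map (box edge already cropped out).
    occ = heightmap > height_thresh
    visited = np.zeros_like(occ, dtype=bool)
    components = 0
    for seed in zip(*np.nonzero(occ)):
        if visited[seed]:
            continue
        components += 1
        visited[seed] = True
        queue = collections.deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < occ.shape[0] and 0 <= nc < occ.shape[1]
                        and occ[nr, nc] and not visited[nr, nc]):
                    visited[nr, nc] = True
                    queue.append((nr, nc))

    if components == 1:          # samples squeezed into one tightly packed region
        return 1.0, True         # large reward, episode ends
    if moved:
        return 0.5, False        # small reward for a successful single push
    return 0.0, False
```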
Step 5: Apply knowledge-guidance design to the action output of step 3, specifically as follows (an illustrative code sketch combining these rules is given after the list):
(1) dilate the image with the box-edge depth values removed to form a mask contour surrounding the stones; only pixels inside this mask are evaluated as candidate dial-push points;
(2) because the reward in step 4 only judges whether a stone has moved, the network does not know whether the push direction is correct, so knowledge guidance is added: the sample box is divided into a grid according to the average stone size and the four corner cells are checked for stones; if a corner is empty, the robotic arm is guided to push the stone toward that corner; if a stone is detected at the corner, the arm is preferentially guided to push the stone into the gap between two stones;
(3) if a stone already occupies the guidance target, a greedy strategy is used to select the push point from the candidate points of (1);
(4) combining the knowledge guidance of (1), (2) and (3) yields the desired push point and direction; meanwhile, because visual measurement has errors, to improve pushing efficiency each push distance is increased and force feedback is then used: the robotic arm stops when the contact force exceeds a threshold, at which point the push is considered complete.
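One possible shape of the four guidance rules above, assuming numpy and scipy for the image operations; the `robot` object with move_increment() and contact_force() is a placeholder for whatever arm and force-sensor interface is actually used, and all thresholds are illustrative.
```python
import numpy as np
from scipy import ndimage

def candidate_mask(heightmap, height_thresh=0.01, dilate_px=8):
    """(1) Dilate the occupied region to get the ring of admissible push pixels."""
    occ = heightmap > height_thresh
    dilated = ndimage.binary_dilation(occ, iterations=dilate_px)
    return dilated & ~occ                     # free pixels surrounding the stones

def guided_target(heightmap, cell_px, height_thresh=0.01):
    """(2) Check the four corner cells of the stone-sized grid and steer pushes
    toward the first empty corner; return None when every corner is occupied."""
    rows, cols = heightmap.shape
    corners = [(0, 0), (0, cols - cell_px), (rows - cell_px, 0),
               (rows - cell_px, cols - cell_px)]
    for r, c in corners:
        if not np.any(heightmap[r:r + cell_px, c:c + cell_px] > height_thresh):
            return (r + cell_px // 2, c + cell_px // 2)
    return None   # corners filled: (3) fall back to greedy / gap-filling selection

def push_with_force_stop(robot, direction, step=0.005, max_dist=0.08, force_limit=10.0):
    """(4) Execute the push in small increments and stop on force feedback.
    `robot` is a placeholder exposing move_increment(xy) and contact_force()."""
    travelled = 0.0
    while travelled < max_dist:
        robot.move_increment(direction * step)
        travelled += step
        if robot.contact_force() > force_limit:
            break                             # stone jammed against its neighbours
```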
Step 6: Build the virtual simulation scene with the pybullet physics engine, including the robotic arm, the two-finger gripper, stones of various shapes, the sampling box and so on, and set dynamics parameters consistent with the physical world.
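A minimal pybullet sketch of such a scene follows. The urdf file names, the stone count of 20 (taken from claim 6) and the friction and mass values are illustrative assumptions; only the engine (pybullet) and the scene contents come from the patent.
```python
import random
import pybullet as p
import pybullet_data

def build_scene(num_stone_models=20):
    """Assemble the simulated sampling scene with physical-world dynamics."""
    p.connect(p.GUI)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)                 # earth gravity; adjust for other bodies
    p.loadURDF("plane.urdf")

    # The urdf file names below are placeholders; the patent only states that the
    # arm, the two-finger gripper and the sample box are described by urdf files.
    arm = p.loadURDF("arm_with_gripper.urdf", basePosition=[0, 0, 0], useFixedBase=True)
    box = p.loadURDF("sample_box.urdf", basePosition=[0.5, 0, 0], useFixedBase=True)

    # Drop one randomly chosen stone model into the box for this episode.
    stone_file = "stones/stone_%02d.urdf" % random.randrange(num_stone_models)
    stone = p.loadURDF(stone_file,
                       basePosition=[0.5 + random.uniform(-0.1, 0.1),
                                     random.uniform(-0.1, 0.1), 0.2])
    p.changeDynamics(stone, -1, lateralFriction=0.6, mass=0.3)   # assumed values
    return arm, box, stone
```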
Step 7: Combining steps 2 to 6, build the simulation learning and training project, convert the desired push pixel position output by the network into the robotic-arm base frame, perform learning and training, and generate the dial-push agent network.
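A small sketch of the pixel-to-base conversion used in this step, assuming the heightmap axes are aligned with the robot base frame and the pixel has already been mapped back from the rotated copy; heightmap_origin, resolution and push_height are illustrative parameters.
```python
import numpy as np

def pixel_to_base(pixel, push_angle_deg, heightmap_origin, resolution, push_height=0.02):
    """Convert a heightmap pixel chosen by the network into a push pose in the
    robot base frame (assumes the heightmap axes are aligned with the base frame).

    heightmap_origin : (x, y) of heightmap pixel (0, 0) in the base frame
    resolution       : metres per pixel, same value used when building the map
    """
    row, col = pixel                     # pixel already mapped back to the un-rotated map
    x = heightmap_origin[0] + col * resolution
    y = heightmap_origin[1] + row * resolution
    theta = np.deg2rad(push_angle_deg)
    direction = np.array([np.cos(theta), np.sin(theta), 0.0])   # push direction, base frame
    target = np.array([x, y, push_height])                      # push start point, base frame
    return target, direction
```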
Step 8: Build the corresponding real physical scene according to the virtual scene of step 6, and carry out a transfer experiment on the dial-push agent network of step 7.
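Claim 8 states only that the deployed network communicates with the robotic arm and the camera over UDP. The sketch below is one hypothetical message exchange; the JSON format, host address and port are inventions for illustration, not part of the patent.
```python
import json
import socket

def send_push_command(target_xyz, direction_xyz, host="192.168.1.10", port=30002):
    """Send one push command to the arm controller over UDP and wait for an ack.

    The JSON message format, host and port are purely illustrative; the patent
    only states that the deployed network talks to the arm and camera over UDP."""
    msg = json.dumps({"cmd": "push",
                      "target": list(target_xyz),
                      "direction": list(direction_xyz)}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        sock.sendto(msg, (host, port))
        ack, _ = sock.recvfrom(1024)         # controller is assumed to echo a status packet
    return json.loads(ack.decode())
```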
Although the present invention has been described with reference to preferred embodiments, they are not intended to limit it; those skilled in the art may make modifications and variations without departing from the spirit and scope of the present invention.

Claims (8)

1. An uncertain object-oriented perception execution interaction natural compliance human-like manipulation method, characterised by comprising the following steps:
step 1: designing the application scene and obtaining a top-down heightmap of the working area with a camera;
step 2: designing the dial-push agent network;
step 3: designing the state-space input and action-space output of the dial-push agent network;
the specific process of the state-space input of the dial-push agent network in step 3 is as follows: the heightmap from step 1 is rotated in 22.5-degree increments to generate new images, 16 rotations in total, and the 16 images are used as the state input of the dial-push agent network;
the specific process of the action-space output of the dial-push agent network in step 3 is as follows: the action-space output of the dial-push agent network is the dial-push probability of every pixel of every image; the pixel with the highest probability is selected as the dial-push point each time, and the rotation angle of the corresponding image gives the dial-push direction;
step 4: designing the reward function of the dial-push agent network;
step 5: designing knowledge guidance, including dial-push manipulation knowledge guidance, action-space selection guidance and dial-push force-feedback guidance;
step 6: building a virtual simulation scene and setting dynamics parameters consistent with the physical world;
step 7: building a simulation learning and training project and generating the dial-push agent network;
step 8: building the corresponding real physical scene and carrying out a transfer experiment on the dial-push agent network.
2. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 1, wherein the specific process of step 1 is as follows:
the box used to collect the samples is a rectangular parallelepiped whose internal dimensions are its length, width and height; the samples to be collected are irregular stones; one stone at a time is placed into the sample box at a random position, and the dial-push operation is performed with the tip of the closed two-finger gripper at the end of a six-degree-of-freedom robotic arm; the arm's hand-eye camera photographs the initial environment inside the sample box, the camera intrinsics are used to rectify the viewing angle to a top-down view, the sample-box edge is cropped out according to the predefined working-area boundary, and a heightmap is output.
3. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 1, wherein the specific process of step 2 is as follows:
a deep Q-network (DQN) is used to design the dial-push agent network; the network structure adopts a multi-layer DenseNet model; through feature reuse and bypass (skip) connections, the number of network parameters is reduced and the vanishing-gradient problem is alleviated, so that a manipulation policy is learned from raw pixel data.
4. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 1, wherein the specific process of step 4 is as follows:
by examining the change in depth values between two consecutive frames, a single dial-push is considered successful when the change exceeds a threshold, and a relatively small reward is given; at the same time, image-connectivity detection is performed on the heightmap from step 1, the connectivity of the map is judged with a breadth-first search algorithm, and when the map is judged to be connected and its difference from the box-edge pixels is small, the dial-push is considered complete and a relatively large reward is given.
5. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 2, wherein the specific process of step 5 is as follows:
(1) the image with the sample-box edge depth values removed is dilated to form a mask contour surrounding the stones, and only pixels inside the mask are evaluated as candidate dial-push points;
(2) the sample box is divided into a grid according to the average stone size and the four corner cells are checked for stones; if a corner is empty, the robotic arm is guided to push the stone toward that corner; if a stone is detected at the corner, the arm is preferentially guided to push the stone into the gap between two stones; if a stone already occupies the guidance target, a greedy strategy is used to select the push point from the candidate points;
(3) the desired push point and direction are obtained; meanwhile, to improve pushing efficiency, each push distance is increased and force feedback is then used: the robotic arm stops when the contact force exceeds a threshold, at which point the push is considered complete.
6. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 1, wherein the specific process of step 6 is as follows:
the pybullet physics engine is used to build the virtual simulation scene; the robotic arm, the two-finger gripper and the sampling box are built from urdf files; a sampling sample library of 20 stone models with different shapes and masses is built; and gravity and friction coefficients consistent with the physical world are set.
7. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 6, wherein the specific process of step 7 is as follows: combining steps 2 to 6, a simulation learning and training project is built; each time a stone is randomly selected from the sample library and placed at a random position in the sampling box; the dial-push agent network is trained with an iterative knowledge-guidance, exploration and re-guidance training method; the optimal dial-push action is obtained from the network output probabilities and converted into the robotic-arm base frame for execution; and the network weights are updated through the reward, generating a usable dial-push agent network.
8. The uncertain object-oriented perception execution interaction natural compliance human-like manipulation method according to claim 7, wherein the specific process of step 8 is as follows: the corresponding real physical scene is built according to the virtual scene of step 6, the dial-push agent network of step 7 is deployed into the real physical scene, and the transfer experiment is carried out through UDP communication with the robotic arm and the camera.
Application CN202110667497.2A, priority date 2021-06-16, filing date 2021-06-16: Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method. Granted as CN113478478B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110667497.2A CN113478478B (en) 2021-06-16 2021-06-16 Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method

Publications (2)

Publication Number Publication Date
CN113478478A CN113478478A (en) 2021-10-08
CN113478478B true CN113478478B (en) 2022-08-12

Family

ID=77935353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110667497.2A Active CN113478478B (en) 2021-06-16 2021-06-16 Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method

Country Status (1)

Country Link
CN (1) CN113478478B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108527371A * 2018-04-17 2018-09-14 Chongqing University of Posts and Telecommunications Dexterous-hand planning method based on a BP neural network
CN111618862A * 2020-06-12 2020-09-04 Shandong University Robot operation skill learning system and method under the guidance of prior knowledge

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672922B2 (en) * 2006-11-06 2010-03-02 Boris Kaplan Pointer-oriented object acquisition method for abstract treatment of information of AI of AI of a cyborg or an android based on a natural language
JP2019063984A * 2017-10-02 2019-04-25 Canon Inc. Information processor, method, and robot system
CN110769985B (en) * 2017-12-05 2023-10-17 谷歌有限责任公司 Viewpoint-invariant visual servoing of robotic end effectors using recurrent neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108527371A (en) * 2018-04-17 2018-09-14 重庆邮电大学 A kind of Dextrous Hand planing method based on BP neural network
CN111618862A (en) * 2020-06-12 2020-09-04 山东大学 Robot operation skill learning system and method under guidance of priori knowledge

Also Published As

Publication number Publication date
CN113478478A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
Ha et al. Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding
CN110026987B (en) Method, device and equipment for generating grabbing track of mechanical arm and storage medium
CN110400345B (en) Deep reinforcement learning-based radioactive waste push-grab cooperative sorting method
US9019278B2 (en) Systems and methods for animating non-humanoid characters with human motion data
CN113511503B (en) Independent intelligent method for collecting, collecting and collecting uncertain objects by extraterrestrial detection
Kiatos et al. Robust object grasping in clutter via singulation
CN106651949A (en) Teleoperation method and system for grabbing objects using space mechanical arm based on simulation
CN111598951A (en) Method, device and storage medium for identifying space target
TW201027288A (en) Method of teaching robotic system
CN114912287A (en) Robot autonomous grabbing simulation system and method based on target 6D pose estimation
CN112149573A (en) Garbage classification and picking robot based on deep learning
CN114918918B (en) Domain-containing self-adaptive robot disordered target pushing and grabbing method
CN110852241B (en) Small target detection method applied to nursing robot
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN107363834A (en) A kind of mechanical arm grasping means based on cognitive map
Weng et al. Graph-based task-specific prediction models for interactions between deformable and rigid objects
CN113478478B (en) Uncertain object-oriented perception execution interaction natural compliance human-like manipulation method
CN114851201A (en) Mechanical arm six-degree-of-freedom vision closed-loop grabbing method based on TSDF three-dimensional reconstruction
Orsula et al. Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning
CN109318227A (en) A kind of shake the elbows method and anthropomorphic robot based on anthropomorphic robot
CN113664825B (en) Stacking scene mechanical arm grabbing method and device based on reinforcement learning
Wada et al. 3D object segmentation for shelf bin picking by humanoid with deep learning and occupancy voxel grid map
CN114193440A (en) Robot automatic grabbing system and method based on 3D vision
CN112288809B (en) Robot grabbing detection method for multi-object complex scene
Ren et al. Fast-learning grasping and pre-grasping via clutter quantization and Q-map masking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant