CN116679642A - Efficient modularized system design and knowledge reasoning method for complex assembly task in unstructured environment - Google Patents
- Publication number
- CN116679642A CN116679642A CN202310645881.1A CN202310645881A CN116679642A CN 116679642 A CN116679642 A CN 116679642A CN 202310645881 A CN202310645881 A CN 202310645881A CN 116679642 A CN116679642 A CN 116679642A
- Authority
- CN
- China
- Prior art keywords
- assembly
- knowledge
- knowledge base
- reasoning
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B19/41885—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by modeling, simulation of the manufacturing system
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32339—Object oriented modeling, design, analysis, implementation, simulation language
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses an efficient modular system design and knowledge reasoning method for complex assembly tasks in unstructured environments. The knowledge base module stores part attributes, action instructions and different assembly tasks hierarchically; the visual perception module controls a camera to photograph the working area and recognizes the positions and poses of randomly placed parts in the scene; the communication module is connected to the action control module. A knowledge reasoning model is constructed so that, combined with the assembly requirements in the knowledge base, the robot autonomously infers a feasible sequence of assembly operations, removing occluding parts and completing the assembly. Complex assembly tasks can be handled in several forms: stored in the knowledge base, or supplied as external task instructions. A feasible assembly action sequence is generated autonomously through knowledge reasoning, improving the flexibility and intelligence with which the robot completes complex assembly tasks.
Description
Technical Field
The invention belongs to the field of modular system design and knowledge reasoning for industrial robots performing complex assembly tasks. It establishes an efficient assembly planning system comprising an assembly knowledge base module, a vision module, a communication module and an action control module. For complex assembly tasks in unstructured environments, a knowledge reasoning mechanism is constructed so that the robot can autonomously infer a feasible assembly sequence from the part assembly requirements, using visual image processing and action control.
Background
Complex assembly tasks typically require parts of different shapes to be assembled in an ordered sequence; workers must memorize a large body of assembly knowledge, which places very high demands on their expertise. Although many robots have been deployed in the assembly industry in recent years, most operate in structured scenes: part positions are fixed, and the manipulator only needs to reach prescribed positions on demand. Such robots have no independent reasoning capability, can perform only a single assembly task, and cannot complete varied assembly tasks across multiple scenes. Addressing this pain point of the assembly industry, the invention studies efficient implementations of varied complex assembly tasks in unstructured environments (unknown part poses). Assembly knowledge learning, environment perception, action control and communication are designed as separate modules, optimizing information transfer inside the robot system, so that the robot can independently and flexibly complete different complex assembly tasks as the task requirements change. A knowledge reasoning mechanism is also constructed so that, facing an unknown working scene, the robot can recognize complex positional relations among parts and, combined with the assembly task requirements, autonomously infer and generate an assembly sequence.
Disclosure of Invention
The invention provides an efficient modular system design and knowledge reasoning method for complex assembly tasks in unstructured environments, aiming at the problem of a robot autonomously inferring an assembly action sequence from the assembly requirements in an unknown environment. The specific measures are as follows. First, the robot system is divided into modules by function: a knowledge base module, a visual perception module, an action control module and a communication module. The knowledge base module stores part attributes, action instructions and different assembly tasks hierarchically. The visual perception module controls the camera to photograph the working area and recognizes the positions and poses of randomly placed parts in the scene. The communication module is tightly coupled to the action control module; its main function is network-port communication with the robot, so that (1) the robot can combine the vision module and the knowledge base module to plan motion and execute assembly tasks, and (2) other sensors on the robot can be communicated with and controlled. Second, for parts placed in disorder and overlapping one another in an unstructured environment, a knowledge reasoning model is built: part poses are detected visually, occluded parts are identified through image post-processing, and the robot autonomously infers a feasible sequence of assembly operations from the assembly requirements in the knowledge base, completing the removal of occluders and the assembly of the occluded parts.
For complex assembly tasks in unstructured environments, the invention realizes efficient modular design and knowledge reasoning of the system through the following steps:
(1) Efficient modular design of the robot system. The system comprises a knowledge base module, a visual perception module, a communication module and an action control module. The knowledge base module is created in two parts. The first part is the construction of the assembly knowledge base itself; its contents include part attributes (physical and operational attributes), robot action instructions (instruction templates and parameter options), and known assembly task sequences (the order of the operation objects together with their assembly poses). The knowledge base can be displayed as a visual knowledge graph, which makes it convenient to maintain and update. The second part is a knowledge base query function implemented in Python, responsible for searching and retrieving information from the knowledge base in real time. If a task instruction is input externally, the query function parses the names of the operation objects and their order, looks up the corresponding information in the knowledge base, and outputs a dictionary file for the subsequent modules to use. If the input task is a known assembly task already stored in the knowledge base, the task information is retrieved directly for subsequent autonomous reasoning.
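As a rough illustration of the Python query function described above, the following sketch resolves either an externally input instruction or a stored task into a dictionary for the downstream modules. The knowledge base layout, part names and pose format are illustrative assumptions, not taken from the patent.

```python
# Sketch of the knowledge-base query step (schema and names are assumed).
KNOWLEDGE_BASE = {
    "parts": {
        "cylinder": {"shape": "cylinder", "material": "steel",
                     "grasp_mode": "parallel", "weight_g": 120},
        "bridge": {"shape": "bridge", "material": "plastic",
                   "grasp_mode": "suction", "weight_g": 45},
    },
    "tasks": {  # known assembly task: ordered (part, target pose) pairs
        "task_one": [("cylinder", (0.30, 0.10, 0.05)),
                     ("bridge", (0.30, 0.10, 0.12))],
    },
}

def query_task(task_name=None, instruction=None):
    """Resolve a stored task or an external instruction into a dictionary
    of assembly order, target poses and part attributes."""
    if task_name is not None:
        sequence = KNOWLEDGE_BASE["tasks"][task_name]  # stored task
    else:
        # external instruction, e.g. "bridge, cylinder": parse names in
        # order; poses will be filled in later by the vision module
        sequence = [(n.strip(), None) for n in instruction.split(",")]
    return {
        "order": [name for name, _ in sequence],
        "poses": {name: pose for name, pose in sequence},
        "attributes": {name: KNOWLEDGE_BASE["parts"][name]
                       for name, _ in sequence},
    }
```

For the stored task, `query_task(task_name="task_one")["order"]` yields `['cylinder', 'bridge']`; an external instruction takes the other branch and leaves the poses to be resolved by vision.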
The visual perception module is also implemented in two parts. The first part is camera capture control, acquiring 2D images and point clouds of the working area in real time. The second part is real-time visual processing, in which templates are made first, in either a 2D or a 3D mode. For objects with simple structures whose images differ clearly between part types, 2D visual processing can be chosen: images of each part are captured at various positions in the camera's field of view, a rough region of interest (ROI) containing the part is drawn on each image, the change of gray values inside the ROI is analyzed, and the part contour is extracted. A 2D template file is then generated and stored in a model library for subsequent matching. If the STL model of a part is known, 3D vision can be used directly: the STL model is read, its surface is discretized to create a 3D point cloud template of the part, and the template is stored in the model library. The 3D template is more convenient to produce than the 2D template and gives higher precision. The current pose of each part is then estimated by template matching. In the 2D mode, the contours of all parts in the cluttered scene are extracted and matched against the template files; the n best matches (n is a user-defined number) are returned together with their degree-of-coincidence scores, which are recorded for subsequent knowledge reasoning. In the 3D mode, the point cloud captured from the cluttered scene is matched against the 3D template point cloud; again the n best matches are returned and their scores recorded. Experimental results show that the matching score is at most 1 in the 2D case and at most 0.5 in the 3D case, because a single shot covers at most 50% of the part surface in a 3D view, while the template file is a point cloud of the full surface.
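The degree-of-coincidence score can be pictured with a minimal nearest-neighbor coverage measure: the fraction of template points that find a scene point within a tolerance. This is only a sketch of the idea, not the matching algorithm used by the invention; note how a full-surface template matched against a view showing only half of the surface saturates at 0.5, mirroring the 2D/3D score ceilings reported above.

```python
import numpy as np

def match_score(scene_pts, template_pts, tol=0.002):
    """Fraction of template points that have a scene point within `tol`.
    A crude stand-in for the degree-of-coincidence score: identical data
    scores 1.0, while a template matched against a single view that shows
    only half of its surface tops out near 0.5."""
    hits = sum(np.linalg.norm(scene_pts - p, axis=1).min() < tol
               for p in template_pts)
    return hits / len(template_pts)
```

For example, with a template sampled around a full circle and a "scene" containing only the half of it visible from one side, the score comes out at exactly 0.5.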
The communication module is tightly coupled to the robot action control module and can be divided into two parts. The first part is the robot control module, which sends the acquired operation poses to the robot over network-port communication: the robot moves from its initial pose to the pose of the operation object in the scene, and then to the target assembly pose extracted from the knowledge base. The second part is the tool control module, which controls the robot's end tool and other tools. It reads the serial control instructions of the different tools from the knowledge base over serial communication, automatically generates end-tool operation instructions suited to each part according to the operation attributes of the different operation objects, and sends them to the tools over the serial port.
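A hedged sketch of the two message paths follows: one helper encodes a target pose for the network port, the other packs a serial command for the end tool. The JSON wire format, the opcode table and the 0.1 N force unit are assumptions for illustration; real controllers and grippers each define their own protocols.

```python
import json
import struct

def pose_message(x, y, z, rx, ry, rz):
    """Encode a target pose for the robot over the network port.
    The JSON wire format here is an illustrative assumption."""
    return json.dumps({"cmd": "movel",
                       "pose": [x, y, z, rx, ry, rz]}).encode()

def tool_command(grasp_mode, force_n):
    """Pack a serial command for the end tool from the operation
    attributes stored in the knowledge base (opcode table and the
    force-in-0.1-N encoding are hypothetical)."""
    opcode = {"parallel": 0x01, "suction": 0x02}[grasp_mode]
    return struct.pack("<BH", opcode, int(round(force_n * 10)))
```

In a full system, the first message would be written to a TCP socket opened toward the robot controller (Python's `socket` module) and the second to a serial port, e.g. via the `pyserial` package.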
(2) Complex assembly task execution based on knowledge reasoning in unstructured environments. Because parts are scattered in an unstructured environment, they overlap and occlude one another; the robot must combine the assembly requirements in the knowledge base with visual perception and autonomously infer a suitable sequence of operations to complete the assembly task. For this situation, the invention proposes a reasoning mechanism based on matching similarity scores, using the matching scores recorded by the vision module in the 2D or 3D mode. A part A whose matching similarity is below 0.85 (with a full similarity score of 1) is considered occluded by other parts, so that its surface cannot be fully shown in the view, and enters the reasoning flow: the matching result of the part B closest to part A is taken and set as the occluding pose, and before the occluded part is assembled, the occluder is moved to an idle waiting area by the robot. Reasoning proceeds in this way until all required parts have acquired qualified poses; the assembly sequence of the whole task is thus generated autonomously and stored locally for the subsequent communication and action control modules to call.
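The score-threshold reasoning loop can be sketched as follows, under the assumption of 2D poses and a fixed waiting area; only the 0.85 threshold and the nearest-part occluder rule come from the text, everything else is illustrative.

```python
import math

SCORE_THRESHOLD = 0.85  # parts scoring below this are treated as occluded

def plan_sequence(detections, assembly_order, waiting_area=(0.60, -0.30)):
    """detections: {name: {"pose": (x, y), "score": float}}.
    For each part in the required order, insert a step that moves the
    nearest neighbouring part (assumed to be the occluder) to the waiting
    area whenever the match score falls below the threshold."""
    plan = []
    for name in assembly_order:
        det = detections[name]
        if det["score"] < SCORE_THRESHOLD:
            # nearest other detection is taken as the occluder
            occluder = min(
                (n for n in detections if n != name),
                key=lambda n: math.dist(detections[n]["pose"], det["pose"]),
            )
            plan.append(("move_to_waiting", occluder, waiting_area))
        plan.append(("assemble", name, det["pose"]))
    return plan
```

The resulting plan is exactly the locally stored operation sequence the communication and action control modules would consume.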
According to the above technical scheme, the invention has the following advantages:
The modular system designed for complex assembly tasks in unstructured environments separates knowledge base, visual perception, communication and action control into modules, which significantly improves the efficiency of information transfer between modules, eases maintenance and switching, and reduces programming complexity. Complex assembly tasks can be handled in several forms: stored in the knowledge base, or supplied as external task instructions. For parts placed in disorder in an unstructured environment, the visual perception module detects part poses in real time, and when parts are stacked, a feasible assembly action sequence is generated autonomously through knowledge reasoning, improving the flexibility and intelligence with which the robot completes complex assembly tasks.
Drawings
FIG. 1 is a schematic diagram of the visualized knowledge base of the invention;
FIG. 2 is a general flow diagram of a complex assembly task in the invention, including the cooperative relationships among the modules.
Detailed Description
The following describes embodiments of the invention in detail. They are implemented on the premise of the technical solution of the invention, and detailed implementations and concrete operating procedures are given, but the scope of protection of the invention is not limited to the following examples.
FIG. 1 shows the schematic diagram of the visual knowledge base, described as a hierarchical knowledge graph. The purple node is the assembly knowledge base and top node, which contains three types of information: parts, actions and tasks. The part nodes store information on parts of various shapes, including small cylinders, squares, large cuboids, triangular prisms, bridges, cylinders, semicircles, parts 1, 2, 3 and so on; each part entity node stores its ID number, semantic name, shape, material, size, color, weight, operation mode, position (obtained by visual detection) and other attributes. The action node comprises grasping and placing skills: the grasping skill is divided, in execution order, into three actions or sub-events of identifying, grasping and moving, and the placing skill into two actions of transferring and placing (releasing). Known assembly schemes (task one, task two, task three, etc.) are stored under the task node (green node).
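A minimal Python rendering of the hierarchical graph in FIG. 1 might use nested dictionaries, with the three branches under the top node; the exact node set and the leaf-collecting traversal below are illustrative.

```python
# Nested-dictionary rendering of the hierarchical knowledge graph of FIG. 1.
# Node names follow the description; the exact structure is illustrative.
GRAPH = {
    "assembly_knowledge_base": {          # purple top node
        "parts": {"small_cylinder": {}, "square": {}, "large_cuboid": {},
                  "triangular_prism": {}, "bridge": {}, "cylinder": {},
                  "semicircle": {}},
        "actions": {
            "grasp": {"identify": {}, "grip": {}, "move": {}},  # sub-events
            "place": {"transfer": {}, "release": {}},
        },
        "tasks": {"task_one": {}, "task_two": {}, "task_three": {}},  # green node
    }
}

def leaves(node):
    """Collect the names of leaf nodes, i.e. the atomic entities."""
    names = []
    for name, child in node.items():
        names.extend([name] if not child else leaves(child))
    return names
```

Traversing from the top node yields the part entities, the atomic sub-events of each skill, and the stored tasks, but none of the intermediate category nodes.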
FIG. 2 is the general flow diagram of a complex assembly task, depicting the cooperative relationships among the modules. It can be divided into four events: S0, S1, S2 and S3. Event S0: create the hierarchical knowledge graph base. Event S1: input the task instruction, query and acquire the target parts and assembly information. Event S2: the vision processing module and the knowledge reasoning module run simultaneously; the robot recognizes part poses and autonomously infers the part assembly action sequence under occlusion. Event S3: according to the finally generated operation sequence, directly control the robot and the various end tools through the communication module and the action control module to complete the assembly task.
Event S1: as shown, the robot can be assigned an assembly task in mode 1 or mode 2. If the task is input externally (mode 2), the task instruction must be parsed to obtain the name of each part and their assembly order; the knowledge base is then queried, and the attribute information and assembly task information (assembly order, assembly poses, etc.) of the corresponding parts are output.
Event S2: part information is extracted sequentially in the assembly order output by S1, and each part is checked for an occlusion flag. If TRUE, the corresponding pose and the target assembly pose of the occluder B are read from the knowledge base and sent to the communication and action control modules, and the robot and the various end tools complete the assembly operation through event S3. If FALSE, the part template is read and matched against the visual information of the actual scene, and a Score is returned. Whether the part may be occluded is judged from whether the Score exceeds 0.85. If TRUE (the part is not occluded), a grasping point is extracted and converted into the robot coordinate system, the part pose and target assembly pose are sent to the communication and action control modules, and the part is assembled through event S3. If FALSE, the part may be partially occluded: the part closest to it is found and judged to be the occluder, its pose is identified, the knowledge graph is queried for a spare waiting pose, the occluder pose and the waiting pose are sent to the communication and action control modules, and the robot removes the occluder. The process loops until all parts are assembled.
Event S3: complete the assembly operation according to the final executable poses generated in S2. Step 1: read the grasping pose and target assembly pose in the assembly order. Step 2: read the grasped material and grasping force from the object attributes queried in S1, determine the final grasping force, query the knowledge base for the end-tool control instruction, and compute the instruction value of the corresponding function from the grasping force. Step 3: through the combination of network-port and serial communication implemented in Python, read the information obtained in Steps 1 and 2 and send it to the robot in sequence to complete the operation.
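Step 2's mapping from object attributes to a tool instruction value could look like the following sketch. The material coefficients, the three-times-weight holding force and the linear 0-255 mapping are all assumptions; the patent only states that the instruction value is computed from the queried grasping force.

```python
def grip_instruction(material, weight_g, max_force_n=40.0):
    """Hypothetical Step 2: choose a grasping force from the queried
    object attributes and map it to a 0-255 tool instruction value.
    Material coefficients and the linear mapping are assumptions."""
    safety = {"steel": 1.0, "plastic": 0.5, "rubber": 0.3}[material]
    hold_force_n = 9.81 * (weight_g / 1000.0) * 3.0 * safety  # hold 3x weight
    force_n = min(max_force_n, hold_force_n)  # clip to the tool's maximum
    return round(force_n / max_force_n * 255)
```

The clipping step matters: a heavy steel part saturates at the tool's maximum instruction value rather than commanding an out-of-range force.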
The assembly task above involves the following seven sub-modules:
(1) Knowledge base creation module: creates the knowledge base;
(2) Knowledge base query module: searches the various kinds of knowledge and queries the relations needed in the assembly task;
(3) Capture control module: controls when, and with which parameters, the camera takes pictures;
(4) Vision processing module: acquires the pose information of target parts in the workspace, by template matching and other means;
(5) Robot control module: controls the robot pose through network-port communication;
(6) End tool control module: controls the tool through serial communication;
(7) Knowledge reasoning module: autonomously generates the assembly sequence of the task through an efficient and convenient reasoning algorithm.
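One hypothetical way to wire these sub-modules into the S0-S3 flow is a thin pipeline class; the interface names (`query`, `detect`, `plan`, `move`, `actuate`) are invented for illustration, and each method would wrap the corresponding real module.

```python
# Hypothetical wiring of the sub-modules into the S0-S3 flow.
class AssemblyPipeline:
    def __init__(self, kb, vision, reasoner, robot, tool):
        self.kb, self.vision = kb, vision
        self.reasoner, self.robot, self.tool = reasoner, robot, tool

    def run(self, task_name):
        task = self.kb.query(task_name)                       # S1: knowledge-base query
        detections = self.vision.detect(task["order"])        # S2: pose estimation
        plan = self.reasoner.plan(detections, task["order"])  # S2: knowledge reasoning
        for action, part, target in plan:                     # S3: communication + control
            self.robot.move(target)
            self.tool.actuate(action, part)
        return plan
```

Keeping each module behind such a narrow interface is what makes the claimed modularity pay off: a 2D vision back end can be swapped for a 3D one without touching the reasoning or control code.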
Claims (5)
1. An efficient modular system design and knowledge reasoning method for complex assembly tasks in an unstructured environment, characterized by comprising the following steps:
the robot system is modularly designed by function into a knowledge base module, a visual perception module, an action control module and a communication module;
the knowledge base module stores part attributes, action instructions and different assembly tasks hierarchically;
the visual perception module controls the camera to photograph the working area and recognizes the positions and poses of randomly placed parts in the scene;
the communication module is connected to the action control module and performs network-port communication with the robot system, so that (1) the robot system can combine the vision module and the knowledge base module to plan motion and execute assembly tasks, and (2) other sensors on the robot system can be communicated with and controlled; for parts placed in disorder and overlapping one another in an unstructured environment, a knowledge reasoning model is built: occluded parts are identified through visual detection of part poses and image post-processing, the robot autonomously infers a feasible sequence of assembly operations from the assembly requirements in the knowledge base, and the removal of occluders and the assembly of the occluded parts are completed.
2. The efficient modular system design and knowledge reasoning method for complex assembly tasks in an unstructured environment of claim 1, characterized in that
the knowledge base module is created in two parts: the first part is the construction of the assembly knowledge base, whose contents include the part attributes, namely physical and operational attributes, the robot action instructions, namely instruction templates and parameter options, and the known assembly task sequences, namely the order of the operation objects and their assembly poses; the second part is a knowledge base query function implemented in Python, responsible for searching and retrieving information from the knowledge base in real time; if a task instruction is input externally, the function parses the acquired operation object names and their order, queries the corresponding information in the knowledge base, and outputs a dictionary file for the subsequent modules to use; if the input task is a known assembly task stored in the knowledge base, the task information is retrieved directly for subsequent autonomous reasoning.
3. The efficient modular system design and knowledge reasoning method for complex assembly tasks in an unstructured environment of claim 1, characterized in that the visual perception module is implemented in two parts: the first part is camera capture control, acquiring 2D images and point clouds of the working area in real time; the second part is real-time visual processing, in which templates are made first, in either a 2D or a 3D mode; for objects with simple structures whose images differ clearly between part types, 2D visual processing is selected: images of each part are captured at various positions in the camera's field of view, an approximate region of interest (ROI) is drawn on each image, the change of image gray values inside the ROI is analyzed, and the part contour is extracted; a 2D template file is then generated and stored in a model library for subsequent matching; if the STL model of the part is known, 3D vision is used directly: the STL model is read, its surface is discretized to create a 3D point cloud template of the part, which is stored in the model library for subsequent matching; the current part pose is then estimated by template matching: in the 2D mode, the contours of all parts in the cluttered scene are extracted and matched against the template files, and the n best matches, where n is a user-defined number, are returned with their degree-of-coincidence scores, which are recorded for subsequent knowledge reasoning; in the 3D mode, the point cloud captured from the cluttered scene is matched against the 3D template point cloud, the n best matches are returned, and the scores are recorded for subsequent knowledge reasoning; experimental results show that the matching score is at most 1 in the 2D case and at most 0.5 in the 3D case.
4. The efficient modular system design and knowledge reasoning method for complex assembly tasks in an unstructured environment of claim 1, characterized in that the robot control module sends the acquired operation poses to the robot over network-port communication, so that the robot moves from its initial pose to the pose of the operation object in the scene and then to the target assembly pose extracted from the knowledge base; the tool control module controls the robot's end tool and other tools: it reads the serial control instructions of the different tools from the knowledge base over serial communication, automatically generates end-tool operation instructions suited to each part according to the operation attributes of the different operation objects, and sends them to the tools over the serial port.
5. The efficient modular system design and knowledge reasoning method for complex assembly tasks in an unstructured environment of claim 1, characterized in that a reasoning mechanism based on matching similarity scores is adopted, using the matching scores recorded by the vision module in the 2D or 3D mode: a part A with matching similarity below 0.85 is considered occluded by other parts, so that its surface cannot be fully displayed in the view, and enters the reasoning flow: the matching result of the part B closest to part A is taken and set as the occluding pose, and before the occluded part is assembled, the occluder is moved to an idle waiting area by the robot; in this way, once all required parts have acquired qualified poses, the assembly sequence of the whole task is generated autonomously and stored locally for the subsequent communication and action control modules to call.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310645881.1A CN116679642A (en) | 2023-06-02 | 2023-06-02 | Efficient modularized system design and knowledge reasoning method for complex assembly task in unstructured environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310645881.1A CN116679642A (en) | 2023-06-02 | 2023-06-02 | Efficient modularized system design and knowledge reasoning method for complex assembly task in unstructured environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116679642A true CN116679642A (en) | 2023-09-01 |
Family
ID=87786607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310645881.1A Pending CN116679642A (en) | 2023-06-02 | 2023-06-02 | Efficient modularized system design and knowledge reasoning method for complex assembly task in unstructured environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116679642A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117272425A (en) * | 2023-11-22 | 2023-12-22 | 卡奥斯工业智能研究院(青岛)有限公司 | Assembly method, assembly device, electronic equipment and storage medium |
CN117272425B (en) * | 2023-11-22 | 2024-04-09 | 卡奥斯工业智能研究院(青岛)有限公司 | Assembly method, assembly device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||