CN111283679A - Multi-connected voice control automatic guide transportation system and control method thereof - Google Patents


Info

Publication number
CN111283679A
CN111283679A
Authority
CN
China
Prior art keywords
target object
voice
operator
control
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010060340.9A
Other languages
Chinese (zh)
Inventor
麦骞誉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lubang Technology Licensing Co Ltd
Original Assignee
Lubang Technology Licensing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lubang Technology Licensing Co Ltd filed Critical Lubang Technology Licensing Co Ltd
Priority to CN202010060340.9A priority Critical patent/CN111283679A/en
Publication of CN111283679A publication Critical patent/CN111283679A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • B25J5/007Manipulators mounted on wheels or on carriages mounted on wheels
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a multi-connected voice-controlled automated guided transportation system and a control method thereof. The system comprises a software part and a hardware part. The software part comprises an instruction database arranged on a cloud server; the instruction database comprises object list data, object image identification data, object relative position data and instruction input position data. The hardware part comprises an instruction input device for inputting voice instructions and a robot for completing the corresponding work according to the voice instructions; the instruction input device comprises a fixed input device and/or a mobile input device; the robot comprises a transplanting mechanism, a mechanical arm, a camera module and a control main board; the control main board controls the transplanting mechanism, the mechanical arm and/or the camera module. The robot is provided with an instruction input module and a communication module, and the instruction input module and/or the communication module are in communication connection with the control main board. The system takes voice as the instruction and, after voice recognition, controls the robot to complete the related work.

Description

Multi-connected voice control automatic guide transportation system and control method thereof
Technical Field
The invention relates to a robot control system, in particular to a multi-connected voice control automatic guide transportation system and a control method thereof.
Background
In modern society, robots have become inseparable from human life and are powerful assistants for humans. In industry, robotic arms are used very widely and can be seen everywhere in both light and heavy industry; in daily life, reception robots for greeting guests, transport robots for serving, and the like have appeared and are becoming increasingly common. Recently, logistics companies have developed robotic arms for carrying and guided-vehicle transportation systems for logistics sorting, greatly improving cargo-sorting efficiency. However, these robotic arms and guided-vehicle systems can only execute a few single, repetitive commands: the commands issued by humans generally need to be programmed on the spot or pre-programmed, so the whole chain from receiving a human command to completing the related work remains simple, the degree of intelligence is low, and different use requirements cannot be met.
Therefore, further improvements are needed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-connected voice control automatic guide transportation system and a control method thereof.
The purpose of the invention is realized as follows:
the utility model provides a many voice control automated guidance conveyor systems that ally oneself with which characterized in that: comprises that
The software part comprises an instruction database arranged on the cloud server; the instruction database comprises object list data recording more than one type of information related to the target object, object image identification data for identifying the target object, object relative position data for identifying the position of the target object, and instruction input position data for feeding back the position of an operator;
the hardware part comprises an instruction input device for inputting voice instructions and a robot for completing corresponding work according to the voice instructions; the instruction input device comprises a fixed input device fixed at a preset position and used for inputting voice instructions and/or a mobile input device moving along with an operator and used for inputting voice instructions; the robot comprises a transplanting mechanism for moving and walking, a mechanical arm for taking a target object, a camera module for obtaining surrounding image information, and a control main board for executing a voice instruction; the control main board controls the transplanting mechanism, the mechanical arm and/or the camera module; the robot is provided with an instruction input module for receiving a voice instruction from an operator and a communication module for communicating and interconnecting with the cloud server, and the instruction input module and/or the communication module are/is in communication connection with the control mainboard.
The object list data includes keyword information corresponding to the name, color, shape and/or size of the target object.
The image discrimination data includes picture information in conformity with a color, shape and/or size of the target object.
The object relative position data comprises three-dimensional coordinate information corresponding to the position of the target object.
The instruction input position data includes fixed position information of the fixed input device and/or moving position information of the mobile input device.
The control mainboard is loaded with a voice mode and a manual mode, and the switching between the voice mode and the manual mode is carried out through a physical key and/or a virtual key.
In the manual mode, an operator inputs a control instruction through a remote controller, and the camera module transmits a field real-time image to the operator.
The control method of the multi-connected voice-controlled automated guided transportation system is characterized in that it comprises the following steps:
Step A, an operator inputs a voice command, and a control system analyzes the input voice command to judge whether a target object mentioned in the voice command exists in object list data or not so as to execute the next step;
Step B, the control system extracts image identification information corresponding to the target object from the image identification data, extracts three-dimensional coordinate information corresponding to the target object from the object relative position data, and confirms the position information of the operator through the instruction input position data; then the control system sends the image identification information, the three-dimensional coordinate information and the operator position information to the robot so as to execute the next step;
Step C, the control main board plans a walking route required by the transplanting mechanism to move to the position of the target object according to the received three-dimensional coordinate information; the control main board searches for the target object through the camera module according to the received image identification information and/or the three-dimensional coordinate information, and executes the next step;
Step D, the control main board confirms the real-time posture of the found target object through the camera module, plans the action track required by the mechanical arm to take the target object, and controls the mechanical arm to smoothly take the target object according to the action track; alternatively, after the control main board confirms through the camera module that the target object has been found, the operator controls the mechanical arm through the remote controller, according to the image transmitted back by the camera module, to take the target object;
Step E, the control main board plans a walking route required for the transplanting mechanism to go to the position of the operator, according to the position of the target object and the position information of the operator, and controls the transplanting mechanism to move to the position of the operator along the walking route.
The transplanting mechanism is provided with a laser sensor, an ultrasonic sensor, a photoelectric sensor, a magnetic sensor, a camera device, an infrared sensor and/or a GPS positioning module and the like.
In step E, after the robot hands the target object to the operator, it waits for the next voice command within a set time; if no voice command is received within the set time, the control main board plans the walking track for the transplanting mechanism to return to the initial position, so that the robot returns to the initial position and waits.
The invention has the following beneficial effects:
the multi-connected voice control automatic guide transportation system integrates five technologies, namely a voice identification command technology, an automatic guide vehicle technology, an image identification technology, a wireless communication technology and a mechanical action programming technology, so that the system can complete corresponding work according to a voice command issued by an operator. Specifically, an operator can issue voice commands remotely through voice control, mechanical arms with automatic guided vehicles and a wireless network connection system to control the mechanical arms to complete related operations, the whole control process is simple and convenient, and the voice commands can be issued to a plurality of different mechanical arms simultaneously to assist the operator in completing work, so that complicated control processes of programming or controllers and the like required by a traditional control robot are omitted, and the control of the robot is more intelligent and convenient; the voice control mode of the system can really draw the distance between the human and the robot, so that the robot can better become a real assistant for the human, and the working efficiency and the living standard are improved.
Drawings
Fig. 1 is a schematic structural diagram of a robot according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating the operation of the system according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating the operation of the robot according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
Referring to fig. 1-3, the multi-connected voice-controlled automated guided transportation system comprises:
The software part comprises an instruction database arranged on the cloud server; arranging the instruction database on the cloud server makes it convenient to update the control system and to connect with the robots M in different places at any time and in any place; the instruction database comprises object list data recording more than one type of information related to the target object, object image identification data for identifying the target object, object relative position data for identifying the position of the target object, and instruction input position data for feeding back the position of the operator;
the hardware part comprises an instruction input device for inputting voice instructions and a robot M for completing corresponding work according to the voice instructions; the instruction input device comprises a fixed input device fixed at a preset position and used for inputting voice instructions and/or a movable input device moving along with an operator and used for inputting voice instructions, and the operator can freely select the instruction input device to the United kingdom according to use habits or convenience; the robot M comprises a transplanting mechanism 3 for moving and walking, a mechanical arm 1 for taking a target object, a camera module 2 for obtaining surrounding image information and a control mainboard for executing a voice instruction; the main control board controls the transplanting mechanism 3, the mechanical arm 1 and/or the camera module 2 to achieve the purpose of unified control; the robot M is provided with an instruction input module for receiving a voice instruction from an operator and a communication module for communicating and interconnecting with the cloud server, and the instruction input module and/or the communication module are/is in communication connection with the control mainboard; the communication mode of the communication module is WIFI, 4G and/or 5G and the like, so that the robot M can be effectively, stably and quickly connected to the cloud server to exchange information and receive commands.
The system effectively associates voice instruction data in an instruction database, the transplanting mechanism 3 capable of walking to a destination through a walking route, the camera module 2 capable of identifying a target object, the mechanical arm 1 capable of taking the target object and the like, so that the robot M with the transplanting mechanism 3 can identify a related instruction from voice input by an operator and walk to a specified position to perform corresponding action.
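As a rough illustration, the four data sets of the instruction database described above could be modelled as follows. This is only a sketch; all class and field names are assumptions for illustration, not structures disclosed by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    """One entry of the object list data, with its image-identification
    and relative-position data attached (illustrative layout)."""
    names: list                               # keywords the object may be called by
    color: str
    shape: str
    position: tuple = (0.0, 0.0, 0.0)         # (x, y, z) three-dimensional coordinates
    images: list = field(default_factory=list)  # reference pictures of the object

@dataclass
class InstructionDatabase:
    objects: list        # ObjectRecord entries (object list data)
    input_positions: dict  # instruction input position data (operator positions)

    def find_object(self, spoken_name: str):
        """Pair a spoken keyword with an object list entry, or return None."""
        for rec in self.objects:
            if spoken_name in rec.names:
                return rec
        return None
```

A lookup that returns `None` corresponds to the feedback branch of step A, where the operator is asked to re-input the command.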
Further, the object list data includes keyword information corresponding to the name, color, shape and/or size of the target object for pairing as a voice command; the name of the target object is not unique, and other names that may be called can be recorded in the object list data.
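Since the name of the target object is not unique, pairing a spoken sentence against the recorded keywords might look like the sketch below; the matching logic and the example aliases are assumptions, not part of the patent:

```python
# Hypothetical alias table drawn from the object list data:
# each canonical object maps to every name it may be called by.
KEYWORDS = {
    "red box": ["red box", "small red box", "crimson box"],
    "blue bin": ["blue bin", "blue basket"],
}

def pair_command(sentence: str):
    """Return the canonical object name mentioned in the sentence,
    checking every alternative name recorded for each object."""
    s = sentence.lower()
    for canonical, aliases in KEYWORDS.items():
        if any(alias in s for alias in aliases):
            return canonical
    return None  # not in the object list: ask the operator to re-input
```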
Further, the image discrimination data includes picture information consistent with the color, shape and/or size of the target object; the control system needs more than one image (the more, the better) consistent with the target object, and then applies an algorithm to these images to obtain the picture data used to identify the target object to be searched.
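One simple way to turn several reference pictures into matchable picture data is an averaged colour histogram; the patent does not specify the algorithm, so the following is only a stand-in under that assumption (images are represented as plain lists of RGB pixels):

```python
def color_histogram(pixels, bins=4):
    """Normalised colour histogram of a list of (r, g, b) pixels (0-255)."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = max(1, len(pixels))
    return [h / total for h in hist]

def build_template(reference_images):
    """Average the histograms of several reference images of the target."""
    hists = [color_histogram(img) for img in reference_images]
    n = len(hists)
    return [sum(col) / n for col in zip(*hists)]

def matches(template, candidate_pixels, threshold=0.8):
    """Histogram-intersection similarity against the averaged template."""
    cand = color_histogram(candidate_pixels)
    score = sum(min(a, b) for a, b in zip(template, cand))
    return score >= threshold
```

In a real system this role would more likely be filled by a trained detector, but the structure — many reference images distilled into one matchable representation — is the same.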
Furthermore, the relative position data of the object comprises three-dimensional coordinate information which is consistent with the position of the target object, and the three-dimensional coordinate information consists of a transverse position, a longitudinal position and a height position so that the control system can confirm the specific position of the target object.
Further, the command input position data includes fixed position information of the fixed input unit and/or moving position information of the moving input unit. Wherein, the mobile input device comprises an electronic accessory which is connected with the control system through a wireless network; electronic accessories include smart phones, PC computers, remote controllers, and the like.
Furthermore, a voice mode and a manual mode are loaded on the control main board, and switching between the voice mode and the manual mode is carried out through a physical key and/or a virtual key.
Further, in the manual mode, the operator inputs control instructions through a remote controller, and the camera module 2 transmits a live real-time image to the operator. Specifically, besides control through voice commands, the operator can switch to the manual mode; once switched to the manual mode, the operator can control the actions of the robot M with a designated remote controller or the like, which can be hardware or software; in the manual mode, the operator can manually control the robot M over the network, at any time and in any place, to execute difficult tasks, can inspect the scene more conveniently through the camera module 2 on the robot M, and can realize remote real-time control; the camera module 2 is preferably a CCD camera.
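The two-mode behaviour of the control main board can be sketched as a tiny state machine; the class and method names below are illustrative assumptions, not the patent's implementation:

```python
class ControlBoard:
    """Minimal sketch of the voice/manual mode switch: a physical or
    virtual key toggles between the two modes, and incoming input is
    interpreted according to the current mode."""
    MODES = ("voice", "manual")

    def __init__(self):
        self.mode = "voice"  # assume the board starts in voice mode

    def press_mode_key(self):
        """Toggle between voice mode and manual mode."""
        self.mode = "manual" if self.mode == "voice" else "voice"

    def handle(self, inp):
        if self.mode == "voice":
            return f"parse voice command: {inp}"
        return f"execute remote-controller command: {inp}"
```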
Referring to fig. 2 to 3, the control method of the multi-connected voice-controlled automated guided transport system includes the following steps
Step A, an operator inputs a voice command, and the control system analyzes the input voice command to judge whether the target object mentioned in the voice command exists in the object list data; when the target object exists in the object list data, the control system extracts the data related to the target object from the instruction database and executes the next step; when the target object does not exist in the object list data, the control system sends feedback to the operator through a communication medium, so that the operator can confirm whether the input basic information of the target object is correct, and prompts the operator to input it again;
Step B, the control system extracts image identification information corresponding to the target object from the image identification data, extracts three-dimensional coordinate information corresponding to the target object from the object relative position data, and confirms the position information of the operator through the instruction input position data; then the control system sends the image identification information, the three-dimensional coordinate information and the operator position information to the robot M to execute the next step;
Step C, the control main board plans a walking route required by the transplanting mechanism 3 to go to the position of the target object according to the received three-dimensional coordinate information; when the robot M reaches the position where the target object is placed, the camera module 2 acquires images around the robot M, and the control main board searches for the target object through the camera module 2 according to the received image identification information and/or three-dimensional coordinate information; when the target object is found, the control system executes the next step; when the target object cannot be found, the control system feeds this back to the operator so that the operator can confirm whether the input target object position information is correct, and prompts the operator to input it again;
Step D, the control main board confirms, through the camera module 2, the real-time posture of the found target object, plans the action track required for the mechanical arm 1 to take the target object, and controls the mechanical arm 1 to smoothly take the target object along that track; alternatively, after the control main board confirms through the camera module 2 that the target object has been found, the operator controls the mechanical arm 1 through the remote controller, according to the image transmitted back by the camera module 2, to take the target object; the manual operation mode can thus be switched in when the target object cannot be taken in the automatic operation mode, completing tasks the automatic mode cannot and further ensuring the reliability of the system;
Step E, the control main board plans a walking route required for the transplanting mechanism 3 to go to the position of the operator, according to the position of the target object and the position information of the operator, and controls the transplanting mechanism 3 to move to the position of the operator along the walking route.
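Steps C and E both require the control main board to plan a walking route to a target position. The patent does not name a planning algorithm; on an occupancy-grid map (an assumption for this sketch), a breadth-first search already yields a shortest route:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest walking route on an occupancy grid (0 = free, 1 = blocked),
    found by breadth-first search; returns a list of (row, col) cells from
    start to goal, or None when no route exists (feed back to the operator)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct route by walking back-pointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The same routine serves step C (route to the target object) and step E (route back to the operator), only with different goal cells.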
Furthermore, the transplanting mechanism 3 is provided with a laser sensor, an ultrasonic sensor, a photoelectric sensor, a magnetic sensor, a camera device, an infrared sensor and/or a GPS positioning module; in cooperation with these sensors, the transplanting mechanism 3 can have automatic driving and automatic obstacle-avoidance functions.
Further, in step E, after delivering the target object to the operator, the robot M waits for the next voice command within a set time; if no voice command is received within the set time, the control main board plans the walking track for the transplanting mechanism 3 to return to the initial position, so that the robot M returns to the initial position and stands by.
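The timed wait after hand-over can be sketched as a bounded polling loop; the poll interval, the injected `poll` callable and the returned sentinel string are all illustrative assumptions:

```python
def wait_for_next_command(poll, timeout_polls=30):
    """Poll for a new voice command up to timeout_polls times (e.g. one
    poll per second of the set waiting time). If a command arrives it is
    returned; otherwise a sentinel tells the control main board to plan
    the track back to the robot's initial position."""
    for _ in range(timeout_polls):
        cmd = poll()          # returns the next voice command, or None
        if cmd is not None:
            return cmd
    return "return_to_initial_position"
```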
Further, in step A, the voice input by the operator includes the name of the target object, the characteristics of the target object (including color, shape, size, etc.), the placement position of the target object, and the like.
Furthermore, the control system can also recognize certain specific voice commands, such as 'pause', 're-input command' and 'transfer to manual mode'; after recognizing such a specific voice command, the control system sends the corresponding command to the transplanting mechanism 3 so that it completes the corresponding action.
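Routing these special commands could be as simple as a dispatch table keyed by the recognized phrase; the handler functions and their return strings below are hypothetical, only the three phrases come from the text above:

```python
# Handlers are stand-ins for whatever action the control system sends
# to the transplanting mechanism for each special phrase.
def pause():       return "mechanism paused"
def reinput():     return "awaiting new command"
def to_manual():   return "switched to manual mode"

SPECIAL_COMMANDS = {
    "pause": pause,
    "re-input command": reinput,
    "transfer to manual mode": to_manual,
}

def dispatch(spoken):
    """Run the handler for a special command; None means the utterance is
    a normal transport order and goes through steps A-E instead."""
    handler = SPECIAL_COMMANDS.get(spoken.strip().lower())
    return handler() if handler else None
```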
The foregoing is a preferred embodiment of the present invention, and the basic principles, principal features and advantages of the invention have been shown and described. It will be understood by those skilled in the art that the invention is not limited to the embodiment described above, which merely illustrates its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (10)

1. A multi-connected voice-controlled automated guided transportation system, characterized in that it comprises:
The software part comprises an instruction database arranged on the cloud server; the instruction database comprises object list data recording more than one type of information related to the target object, object image identification data for identifying the target object, object relative position data for identifying the position of the target object, and instruction input position data for feeding back the position of an operator;
a hardware part which comprises an instruction input device for inputting voice instructions and a robot (M) for completing corresponding work according to the voice instructions; the instruction input device comprises a fixed input device fixed at a preset position and used for inputting voice instructions and/or a mobile input device moving along with an operator and used for inputting voice instructions; the robot (M) comprises a transplanting mechanism (3) for moving and walking, a mechanical arm (1) for taking a target object, a camera module (2) for obtaining surrounding image information and a control main board for executing a voice instruction; the control main board controls the transplanting mechanism (3), the mechanical arm (1) and/or the camera module (2); the robot (M) is provided with an instruction input module for receiving a voice instruction from an operator and a communication module for communicating and interconnecting with the cloud server, and the instruction input module and/or the communication module are/is in communication connection with the control mainboard.
2. The multi-connected voice-controlled automated guided transportation system according to claim 1, wherein: the object list data includes keyword information corresponding to the name, color, shape and/or size of the target object.
3. The multi-connected voice-controlled automated guided transportation system according to claim 1, wherein: the image discrimination data includes picture information in conformity with a color, shape and/or size of the target object.
4. The multi-connected voice-controlled automated guided transportation system according to claim 1, wherein: the object relative position data comprises three-dimensional coordinate information corresponding to the position of the target object.
5. The multi-connected voice-controlled automated guided transportation system according to claim 1, wherein: the position data of the instruction input side comprises fixed position information of the fixed input device and/or moving position information of the moving input device.
6. The multi-connected voice-controlled automated guided transportation system according to claim 1, wherein: the control mainboard is loaded with a voice mode and a manual mode, and the switching between the voice mode and the manual mode is carried out through a physical key and/or a virtual key.
7. The multi-connected voice controlled automated guided transport system of claim 6, wherein: in the manual mode, an operator inputs a control instruction through a remote controller, and the camera module (2) transmits a field real-time image to the operator.
8. The control method of the multi-connected voice-controlled automated guided transportation system according to claim 1, characterized in that it comprises the following steps:
Step A, an operator inputs a voice command, and a control system analyzes the input voice command to judge whether a target object mentioned in the voice command exists in object list data or not so as to execute the next step;
Step B, the control system extracts image identification information corresponding to the target object from the image identification data, extracts three-dimensional coordinate information corresponding to the target object from the object relative position data, and confirms the position information of the operator through the instruction input position data; subsequently the control system sends the image discrimination information, the three-dimensional coordinate information and the operator position information to the robot (M) to perform the next step;
Step C, the control main board plans a walking route required by the transplanting mechanism (3) to go to the target object position according to the received three-dimensional coordinate information; the control main board searches for the target object through the camera module (2) according to the received image identification information and/or the three-dimensional coordinate information, and executes the next step;
Step D, the control main board confirms the real-time posture of the found target object through the camera module (2), plans the action track required by the mechanical arm (1) to take the target object, and controls the mechanical arm (1) to smoothly take the target object according to the action track; alternatively, after the control main board confirms through the camera module (2) that the target object has been found, the operator controls the mechanical arm (1) through the remote controller, according to the image transmitted back by the camera module (2), to take the target object;
Step E, the control main board plans a walking route required for the transplanting mechanism (3) to go to the position of the operator, according to the position of the target object and the position information of the operator, and controls the transplanting mechanism (3) to move to the position of the operator along the walking route.
9. The control method according to claim 8, characterized in that: the transplanting mechanism (3) is provided with a laser sensor, an ultrasonic sensor, a photoelectric sensor, a magnetic sensor, a camera device, an infrared sensor and/or a GPS positioning module.
10. The control method according to claim 8, characterized in that: in step E, after the robot (M) delivers the target object into the operator's hands, it waits for the next voice command within a set time; if no voice command is received within the set time, the control main board plans the walking track for the transplanting mechanism (3) to return to the initial position, so that the robot (M) returns to the initial position and waits.
CN202010060340.9A 2020-01-19 2020-01-19 Multi-connected voice control automatic guide transportation system and control method thereof Pending CN111283679A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010060340.9A CN111283679A (en) 2020-01-19 2020-01-19 Multi-connected voice control automatic guide transportation system and control method thereof


Publications (1)

Publication Number Publication Date
CN111283679A true CN111283679A (en) 2020-06-16

Family

ID=71030739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010060340.9A Pending CN111283679A (en) 2020-01-19 2020-01-19 Multi-connected voice control automatic guide transportation system and control method thereof

Country Status (1)

Country Link
CN (1) CN111283679A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113799140A (en) * 2021-10-14 2021-12-17 友上智能科技(苏州)有限公司 Flight vision positioning material taking method applied to composite robot

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016090655A (en) * 2014-10-30 2016-05-23 シャープ株式会社 Voice recognition robot system, voice recognition robot, controller for voice recognition robot, communication terminal for controlling voice recognition robot, and program
CN206164799U (en) * 2016-11-17 2017-05-10 东莞职业技术学院 Visible light communication's pronunciation broadcast system, earphone and audio player
CN109213149A (en) * 2018-08-06 2019-01-15 珠海格力电器股份有限公司 A kind of automated guided vehicle and its control method, device and storage medium
US20190107833A1 (en) * 2017-10-05 2019-04-11 National Chiao Tung University Robot speech control system and method thereof
CN109648579A (en) * 2019-01-17 2019-04-19 青岛理工大学 Intelligent robot, cloud server and intelligent robot system
CN109927012A (en) * 2019-04-08 2019-06-25 清华大学 Mobile crawl robot and automatic picking method
CN110472915A (en) * 2019-08-19 2019-11-19 上海木木机器人技术有限公司 A kind of transportation management method and system of cargo
CN110509281A (en) * 2019-09-16 2019-11-29 中国计量大学 The apparatus and method of pose identification and crawl based on binocular vision
CN110666806A (en) * 2019-10-31 2020-01-10 湖北文理学院 Article sorting method, article sorting device, robot and storage medium

Similar Documents

Publication Publication Date Title
CN111496770B (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
US20210053407A1 (en) Systems and methods for automated operation and handling of autonomous trucks and trailers hauled thereby
US11905116B2 (en) Controller and control method for robot system
US11584004B2 (en) Autonomous object learning by robots triggered by remote operators
CN112894757B (en) Automatic goods taking method and system of robot and robot
US20210279670A1 (en) Systems and methods for autonomous lineside parts delivery to an assembly line process
CN109532522A (en) A kind of unmanned charging system of automobile based on 3D vision technique and its application method
US11559902B2 (en) Robot system and control method of the same
WO2019169643A1 (en) Luggage transport method, transport system, robot, terminal device, and storage medium
US11048250B2 (en) Mobile transportation means for transporting data collectors, data collection system and data collection method
CN114405866B (en) Visual guide steel plate sorting method, visual guide steel plate sorting device and system
CN114505840B (en) Intelligent service robot for independently operating box type elevator
JP2022024084A (en) Inventory management system, transportation device, and method of connecting transportation device and transportation object
CN114355885A (en) Cooperative robot carrying system and method based on AGV
CN111283679A (en) Multi-connected voice control automatic guide transportation system and control method thereof
CA3193473A1 (en) Systems and methods for automated operation and handling of autonomous trucks and trailers hauled thereby
CN113021336B (en) File taking and placing system and method based on master-slave mobile operation robot
CN209038363U (en) Sorting system
CN111975776A (en) Robot movement tracking system and method based on deep learning and Kalman filtering
Wang et al. Coarse-to-fine visual object catching strategy applied in autonomous airport baggage trolley collection
US20220374295A1 (en) Systems and Methods for Inter-Process Communication within a Robot
CN114888768A (en) Mobile duplex robot cooperative grabbing system and method based on multi-sensor fusion
CN115847428A (en) AR technology-based mechanical assembly auxiliary guide system and method
US20240051134A1 (en) Controller, robot system and learning device
JP2005088146A (en) Object processing system, object processing method and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination