CN112589809A - Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method - Google Patents
- Publication number
- CN112589809A (application CN202011405666.7A)
- Authority
- CN
- China
- Prior art keywords
- teacup
- tea
- control system
- potential field
- obstacle avoidance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING › B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS › B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1674—Programme controls characterised by safety, monitoring, diagnostic
- B25J9/1676—Avoiding collision or forbidden zones
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
- B25J13/00—Controls for manipulators
- B25J13/003—Controls for manipulators by means of an audio-responsive input
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/022—Optical sensing devices using lasers
- B25J19/023—Optical sensing devices including video camera means
Abstract
The invention discloses a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method. The robot comprises a master control system, a human-computer interaction system, an identification and positioning system and a mechanical arm execution system. The human-computer interaction system acquires a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises steering engines and a mechanical arm. When the human-computer interaction system receives a tea pouring instruction, the master control system controls the binocular camera to search for and identify the teacup, obtaining the position and distance of the teacup relative to the robot. Combining the artificial potential field obstacle avoidance method with the robot's D-H coordinate system, the master control system inversely solves the steering engine rotation angles from the teacup's position and distance, uses a PID algorithm to drive the steering engines so that the mechanical arm grabs the teacup, and then performs the tea pouring operation. According to the invention, the three-dimensional space coordinates of the teacup are obtained through the binocular camera and laser ranging, and the artificial potential field method is combined with the mechanical arm motion to complete the tea pouring action, improving working efficiency and convenience for the user.
Description
Technical Field
The invention belongs to the technical field of robot control, and particularly relates to a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method.
Background
At present, people's quality of life is steadily improving and the demand for services in daily life keeps growing. As manual service fees rise, a service robot that is relatively inexpensive, highly durable and reliable is needed to meet these increasing demands.
Machine vision is a rapidly developing direction in the field of artificial intelligence. Its continuing progress has advanced industries such as industrial automation, intelligent security and artificial intelligence, and has opened development potential and opportunities in many application fields; binocular vision is an important branch of machine vision. Applying vision to a mechanical arm combines robot control technology with machine vision technology, better meets people's needs, and integrates the technology into daily life.
At present, few tea serving and pouring robots are on the market. Most available service robots are wheeled robots that can only travel and serve on level ground, and they suffer from significant drawbacks: they are oversized and expensive to manufacture. Although fully functional, they are not suitable for household adoption.
Disclosure of Invention
The invention aims to provide a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method, addressing the miniaturization, household use, universality and automation of tea pouring service robots.
The invention provides a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method, comprising a human-computer interaction system, a master control system, an identification and positioning system and a mechanical arm execution system; the human-computer interaction system, the identification and positioning system and the mechanical arm execution system are all connected with the master control system;
the human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm;
when the human-computer interaction system receives a tea pouring instruction, the instruction is sent to the main control system; the main control system controls the binocular camera to search and identify the teacup, and feeds back position information of the teacup relative to the robot to the main control system; the main control system controls the laser ranging module to face the teacup, and distance information of the teacup relative to the robot is obtained; the main control system reversely solves the rotation angle of the steering engine according to the position and distance information of the teacup by combining an artificial potential field obstacle avoidance method and a D-H coordinate system of the robot, controls the steering engine to rotate by utilizing a PID algorithm so as to control the mechanical arm to grab the teacup, and then carries out tea making operation after grabbing the teacup.
Further, when the binocular camera identifies the teacup, the color of the teacup is identified firstly, and image binarization is carried out according to the RGB value of the characteristic color block of the teacup, so that the teacup is highlighted.
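As a sketch of this kind of characteristic-color binarization (the RGB thresholds and the tiny test image below are hypothetical illustrations, not values from the patent):

```python
import numpy as np

# Hypothetical RGB thresholds for a red teacup; real values would be
# calibrated against the actual cup and lighting conditions.
R_MIN, G_MAX, B_MAX = 150, 90, 90

def binarize_by_color(image):
    """Return a binary mask (1 = teacup-coloured pixel) for an HxWx3 RGB image."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    mask = (r >= R_MIN) & (g <= G_MAX) & (b <= B_MAX)
    return mask.astype(np.uint8)

# A toy 2x2 "image": two reddish pixels (teacup) and two background pixels.
img = np.array([[[200, 30, 40], [10, 10, 10]],
                [[90, 90, 90], [255, 60, 50]]], dtype=np.uint8)
print(binarize_by_color(img))  # only the two reddish pixels map to 1
```

The resulting mask suppresses the background, which is what makes the subsequent coordinate extraction robust to clutter.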
Further, when the binocular cameras identify the teacup, the two cameras respectively identify image coordinates of the teacup in each picture, similar triangle operation is carried out through parallax of the teacup in the two images, and position information of the teacup is calculated.
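The similar-triangle (parallax) computation can be sketched as follows; the focal length, baseline and principal-point values are illustrative assumptions, not parameters from the patent:

```python
def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover (X, Y, Z) of a point from its pixel coordinates in the left
    (xl, y) and right (xr, y) rectified images.
    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point in pixels."""
    disparity = xl - xr                 # the same point shifts between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * baseline / disparity        # similar-triangle depth relation
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z

# Illustrative setup: f = 700 px, 6 cm baseline, principal point at (320, 240).
print(triangulate(xl=370, xr=300, y=240, f=700, baseline=0.06, cx=320, cy=240))  # (X, Y, Z) in metres
```

The larger the disparity, the closer the teacup; the patent additionally cross-checks depth with the laser ranging module.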
Further, the main control system controls the laser ranging module to rotate by utilizing a PID algorithm, so that the laser ranging module is opposite to the tea cup.
Further, when the positive directions of the binocular camera and the laser ranging module are aligned, the main control system controls the binocular camera to face the teacup squarely, so that the laser ranging module also faces the teacup.
Furthermore, the human-computer interaction system comprises a voice module, wherein the voice module receives and decodes a voice command of a user and sends a decoding signal to the main control system.
Furthermore, the voice module is provided with a first-level password, and the tea pouring instruction is valid within a preset time after the first-level password is responded to.
Furthermore, the human-computer interaction system prompts a user through voice or light after the robot finishes the action of pouring tea.
Further, the robot also comprises a power supply system for supplying power to the robot.
The invention has the beneficial effects that: according to the tea pouring robot based on the machine binocular vision and the artificial potential field obstacle avoidance method, the machine binocular vision is combined with the mechanical arm artificial potential field obstacle avoidance method, instructions of a user are obtained through a man-machine interaction system, three-dimensional space coordinates of a tea cup are obtained through a binocular camera and laser ranging, the artificial potential field method is combined with mechanical arm movement, values of all axes of the mechanical arm can be obtained by matching with an inverse solution algorithm of the mechanical arm movement, tea pouring is accurately captured and finished, working efficiency is improved, and the tea pouring robot is convenient for the user to use.
Furthermore, color feature recognition and threshold binarization are adopted in image processing, and the image binarization is performed according to the RGB value of the teacup, so that the teacup is highlighted, the influence of the external environment is favorably reduced, the accuracy is high, and meanwhile, the subsequent processing of the teacup is favorably realized.
Furthermore, the robot is operated by voice control, which is convenient for the user. The voice module is provided with a first-level password, and the robot executes a tea pouring action only when the tea pouring command is input within a specific time after the password is responded to, which effectively avoids interference. After finishing the tea pouring action, the robot prompts the user through voice or light, which is convenient for the user.
Drawings
Fig. 1 is a structural block diagram of a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method.
Fig. 2 is a control flow chart of the tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
the tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method comprises a man-machine interaction system, a main control system, an identification and positioning system and a mechanical arm execution system, as shown in figure 1; the human-computer interaction system, the identification and positioning system and the mechanical arm execution system are all connected with the master control system; further, the robot also comprises a power supply system for supplying power to the robot. The human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm;
when the human-computer interaction system receives a tea pouring instruction, the instruction is sent to the main control system; the main control system controls the binocular camera to search and identify the teacup, and feeds back position information of the teacup relative to the robot to the main control system; the main control system controls the laser ranging module to face the teacup, and distance information of the teacup relative to the robot is obtained; the main control system reversely solves the rotation angle of the steering engine according to the position and distance information of the teacup by combining an artificial potential field obstacle avoidance method and a D-H coordinate system of the robot, controls the steering engine to rotate by utilizing a PID algorithm so as to control the mechanical arm to grab the teacup, and then carries out tea making operation after grabbing the teacup.
Furthermore, when the binocular camera identifies the teacup, the color of the teacup is identified firstly, and image binarization is carried out according to the object characteristic color block, namely the RGB value of the teacup, so that the teacup is highlighted, and external interference is reduced. When the binocular cameras identify the teacup, the two cameras respectively identify image coordinates of the teacup in each picture, similar triangle operation is carried out through parallax of the teacup in the two images, and position information of the teacup is calculated.
Further, the main control system controls the laser ranging module to rotate by utilizing a PID algorithm, so that the laser ranging module is opposite to the tea cup. When the positive directions of the binocular camera and the laser ranging module are consistent, the master control system controls the binocular camera to be over against the teacup, so that the laser ranging module is over against the teacup, and distance information is measured.
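The PID aiming loop described above can be sketched as follows; the gains and the toy "20 pixels per degree" plant are illustrative assumptions, not values from the patent:

```python
class PID:
    """Textbook positional PID controller; the gains used below are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a pan axis whose camera sees 20 pixels of error per degree of
# misalignment. The controller turns the axis until the cup is centred.
cup_angle = 12.0                      # true bearing of the cup (unknown to the PID)
angle = 0.0                           # current pan angle in degrees
px_per_deg = 20.0
pid = PID(kp=0.04, ki=0.0, kd=0.0001)
for _ in range(300):
    error_px = (cup_angle - angle) * px_per_deg   # pixel offset seen by the camera
    angle += pid.step(error_px, dt=0.01)          # command an angle increment
print(round(angle, 3))  # converges to 12.0
```

Once the pixel error is driven to zero, the laser ranging module lies on the line of sight to the cup and the range reading gives the depth coordinate.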
Furthermore, the human-computer interaction system comprises a voice module, wherein the voice module receives and decodes a voice command of a user and sends a decoding signal to the main control system. The voice module is provided with a first-level password, and the tea pouring instruction is effective within the preset time after the first-level password responds, so that interference is effectively avoided. The human-computer interaction system prompts a user through voice or light after the robot finishes the action of pouring tea.
The tea pouring robot based on machine binocular vision and the artificial potential field obstacle avoidance method shown in fig. 1 comprises a main control system, a power supply system, a human-computer interaction system, an identification and positioning system and a mechanical arm execution system. The main control system uses an STM32F407ZGT6 single-chip microcomputer. The power supply system comprises a BOOST circuit and a voltage-stabilized power supply circuit. The human-computer interaction system is mainly composed of an LD3320 voice recognition module. The identification and positioning system mainly comprises a binocular camera and a laser ranging module, the binocular camera being composed of two OpenMV cameras. The mechanical arm execution system mainly comprises the actuators controlling the movement of the six-axis mechanical arm.
Fig. 2 shows the work flow of the robot in one control cycle. The robot first checks its working state; if initialization has not succeeded, the system returns and initializes again. After successful initialization the robot enters a waiting state. The user then issues a voice instruction; the LD3320 module decodes it after receiving it and transmits the decoded information to the main control chip through a serial port for mode selection. When the object grabbing mode starts, the binocular camera searches the field of view for the target object; if none is found, the chassis of the mechanical arm rotates until the object appears in view. Once the object is found, the two cameras identify the image coordinates of the teacup in their respective pictures, perform a similar-triangle operation on the parallax of the teacup between the two images, and calculate the position of the teacup relative to the robot. The position coordinates x and y are sent to the main control chip, which uses a PID algorithm to rotate the mechanical arm chassis so that the camera faces the object, and the laser ranging module then measures the distance z of the teacup. With the three-dimensional coordinates of the target known, the main control chip inversely solves the motion position of the mechanical arm, combining the artificial potential field obstacle avoidance method with the robot's D-H coordinate system to obtain the steering engine rotation angles, and uses the PID algorithm to drive the steering engines so that the mechanical arm grabs the teacup; an LED is lit after the teacup is grabbed, marking completion of the task.
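The "reverse solve" step has closed-form solutions for simple kinematic chains. The patent's six-axis arm requires the full D-H chain; as a hedged illustration, the planar two-link case below shows how joint angles are recovered from a target position (link lengths and the target are hypothetical):

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm (elbow-down).
    Returns the two joint angles that place the end effector at (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward-kinematics check: the solved angles reproduce the requested target.
t1, t2 = ik_two_link(x=0.20, y=0.10, l1=0.15, l2=0.15)
fx = 0.15 * math.cos(t1) + 0.15 * math.cos(t1 + t2)
fy = 0.15 * math.sin(t1) + 0.15 * math.sin(t1 + t2)
print(fx, fy)
```

For the six-axis arm, the same idea generalizes: the D-H parameters define the forward transform and the inverse solution yields one rotation angle per steering engine.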
The LD3320 module of the human-computer interaction system decodes the user's voice input and sends the information to the main control chip through a serial port at a specific frequency, realizing real-time feedback; the control system performs binocular camera recognition and image processing according to the fed-back information, which is then used for mechanical arm control. The LD3320 module uses a first-level password; an instruction is valid only within 15 s of the password being responded to, which avoids interference.
When the binocular camera identifies an object, it first recognizes colors in the image, particularly the color of the teacup, then binarizes the image according to the RGB value of the object's characteristic color block to highlight the target. The x and y coordinates of the target are then obtained, making it convenient to rotate the mechanical arm chassis to face the teacup. The x and y coordinates are combined with the z value from the laser ranging module and sent to the main control chip through a serial port at a specific frequency; after computation, the rotation angle of each axis of the mechanical arm is obtained for further processing.
In the process of grabbing objects, the mechanical arm establishes a D-H coordinate system, the camera identifies the positions of the target object and any obstacles, and the motion path of the mechanical arm is planned by combining the artificial potential field method with the inverse kinematics formula of the mechanical arm, yielding the angle each axis needs to rotate; the steering engines are then controlled to rotate to the corresponding positions so that the gripper grabs the object. After the object is grabbed, the mechanical arm is controlled to return and perform the tea making work, and a lamp is lit to indicate completion.
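The artificial potential field idea can be sketched in 2-D as follows; the gains, influence radius, and the obstacle/goal layout are illustrative assumptions, and the patent applies the method in the arm's workspace with its own parameters:

```python
import math

K_ATT, K_REP, RHO0 = 1.0, 0.001, 0.3   # attraction gain, repulsion gain, influence radius
STEP = 0.05                             # fixed step length along the net force

def apf_step(pos, goal, obstacles):
    """One gradient step: attracted toward the goal, repelled by nearby obstacles."""
    fx = K_ATT * (goal[0] - pos[0])
    fy = K_ATT * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0.0 < rho < RHO0:                       # repulsion acts only inside RHO0
            mag = K_REP * (1.0 / rho - 1.0 / RHO0) / rho ** 2
            fx += mag * dx / rho
            fy += mag * dy / rho
    norm = math.hypot(fx, fy) or 1.0
    return pos[0] + STEP * fx / norm, pos[1] + STEP * fy / norm

pos, goal = (0.0, 0.0), (1.0, 0.0)
obstacles = [(0.5, 0.1)]                # an obstacle near the straight-line path
for _ in range(80):
    pos = apf_step(pos, goal, obstacles)
print(pos)                              # ends within a step length of the goal
```

The path bends around the obstacle instead of stopping at it, which is what lets the arm reach the teacup past intervening objects on the table.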
In summary, the voice-controlled tea pouring robot system based on machine binocular vision, the artificial potential field obstacle avoidance method and mechanical arm motion analysis takes an STM32F407ZGT6 as its core, a binocular OpenMV camera pair and a laser ranging module as sensors, a six-axis mechanical arm as the motion module, and an LD3320 voice module for communication with the user. The robot incorporates modern artificial intelligence at the frontier of robotics, offering high cost-effectiveness, efficiency and reliability, and can receive commands and execute them quickly through binocular vision technology and the artificial potential field obstacle avoidance method. It applies an improved binocular machine vision recognition algorithm and the artificial potential field method to the tea serving robot, combines voice control with six-axis mechanical arm control, and runs fully automatically. In the future the robot can provide daily personalized service to individual users, and can also be applied in beverage shops, restaurants and the like, freeing labor, improving working efficiency and promoting the development of the service industry in China.
It will be understood by those skilled in the art that the foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included within the scope of the present invention.
Claims (9)
1. A tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method is characterized by comprising a human-computer interaction system, a master control system, an identification and positioning system and a mechanical arm execution system; the human-computer interaction system, the identification and positioning system and the mechanical arm execution system are all connected with the master control system;
the human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm;
when the human-computer interaction system receives a tea pouring instruction, an instruction signal is sent to the main control system; the main control system controls the binocular camera to search and identify the teacup, and after the teacup is identified, the position information of the teacup relative to the robot is fed back to the main control system; the main control system controls the laser ranging module to face the teacup, and distance information of the teacup relative to the robot is obtained; the main control system reversely solves the rotation angle of the steering engine according to the position and distance information of the teacup by combining an artificial potential field obstacle avoidance method and a D-H coordinate system of the robot, controls the steering engine to rotate by utilizing a PID algorithm so as to control the mechanical arm to grab the teacup, and then carries out tea making operation after grabbing the teacup.
2. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method is characterized in that when the binocular camera identifies a teacup, the color of the teacup is identified, image binarization is performed according to the RGB value of a characteristic color block of the teacup, and the teacup is highlighted.
3. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method is characterized in that when the binocular cameras identify tea cups, the two cameras respectively identify image coordinates of the tea cups in respective pictures, similar triangle operation is carried out through parallax errors of the tea cups in the two images, and position information of the tea cups is calculated.
4. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method as claimed in claim 1, wherein the main control system controls the laser ranging module to rotate by using a PID algorithm, so that the laser ranging module is opposite to a tea cup.
5. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method, characterized in that when the positive directions of the binocular camera and the laser ranging module are aligned, the main control system controls the binocular camera to face the teacup, so that the laser ranging module also faces the teacup.
6. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method is characterized in that the human-computer interaction system comprises a voice module, the voice module receives and decodes voice commands of a user, and sends decoded signals to the main control system.
7. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method according to claim 6, wherein the voice module is provided with a first-level password, and the tea pouring instruction is valid within a preset time after the first-level password is responded to.
8. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method as claimed in claim 1, wherein the human-computer interaction system prompts a user through voice or light after the robot finishes a tea pouring action.
9. The tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method as claimed in claim 1, wherein the robot further comprises a power supply system for supplying power to the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011405666.7A CN112589809A (en) | 2020-12-03 | 2020-12-03 | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011405666.7A CN112589809A (en) | 2020-12-03 | 2020-12-03 | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112589809A true CN112589809A (en) | 2021-04-02 |
Family
ID=75188152
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011405666.7A Pending CN112589809A (en) | 2020-12-03 | 2020-12-03 | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112589809A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102165880A (en) * | 2011-01-19 | 2011-08-31 | 南京农业大学 | Automatic-navigation crawler-type mobile fruit picking robot and fruit picking method |
CN102323817A (en) * | 2011-06-07 | 2012-01-18 | 上海大学 | Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof |
CN102848388A (en) * | 2012-04-05 | 2013-01-02 | 上海大学 | Service robot locating and grabbing method based on multiple sensors |
CN102902271A (en) * | 2012-10-23 | 2013-01-30 | 上海大学 | Binocular vision-based robot target identifying and gripping system and method |
CN103503639A (en) * | 2013-09-30 | 2014-01-15 | 常州大学 | Double-manipulator fruit and vegetable harvesting robot system and fruit and vegetable harvesting method thereof |
US20160184990A1 (en) * | 2014-12-26 | 2016-06-30 | National Chiao Tung University | Robot and control method thereof |
CN108171796A (en) * | 2017-12-25 | 2018-06-15 | 燕山大学 | A kind of inspection machine human visual system and control method based on three-dimensional point cloud |
CN111258311A (en) * | 2020-01-17 | 2020-06-09 | 青岛北斗天地科技有限公司 | Obstacle avoidance method of underground mobile robot based on intelligent vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112589809A (en) | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method | |
WO2019232806A1 (en) | Navigation method, navigation system, mobile control system, and mobile robot | |
CN106826838B (en) | Interaction bionic mechanical arm control method based on Kinect visual depth sensor | |
CN108972549B (en) | Industrial mechanical arm real-time obstacle avoidance planning and grabbing system based on Kinect depth camera | |
CN102219051B (en) | Method for controlling four-rotor aircraft system based on human-computer interaction technology | |
CN107433573B (en) | Intelligent binocular automatic grabbing mechanical arm | |
CN113333998B (en) | Automatic welding system and method based on cooperative robot | |
CN110170995B (en) | Robot rapid teaching method based on stereoscopic vision | |
CN1319702C (en) | Movable manipulator system | |
CN111906788B (en) | Bathroom intelligent polishing system based on machine vision and polishing method thereof | |
CN111055281A (en) | ROS-based autonomous mobile grabbing system and method | |
CN105499953A (en) | Automobile engine piston and cylinder block assembly system based on industrial robot and method thereof | |
WO2018209863A1 (en) | Intelligent moving method and device, robot and storage medium | |
CN112634318B (en) | Teleoperation system and method for underwater maintenance robot | |
CN109199240B (en) | Gesture control-based sweeping robot control method and system | |
CN111459274B (en) | 5G + AR-based remote operation method for unstructured environment | |
CN102902271A (en) | Binocular vision-based robot target identifying and gripping system and method | |
CN106997201B (en) | Multi-robot cooperation path planning method | |
CN109129492A (en) | Industrial robot platform for dynamic grasping |
WO2019232804A1 (en) | Software updating method and system, and mobile robot and server | |
CN112643207B (en) | Laser automatic derusting system and method based on computer vision | |
CN111459277B (en) | Mechanical arm teleoperation system based on mixed reality and interactive interface construction method | |
CN105234940A (en) | Robot and control method thereof | |
CN115464657A (en) | Hand-eye calibration method of rotary scanning device driven by motor | |
CN108839018A (en) | Robot control operation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-04-02