CN111823228A - Indoor following robot system and operation method - Google Patents
- Publication number: CN111823228A
- Application number: CN202010515112.6A
- Authority
- CN
- China
- Prior art keywords
- robot
- subsystem
- omnidirectional
- information
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1651—Programme controls characterised by the control loop acceleration, rate control
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1666—Avoiding collision or forbidden zones
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/005—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators using batteries, e.g. as a back-up power source
Abstract
The invention discloses an indoor following robot system comprising a structure subsystem and, built upon it, a voice interaction subsystem, a perception subsystem, an intelligent planning subsystem and an omnidirectional maneuvering control subsystem. The voice interaction subsystem acquires voice information; the perception subsystem acquires perception information, including RGB-D images and obstacle position information; the intelligent planning subsystem combines the voice information and the perception information to determine the robot working mode and to plan the following motion control; and the omnidirectional maneuvering control subsystem maneuvers the robot according to that plan. The system uses deep learning and Kalman filtering for human target detection and tracking, and plans obstacle-avoiding robot motion with an artificial potential field method, ensuring good following performance for human bodies in a variety of postures. The invention is simple in structure, low in cost and highly extensible, and can serve as a general base platform for other multipurpose indoor following robots.
Description
Technical Field
The application relates to an indoor following robot system and an operation method, and belongs to the technical field of robot design and control.
Background
With the rapid development of robotics and automatic control technology, service robots are gradually entering people's daily lives. An indoor following robot is a service robot with a pedestrian-following mode based on human detection, tracking and following motion control, and it has broad application prospects. In the warehouse logistics industry, for example, an indoor following robot can follow the workers it serves, carry materials in their place, and cooperate with them on complex tasks such as material sorting; in the healthcare industry, it can follow the person it serves and provide real-time safety monitoring or rehabilitation companionship.
The following process often takes place in a relatively complex natural environment containing both the target object and static or dynamic obstacles, so an indoor following robot must identify the target in real time amid multiple interferences and avoid obstacles while following it accurately.
In prior-art indoor following robot systems and operation methods, detection and tracking of a standing target object are relatively mature, but detection and following of the target in postures such as squatting or bending perform poorly, so the target is easily lost during following.
Disclosure of Invention
An object of the application is to provide an indoor following robot system and an operation method, so as to solve the poor following performance of existing indoor following robots when the target object assumes varied postures.
The indoor following robot system comprises a structure subsystem, a voice interaction subsystem, a perception subsystem, an intelligent planning subsystem and an omnidirectional maneuvering control subsystem, wherein the voice interaction subsystem, the perception subsystem, the intelligent planning subsystem and the omnidirectional maneuvering control subsystem are constructed on the structure subsystem;
the voice interaction subsystem is used for acquiring voice information and sending it to the intelligent planning subsystem;
the perception subsystem is used for acquiring perception information, the perception information comprises RGB-D images of the environment in front of the robot and position information of obstacles around the robot, and the perception information is sent to the intelligent planning subsystem;
the intelligent planning subsystem is used for determining the robot working mode and planning the following motion control by combining the voice information and the perception information;
and the omnidirectional maneuvering control subsystem is used for realizing maneuvering control of the robot according to the following movement control plan.
Preferably, the system further comprises a power supply subsystem built on the structure subsystem;
the power supply subsystem is used for supplying power to the robot;
the power supply subsystem comprises a power supply controller, a storage battery pack, a charging interface and a plurality of output interfaces, wherein the storage battery pack, the charging interface and the plurality of output interfaces are connected with the power supply controller;
and the power supply controller is used for charging the storage battery pack through the charging interface, and for powering the robot or external expansion devices from the storage battery pack through the output interfaces.
Preferably, the perception subsystem comprises an RGB-D depth camera and a plurality of sonar sensors;
the RGB-D depth camera is used for collecting RGB-D images of the environment in front of the robot;
the sonar sensor is used for collecting position information of obstacles around the robot.
Preferably, the intelligent planning subsystem comprises an artificial intelligence computing processor and an information resource expansion interface led out from the artificial intelligence computing processor;
the artificial intelligence computing processor is used for determining the robot working mode and planning the following motion control by combining the voice information and the perception information;
the information resource expansion interface is used for connecting external expansion equipment.
Preferably, the structural subsystem comprises a base plate and a 4-wheel omni-directional drive unit;
the 4-wheel omnidirectional driving unit is arranged on the lower surface of the bottom plate and used for moving according to the following motion control plan;
the 4-wheel omnidirectional driving unit comprises 4 sets of omnidirectional driving subunits;
the omnidirectional driving subunit comprises a parallelogram suspension, a stepping motor and an omnidirectional wheel;
one end of the parallelogram suspension is connected with the lower surface of the bottom plate, and the other end of the parallelogram suspension is connected with a driving shaft of the stepping motor;
the omnidirectional wheel is connected with a driving shaft of the stepping motor and moves under the driving of the stepping motor;
preferably, the omni-wheel is a mecanum wheel.
Preferably, the omnidirectional maneuvering control subsystem comprises a master controller and a motor driver;
the main controller is wirelessly connected with the intelligent planning subsystem and is used for converting the following motion control planning into the maneuvering control of the 4-wheel omnidirectional driving unit;
the motor driver is connected with the main controller and used for driving the stepping motor according to the maneuvering control of the 4-wheel omnidirectional driving unit;
preferably, the wireless connection is a bluetooth connection.
Preferably, the omnidirectional maneuvering control subsystem further comprises a power supply conversion module;
the power supply conversion module is respectively connected with the power supply controller, the main controller, the motor driver and the stepping motor and is used for converting the voltage output by the storage battery into the voltage required by the main controller, the motor driver and the stepping motor.
The invention also discloses an indoor following robot operation method, which comprises the following steps:
acquiring voice information;
matching the voice information with predefined voice information, and determining the working mode of the robot according to the matching result; the working modes comprise a standby mode, a tracking mode and a following mode;
when the working mode is the standby mode, controlling the robot to enter a low-power-consumption state in which it receives only voice information;
when the working mode is a tracking mode, controlling the robot to collect RGB-D images of the front environment of the robot, acquiring the position of a target object in the RGB-D images, predicting the position of the target object at the next moment, and tracking the target object in real time;
when the working mode is a following mode, controlling the robot to collect an RGB-D image of the front environment of the robot, acquiring the position of a target object in the RGB-D image and predicting the position of the target object at the next moment;
acquiring position information of obstacles around the robot;
determining an expected movement direction and an expected movement speed of the robot by combining the position of the target object at the next moment and the position information of the obstacles around the robot, and controlling the robot to move along with the target object according to the expected movement direction and the expected movement speed;
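The mode selection in the steps above can be sketched as a simple dispatch on recognized voice commands. The command phrases below are illustrative assumptions for demonstration, not the patent's predefined voice information:

```python
# Illustrative sketch of the working-mode dispatch described above.
# Command phrases are assumptions; the patent only specifies that voice
# information is matched against predefined commands.

MODES = ("standby", "tracking", "following")

VOICE_COMMANDS = {
    "stand by": "standby",     # low-power state, listen for voice only
    "track me": "tracking",    # detect and track the target, stay still
    "follow me": "following",  # track the target and move, avoiding obstacles
}

def match_voice_command(utterance: str, current_mode: str) -> str:
    """Match recognized speech against predefined commands;
    keep the current mode if nothing matches."""
    return VOICE_COMMANDS.get(utterance.strip().lower(), current_mode)

mode = "standby"
mode = match_voice_command("Follow me", mode)
```

An unmatched utterance leaves the working mode unchanged, which matches the method's behavior of acting only on recognized commands.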
preferably, the acquiring a position of a target object in the RGB-D image and predicting a position of the target object at a next time includes:
determining the position of a target object in the RGB-D image by using a deep learning method;
and predicting the position of the target object at the next moment by utilizing Kalman filtering in combination with the position of the target object.
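A minimal constant-velocity Kalman filter for the prediction step above might look as follows. The state layout, time step and noise covariances are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Constant-velocity Kalman filter sketch for predicting the target's next
# position from detections. State = [x, y, vx, vy]; dt, Q and R are assumed.
dt = 0.1                                   # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],               # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # position-only measurement model
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                       # process noise covariance
R = np.eye(2) * 1e-2                       # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle; also returns the predicted next position."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detected position z = [x_meas, y_meas]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    next_pos = (F @ x)[:2]                 # predicted position at the next step
    return x, P, next_pos
```

Feeding the filter each frame's detected target position yields a one-step-ahead prediction that keeps tracking stable across brief detection gaps.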
Preferably, controlling the robot to follow the target object according to the desired movement direction and the desired movement speed specifically comprises:
calculating and obtaining target rotating directions and target rotating speeds of 4 omnidirectional wheels of the robot according to the expected moving direction and the expected moving speed by using an inverse kinematics method;
and controlling the 4 omnidirectional wheels to move according to the target rotating direction and the target rotating speed.
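The inverse-kinematics step above can be sketched for a four-Mecanum-wheel base as follows. The geometry values and sign convention are illustrative assumptions (the embodiment only specifies 20 cm wheel diameter):

```python
# Inverse kinematics sketch for a four-Mecanum-wheel base: map a desired body
# velocity (vx forward, vy leftward, wz yaw rate) to wheel angular speeds.
# LX/LY and the sign convention are assumptions for demonstration.
R_WHEEL = 0.10        # wheel radius in m (20 cm diameter, per the embodiment)
LX, LY = 0.25, 0.20   # assumed half wheelbase / half track width (m)

def mecanum_inverse(vx: float, vy: float, wz: float):
    """Return (front-left, front-right, rear-left, rear-right) wheel angular
    speeds in rad/s for one common Mecanum sign convention."""
    k = LX + LY
    w_fl = (vx - vy - k * wz) / R_WHEEL
    w_fr = (vx + vy + k * wz) / R_WHEEL
    w_rl = (vx + vy - k * wz) / R_WHEEL
    w_rr = (vx - vy + k * wz) / R_WHEEL
    return w_fl, w_fr, w_rl, w_rr
```

The sign of each wheel speed encodes the target rotating direction and its magnitude the target rotating speed; pure forward motion drives all wheels equally, while a pure lateral command drives diagonal pairs in opposite senses.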
Preferably, the acquiring of the position information of the obstacle around the robot is specifically:
acquiring obstacle position information around the robot by using an ultrasonic ranging method;
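The ultrasonic ranging step reduces to converting a measured echo round-trip time into a distance. The temperature-dependent speed-of-sound model below is a common approximation, not taken from the patent:

```python
# Ultrasonic (sonar) ranging sketch: obstacle distance is half the echo
# round-trip time multiplied by the speed of sound in air.
def sound_speed(temp_c: float = 20.0) -> float:
    """Approximate speed of sound in air (m/s) at a given temperature."""
    return 331.3 + 0.606 * temp_c

def echo_to_distance(echo_time_s: float, temp_c: float = 20.0) -> float:
    """Convert a measured echo round-trip time to obstacle distance (m)."""
    return sound_speed(temp_c) * echo_time_s / 2.0
```

For example, a 10 ms round trip at 20 °C corresponds to an obstacle roughly 1.7 m away.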
preferably, the determining the desired movement direction and the desired movement speed of the robot is specifically:
and determining the expected movement direction and the expected movement speed of the robot by using an artificial potential field method.
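A minimal artificial-potential-field sketch of the step above: the target exerts an attractive force, each nearby obstacle a repulsive force, and the resultant vector gives the desired movement direction and speed. The gains, influence radius and speed cap are illustrative assumptions:

```python
import math

# Artificial potential field sketch: attraction toward the target, repulsion
# from obstacles inside an influence radius. All constants are assumptions.
K_ATT = 1.0      # attractive gain
K_REP = 0.05     # repulsive gain
D_INFL = 1.0     # obstacle influence radius (m)
V_MAX = 0.8      # speed cap (m/s)

def apf_command(robot, target, obstacles):
    """robot/target: (x, y) positions; obstacles: list of (x, y).
    Returns (desired_heading_rad, desired_speed_m_s)."""
    fx = K_ATT * (target[0] - robot[0])
    fy = K_ATT * (target[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < D_INFL:
            # repulsion grows rapidly as the obstacle gets closer
            mag = K_REP * (1.0 / d - 1.0 / D_INFL) / (d * d)
            fx += mag * dx
            fy += mag * dy
    speed = min(math.hypot(fx, fy), V_MAX)
    return math.atan2(fy, fx), speed
```

Because the command is recomputed from fresh sensor data every cycle, the scheme acts as a feedback strategy: an obstacle drifting into the influence radius immediately bends the desired heading away from it.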
Compared with the prior art, the indoor following robot system and the operation method have the following beneficial effects:
the system adopts deep learning and Kalman filtering to realize human body target detection and tracking, and realizes robot motion planning under obstacle avoidance based on an artificial potential field method, thereby ensuring good following performance of the robot to human bodies with various postures. The invention has simple structure, low cost and strong expansibility, and can be used as a universal basic platform for other multipurpose indoor following robots.
The indoor following robot system adopts a multi-level control system, intelligent perception and planning of the top layer are independent from maneuvering control of the bottom layer, abundant electrical expansion interfaces and mechanical expansion interfaces are provided, and the system can be conveniently and rapidly upgraded and updated according to different applications.
The indoor following robot system of the invention uses a sonar sensor to collect the obstacle information around the robot and uses an RGB-D depth camera to collect the RGB-D image of the front environment, and has high collection speed and small error.
The indoor following robot operation method determines the desired movement direction and speed of the robot by combining the predicted position of the target object at the next moment with the position information of obstacles around the robot, and then uses an inverse kinematics method to determine the rotation direction and rotation speed of each independently driven omnidirectional wheel, so the robot can follow the target object in any direction and turn quickly.
According to the indoor following robot operation method, obstacle position information around the robot is obtained by ultrasonic ranging, which yields high-precision information with a simple, low-cost structure.
According to the indoor following robot operation method, the desired movement direction and speed of the robot are determined by an artificial potential field method; since this method is a feedback control strategy that is robust to control and sensing errors, the resulting desired movement direction and speed are highly accurate.
Drawings
FIG. 1 is a front view of an indoor following robot system in an embodiment of the present invention;
FIG. 2 is a left side view of an indoor following robot system in an embodiment of the present invention;
FIG. 3 is a top view of an indoor following robot system in an embodiment of the present invention;
FIG. 4 is a schematic diagram of electrical connections of an indoor following robot system according to an embodiment of the present invention;
FIG. 5 is a flowchart of the top-level intelligent sensing and planning software of the indoor following robot system according to the embodiment of the present invention;
FIG. 6 is a functional and flow diagram of the underlying maneuvering control software of the indoor following robot system in an embodiment of the present invention;
fig. 7 is a flowchart of an operation method of the indoor following robot in the embodiment of the present invention.
List of parts and reference numerals:
1. an omni wheel; 2. a stepping motor; 3. a parallelogram suspension; 4. an instrument pod; 5. a support; 6. a power supply controller; 7. a charging interface; 8. a power switch; 9. an RGB-D depth camera; 10. a sonar sensor; 11. a microphone; 12. a speaker; 13. an intelligent planning subsystem; 14. a master controller; 15. an expansion interface; 16. an extended storage box.
Detailed Description
The present invention will be described in detail with reference to examples, but the present invention is not limited to these examples.
The indoor following robot system comprises a structure subsystem, a voice interaction subsystem, a perception subsystem, an intelligent planning subsystem, an omnidirectional maneuvering control subsystem and a power supply subsystem, wherein the voice interaction subsystem, the perception subsystem, the intelligent planning subsystem, the omnidirectional maneuvering control subsystem and the power supply subsystem are built on the structure subsystem.
The man-machine voice interaction subsystem is used for acquiring voice information and sending the voice information to the intelligent planning subsystem; the man-machine voice interaction subsystem comprises a microphone and a loudspeaker.
The sensing subsystem is used for acquiring sensing information, the sensing information comprises RGB-D images of the environment in front of the robot and position information of obstacles around the robot, and the sensing information is sent to the intelligent planning subsystem; the perception subsystem comprises a plurality of sonar sensors and an RGB-D depth camera; the sonar sensor is used for acquiring position information of obstacles around the robot; the RGB-D depth camera is used for collecting RGB-D images of the environment in front of the robot.
The intelligent planning subsystem is used for determining the robot working mode and planning the following motion control by combining the voice information and the perception information. The robot working modes include a standby mode, a tracking mode and a following mode. In the standby mode, the robot waits to receive a voice command and switch to another working mode; in the tracking mode, the robot stays still and only tracks the target object in the RGB-D image in real time; in the following mode, the robot additionally evaluates the measured obstacle information and, through maneuvering control, follows the target object while avoiding obstacles and always keeping a suitable distance from it. The intelligent planning subsystem comprises an artificial intelligence computing processor and an information resource expansion interface led out from it. The processor determines the robot working mode and plans the following motion control by combining the voice information and the perception information; the expansion interface connects external expansion devices, such as sensors or actuators like a lidar or a manipulator arm, to add or upgrade robot functions. The intelligent planning subsystem further comprises a solid-state drive for information storage and a master Bluetooth device for communicating with the omnidirectional maneuvering control subsystem.
The power supply subsystem is used for supplying power to the robot; the power supply subsystem comprises a power supply controller, a storage battery pack, a charging interface and a plurality of output interfaces, wherein the storage battery pack, the charging interface and the plurality of output interfaces are connected with the power supply controller; and the power supply controller is used for combining the charging interface to charge the storage battery pack and combining the output interface to supply power for the robot or supply power for external expansion equipment by using the storage battery pack.
The omnidirectional maneuvering control subsystem is used for realizing maneuvering control of the robot according to the following movement control plan and comprises a main controller and a motor driver; the main controller is wirelessly connected with the intelligent planning subsystem and is used for converting the following motion control planning into the maneuvering control of the 4-wheel omnidirectional driving unit; the motor driver is connected with the main controller and used for driving the stepping motor according to the maneuvering control of the 4-wheel omnidirectional driving unit; preferably, the wireless connection is a bluetooth connection. The omnidirectional mobile control subsystem also comprises a power supply conversion module; the power supply conversion module is respectively connected with the power supply controller, the main controller, the motor driver and the stepping motor and is used for converting the voltage output by the storage battery into the voltage required by the main controller, the motor driver and the stepping motor.
The structure subsystem of the invention comprises a bottom plate and 4-wheel omnidirectional driving units; the 4-wheel omnidirectional driving unit is arranged on the lower surface of the bottom plate and used for moving according to the following motion control plan; the 4-wheel omnidirectional driving unit comprises 4 sets of omnidirectional driving subunits; the omnidirectional driving subunit comprises a parallelogram suspension, a stepping motor and an omnidirectional wheel; one end of the parallelogram suspension is connected with the lower surface of the bottom plate, and the other end of the parallelogram suspension is connected with a driving shaft of the stepping motor; the omnidirectional wheel is connected with a driving shaft of the stepping motor and moves under the driving of the stepping motor; preferably, the omni-wheel is a mecanum wheel.
The indoor following robot system of the present application will be described below in specific embodiments.
Fig. 1 to 3 are a front view, a left side view, and a top view (with the head facing upward) of an indoor following robot system according to an embodiment of the present invention.
In the structure subsystem of the indoor following robot system of this application, the omni wheels 1 are Mecanum wheels with a diameter of 20 cm, mounted on the shafts of the stepping motors 2 to realize four-wheel independent drive. An encoder is integrated in each stepping motor 2. Each stepping motor 2 is mounted on a parallelogram suspension 3 on the lower surface of the base plate; the suspension consists of a connecting rod and a shock absorber, ensuring good load capacity while providing some shock absorption. One end of the connecting rod and shock absorber is connected to the stepping motor 2, the other to the robot base plate. A cubic instrument pod 4 of 70 cm × 50 cm × 25 cm is arranged on the base plate. A support 5 is mounted on top of the instrument pod 4, 10 cm from the center of the rear end; it is 80 cm high and consists of two cylindrical lightweight aluminum rods, providing a raised mounting platform for the RGB-D depth camera and the devices of the voice interaction subsystem. In the power supply subsystem, a lithium-ion storage battery pack and the power supply controller 6 are arranged inside the instrument pod 4 near the rear. A charging interface 7 is led out of the rear panel of the instrument pod 4, and a power switch 8 is led out of the upper panel near the rear. The RGB-D depth camera 9 of the perception subsystem is mounted at the top of the support 5, about 1.3 m above the ground, facing the robot's forward direction.
As can be seen, the RGB-D depth camera 9 is mounted on the support 5 near the rear of the vehicle, at some distance from the robot's front end and from the ground, so that relatively complete color and depth images of the human body can be acquired even when the person is close to the front of the robot. The six sonar sensors 10 of the perception subsystem are all mounted on the sides of the instrument pod 4: one sonar sensor 10 on each of the front and rear side panels, and two sonar sensors 10 at the upper corners of each of the left and right side panels. Thus two sonar sensors 10 cover the front-rear direction and four cover the left-right direction, all at a height of 40 cm above the ground, ensuring that the robot can measure distances to obstacles all around it. The microphone 11 and speaker 12 of the voice interaction subsystem, together with their signal processing circuit board, are mounted at the top of the support 5, immediately below the RGB-D depth camera 9. The intelligent planning subsystem 13 (comprising the artificial intelligence computing processor, the solid-state drive and the master Bluetooth device) is installed in the middle of the instrument pod 4. The hardware circuits of the omnidirectional maneuvering control subsystem (such as the main controller 14, the inertial navigation device, the AD sampling device, the motor drivers and the slave Bluetooth device) are integrated on a circuit board arranged in the instrument pod 4 near the front of the robot.
The expansion interface 15 is designed on the outside top of the instrument pod and comprises an electrical part and a mechanical part. The electrical part, i.e. the information resource expansion interface, is located on the top panel of the instrument pod 4 in the region between the support and the rear of the vehicle. The mechanical expansion interface is arranged on the outside top of the instrument pod 4 in the region between the support and the front of the vehicle, and is a mechanical connection interface such as a mortise-and-tenon joint. For example, this application designs an expansion storage box 16, connected to the top of the instrument pod 4 by mortise hooks, for applications such as material transport.
Fig. 4 shows the composition and electrical connections of the indoor following robot system of the present application. In the power supply subsystem, a 24 V, 10 Ah storage battery pack is connected to the power supply controller 6 by cables, and the power supply controller 6 outputs a regulated 24 V bus together with 12 V and 5 V secondary supply voltages. The charging interface 7 is connected to the power supply controller 6 by a cable, and the controller manages charging of the storage battery. The power switch 8 is likewise connected to the power supply controller 6 and switches the secondary supplies on and off. In addition, the power supply controller 6 reserves several output interfaces, some of which serve as power expansion interfaces to supply secondary power during system expansion. In the perception subsystem, the six sonar sensors 10 are powered by the 5 V secondary supply, and each sonar sensor 10 communicates with the embedded artificial intelligence computing processor, a Jetson TX2, in the intelligent planning subsystem over an I2C bus. The RGB-D depth camera is connected to the Jetson TX2 through a USB interface, which simultaneously carries its data and supplies its power. In the voice interaction subsystem, the microphone and speaker are connected by cables to a signal processing circuit board, which exchanges information with the Jetson TX2 over a UART interface and is powered by the 5 V secondary supply.
In the intelligent planning subsystem, the solid-state drive is connected to the Jetson TX2 through a PCIe interface and draws its 5 V supply from the Jetson TX2. The master Bluetooth device is connected to the Jetson TX2 via UART, with its 5 V supply also provided by the Jetson TX2. The spare electrical interfaces on the Jetson TX2, such as CAN, SPI, I2C, PCIe and CSI, are brought out by cables to the expansion interface. The omnidirectional maneuvering control subsystem is powered from the 24 V bus; it contains an independent power conversion module that converts the 24 V input into regulated 5 V, 3.3 V and 24 V rails, with the rails isolated from one another. This design isolates the power supply of the omnidirectional maneuvering control subsystem from the other devices, preventing high-current loads such as stepping-motor drive from interfering with the other supply branches of the power supply controller or with the control unit inside the subsystem. It also keeps the subsystem's power supply relatively independent, in a modular design that is easy to integrate and maintain. The slave Bluetooth device is connected to the main controller STM32F103 through a UART interface and is powered by the 5 V DC output of the power conversion module. The inertial measurement unit (IMU) and the ADC sampling chip are connected to the main controller STM32F103 through I2C and UART interfaces respectively, both powered by the 3.3 V output of the power conversion module.
The motor drivers are powered by the regulated 24 V output of the power conversion module and receive control signals from the main controller STM32F103, such as pulse-width modulation (PWM) signals and direction signals, to realize speed regulation and steering control of the motors. The stepping-motor windings are connected to the motor driver outputs by cables. The encoder integrated in each stepping motor is connected to the STM32F103 through a UART interface and is powered by the 5 V DC output of the power conversion module.
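The speed-regulation loop implied here, encoder feedback driving a PWM duty toward a target speed, can be sketched with a discrete PID controller. This is an illustrative sketch only, not the patent's firmware; the gains, time step, and output limit are assumed placeholder values:

```python
class PID:
    """Discrete PID controller for one wheel's speed loop (illustrative)."""

    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit   # e.g. maximum PWM duty magnitude
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        """One control step: target vs. measured wheel speed -> drive command."""
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp to the driver's admissible command range.
        return max(-self.out_limit, min(self.out_limit, out))
```

In use, the sign of the returned command would set the driver's direction signal and its magnitude the PWM duty, matching the signal pair described above.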
Because the indoor following robot system adopts a multi-level control architecture consisting of the intelligent planning subsystem and the omnidirectional maneuvering control subsystem, the software likewise comprises two levels: (1) the top-level intelligent sensing and planning software, which runs in the intelligent planning subsystem; and (2) the bottom-level mobile control software, which runs in the omnidirectional maneuvering control subsystem.
The flow of the top-level intelligent sensing and planning software in this embodiment is shown in Fig. 5. The software runs on the Jetson TX2. For each frame received from the RGB-D depth camera, a target object is detected in the color image by a deep learning method, and the spatial position of the target relative to the robot is solved using the registered color and depth images together with the calibrated camera model. The deep learning model is deployed on the Jetson TX2 with the TensorRT engine for inference, yielding the target detection result in the RGB-D image. After the spatial position of the target is obtained, Kalman filtering is used to predict the target position at the next moment, realizing human-body tracking. Distances to obstacles in each direction around the robot are obtained from the ranging data of the six sonar sensors. Predefined voice commands allow manual switching among the working modes of the system. Three working modes are designed: in standby mode, the robot waits to receive a voice command that switches it to another mode; in tracking mode, the robot remains stationary and only tracks the target object in the RGB-D image in real time; in following mode, the robot builds on the tracking mode, fuses the measured obstacle information, and, while avoiding obstacles, follows the target object under motor control so as to keep a distance of about 1.2 meters from it at all times.
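The next-moment position prediction described above can be sketched with a constant-velocity Kalman filter. This is a minimal illustration, not the patent's implementation; the state layout, noise covariances, and frame interval are assumptions:

```python
import numpy as np

# Constant-velocity Kalman filter for a target position in the ground plane.
# State: [x, z, vx, vz]; measurement: [x, z] from the RGB-D detection.
dt = 0.1  # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3   # assumed process noise
R = np.eye(2) * 1e-2   # assumed measurement noise

x = np.zeros(4)        # state estimate
P = np.eye(4)          # state covariance

def predict():
    """Propagate the state one step ahead; return the predicted position."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    return x[:2]

def update(z):
    """Fuse a new position measurement from the detector."""
    global x, P
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
```

Calling `update` with each detection and then `predict` yields the position estimate used for tracking at the next frame.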
If the received voice command selects the following mode, the human-body tracking result, the obstacle measurements, the inertial measurement unit information, the motor winding currents, and the motor encoder angles returned by the control unit are fused, and target-following control based on the artificial potential field method produces the robot's motion plan: a desired movement direction and a desired movement speed are generated in real time and sent via the master Bluetooth module to the control unit in the omnidirectional maneuvering device. The indoor following robot system is capable of moving forward, backward, left, and right, and of turning left and right.
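The artificial potential field idea used here can be sketched as follows: the target exerts an attractive force (vanishing inside the standoff distance) and each sonar-detected obstacle a repulsive one, and their sum gives the desired motion direction and speed. The gains, influence range, and speed clamp below are illustrative assumptions, not the patent's parameters:

```python
import math

K_ATT = 1.0        # assumed attractive gain
K_REP = 0.3        # assumed repulsive gain
D_FOLLOW = 1.2     # desired standoff distance to the target (m)
D_INFL = 1.0       # assumed obstacle influence range (m)

def potential_field_command(target_xy, obstacles):
    """Return (direction_rad, speed) toward the target while avoiding obstacles.

    target_xy: (x, y) of the tracked person in the robot frame.
    obstacles: list of (x, y) obstacle points from the sonar ring.
    """
    tx, ty = target_xy
    dist = math.hypot(tx, ty)
    # Attraction acts only on the range beyond the 1.2 m standoff.
    scale = K_ATT * max(dist - D_FOLLOW, 0.0) / (dist + 1e-9)
    fx, fy = scale * tx, scale * ty
    # Each obstacle inside the influence range pushes the robot away.
    for ox, oy in obstacles:
        d = math.hypot(ox, oy)
        if 1e-6 < d < D_INFL:
            rep = K_REP * (1.0 / d - 1.0 / D_INFL) / d**2
            fx -= rep * ox / d
            fy -= rep * oy / d
    speed = min(math.hypot(fx, fy), 1.0)   # clamp to an assumed maximum speed
    return math.atan2(fy, fx), speed
```

The returned direction and speed correspond to the desired movement direction and desired movement speed sent over Bluetooth to the control unit.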
Fig. 6 shows the functions and flow of the bottom-level mobile control software, which runs on the main controller STM32F103 of the omnidirectional maneuvering control subsystem. Through the slave Bluetooth module, the software receives the robot's current desired movement direction and speed. Inverse kinematics then yields the target rotation direction and rotation speed of each of the four stepping motors. Next, the inertial measurement unit information, the motor winding currents, and the motor encoder angles are fused, and a PID algorithm drives each of the four motors to its target rotation direction and speed, achieving closed-loop motion control of the robot. At the same time, the acquired inertial measurement unit information, motor winding currents, and motor encoder angles are sent to the intelligent planning subsystem through the slave Bluetooth module.
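For a Mecanum-wheeled base (the preferred omnidirectional wheel in this application), the inverse kinematics step that maps a body velocity to four wheel speeds can be sketched as below. The wheel radius and chassis half-dimensions are assumed placeholder values, and sign conventions vary with roller orientation and motor mounting:

```python
R = 0.05   # wheel radius (m), assumed
L = 0.20   # half the wheelbase (m), assumed
W = 0.15   # half the track width (m), assumed

def mecanum_ik(vx, vy, wz):
    """Body velocity (vx forward, vy left, wz yaw rate, rad/s) ->
    wheel angular velocities (front-left, front-right, rear-left,
    rear-right), rad/s, for a standard Mecanum arrangement."""
    k = L + W
    w_fl = (vx - vy - k * wz) / R
    w_fr = (vx + vy + k * wz) / R
    w_rl = (vx + vy - k * wz) / R
    w_rr = (vx - vy + k * wz) / R
    return w_fl, w_fr, w_rl, w_rr
```

Pure forward motion commands all four wheels equally, while a pure yaw command turns the left and right wheel pairs in opposite directions, matching the forward/backward/left/right/turning capability described above.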
With the support of the above software and working modes, the use of the indoor following robot system is described with its most typical application scenario: a human-robot cooperative material-handling task in warehouse logistics. After the robot is started, a warehouse worker stands in front of it and switches it into the following mode by voice command. The robot then follows the worker through the aisles between the storage racks at a fixed relative distance while avoiding any obstacles. On reaching a work position, the worker switches the robot into tracking mode by voice command, approaches the robot, and either places goods from the rack into the robot's storage box or takes goods out of the box and places them on the corresponding rack. During this time the robot only keeps tracking the target object in the image and remains stationary, which makes it convenient for the worker to load and unload goods. Once the goods have been handled, the worker switches the robot back into the following mode by voice command and continues to the next work position for the next material-transfer task.
The present application thus provides a simple, low-cost indoor following robot system. Its control architecture is a multi-level system consisting of the intelligent planning subsystem and the control unit in the omnidirectional maneuvering device: top-level intelligent perception and planning are independent of bottom-level mobile control, and abundant electrical and mechanical expansion interfaces are provided, so the system can be conveniently and rapidly upgraded for different applications. The robot system can serve as a base platform for the development of other multi-purpose indoor following robot systems.
The invention also discloses an indoor following robot operation method, which comprises the following steps:
acquiring voice information;
matching the voice information with predefined voice information, and determining the working mode of the robot according to the matching result, wherein the working modes comprise a standby mode, a tracking mode and a following mode;
when the working mode is a standby mode, controlling the robot to enter a low power consumption state and only receiving the voice information;
when the working mode is a tracking mode, controlling the robot to collect RGB-D images of the front environment of the robot, acquiring the position of a target object in the RGB-D images by using a deep learning method, predicting the position of the target object at the next moment by using Kalman filtering, and tracking the target object in real time;
when the working mode is a following mode, controlling the robot to collect RGB-D images of the front environment of the robot, acquiring the position of a target object in the RGB-D images by using a deep learning method, and predicting the position of the target object at the next moment by using Kalman filtering;
acquiring obstacle position information around the robot by using an ultrasonic ranging method;
determining an expected movement direction and an expected movement speed of the robot by using an artificial potential field method in combination with the position of the target object at the next moment and the position information of the obstacles around the robot, and controlling the robot to move along with the target object according to the expected movement direction and the expected movement speed, specifically:
calculating and obtaining target rotating directions and target rotating speeds of 4 omnidirectional wheels of the robot according to the expected moving direction and the expected moving speed by using an inverse kinematics method;
and controlling the 4 omnidirectional wheels to move according to the target rotating direction and the target rotating speed.
For example, the flow chart of the operation method of the indoor following robot of the present invention is shown in Fig. 7. After power-on, the robot is in standby mode by default and cyclically listens for voice instructions, which carry the voice information. When a voice instruction is detected, it is parsed. If it is a standby-mode instruction, the working mode is left unchanged and monitoring of voice instructions continues. If it is a tracking-mode instruction, the robot enters tracking mode: a deep learning method determines the target object and its position in the RGB-D image of the environment in front of the robot (in this embodiment the nearest human body is taken as the target object), Kalman filtering then predicts the target position at the next moment from the current position, and the target object is tracked in real time. If it is a following-mode instruction, the robot enters following mode: the target object and its position are determined in the same way, and its position at the next moment is predicted by Kalman filtering. Finally, following motion control is performed: the desired movement direction and desired movement speed of the robot are determined by the artificial potential field method from the predicted target position and the positions of the surrounding obstacles, and the robot is controlled to follow the target object accordingly.
The robot remains in the tracking mode or the following mode while detection of voice instructions is maintained. If no voice instruction is received, the working mode is unchanged; if a voice instruction is detected, it is parsed again and the working mode is updated.
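The mode-switching loop described above amounts to a small state machine. The sketch below is illustrative only; the command strings and step labels are assumptions, not the patent's vocabulary:

```python
from enum import Enum

class Mode(Enum):
    STANDBY = "standby"
    TRACKING = "tracking"
    FOLLOWING = "following"

# Voice commands mapped to working modes (command words are placeholders).
COMMANDS = {"standby": Mode.STANDBY,
            "track": Mode.TRACKING,
            "follow": Mode.FOLLOWING}

class FollowerRobot:
    def __init__(self):
        self.mode = Mode.STANDBY   # default mode after power-on

    def on_voice(self, command):
        """Parse a detected voice instruction; unrecognized commands leave
        the working mode unchanged, as in the flow of Fig. 7."""
        new_mode = COMMANDS.get(command)
        if new_mode is not None:
            self.mode = new_mode
        return self.mode

    def step(self):
        """One iteration of the main loop."""
        if self.mode is Mode.STANDBY:
            return "idle"              # low power: only listen for voice
        if self.mode is Mode.TRACKING:
            return "track_target"      # detect + Kalman predict, stay still
        return "follow_target"         # tracking plus potential-field motion
```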
The system uses deep learning and Kalman filtering to realize human target detection and tracking, and the artificial potential field method to realize robot motion planning with obstacle avoidance, ensuring good following performance for human targets in a variety of postures. The invention is simple in structure, low in cost, and highly extensible, and can serve as a general base platform for other multi-purpose indoor following robots.
Although the present application has been described with reference to a few embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
Claims (10)
1. An indoor following robot system, characterized by comprising a structure subsystem, and a human-computer voice interaction subsystem, a perception subsystem, an intelligent planning subsystem and an omnidirectional maneuvering control subsystem which are built on the structure subsystem;
the human-computer voice interaction subsystem is used for acquiring voice information and sending the voice information to the intelligent planning subsystem;
the perception subsystem is used for acquiring perception information, the perception information comprises RGB-D images of the environment in front of the robot and position information of obstacles around the robot, and the perception information is sent to the intelligent planning subsystem;
the intelligent planning subsystem is used for determining a robot working mode and carrying out follow-up motion control planning by combining the voice information and the perception information;
and the omnidirectional maneuvering control subsystem is used for realizing maneuvering control of the robot according to the following movement control plan.
2. The indoor follower robot system as defined in claim 1, further comprising a power subsystem built on the structural subsystem;
the power supply subsystem is used for supplying power to the robot;
the power supply subsystem comprises a power supply controller, a storage battery pack, a charging interface and a plurality of output interfaces, wherein the storage battery pack, the charging interface and the plurality of output interfaces are connected with the power supply controller;
and the power supply controller is used for combining the charging interface to charge the storage battery pack and combining the output interface to supply power to the robot or supply power to external expansion equipment by using the storage battery pack.
3. The indoor following robot system according to claim 1, wherein the perception subsystem comprises one RGB-D depth camera and a plurality of sonar sensors;
the RGB-D depth camera is used for collecting RGB-D images of the environment in front of the robot;
the sonar sensor is used for collecting position information of obstacles around the robot.
4. The indoor follower robot system of claim 1, wherein the intelligent planning subsystem comprises an artificial intelligence computing processor and an information resource extension interface leading from the artificial intelligence computing processor;
the artificial intelligence calculation processor is used for determining a robot working mode and carrying out follow-up motion control planning by combining the voice information and the perception information;
the information resource expansion interface is used for connecting external expansion equipment.
5. The indoor follower robot system of claim 2, wherein the structural subsystem comprises a bottom plate and a 4-wheel omnidirectional driving unit;
the 4-wheel omnidirectional driving unit is arranged on the lower surface of the bottom plate and used for moving according to the following motion control plan;
the 4-wheel omnidirectional driving unit comprises 4 sets of omnidirectional driving subunits;
the omnidirectional driving subunit comprises a parallelogram suspension, a stepping motor and an omnidirectional wheel;
one end of the parallelogram suspension is connected with the lower surface of the bottom plate, and the other end of the parallelogram suspension is connected with a driving shaft of the stepping motor;
the omnidirectional wheel is connected with a driving shaft of the stepping motor and moves under the driving of the stepping motor;
preferably, the omni-wheel is a Mecanum wheel.
6. The indoor follower robot system of claim 5, wherein the omnidirectional maneuver control subsystem comprises a master controller and a motor drive;
the main controller is wirelessly connected with the intelligent planning subsystem and is used for converting the following motion control planning into the maneuvering control of the 4-wheel omnidirectional driving unit;
the motor driver is connected with the main controller and used for driving the stepping motor according to the maneuvering control of the 4-wheel omnidirectional driving unit;
preferably, the wireless connection is a bluetooth connection.
7. The indoor follower robot system of claim 6, wherein the omnidirectional maneuver control subsystem further comprises a power conversion module;
the power supply conversion module is respectively connected with the power supply controller, the main controller, the motor driver and the stepping motor and is used for converting the voltage output by the storage battery into the voltage required by the main controller, the motor driver and the stepping motor.
8. An indoor following robot operation method, characterized by comprising the following steps:
acquiring voice information;
matching the voice information with predefined voice information, and determining the working mode of the robot according to the matching result; the working modes comprise a standby mode, a tracking mode and a following mode;
when the working mode is a standby mode, controlling the robot to enter a low power consumption state and only receiving the voice information;
when the working mode is a tracking mode, controlling the robot to collect RGB-D images of the front environment of the robot, acquiring the position of a target object in the RGB-D images, predicting the position of the target object at the next moment, and tracking the target object in real time;
when the working mode is a following mode, controlling the robot to collect an RGB-D image of the front environment of the robot, acquiring the position of a target object in the RGB-D image and predicting the position of the target object at the next moment;
acquiring position information of obstacles around the robot;
determining an expected movement direction and an expected movement speed of the robot by combining the position of the target object at the next moment and the position information of the obstacles around the robot, and controlling the robot to move along with the target object according to the expected movement direction and the expected movement speed;
preferably, the acquiring a position of a target object in the RGB-D image and predicting a position of the target object at a next time includes:
determining the position of a target object in the RGB-D image by using a deep learning method;
and predicting the position of the target object at the next moment by utilizing Kalman filtering in combination with the position of the target object.
9. The indoor following robot operation method according to claim 8, wherein the robot is controlled to follow the target object according to the desired movement direction and the desired movement speed, specifically:
calculating and obtaining target rotating directions and target rotating speeds of 4 omnidirectional wheels of the robot according to the expected moving direction and the expected moving speed by using an inverse kinematics method;
and controlling the 4 omnidirectional wheels to move according to the target rotating direction and the target rotating speed.
10. An operation method of an indoor following robot according to claim 8 or 9, wherein the acquiring of the obstacle position information around the robot is specifically:
acquiring obstacle position information around the robot by using an ultrasonic ranging method;
preferably, the determining the desired movement direction and the desired movement speed of the robot is specifically:
and determining the expected movement direction and the expected movement speed of the robot by using an artificial potential field method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010515112.6A CN111823228A (en) | 2020-06-08 | 2020-06-08 | Indoor following robot system and operation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010515112.6A CN111823228A (en) | 2020-06-08 | 2020-06-08 | Indoor following robot system and operation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111823228A true CN111823228A (en) | 2020-10-27 |
Family
ID=72898438
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010515112.6A Pending CN111823228A (en) | 2020-06-08 | 2020-06-08 | Indoor following robot system and operation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111823228A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113433949A (en) * | 2021-07-19 | 2021-09-24 | 北京云迹科技有限公司 | Automatic following object conveying robot and object conveying method thereof |
CN113467462A (en) * | 2021-07-14 | 2021-10-01 | 中国人民解放军国防科技大学 | Pedestrian accompanying control method and device for robot, mobile robot and medium |
CN113741458A (en) * | 2021-09-03 | 2021-12-03 | 北京易航远智科技有限公司 | Robot on-site help following or gesture guiding driving method and system |
CN115129049A (en) * | 2022-06-17 | 2022-09-30 | 广东工业大学 | Mobile service robot path planning system and method with social awareness |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107139179A (en) * | 2017-05-26 | 2017-09-08 | 西安电子科技大学 | A kind of intellect service robot and method of work |
CN207164583U (en) * | 2017-04-24 | 2018-03-30 | 武汉理工大学 | Indoor following device based on Mecanum wheel |
CN108536145A (en) * | 2018-04-10 | 2018-09-14 | 深圳市开心橙子科技有限公司 | A kind of robot system intelligently followed using machine vision and operation method |
CN110103237A (en) * | 2019-05-13 | 2019-08-09 | 湖北经济学院 | The follower type robot Fellow of view-based access control model target following |
EP3597375A1 (en) * | 2018-06-11 | 2020-01-22 | Kabushiki Kaisha Toyota Jidoshokki | Autonomous cart |
CN111091088A (en) * | 2019-12-12 | 2020-05-01 | 中国人民解放军战略支援部队航天工程大学 | Video satellite information supported marine target real-time detection positioning system and method |
Non-Patent Citations (3)
Title |
---|
Li Weiguo et al. (eds.): 《创意之星:模块化机器人设计与竞赛》 (Innovation Star: Modular Robot Design and Competition), Beihang University Press, 30 September 2016 *
Xue Songdong: 《群机器人协调控制》 (Coordinated Control of Swarm Robots), Beijing Institute of Technology Press, 30 November 2016 *
Chen Mengyuan: 《移动机器人SLAM、目标跟踪及路径规划》 (Mobile Robot SLAM, Target Tracking and Path Planning), Beihang University Press, 31 December 2017 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111823228A (en) | Indoor following robot system and operation method | |
CN102495632B (en) | Movement platform based on omnidirectional driving of ball wheels | |
CN106325267A (en) | Omnidirectional mobile platform vehicle with automatic line patrolling and obstacle avoiding functions | |
CN109866936B (en) | Unmanned aerial vehicle landing and recovery integrated mobile platform based on six-degree-of-freedom series-parallel mechanism | |
CN106774318B (en) | Multi-agent interactive environment perception and path planning motion system | |
CN101817182A (en) | Intelligent moving mechanical arm control system | |
US11633848B2 (en) | Independent pan of coaxial robotic arm and perception housing | |
WO2022016754A1 (en) | Multi-machine cooperative vehicle washing system and method based on unmanned vehicle washing device | |
CN206833250U (en) | A kind of unmanned investigation dolly based on laser radar | |
CN103318167A (en) | Intelligent air cushion transfer vehicle and control method thereof | |
CN103914072B (en) | A kind of novel detection robot | |
CN212683967U (en) | Autonomous mobile robot control system and robot | |
CN201625982U (en) | Intelligent mobile mechanical arm control system | |
CN211590199U (en) | Pipeline robot based on vision SLAM | |
CN210198395U (en) | Unmanned aerial vehicle and unmanned vehicle cooperative navigation system | |
CN116985090A (en) | Intelligent garbage sorting robot | |
CN218398132U (en) | Indoor multifunctional operation robot of transformer substation | |
CN111367273A (en) | Unmanned small-sized sweeping machine control system based on path tracking and control method thereof | |
CN216265979U (en) | Indoor autonomous mobile robot | |
CN107272725B (en) | Spherical robot motion control system with visual feedback and motion control method | |
CN212322113U (en) | Trolley obstacle avoidance system based on laser radar | |
CN211906081U (en) | Unmanned small-sized sweeping machine control system based on path tracking | |
CN114137992A (en) | Method and related device for reducing shaking of foot type robot | |
CN203825464U (en) | Novel detection robot | |
CN112882475A (en) | Motion control method and device of Mecanum wheel type omnibearing mobile robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201027 |