CN107544482B - Automatic distribution robot system facing medical environment - Google Patents


Info

Publication number
CN107544482B
CN107544482B (application CN201710668970.2A)
Authority
CN
China
Prior art keywords
control module
module
robot
main control
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710668970.2A
Other languages
Chinese (zh)
Other versions
CN107544482A (en)
Inventor
禹鑫燚
朱熠琛
欧林林
卢靓
朱峰
郭永奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201710668970.2A priority Critical patent/CN107544482B/en
Publication of CN107544482A publication Critical patent/CN107544482A/en
Application granted granted Critical
Publication of CN107544482B publication Critical patent/CN107544482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

An automatic delivery robot system for medical environments comprises a user operation module, a keyboard control module, a sensing module, a main control module and a motion module. A user selects the delivery task to be completed through the operation interface of the user operation module; the main control module generates a running path for the robot according to the user's task requirements and sends it to the motion module; the motion module tracks this path, and during tracking uses data from the sensing module's wheel encoder and laser radar to avoid dynamic obstacles, thereby completing the delivery task. In an emergency, the user can start the keyboard control module to control the robot's motion manually.

Description

Automatic distribution robot system facing medical environment
Technical Field
The invention relates to robot systems for automatic delivery, and in particular to an automatic delivery robot system oriented to medical environments.
Background
With the continuous improvement of medical care and the gradual refinement of the medical insurance system, more and more patients choose to see doctors at large hospitals, which steadily increases the workload of their medical staff. Nurses, as the primary caregivers, are not only responsible for intravenous injections, routine health-index measurements and similar work, but must also complete low-skill, repetitive tasks such as delivering medicine and meals to multiple wards, patrolling, and even clearing medical waste. For some highly contagious diseases, such as SARS in 2003, nursing work exposes nurses to a significant risk of infection. If this low-skill, repetitive work were completed by robots, the workload of nurses and the risk of cross-infection could be greatly reduced, nursing efficiency improved, and the development of intelligent hospital care promoted.
To address these problems, more and more hospitals are introducing various medical service robots. On the one hand, however, conventional medical robots are designed for single-task execution, with a walking path that simply runs from point A to point B. When faced with multi-point task requirements under ordering constraints, a conventional medical service robot system can therefore only deploy several robots at once, which may leave too few robots available or cause congestion and blockage in the corridors. Zhangwei, Wangxuifang, Chentao et al. proposed a path planning method based on a searchable continuous-neighborhood A* algorithm (Zhangwei, Wangxuifang, Chentao, Tengben, Li Juan, Zhe Ping. A path planning method based on a searchable continuous-neighborhood A* algorithm [P]. Heilongjiang: CN106441303A, 2017-02-22); it plans robot paths with the A* algorithm and cannot generate a globally optimal path for complex tasks. On the other hand, conventional medical robots use a camera for obstacle avoidance; a depth-camera-based temporary obstacle avoidance method for robots has been proposed (distance flood, king fukui, zheng zili, chen nan, lupeue. Depth-camera-based robot temporary obstacle avoidance method [P]. Sichuan: CN106054900A, 2016-10-26), but when facing a dynamic obstacle such a system responds slowly and easily collides with the obstacle.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned problems of the prior art and providing an automatic delivery robot system for medical environments.
Firstly, the system has a user-friendly operation interface, so that an operator can conveniently and quickly specify complex delivery tasks. Secondly, the system provides a keyboard control module, so that an operator can manually control the robot in an emergency. Thirdly, the main control module can generate a globally optimal path based on a linear temporal logic algorithm. Finally, the system avoids dynamic obstacles based on the laser radar, and the obstacle avoidance is stable and reliable.
The technical scheme adopted by the invention for solving the problems in the prior art is as follows:
An automatic delivery robot system oriented to a medical environment, characterized in that: PC-side software is installed on the user's Linux computer, a laser radar sensor is connected to the PC by wired USB, and a robot base is connected to the PC by wired USB.
The PC-side software comprises, in order, a user operation module, a main control module and a keyboard control module; the laser radar includes the sensing module, and the robot base includes the motion module. The main control module sends a start instruction to the user operation module, the keyboard control module and the sensing module so that they enter the working state. The main control module sends the robot's expected path to the motion module, so that the motion module performs trajectory tracking along the expected path; the user operation module sends the user's operation requirements to the main control module; the sensing module sends the sensor data to the main control module; and the keyboard control module sends robot motion instructions to the main control module.
The specific structure of each module is as follows:
The user operation module is user operation software with good human-machine interaction; the user operation module enters the working state after receiving a start instruction from the main control module; the user manually inputs a task operation instruction on the software operation interface, which is sent to the main control module in the form of a linear temporal logic formula; the user operation module offers three selectable task modes:
1) traversal tasks: food delivery/medicine delivery tasks for a plurality of wards;
2) sequential tasks: food delivery/medicine delivery tasks carried out in sequence according to ward requirements;
3) combined tasks: food delivery/medicine delivery is first carried out for a plurality of wards, then garbage collection is executed for some of the wards, and finally the garbage is taken back to the garbage station.
The keyboard control module is a control module which enables the robot to move towards any direction; the keyboard control module enters a working state after receiving a starting instruction from the main control module; the keyboard control module receives an operation instruction from a user and finally transmits an operation requirement of the user to the main control module; after a user presses keys corresponding to different motion modes of the robot, the keyboard control module transmits the operation requirements of the user to the main control module in a character mode; the keyboard control module has nine motion modes: left forward, straight forward, right forward, counterclockwise, stop, clockwise, left rear, straight rear and right rear.
The sensing module is a module responsible for collecting sensor data; the sensing module enters a working state after receiving a starting instruction from the main control module; the sensing module simultaneously receives odometer data from a wheel type encoder on a robot base and scanning data of angles and distances from a laser radar and then transmits the odometer data and the scanning data to the main control module in real time; and the obtained data of the sensing module is transmitted to the main control module in a byte stream mode.
The main control module receives data from the user operation module, the keyboard control module and the sensing module at the same time, and sends linear velocity and angular velocity instructions to the motion module after processing; after receiving the data from the keyboard control module, the main control module directly sends linear velocity and angular velocity instructions to the motion module; after receiving data from the user operation module, the main control module waits for the data from the sensor module, determines the position of the robot through the sensor data based on the self-adaptive Monte Carlo algorithm, performs path planning based on the linear time sequence logic algorithm, and finally sends the path planning result to the motion module: firstly, constructing a distribution environment as a weighting switching system in a finite state; describing task requirements by using a linear time sequence task formula, and converting the task requirements into a chart form through an LTL2BA toolkit; then, the switching system and the Buchi automaton are subjected to Cartesian product to construct a task feasible network topology; searching out an optimal path on the task feasible network topology by combining an A-algorithm; then mapping the path obtained by optimizing the task feasible network topology back to the switching system to obtain the corresponding optimal path in the environment; and finally, sending the path planning result to the motion module.
The motion module is used for controlling the robot to move along the track of the path planning result and avoiding the dynamic barrier in real time; the motion module receives a path planning result from the main control module, and after the path planning result is processed by a microcomputer carried by the robot base, the robot is controlled to move along the path planning requirement of the main control module, and a dynamic barrier is avoided in the moving process: firstly, acquiring obstacle information in real time by using a laser radar: defining a safe distance L of a mobile robotsData D of laser radarL,DL={LnL n ∈ [0,360) }, where LnRepresenting the obstacle distance in the direction of angle n. Firstly, detecting whether an obstacle exists in a safe distance; if an obstacle appears within a safe distance, then L is utilizedsScreening all points within safe distance, and recording as DS,DS={Ln|Ln<Ls,Ln∈DL}; recalculating the orientation of the obstacle: calculating the direction and the distance of the obstacle by using obstacle information acquired by a laser radar, and abstracting the obstacle into a mass point; defining radius R of a mobile robots(ii) a First, calculate the weight W of each lidar data point,
Figure GDA0002589534110000031
where kw is a constant coefficient. Second, calculate the weighted average of Ln over DS. Define Le as the weighted average of the obstacle distance:
Le = Σ(Wn·Ln) / Σ(Wn), with the sums taken over the points in DS,
and define θe as the weighted average of the obstacle direction:
θe = Σ(Wn·n) / Σ(Wn), with the sums taken over the points in DS.
Finally, θe and Le describe the orientation of the obstacle. Next, design the obstacle avoidance controller: define the current heading θR of the mobile robot and the angle between the mobile robot and the obstacle
θΔ = θR − θe.
The kinematic model of the invention is a two-wheel differential-drive mobile robot whose motion is controlled by the linear velocity v and the angular velocity ω; the obstacle avoidance controller is then designed as follows:
Figure GDA0002589534110000044
Figure GDA0002589534110000045
where kv, kω, kvr and kωr are all constant coefficients. Next, design the motion controller: if the laser radar does not detect an obstacle, the mobile robot is controlled to move toward the target point; the motion controller designed by the invention is:
ν=VmaxS(d)cosφ,
Figure GDA0002589534110000046
Figure GDA0002589534110000047
where d is the distance between the robot and the target point, φ is the angle between the heading of the mobile robot and the line connecting the robot and the target point, Vmax is the maximum speed of the mobile robot, K is a constant coefficient, and c is the deceleration distance to the target point. Switching between the obstacle avoidance controller and the motion controller: if an obstacle is detected, and
Figure GDA0002589534110000048
then the obstacle avoidance controller is used to avoid the obstacle; otherwise the motion controller is used to drive the mobile robot to the target point. At the same time, the maximum linear acceleration av and the maximum angular acceleration aω of the mobile robot are limited.
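A minimal Python sketch of this decision loop follows. The weight and weighted-average expressions follow the reconstructed formulas above; the proportional controller forms, the gains, the saturation profile S(d) and the heading term are illustrative assumptions, since the patent gives its exact controller expressions only as formula images.

import math

L_S = 0.5   # safe distance Ls (m)
R_S = 0.2   # robot radius Rs (m)
K_W = 1.0   # weight coefficient kw

def obstacle_estimate(scan):
    """scan: {angle_deg: distance_m} for one lidar revolution.
    Returns (theta_e, L_e) for points inside the safe distance, or None."""
    near = {a: d for a, d in scan.items() if d < L_S}
    if not near:
        return None
    w = {a: K_W / (d - R_S) for a, d in near.items()}    # Wn = kw / (Ln - Rs)
    wsum = sum(w.values())
    l_e = sum(w[a] * near[a] for a in near) / wsum        # weighted obstacle distance
    theta_e = sum(w[a] * a for a in near) / wsum          # weighted obstacle direction
    return theta_e, l_e

def control_step(scan, d_goal, phi_goal, theta_r=180.0):
    """One control cycle: obstacle-avoidance controller if an obstacle lies
    inside Ls, otherwise the motion controller toward the target point.
    Gains and the S(d) ramp below are assumptions, not the patent's values."""
    est = obstacle_estimate(scan)
    if est is not None:
        theta_e, l_e = est
        theta_delta = theta_r - theta_e                   # angle robot-obstacle
        v = 0.1 * max(l_e - R_S, 0.0)                     # illustrative kv term
        omega = 0.2 * math.radians(theta_delta)           # illustrative k_omega term
        return v, omega
    v_max, k, c = 0.2, 0.05, 0.5
    s_d = min(d_goal / c, 1.0)                            # assumed ramp: S(d)=1 for d>=c
    v = v_max * s_d * math.cos(phi_goal)                  # v = Vmax*S(d)*cos(phi)
    omega = k * phi_goal                                  # illustrative heading term
    return v, omega

print(control_step({a: 10.0 for a in range(360)}, d_goal=2.0, phi_goal=0.0))

With no obstacle inside Ls the sketch falls through to the motion controller, which is the switching behaviour described above.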
The invention has the advantages and positive effects that:
Compared with a traditional task assignment interface, the task assignment interface of the system is simpler to operate and offers three different working modes. The additional keyboard control module allows an operator to take manual control of the robot in an emergency. Compared with a traditional robot system that plans paths with the A* algorithm alone, the main control module can handle complex tasks and generates a globally optimal path based on a linear temporal logic algorithm. The system avoids dynamic obstacles based on the laser radar; compared with a traditional robot system that uses a camera for dynamic obstacle avoidance, it responds faster to dynamic obstacles and its obstacle avoidance is more stable and reliable.
Drawings
FIG. 1 is a schematic diagram of the platform of the present invention.
Fig. 2 is a control schematic block diagram of the present invention.
FIG. 3a is a user interface module according to the present invention.
FIG. 3b is a keyboard control module interface of the present invention.
FIG. 4a is a flow chart of the linear sequential logic algorithm of the present invention.
FIG. 4b is a switching system in the linear sequential logic algorithm of the present invention.
FIG. 4c is a Buchi automaton in the linear sequential logic algorithm of the present invention.
FIG. 4d is a Product automaton in the linear sequential logic algorithm of the present invention.
FIG. 5 is a flow chart of the motion module algorithm of the present invention.
Detailed Description
The following examples are further detailed in conjunction with the accompanying drawings:
An automatic delivery robot system oriented to a medical environment is shown in FIG. 1; it mainly consists of a Lenovo ThinkPad computer 1, an iRobot Create 2 mobile robot base 2 and a Silan RPLIDAR-A2 laser radar 3. The Lenovo ThinkPad computer 1 is mounted above the iRobot Create 2 mobile robot base 2, and the two are connected by USB; the Silan RPLIDAR-A2 laser radar 3 is mounted above the Lenovo ThinkPad computer 1, and the two are connected by USB.
With reference to fig. 2, 3 and 4, the embodiments of the present invention are as follows:
after the SMALL RPLIDAR-A2 laser radar and the Irobot-create 2 mobile robot base are respectively connected with the association thinpad computer through the USB, the association thinpad computer is started, and the user can operate the robot system.
The user operation module enters the working state after receiving a start instruction from the main control module; the user manually inputs a task operation instruction on the software operation interface, and it is sent to the main control module in the form of a linear temporal logic formula through the ROS communication mechanism. The user operation module offers three selectable task modes:
1) traversal tasks: food delivery/medicine delivery tasks for a plurality of wards;
2) sequential tasks: food delivery/medicine delivery tasks carried out in sequence according to ward requirements;
3) combined tasks: food delivery/medicine delivery is first carried out for a plurality of wards, then garbage collection is executed for some of the wards, and finally the garbage is taken back to the garbage station.
In the user operation module, if the robot is required first to complete food and medicine delivery to ward 03 and ward 09, then to complete garbage collection for ward 05 and ward 08, and finally to return to ward 01, the specific operating steps are as follows:
1) first click the number 1 representing the first priority, select ward 03 and ward 09, and click Confirm;
2) then click the number 2 representing the second priority, select ward 05 and ward 08, and click Confirm;
3) then click the number 3 representing the third priority, select ward 01, and click Confirm;
4) then click the Start button;
5) finally, the user operation module converts the user's operations into a linear temporal logic formula:
Figure GDA0002589534110000061
U(Fp5∧Fp8)))∧Fp3∧Fp9∧Fp5∧Fp8∧GFp1
and sends this linear temporal logic formula to the main control module. Here the symbols ∧ (and) and ¬ (not) are standard Boolean connectives, while F (finally), G (always), X (next) and U (until) are temporal operators. For example, Fq0 means that q0 eventually becomes true, G¬q4 means that q4 is always avoided, Xq3 means that q3 is true in the next state of the system, and q5 U q6 means that q5 must hold until q6 becomes true.
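As an illustration of how the priority selections above could be assembled into such a formula, a minimal Python sketch follows. The helper name and the exact string layout are assumptions for illustration only; the notation keeps the F/G/U symbols used in this description, whereas the LTL2BA toolkit itself expects the equivalent <>, [] and U operators.

def build_delivery_formula(priorities):
    """priorities: ward propositions grouped by priority, highest first,
    e.g. [["p3", "p9"], ["p5", "p8"], ["p1"]] for the example above."""
    clauses = []
    # every selected ward must eventually be visited
    for group in priorities:
        clauses += [f"F {p}" for p in group]
    # a lower-priority ward may not be entered before every ward of the
    # previous priority group has been served (until-ordering)
    for i in range(1, len(priorities)):
        earlier = " & ".join(f"F {p}" for p in priorities[i - 1])
        for p in priorities[i]:
            clauses.append(f"(!{p} U ({earlier}))")
    # the robot finally keeps returning to the last selected location
    clauses.append(f"G F {priorities[-1][0]}")
    return " & ".join(clauses)

print(build_delivery_formula([["p3", "p9"], ["p5", "p8"], ["p1"]]))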
The keyboard control module enters the working state after receiving a start instruction from the main control module; the keyboard control module receives operation instructions from the user and forwards the user's operation requirements to the main control module. After the user presses the key corresponding to a motion mode of the robot, the keyboard control module transmits the user's operation requirement to the main control module in character form through the ROS communication mechanism. The keyboard control module has nine motion modes: forward-left, straight forward, forward-right, counterclockwise rotation, stop, clockwise rotation, backward-left, straight backward and backward-right. The keys 'U', 'I', 'O', 'J', 'K', 'L', 'M', ',' and '.' respectively command the robot to move forward to the left, move straight forward, move forward to the right, rotate counterclockwise, stop, rotate clockwise, move backward to the left, move straight backward and move backward to the right.
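A small sketch of the key-to-command mapping this paragraph describes; the concrete linear and angular velocity values are illustrative assumptions (the text fixes only the keys and the nine motion modes), and in the actual system the resulting pair would be forwarded to the main control module over the ROS communication mechanism.

# (linear velocity m/s, angular velocity rad/s) per key; values are illustrative
KEY_TO_COMMAND = {
    'u': ( 0.2,  0.5),   # move forward to the left
    'i': ( 0.2,  0.0),   # move straight forward
    'o': ( 0.2, -0.5),   # move forward to the right
    'j': ( 0.0,  0.5),   # rotate counterclockwise
    'k': ( 0.0,  0.0),   # stop
    'l': ( 0.0, -0.5),   # rotate clockwise
    'm': (-0.2,  0.5),   # move backward to the left
    ',': (-0.2,  0.0),   # move straight backward
    '.': (-0.2, -0.5),   # move backward to the right
}

def on_key(ch):
    """Return the (v, w) pair to forward to the main control module."""
    return KEY_TO_COMMAND.get(ch.lower(), (0.0, 0.0))

print(on_key('U'))   # (0.2, 0.5): forward-left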
The sensing module enters the working state after receiving a start instruction from the main control module; the sensing module simultaneously receives data from the wheel encoder on the robot base and scan data from the laser radar, and transmits them to the main control module in real time. The data obtained by the sensing module are transmitted to the main control module as a byte stream through the ROS communication mechanism. The information transmitted by the laser radar sensor is formatted as the radar bearing angle and the obstacle distance at that angle, for example: (22°, 0.22 m) (23°, 0.24 m) (24°, 0.25 m). The information transmitted by the wheel encoder is the distance travelled by the left and right wheels of the iRobot Create 2 mobile robot base, for example: (0.12 m, 0.15 m).
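For the wheel-encoder message, the usual way to turn such a (left, right) travel pair into an odometry pose update is a differential-drive integration step; the sketch below shows that computation under an assumed wheel separation, which the text does not state.

import math

WHEEL_SEPARATION = 0.235  # metres; assumed value for the base, not given in the text

def integrate_odometry(x, y, theta, d_left, d_right):
    """Update the pose (x, y, theta) from one (left, right) wheel-travel sample,
    e.g. the (0.12 m, 0.15 m) pair mentioned above."""
    d_center = (d_left + d_right) / 2.0               # distance travelled by the base centre
    d_theta = (d_right - d_left) / WHEEL_SEPARATION   # heading change
    x += d_center * math.cos(theta + d_theta / 2.0)   # midpoint integration
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

print(integrate_odometry(0.0, 0.0, 0.0, 0.12, 0.15))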
After the main control module receives character data from the keyboard control module, it directly sends linear-velocity and angular-velocity commands to the motion module. After receiving the linear temporal logic formula from the user operation module, the main control module waits for the wheel-encoder odometry data and the laser radar angle-and-distance data from the sensing module, determines the position of the robot from the sensor data using the adaptive Monte Carlo localization algorithm, performs path planning based on the linear temporal logic algorithm, and finally sends the path planning result to the motion module. The algorithm flow is shown in FIG. 4a, and the specific steps are as follows:
1) The delivery environment is first modelled as a finite-state weighted switching system, as shown in FIG. 4b, where P0, P1, P2 and P3 are the states of the switching system, the arrows represent transitions and the numbers represent transition weights. The switching system is represented by a tuple T = (QT, q0, →T, ΠT, LT, WT), where: QT is a finite state set representing the set of nodes in the switching system; q0 ∈ QT is the initial state, representing the initial position in the actual environment; →T ⊆ QT × QT is the transition relation between nodes, i.e. the connectivity between the states of QT; ΠT is a finite set of atomic propositions; LT: QT → 2^ΠT assigns to each state of QT the atomic propositions it satisfies, i.e. the task that the medical robot must complete when it drives to the corresponding task node; and WT: →T → R+ is a positive weight function whose value represents the cost of switching between the states of QT;
2) The task requirement is described with a linear temporal logic formula and converted into graph form by the LTL2BA toolkit. For example, the task requirement that the medical robot traverse p1 and p2 and finally return to the p3 task point is converted, using linear temporal logic, into the formula:
φ = Fp1 ∧ Fp2 ∧ GFp3
the converted Buchi automaton is shown in FIG. 4 c;
3) Then the Cartesian product of the switching system and the Buchi automaton is taken to construct a task-feasible network topology, as shown in FIG. 4d, where the S0 state in the first row is the initial state and the S4 state in the last row is the final accepting state;
4) The A* algorithm is then used to search for an optimal path on the task-feasible network topology, as shown in FIG. 4d, where the solid arrows show the path obtained by optimizing over the task-feasible network topology with the A* algorithm, namely (p0, s0) → (p2, s0) → (p1, s1) → (p3, s3); the self-loop corresponding to the GFp3 part of the transition between states S3 and S4 can therefore be neglected. It can be seen from the figure that the total weight of this path is minimal, so the planned path is optimal (a sketch of this product-and-search step follows after step 6);
5) The path obtained by optimizing over the task-feasible network topology is then mapped back to the switching system to obtain the corresponding optimal path in the environment, i.e. the optimal path in the actual medical environment;
6) Finally, the path planning result is sent to the motion module: the optimal path of step 4, (p0, s0) → (p2, s0) → (p1, s1) → (p3, s3), is sent to the motion module.
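A compact sketch of steps 1) to 4): build the product of a weighted switching system and a Buchi automaton, then search it with a uniform-cost search (A* with a zero heuristic). The transition weights and Buchi transitions below are assumptions loosely modelled on FIG. 4b-4d, not the actual figures, and the search only looks for a shortest prefix reaching an accepting state, which is a simplification of the full LTL acceptance condition.

import heapq

def product_automaton(ts_edges, ts_labels, buchi_edges):
    """Cartesian product of a weighted switching system and a Buchi automaton.
    ts_edges: {(p, q): weight}; ts_labels: {state: proposition satisfied there};
    buchi_edges: {(s, proposition): next Buchi state}."""
    buchi_states = {s for s, _ in buchi_edges} | set(buchi_edges.values())
    prod = {}
    for (p, q), w in ts_edges.items():
        for s in buchi_states:
            s2 = buchi_edges.get((s, ts_labels[q]))
            if s2 is not None:
                prod[((p, s), (q, s2))] = w
    return prod

def shortest_accepting_prefix(prod_edges, start, accepting):
    """Uniform-cost search (A* with a zero heuristic) to an accepting state."""
    frontier = [(0, [start])]
    visited = set()
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node[1] in accepting:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for (u, v), w in prod_edges.items():
            if u == node and v not in visited:
                heapq.heappush(frontier, (cost + w, path + [v]))
    return None

# Toy example (weights and Buchi transitions are assumptions, not Fig. 4b/4c);
# it reproduces the route (p0,s0) -> (p2,s0) -> (p1,s1) -> (p3,s3) quoted in step 4.
ts_edges = {("p0", "p1"): 3, ("p0", "p2"): 1, ("p2", "p1"): 1,
            ("p1", "p3"): 1, ("p2", "p3"): 3, ("p3", "p0"): 2}
ts_labels = {"p0": "none", "p1": "p1", "p2": "p2", "p3": "p3"}
buchi_edges = {("s0", "p2"): "s0", ("s0", "p1"): "s1", ("s1", "p3"): "s3"}
prod = product_automaton(ts_edges, ts_labels, buchi_edges)
print(shortest_accepting_prefix(prod, ("p0", "s0"), {"s3"}))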
And the motion module is a module for controlling the robot to move along the track of the path planning result and avoiding the dynamic barrier in real time. The motion module receives a path planning result from the main control module, and after the path planning result is processed by a microcomputer carried by the robot base, the motion module controls the robot to move along the path planning requirement of the main control module and avoids dynamic obstacles in the moving process, and an algorithm flow chart is shown in fig. 5, and the specific steps are as follows:
1) The laser radar records the obstacle distance data DL at each angle over one scanning period; DL contains 360 values, from 0° to 359°, recording the obstacle distance in each direction in turn. If the obstacle distance exceeds the measuring range of the laser radar, the obstacle distance in that direction is taken as infinite. The safe distance of the mobile robot is set to Ls = 0.5 m. If the current laser radar data contain a value smaller than the safe distance, the obstacle point set DS is constructed:
DS = {(180°, 0.49), (181°, 0.47), (182°, 0.45), (183°, 0.43), (184°, 0.41), (185°, 0.43), (186°, 0.45), (187°, 0.47), (188°, 0.49)}
If no obstacle is detected, the motion controller is executed.
2) The size radius of the mobile robot is Rs = 0.2 m. First, the weight Wn of each lidar data point is calculated,
Wn = kw / (Ln − Rs),
with kw = 1. If an obstacle is detected, the set DS from step 1 is used to calculate each weight Wn:
W = {(180°, 3.45), (181°, 3.70), (182°, 4.00), (183°, 4.35), (184°, 4.76), (185°, 4.35), (186°, 4.00), (187°, 3.70), (188°, 3.45)}
Next, the weighted average of Ln over DS is calculated. Le is the weighted average of the obstacle distances,
Le = Σ(Wn·Ln) / Σ(Wn),
and θe is the weighted average of the obstacle directions,
θe = Σ(Wn·n) / Σ(Wn).
Finally, θe and Le describe the orientation of the obstacle (these values are reproduced in the short check after step 5).
3) Because the laser radar is rigidly mounted on the mobile robot, the current heading of the mobile robot is always θR = 180°, and the angle between the mobile robot and the obstacle is
θΔ = θR − θe.
The kinematic model of the invention is a two-wheel differential-drive mobile robot whose motion is controlled by the linear velocity v and the angular velocity ω; the obstacle avoidance controller is then designed as follows:
Figure GDA0002589534110000095
Figure GDA0002589534110000096
where kv = 0.1, kω = 0.2, kvr = 0.1 and kωr = 0.01.
4) If the laser radar does not detect an obstacle, the mobile robot is controlled to move to the target point. Let Vmax = 0.2, K = 0.05, c = 0.5, and the current d = 2,
Figure GDA0002589534110000097
ν=VmaxS(d)cosφ=0.141,
Figure GDA0002589534110000098
S(d)=1
5) With the obstacle avoidance controller and the motion controller designed, the two controllers must be used appropriately. If an obstacle is detected, and
Figure GDA0002589534110000099
then the obstacle avoidance controller is used to avoid the obstacle; otherwise the motion controller is used to drive the mobile robot to the target point. At the same time, the maximum linear acceleration of the mobile robot is limited to av = 0.2 and the maximum angular acceleration to aω = 0.2. Since the current obstacle direction is −4°, obstacle avoidance control is executed. Through continuous iterative control, the mobile robot finally avoids the obstacle and reaches the target point.
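The numbers quoted in steps 2) and 4) can be reproduced from the reconstructed formulas; the short check below assumes Wn = kw/(Ln − Rs), a heading error of φ = π/4 for the v = 0.141 value (the text gives only the result), and a 0.1 s control period for the acceleration limit.

import math

DS = {180: 0.49, 181: 0.47, 182: 0.45, 183: 0.43, 184: 0.41,
      185: 0.43, 186: 0.45, 187: 0.47, 188: 0.49}
kw, Rs = 1.0, 0.2

W = {a: kw / (d - Rs) for a, d in DS.items()}           # e.g. W[184] = 1/0.21 = 4.76
theta_e = sum(W[a] * a for a in DS) / sum(W.values())   # weighted direction = 184.0
theta_delta = 180.0 - theta_e                           # robot heading 180 deg -> -4 deg

v = 0.2 * 1.0 * math.cos(math.pi / 4)                   # Vmax*S(d)*cos(phi) = 0.141
a_v, dt = 0.2, 0.1                                      # max linear acceleration, assumed period
v_first_step = min(v, 0.0 + a_v * dt)                   # first command capped by the limit

print(round(W[184], 2), round(theta_delta, 1), round(v, 3), round(v_first_step, 3))
# -> 4.76 -4.0 0.141 0.02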
It is emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention includes, but is not limited to, the embodiments described in the detailed description, as well as other embodiments that fall within the scope of the appended claims, as may be derived from the teachings of the present invention.

Claims (1)

1. An automatic delivery robot system oriented to a medical environment, characterized in that: PC-side software is installed on the user's Linux computer, a laser radar sensor is connected to the PC by wired USB, and a robot base is connected to the PC by wired USB;
the PC-side software comprises, in order, a user operation module, a main control module and a keyboard control module; the laser radar includes the sensing module, and the robot base includes the motion module; the main control module sends a start instruction to the user operation module, the keyboard control module and the sensing module so that they enter the working state; the main control module sends the robot's expected path to the motion module, so that the motion module performs trajectory tracking along the expected path; the user operation module sends the user's operation requirements to the main control module; the sensing module sends the sensor data to the main control module; the keyboard control module sends robot motion instructions to the main control module;
the specific structure of each module is as follows:
the user operation module is user operation software with good man-machine interaction; the user operation module enters a working state after receiving a starting instruction from the main control module; a user manually inputs a task operation instruction on a software operation interface and sends the task operation instruction to a main control module in a linear time sequence logic formula; the user operation module has three task modes which can be selected:
1) traversal tasks: food delivery/medicine delivery tasks for a plurality of wards;
2) sequential tasks: food delivery/medicine delivery tasks carried out in sequence according to ward requirements;
3) combined tasks: food delivery/medicine delivery is first carried out for a plurality of wards, then garbage collection is executed for some of the wards, and finally the garbage is taken back to the garbage station;
the keyboard control module is a control module which enables the robot to move towards any direction; the keyboard control module enters a working state after receiving a starting instruction from the main control module; the keyboard control module receives an operation instruction from a user and finally transmits an operation requirement of the user to the main control module; after a user presses keys corresponding to different motion modes of the robot, the keyboard control module transmits the operation requirements of the user to the main control module in a character mode; the keyboard control module has nine motion modes: left front forward, straight forward, right front forward, counter clockwise rotation, stop, clockwise rotation, left rear backward, straight backward and right rear backward;
the sensing module is a module responsible for collecting sensor data; the sensing module enters a working state after receiving a starting instruction from the main control module; the sensing module simultaneously receives odometer data from a wheel type encoder on a robot base and scanning data of angles and distances from a laser radar and then transmits the odometer data and the scanning data to the main control module in real time; the obtained data of the sensing module is transmitted to the main control module in a byte stream mode;
the main control module simultaneously receives data from the user operation module, the keyboard control module and the sensing module, and after processing sends linear-velocity and angular-velocity commands to the motion module; after receiving data from the keyboard control module, the main control module directly sends linear-velocity and angular-velocity commands to the motion module; after receiving data from the user operation module, the main control module waits for data from the sensing module, determines the position of the robot from the sensor data using the adaptive Monte Carlo localization algorithm, performs path planning based on the linear temporal logic algorithm, and finally sends the path planning result to the motion module: first, the delivery environment is modelled as a finite-state weighted switching system; the task requirement is described with a linear temporal logic formula and converted into graph form by the LTL2BA toolkit; the Cartesian product of the switching system and the Buchi automaton is then taken to construct a task-feasible network topology; an optimal path is searched for on the task-feasible network topology with the A* algorithm; the path obtained by optimizing over the task-feasible network topology is then mapped back to the switching system to obtain the corresponding optimal path in the environment; finally, the path planning result is sent to the motion module;
the motion module is used to control the robot to track the path given by the path planning result and to avoid dynamic obstacles in real time; the motion module receives the path planning result from the main control module, and after this result is processed by the microcomputer carried on the robot base, the robot is controlled to move along the path planned by the main control module while avoiding dynamic obstacles: first, obstacle information is acquired in real time with the laser radar: define the safe distance Ls of the mobile robot and the laser radar data DL, DL = {Ln | n ∈ [0, 360)}, where Ln denotes the obstacle distance in the direction of angle n; first detect whether an obstacle lies within the safe distance; if an obstacle appears within the safe distance, use Ls to screen out all points within the safe distance and record them as DS, DS = {Ln | Ln < Ls, Ln ∈ DL}; then calculate the orientation of the obstacle: the direction and distance of the obstacle are computed from the obstacle information acquired by the laser radar, and the obstacle is abstracted as a mass point; define the radius Rs of the mobile robot; first, calculate the weight Wn of each lidar data point,
Wn = kw / (Ln − Rs),
where kw is a constant coefficient; second, calculate the weighted average of Ln over DS; define Le as the weighted average of the obstacle distance:
Le = Σ(Wn·Ln) / Σ(Wn), with the sums taken over the points in DS,
and define θe as the weighted average of the obstacle direction:
θe = Σ(Wn·n) / Σ(Wn), with the sums taken over the points in DS;
finally, θe and Le describe the orientation of the obstacle; then design the obstacle avoidance controller: define the current heading θR of the mobile robot and the angle between the mobile robot and the obstacle
θΔ = θR − θe;
the kinematic model of the invention is a two-wheel differential-drive mobile robot whose motion is controlled by the linear velocity v and the angular velocity ω; the obstacle avoidance controller is then designed as follows:
Figure FDA0002589534100000033
Figure FDA0002589534100000034
where kv, kω, kvr and kωr are all constant coefficients; design of the motion controller: if the laser radar does not detect an obstacle, the mobile robot is controlled to move toward the target point; the designed motion controller is:
ν=VmaxS(d)cosφ,
Figure FDA0002589534100000035
Figure FDA0002589534100000036
where d is the distance between the robot and the target point, φ is the angle between the heading of the mobile robot and the line connecting the robot and the target point, Vmax is the maximum speed of the mobile robot, K is a constant coefficient, and c is the deceleration distance to the target point; switching between the obstacle avoidance controller and the motion controller: if an obstacle is detected, and
Figure FDA0002589534100000037
then the obstacle avoidance controller is used to avoid the obstacle; otherwise the motion controller is used to drive the mobile robot to the target point; at the same time, the maximum linear acceleration av and the maximum angular acceleration aω of the mobile robot are limited.
CN201710668970.2A 2017-08-08 2017-08-08 Automatic distribution robot system facing medical environment Active CN107544482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710668970.2A CN107544482B (en) 2017-08-08 2017-08-08 Automatic distribution robot system facing medical environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710668970.2A CN107544482B (en) 2017-08-08 2017-08-08 Automatic distribution robot system facing medical environment

Publications (2)

Publication Number Publication Date
CN107544482A CN107544482A (en) 2018-01-05
CN107544482B true CN107544482B (en) 2020-10-09

Family

ID=60970216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710668970.2A Active CN107544482B (en) 2017-08-08 2017-08-08 Automatic distribution robot system facing medical environment

Country Status (1)

Country Link
CN (1) CN107544482B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983777B (en) * 2018-07-23 2021-04-06 浙江工业大学 Autonomous exploration and obstacle avoidance method based on self-adaptive front exploration target point selection
CN110120268A (en) * 2019-05-24 2019-08-13 宿州学院 A kind of robot system helping rehabilitation
CN110653830A (en) * 2019-09-03 2020-01-07 南京美桥信息科技有限公司 Automatic distribution robot system oriented to medical environment
CN113031593B (en) * 2021-02-25 2022-02-11 上海交通大学 Active sensing task path planning method and system, robot and controller
CN113059576B (en) * 2021-03-31 2022-07-26 上海应用技术大学 Medical transportation robot based on fish school effect and self-adaptive cruise following method
CN115542889A (en) * 2021-06-30 2022-12-30 上海微觅医疗器械有限公司 Preoperative navigation method and system for robot, storage medium and computer equipment
CN114800553A (en) * 2022-04-24 2022-07-29 华南理工大学 Control system of medical care robot
CN115328173A (en) * 2022-10-14 2022-11-11 深圳市功夫机器人有限公司 Mobile robot control method based on laser radar and mobile robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN102541057A (en) * 2010-12-29 2012-07-04 沈阳新松机器人自动化股份有限公司 Moving robot obstacle avoiding method based on laser range finder
US8239084B2 (en) * 2006-09-11 2012-08-07 Hitachi, Ltd. Moving device
CN103558856A (en) * 2013-11-21 2014-02-05 东南大学 Service mobile robot navigation method in dynamic environment
CN105467997A (en) * 2015-12-21 2016-04-06 浙江工业大学 Storage robot path program method based on linear temporal logic theory
CN106500697A (en) * 2016-10-13 2017-03-15 浙江工业大学 It is applied to the LTL A* A* optimum path planning methods of dynamic environment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8239084B2 (en) * 2006-09-11 2012-08-07 Hitachi, Ltd. Moving device
CN102541057A (en) * 2010-12-29 2012-07-04 沈阳新松机器人自动化股份有限公司 Moving robot obstacle avoiding method based on laser range finder
CN102103663A (en) * 2011-02-26 2011-06-22 山东大学 Ward visit service robot system and target searching method thereof
CN103558856A (en) * 2013-11-21 2014-02-05 东南大学 Service mobile robot navigation method in dynamic environment
CN105467997A (en) * 2015-12-21 2016-04-06 浙江工业大学 Storage robot path program method based on linear temporal logic theory
CN106500697A (en) * 2016-10-13 2017-03-15 浙江工业大学 It is applied to the LTL A* A* optimum path planning methods of dynamic environment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
REAL-TIME OBSTACLE AVOIDANCE; O. Khatib; IEEE International Conference on Robotics; 1985-12-31; pp. 500-505 *
Research on Multi-Robot Collaborative Transportation Control System; Cheng Cheng, Xin-Yi Yu, Lin-Lin Ou, Yong-Kui Guo; Control and Decision Conference; 2016-12-31; pp. 4886-4891 *
Safe target tracking and automatic obstacle avoidance algorithm for a two-wheeled mobile robot; Li Baoguo, Zhang Chunxi; Control Theory & Applications; 2007-08-31; vol. 24, no. 4, pp. 358-362 *
A new obstacle avoidance method for mobile robots based on a laser range finder; Xu Yuhua, Zhang Chongwei, Xu Haiqin; Robot; 2010-03-31; vol. 32, no. 2, pp. 179-183 *
Research on environment recognition for mobile robots based on vision and laser; Ni Xiaoqing; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-31; full text *
Simultaneous obstacle avoidance and trajectory tracking method for mobile robots based on velocity space; Zhang Qibin, Wang Peng, Chen Zong; Control and Decision; 2017-02-28; vol. 32, no. 2, pp. 358-362 *

Also Published As

Publication number Publication date
CN107544482A (en) 2018-01-05

Similar Documents

Publication Publication Date Title
CN107544482B (en) Automatic distribution robot system facing medical environment
Borenstein et al. Teleautonomous guidance for mobile robots
Hu et al. A parallel processing architecture for sensor-based control of intelligent mobile robots
US20160008988A1 (en) Robotics Platforms Incorporating Manipulators Having Common Joint Designs
JP2019501384A (en) Autonomous positioning navigation equipment, positioning navigation method and autonomous positioning navigation system
CN110653830A (en) Automatic distribution robot system oriented to medical environment
Pramila et al. Design and Development of Robots for Medical Assistance: An Architectural Approach
CN109163724A (en) Multiple target point autonomous navigation method based on Turtlebot2 robot building map
CN111506199B (en) Kinect-based high-precision unmarked whole-body motion tracking system
US11642780B2 (en) Monitoring of surface touch points for precision cleaning
Brock Generating robot motion: The integration of planning and execution
Celemin et al. Interactive imitation learning in robotics: A survey
Nam et al. A software architecture for service robots manipulating objects in human environments
Zhong et al. A collaborative telerobotics network framework with hand gesture interface and conflict prevention
Wei et al. A vision-based measure of environmental effects on inferring human intention during human robot interaction
Placidi et al. Data integration by two-sensors in a LEAP-based Virtual Glove for human-system interaction
Xue et al. Gesture-and vision-based automatic grasping and flexible placement in teleoperation
Al-Fedaghi et al. Thinging the robotic architectural structure
CN111134974B (en) Wheelchair robot system based on augmented reality and multi-mode biological signals
CN114700950A (en) Medical nursing robot motion prediction and fault diagnosis method based on digital twins
CN112284383A (en) Centralized multi-nursing-bed path planning and real-time obstacle avoidance system
Zou et al. Novel standardized representation methods for modular service robots
Sorouri et al. Plug-and-play design and distributed logic control of medical devices using IEC 61499 function blocks
Cattoni et al. Bridging the gap between planning and reactivity: a layered architecture for autonomous indoor navigation
Yun et al. Digital twin model construction of robot and multi-object under stacking environment for grasping planning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant