CN115129049A - Mobile service robot path planning system and method with social awareness - Google Patents
- Publication number
- CN115129049A (application CN202210689092.3A)
- Authority
- CN
- China
- Prior art keywords
- mobile service
- service robot
- human body
- robot
- potential field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0242—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Abstract
The invention discloses a mobile service robot path planning system with social awareness and a method thereof. The system comprises a human body detection module and a motion planning module. The human body detection module reads picture information containing depth, detects human bodies, distinguishes the target object from non-target objects, and calculates the position and orientation information of all detected human bodies. The motion planning module plans a motion path with social awareness for the mobile service robot, and the planned path simultaneously conforms to three social behavior criteria: a) approaching the target head-on when driving toward the target object; b) predicting and considering the movement intentions of other people during avoidance; c) following the right-side avoidance norm during movement. The invention solves the problem that the movement behavior of existing mobile service robots in social occasions lacks social awareness, and can realize anthropomorphic autonomous navigation of the mobile service robot in social environments.
Description
Technical Field
The invention relates to the technical field of mobile service robot path planning, in particular to a mobile service robot path planning system and method with social awareness.
Background
Mobile service robots are widely applied in dynamic social scenes such as shopping malls, restaurants and stations. In these scenarios, a mobile service robot needs not only safety and autonomy; its social awareness is also particularly important. The behavior of a service robot in a social scene affects the human experience, so people expect service robots to behave well in social environments and to follow social etiquette and social norms.
In the prior art, most mobile service robots aim only to drive efficiently to the target object and to avoid obstacles safely when planning a path, without considering the etiquette norms to be followed during movement. In practical social occasions, approaching the person to be communicated with head-on improves the comfort of human-robot interaction; predicting the other party's movement intention while avoiding other people, and avoiding reasonably based on their movement trend, reflects the robot's service awareness; and following the right-side avoidance principle during movement reflects the robot's humanoid behavioral awareness.
Disclosure of Invention
In order to overcome the above defects, the invention provides a mobile service robot path planning system and method with social awareness, aiming to solve the problem that the movement behavior of existing mobile service robots in social occasions lacks social awareness.
In order to achieve the purpose, the invention adopts the following technical scheme:
a mobile service robot path planning system with social awareness comprises a human body detection module and a motion planning module;
the human body detection module is used for reading picture information containing depth, detecting a human body, distinguishing a target object from a non-target object, and calculating and acquiring position and orientation information of all human bodies;
the motion planning module is used for planning a motion path with social awareness for the mobile service robot, and the motion path planning simultaneously conforms to three social behavior criteria: a) approaching the target head-on when driving toward the target object; b) predicting and considering the movement intentions of other people during avoidance; c) following the right-side avoidance norm during movement.
Preferably, the motion planning module comprises a first motion planning sub-module and a second motion planning sub-module; the first motion planning submodule is used for driving the mobile service robot to a target object through a gravitational potential field; the second motion planning submodule is used for enabling the mobile service robot to avoid non-target objects through repulsive force fields.
Another aspect of the present application provides a mobile service robot path planning method with social awareness, applied to a mobile service robot, the method comprising the following steps:
step S1: collecting picture information, wherein the picture information comprises human body information and object information;
step S2: receiving and analyzing the picture information, detecting a human body through bone identification, and acquiring position and orientation information of the human body;
step S3: distinguishing a target object from a non-target object through face recognition;
step S4: planning the motion path of the mobile service robot by adopting an artificial potential field algorithm according to the position and orientation information of the human body, so that the robot approaches the target object according to the social criteria and reasonably avoids non-target objects.
Preferably, in step S2, the bone identification specifically includes the following steps:
step S21: sending infrared light and receiving the reflected infrared light, calculating the round-trip time difference of the light, and collecting a 3D depth image;
step S22: segmenting the 3D depth image, removing the background outside the human body, and converting the remaining image into depth values to serve as training samples;
step S23: correspondingly separating training samples by taking the body part as a label and training a classifier until the classifier can identify the class of the body part corresponding to the specified 3D depth image;
step S24: joints of the body part are determined, corresponding joint points are tracked and bones are generated.
Preferably, in step S24, in the process of generating the skeleton, the pose information of the human body in the global coordinate system can be obtained from the current pose information of the robot. As shown in fig. 3, the pose of the mobile service robot is expressed as P_F = [x_F, y_F, θ_F]^T, where x_F, y_F, θ_F respectively represent the X-axis coordinate, the Y-axis coordinate and the heading angle of the robot in the global coordinate system; the motion state of the robot is expressed as Q = [v_F, ω_F]^T, where v_F, ω_F respectively represent the linear velocity and the angular velocity of the mobile service robot. The pose of the target human body is expressed as P_L = [x_L, y_L, θ_L]^T, where x_L, y_L, θ_L respectively represent the X-axis coordinate, the Y-axis coordinate and the orientation angle of the target human body in the global coordinate system;
calculating the position of the target human body according to the formulas (1) and (2):
x_L = x_F + l_LF·cos(θ_F + θ_K) + d·cos θ_F (1)
y_L = y_F + l_LF·sin(θ_F + θ_K) + d·sin θ_F (2)
where l_LF is the horizontal distance between the target human body and the camera, obtainable from the coordinates x_c and y_c given by the depth information of the depth camera of the target detection module, θ_K is the azimuth angle of the target human body in the camera coordinate system, and d is the distance from the center of the wheelchair to the camera; the current orientation angle θ_L of the target human body can be obtained by detecting the human body joint points through the bone recognition algorithm.
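Equations (1) and (2) can be checked with a short sketch; angles are in radians, and l_LF, θ_K and d are assumed to have been extracted already from the depth image and the robot model:

```python
import math

def target_position(x_F, y_F, theta_F, l_LF, theta_K, d):
    """Equations (1)-(2): global-frame position of the target human body
    from the robot pose (x_F, y_F, theta_F), the camera-frame range l_LF
    and azimuth theta_K of the person, and the camera offset d."""
    x_L = x_F + l_LF * math.cos(theta_F + theta_K) + d * math.cos(theta_F)
    y_L = y_F + l_LF * math.sin(theta_F + theta_K) + d * math.sin(theta_F)
    return x_L, y_L
```

With θ_F = θ_K = 0, the target lies l_LF + d ahead of the robot along the X axis, as expected.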
Preferably, in step S3, the face recognition specifically includes the following steps:
step S31: collecting image flow, selecting useful characteristics in the image flow, and accurately positioning the position of a human face in the image;
step S32: preprocessing a face image, wherein the preprocessing mainly comprises light compensation, gray level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening of the face image;
step S33: obtaining feature data for face image classification according to the shape description of the face and the distance characteristics between facial feature points, wherein the feature data comprise the Euclidean distances, angles and curvatures between feature points;
step S34: and searching and matching the feature data of the extracted face image with a feature template stored in a database, calculating a similarity value, and outputting a matching result if the similarity value is greater than or equal to a threshold value set by a system.
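Step S34 can be sketched as follows. The 1/(1 + d) similarity mapping over Euclidean feature distance and the dictionary-of-templates layout are illustrative assumptions; the patent only specifies that a similarity value is compared against a system threshold:

```python
import numpy as np

def match_face(features, templates, threshold=0.8):
    """Search the template database for the closest feature vector and
    return (identity, similarity) if the similarity clears the threshold,
    else (None, best similarity)."""
    best_id, best_sim = None, 0.0
    for face_id, template in templates.items():
        dist = np.linalg.norm(np.asarray(features, float) - np.asarray(template, float))
        sim = 1.0 / (1.0 + dist)  # map distance to a (0, 1] similarity
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```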
Preferably, in step S4, the artificial potential field algorithm comprises an attractive potential field and a repulsive potential field, wherein the attractive potential field is generated by an attractive potential function U_t;
here r = ‖l‖₂, l = P_R − P_T, P_R = [x_R, y_R]^T denotes the position of the center point of the robot, P_T = [x_T, y_T]^T denotes the position of the target point, and r denotes the distance between the robot center point and the target point; n and μ are adjustable parameters; α_o = ε + δ, where ε is a constant and δ plays an adjusting role, correcting the pose of the robot while guiding it toward the target object so that, on reaching the target, the robot's pose converges to an angle aligned with the target's orientation, i.e. the robot approaches the target head-on; B = [cos θ_L, sin θ_L]^T, where θ_L is the orientation angle of the target object in the global coordinate system, and ρ_w is the operating-space radius;
the gradient field of gravitational potential energy is gravitational potential field, and gravitational force F produced by gravitational potential field g The component forces in the X-axis and Y-axis directions can be obtained by calculating the partial derivatives, and the specific formula is as follows:
wherein the content of the first and second substances,is represented by F g The component force in the direction of the X-axis,is represented by F g Component force in the Y-axis direction.
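The relation between the attractive potential and the attractive force can be illustrated numerically. The quadratic stand-in potential below is an assumption for illustration only — the patent's U_t additionally encodes the head-on alignment term — but the negative-gradient relation F_g = −∇U_t is the same:

```python
import numpy as np

def attractive_force(p_R, p_T, mu=1.0, n=2, eps=1e-6):
    """Attractive force as the negative gradient of a potential,
    F_g = -grad U_t, evaluated by central differences. The stand-in
    potential U_t = mu * ||p_R - p_T||^n is an illustrative assumption."""
    p_R = np.asarray(p_R, float)
    p_T = np.asarray(p_T, float)

    def U(p):
        return mu * np.linalg.norm(p - p_T) ** n

    F = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = eps
        F[k] = -(U(p_R + dp) - U(p_R - dp)) / (2 * eps)  # central difference
    return F
```

The components of F along the X and Y axes are exactly the partial derivatives the text describes, and the force points from the robot toward the target.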
Preferably, the calculation of the repulsive force comprises in particular the following steps:
step S81: predicting the position of the pedestrian at the next moment from the pedestrian's position at the previous moment and the position at the current moment;
Step S82: recording the displacement of the pedestrian at two adjacent moments asWherein Performing probability analysis on the obstacle to obtain an expected value mu of the obstacle x,i 、μ y,i Variance σ x,i 、σ y,i Covariance σ xy,i And correlation coefficient ρ xy,i ;
Step S83: dividing the coordinate system into a plurality of grids to obtain the probability density function U of each grid op (m,n):
Step S84: and (3) solving the partial derivatives in the X-axis and Y-axis directions of the coordinate system to obtain a potential field gradient:
step S85: calculating the probability potential field from the potential field gradient; the repulsive force of this potential field is a column vector F_p with components F_p^X and F_p^Y along the X-axis and Y-axis directions, where F_p^X denotes the component of F_p on the X axis and F_p^Y denotes the component of F_p on the Y axis.
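The statistics of steps S81–S83 can be sketched as follows. The bivariate-Gaussian form of the per-cell density is our assumption; the patent states only that the expected values, variances, covariance and correlation coefficient of the displacements enter U_op(m, n):

```python
import numpy as np

def displacement_stats(track):
    """Steps S81-S82: from successive pedestrian positions (T x 2 array),
    compute mean, standard deviation and correlation coefficient of the
    displacements between adjacent moments."""
    d = np.diff(np.asarray(track, float), axis=0)
    mu = d.mean(axis=0)
    sigma = d.std(axis=0)
    rho = 0.0 if (sigma == 0).any() else float(np.corrcoef(d[:, 0], d[:, 1])[0, 1])
    return mu, sigma, rho

def cell_density(x, y, mu, sigma, rho):
    """Step S83 (assumed form): bivariate normal density at the centre
    (x, y) of a grid cell."""
    zx = (x - mu[0]) / sigma[0]
    zy = (y - mu[1]) / sigma[1]
    q = (zx**2 - 2.0 * rho * zx * zy + zy**2) / (1.0 - rho**2)
    return np.exp(-q / 2.0) / (2.0 * np.pi * sigma[0] * sigma[1] * np.sqrt(1.0 - rho**2))
```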
the repulsion force generated by the probability potential field is rotated by a certain angle to obtain a rotation repulsion force, and the rotation repulsion force is defined as a column vectorCan be expressed as:
F r =R v F p (16)
where R_v is a rotation matrix defined in terms of a deflection coefficient β, and the value range of β is [0, π/2];
The components of the rotational repulsive force F_r along the X-axis and Y-axis directions follow from equation (16).
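A sketch of equation (16). The standard 2-D rotation matrix used for R_v here is an assumption (the patent's explicit definition of R_v is not reproduced above); rotating the repulsion by β biases the avoidance to one side instead of pushing the robot straight back:

```python
import numpy as np

def rotational_repulsion(F_p, beta):
    """Equation (16): F_r = R_v @ F_p, with an assumed standard rotation
    matrix; beta in [0, pi/2] is the deflection coefficient."""
    R_v = np.array([[np.cos(beta), -np.sin(beta)],
                    [np.sin(beta),  np.cos(beta)]])
    return R_v @ np.asarray(F_p, float)
```

With β = 0 the rotational repulsion coincides with F_p; with β = π/2 it is perpendicular to it, which is what lets the planner steer around a pedestrian rather than oscillate in front of them.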
According to the force synthesis principle, the attractive force of the attractive potential field and the two repulsive forces of the repulsive potential field are added to obtain the resultant force F:
F = F_g + F_p + F_r (20)
The components of F along the X-axis and Y-axis directions are obtained accordingly.
The speed v of the mobile service robot planned by the potential field method and the expected heading angle θ at the next moment are calculated according to the magnitude and direction of the resultant force, where v_0 and d_0 are each defined as constants whose values are chosen according to the actual conditions.
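The last step, mapping the resultant force to a motion command, might look like the sketch below; the saturation law v = v_0·min(‖F‖/d_0, 1) and the heading θ = atan2(F_Y, F_X) are assumptions consistent with the text, which says only that v and θ are calculated from the magnitude and direction of the resultant force:

```python
import math

def velocity_command(F_x, F_y, v0=0.5, d0=2.0):
    """Resultant force -> (speed, expected heading angle). v0 caps the
    speed and d0 scales force magnitude to speed (assumed law)."""
    theta = math.atan2(F_y, F_x)     # heading follows the resultant force
    mag = math.hypot(F_x, F_y)
    v = v0 * min(mag / d0, 1.0)      # saturate at v0
    return v, theta
```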
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the mobile service robot path planning system with social consciousness, the depth camera is used for detecting and shooting human bodies in a visual field, distinguishing target objects and avoiding objects, identifying and obtaining motion information of the human bodies around, and planning tracks by combining the position and orientation information of the human bodies through an artificial potential field algorithm meeting the social criterion, so that other non-target objects can be reasonably avoided according to the social criterion when the robot drives the target objects head-on.
Drawings
FIG. 1 is a rear view of a mobile service robot;
FIG. 2 is a side view of the mobile service robot;
FIG. 3 is a schematic diagram of an embodiment;
FIG. 4 is a flow chart of a navigation algorithm.
Wherein, 1, a mobile service robot; 11. a mounting frame; 12. a power supply component; 13. a master control assembly; 14. a motion assembly; 15. a sensing component; 16. a display; 111. a first mounting plate; 112. a second mounting plate; 113. a connecting rod; 130. an industrial personal computer; 141. a driver; 142. a direct current motor; 143. a photoelectric encoder; 144. a speed reducer; 145. a drive wheel; 150. a depth camera.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
A mobile service robot path planning system with social awareness comprises a mobile service robot and a motion planning module, wherein the mobile service robot comprises a human body detection module;
the human body detection module is used for reading picture information containing depth, detecting a human body, distinguishing a target object from a non-target object, and calculating and acquiring position and orientation information of all human bodies;
the motion planning module is used for planning a motion path with social awareness for the mobile service robot, and the motion path planning simultaneously conforms to three social behavior criteria: a) approaching the target head-on when driving toward the target object; b) predicting and considering the movement intentions of other people during avoidance; c) following the right-side avoidance norm during movement.
In this embodiment, as shown in fig. 1-2, the mobile service robot 1 specifically includes a mounting frame 11, a power supply component 12, a main control component 13, a movement component 14, and a sensing component 15;
the mounting frame 11 comprises a first mounting plate 111 and a second mounting plate 112, and the second mounting plate 112 is mounted below the first mounting plate 111 through a connecting rod 113;
the power supply assembly 12 is mounted on the second mounting plate 112, the power supply assembly 12 is used for supplying power to the main control assembly 13, the main control assembly 13 comprises an industrial personal computer 130, the motion assembly 14 comprises a driver 141, a direct current motor 142, a photoelectric encoder 143, a speed reducer 144 and a driving wheel 145, and the sensing assembly 15 comprises a depth camera 150; the driver 141 and the photoelectric encoder 143 are both mounted on the bottom surface of the first mounting plate 111; the speed reducer 144 is mounted on the second mounting plate 112, and the driving wheel 145 is connected to the driving end of the speed reducer 144; the direct current motor 142 is mounted on the top of the speed reducer 144; the depth camera 150 is mounted on the first mounting plate 111; the industrial personal computer 130 is mounted to the power supply module 12.
Further, the power supply assembly 12 comprises a 24 V DC power supply, which is connected to a 24 V-to-12 V converter to provide 12 V DC voltage for the display 16 in the mobile service robot 1; the depth camera 150, the driver 141 and the industrial personal computer 130 are supplied with 24 V DC voltage directly by the 24 V DC power supply. The main control assembly 13 is composed of the industrial personal computer 130 and is used for centrally analyzing and processing information from the motion assembly 14 and the sensing assembly 15, adjusting and sending control instructions, and controlling the motion of the mobile service robot. In the sensing assembly 15, the depth camera 150 is connected with the industrial personal computer 130 through a USB interface; the depth camera 150 collects information about the human body and the surrounding environment and sends the measured data to the industrial personal computer 130, providing environmental features for path planning and ensuring the safety of the driving process. In the motion assembly 14, the driver 141, the DC motor 142 and the photoelectric encoder 143 form a closed-loop structure, exchange data with the industrial personal computer 130 through a CAN bus, and control the speed and direction of the driving wheel 145 by receiving motion commands from the industrial personal computer 130.
The human body detection module mainly detects through a depth camera 150 in the mobile service robot, and the motion planning module mainly controls the mobile service robot to move according to a planned path and a motion instruction.
In the mobile service robot path planning system with social awareness described above, the depth camera 150 detects and photographs human bodies in the field of view, distinguishes the target object from objects to be avoided, recognizes and obtains the motion information of the surrounding human bodies, and plans the trajectory by combining the position and orientation information of the human bodies through an artificial potential field algorithm that satisfies the social criteria, so that the robot reasonably avoids other non-target objects according to the social criteria while driving head-on toward the target object.
Preferably, the motion planning module comprises a first motion planning sub-module and a second motion planning sub-module; the first motion planning submodule is used for driving the mobile service robot to a target object through a gravitational potential field; the second motion planning submodule is used for enabling the mobile service robot to avoid non-target objects through repulsive force fields.
In this embodiment, the first motion planning submodule makes the mobile service robot 1 approach in a face-to-face manner when driving to the target object through the gravitational potential field; the second motion planning submodule provides the mobile service robot 1 with the capability of predicting and considering the motion intentions of other people in the obstacle avoidance process and the capability of following the right side avoidance standard in the obstacle avoidance process through the repulsive force potential field.
Another aspect of the present application provides a mobile service robot path planning method with social awareness, including the following steps:
step S1: collecting picture information, wherein the picture information comprises human body information and object information;
step S2: receiving and analyzing the picture information, detecting a human body through bone recognition and face recognition, distinguishing a target object from a non-target object, and acquiring position and orientation information of the human body;
step S3: calculating a planned path of the mobile service robot by adopting an artificial potential field algorithm according to the position and orientation information of the human body, and generating a motion instruction of the mobile service robot based on the planned path;
step S4: and planning a motion path of the mobile service robot by adopting an artificial potential field algorithm according to the position and orientation information of the human body, and approaching the target object by the robot according to a social criterion and reasonably avoiding the non-target object.
In this scheme, as shown in figs. 1, 2 and 4, the depth camera 150 collects picture information within its shooting view and sends it to the main control assembly 13 for analysis and processing. The human body detection module then reads the picture information containing depth, detects human bodies, distinguishes the target object from non-target objects by combining bone recognition and face recognition technologies, and calculates the position and orientation information of all detected human bodies; the motion planning module calculates a planned path for the mobile service robot using an artificial potential field algorithm that conforms to the social criteria, based on the position and orientation information of the human bodies. The main control assembly 13 sends motion instructions, i.e. instructions for operating the motion assembly 14, to the motion assembly 14 of the mobile service robot 1 according to the calculated planned path, so that the mobile service robot 1 moves along the planned path and reasonably avoids other non-target objects according to the social criteria while driving head-on toward the target object.
Preferably, in step S2, the bone identification specifically includes the following steps:
step S21: sending infrared light and receiving the reflected infrared light, calculating the round-trip time difference of the light, and collecting a 3D depth image;
step S22: segmenting the 3D depth image, removing the background outside the human body, and converting the remaining image into depth values to serve as training samples;
step S23: correspondingly separating training samples by taking the body part as a label, and training a classifier until the classifier can identify the class of the body part corresponding to the specified 3D depth image;
step S24: joints of the body part are determined, corresponding joint points are tracked and bones are generated.
In this embodiment, a depth camera 150 is adopted, and the depth camera 150 works on the Time-of-Flight (TOF) principle. The infrared transmitter actively transmits modulated near-infrared light, which is reflected when it irradiates an object in the field of view; the infrared camera receives the reflected light, and the round-trip time difference is calculated (usually via the phase difference), from which the depth information of the object (i.e. the distance from the object to the depth camera) is obtained.
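The ranging relation described above reduces to one line: the distance is half the round-trip travel time multiplied by the speed of light (real TOF sensors recover this time from the phase shift of the modulated light, as the text notes):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s):
    """Depth of a point from the measured round-trip time of the
    reflected infrared light."""
    return C * round_trip_time_s / 2.0
```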
Specifically, bone recognition calls application interface functions in the Open Natural Interaction (OpenNI) library to control the Kinect camera to acquire an image stream and process each frame. First, the collected 3D depth image is segmented and background images other than the human body are removed, reducing the amount of computation. The processed 3D depth image is converted into depth values that serve as training samples, and a classifier containing multiple depth features is trained to identify objects and determine body parts. The depth images corresponding to the training samples are separated with the body part as the label, and a decision tree classifier is then trained until the decision tree can accurately classify test-set images into the specific body parts. These trained classifiers give the likelihood of each pixel belonging to each body part. An algorithm then selects the most probable region for each body part and assigns that region to the corresponding body-part category. The depth camera tracks large objects close to human scale. Finally, the joints corresponding to each body part are determined from all the preceding results, and the corresponding joint points are tracked to generate a skeleton, realizing bone tracking.
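The background-removal stage of step S22 can be sketched as a simple depth-band threshold. This is a deliberately simplified stand-in: the thresholds are assumed example values, and the real pipeline described above goes on to classify each surviving pixel with trained depth features rather than a fixed band.

```python
# Minimal sketch of step S22: keep only depth pixels inside an assumed
# working band (the person) and zero out background pixels. Thresholds
# are illustrative assumptions, not values from the patent.

def remove_background(depth_image, near=0.5, far=3.0):
    """depth_image: list of rows of depth values in metres.
    Returns a copy with out-of-band pixels set to 0.0."""
    return [[d if near <= d <= far else 0.0 for d in row]
            for row in depth_image]

frame = [[0.2, 1.4, 1.5],
         [4.0, 1.6, 1.5],
         [0.0, 1.7, 5.0]]
segmented = remove_background(frame)
```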
Preferably, in step S24, during skeleton generation, the pose information of the human body in the global coordinate system can be obtained from the current pose information of the robot. The pose of the mobile service robot is expressed as P_F = [x_F, y_F, θ_F]^T, where x_F, y_F and θ_F respectively denote the X-axis coordinate, the Y-axis coordinate and the heading angle of the robot in the global coordinate system; the motion state of the robot is expressed as Q = [v_F, ω_F]^T, where v_F and ω_F respectively denote the linear velocity and the angular velocity of the mobile service robot. The pose of the target human body is expressed as P_L = [x_L, y_L, θ_L]^T, where x_L, y_L and θ_L respectively denote the X-axis coordinate, the Y-axis coordinate and the orientation angle of the target human body in the global coordinate system;
calculating the position of the target human body according to formulas (1) and (2):
x_L = x_F + l_LF · cos(θ_F + θ_K) + d · cos θ_F (1)
y_L = y_F + l_LF · sin(θ_F + θ_K) + d · sin θ_F (2)
where l_LF is the horizontal distance between the target human body and the camera, which, together with x_c and y_c, can be obtained from the depth information of the target detection module's depth camera; θ_K is the azimuth angle of the target human body in the camera coordinate system; and d is the distance from the center of the wheelchair to the camera. The current orientation angle θ_L of the target human body is obtained by detecting human joint points with the bone recognition algorithm.
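Formulas (1) and (2) transcribe directly into code: the camera measurement (l_LF, θ_K) is projected into the global frame using the robot pose and the camera's forward offset d.

```python
# Direct transcription of formulas (1) and (2): locate the target
# human in the global frame from the robot pose (x_F, y_F, theta_F),
# the camera measurement (l_LF, theta_K) and the camera offset d.
import math

def target_position(x_f, y_f, theta_f, l_lf, theta_k, d):
    x_l = x_f + l_lf * math.cos(theta_f + theta_k) + d * math.cos(theta_f)
    y_l = y_f + l_lf * math.sin(theta_f + theta_k) + d * math.sin(theta_f)
    return x_l, y_l

# Robot at the origin facing +X, person 2 m straight ahead, camera
# mounted 0.5 m in front of the wheelchair center:
x_l, y_l = target_position(0.0, 0.0, 0.0, 2.0, 0.0, 0.5)  # -> (2.5, 0.0)
```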
Specifically, as shown in fig. 3, the speed and posture information of the human body can be obtained from the position information in two adjacent frames. The camera acquires one frame every time constant T. Assuming that n human bodies are identified in a given frame, they are numbered from left to right according to their distribution in the image; the pose information of the i-th human body in the current k-th frame is denoted P_L^i(k), and the pose information of the i-th human body in the (k−1)-th frame is denoted P_L^i(k−1).
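The finite-difference velocity estimate from two adjacent frames can be sketched as below; the function and variable names are illustrative, not from the patent.

```python
# Estimate a tracked person's speed and heading from positions in two
# adjacent frames sampled T seconds apart, as described above.
import math

def human_velocity(pose_prev, pose_curr, T):
    """pose_* = (x, y) in the global frame; T = inter-frame period.
    Returns (speed in m/s, heading angle in radians)."""
    dx = pose_curr[0] - pose_prev[0]
    dy = pose_curr[1] - pose_prev[1]
    speed = math.hypot(dx, dy) / T
    heading = math.atan2(dy, dx)
    return speed, heading

# Person moved 0.5 m in half a second -> 1 m/s.
speed, heading = human_velocity((0.0, 0.0), (0.3, 0.4), T=0.5)
```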
Preferably, in step S2, the face recognition specifically includes the following steps:
step S31: collecting image flow, selecting useful characteristics in the image flow, and accurately positioning the position of a human face in the image;
step S32: preprocessing a face image, which mainly comprises light compensation, gray level conversion, histogram equalization, normalization, geometric correction, filtering and sharpening of the face image;
step S33: obtaining feature data of face image classification according to the shape description of the faces and the distance characteristics among the faces, wherein the feature data comprises Euclidean distance, angle and curvature among feature points;
step S34: and searching and matching the feature data of the extracted face image with a feature template stored in a database, calculating a similarity value, and outputting a matching result if the similarity value is greater than or equal to a threshold value set by a system.
In this embodiment, the face recognition specifically calls an application interface function in an Open Source Computer Vision (OpenCV) library to control a Kinect camera to acquire an image stream, and processes each frame of image. And comparing the feature data of the face image to be recognized with the feature template of the obtained face image in the database, and judging the identity information of the face according to the similarity degree.
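The template-matching decision of step S34 can be sketched as follows. This is a hedged illustration: cosine similarity stands in for whatever similarity metric the actual system uses, and the threshold and template vectors are invented example values.

```python
# Sketch of step S34: compare an extracted face feature vector against
# stored templates and report the best match only if its similarity
# clears the system threshold. Cosine similarity and the 0.9 threshold
# are assumptions for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_face(feature, templates, threshold=0.9):
    """templates: {identity: feature vector}. Returns (identity, score),
    with identity None when no template clears the threshold."""
    best_id, best_sim = None, -1.0
    for face_id, tmpl in templates.items():
        sim = cosine_similarity(feature, tmpl)
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

templates = {"target": [1.0, 0.0, 0.0], "other": [0.0, 1.0, 0.0]}
identity, score = match_face([0.98, 0.05, 0.0], templates)  # -> "target"
```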
Preferably, in step S3, the artificial potential field algorithm includes an attractive potential field and a repulsive potential field, the attractive potential field being generated by an attractive potential function U_t, expressed as:
where r = ||l||_2, l = P_R − P_T, P_R = [x_R, y_R]^T denotes the position of the robot's center point, P_T = [x_T, y_T]^T denotes the position of the target point, r denotes the distance between the robot's center point and the target point, and n and μ are adjustable parameters. α_o = ε + δ, where ε is a constant and δ is an adjustment term that corrects the robot's attitude while guiding it towards the target object, ensuring that the attitude converges to alignment with the target object's orientation at the moment the robot reaches the target, i.e. the robot approaches the target object head-on. δ = ||l^T B||_2, where B = [cos θ_L, sin θ_L]^T is the direction vector of the target point and θ_L is the orientation angle of the target object in the global coordinate system; ρ_w is the operating-space radius;
The gradient field of the gravitational potential energy is the gravitational potential field, and the component forces of the gravitational force F_g it generates along the X-axis and Y-axis directions can be obtained by computing partial derivatives, with the specific formula as follows:
where F_g^X denotes the component force of F_g in the X-axis direction, and F_g^Y denotes the component force of F_g in the Y-axis direction.
In this embodiment, the attractive potential field is in fact a polar potential field, intended to enforce the social interaction criterion that the mobile service robot 1 approaches the target object face-to-face; it serves as the attractive part of the socially aware artificial potential field.
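The behaviour of the polar attractive field can be illustrated numerically. The closed form of U_t did not survive extraction, so the sketch below only combines the symbols the text defines, U_t = μ · r^n · (ε + δ) with δ = (l·B)², and takes the force as the negative numerical gradient; treat this functional form and all parameter values as assumptions.

```python
# Hypothetical reconstruction of the polar attractive potential from
# the defined symbols: U_t = mu * r**n * (eps + delta), delta = (l.B)**2.
# The exact closed form in the patent is not available, so this is an
# assumed stand-in. The force is the negative numerical gradient of U_t.
import math

def attractive_potential(p_r, p_t, theta_l, mu=1.0, n=2, eps=0.2):
    lx, ly = p_r[0] - p_t[0], p_r[1] - p_t[1]
    r = math.hypot(lx, ly)
    bx, by = math.cos(theta_l), math.sin(theta_l)
    delta = (lx * bx + ly * by) ** 2   # penalises off-axis approach
    return mu * r ** n * (eps + delta)

def attractive_force(p_r, p_t, theta_l, h=1e-6):
    """F_g = -grad U_t, estimated by central differences."""
    fx = -(attractive_potential((p_r[0] + h, p_r[1]), p_t, theta_l)
           - attractive_potential((p_r[0] - h, p_r[1]), p_t, theta_l)) / (2 * h)
    fy = -(attractive_potential((p_r[0], p_r[1] + h), p_t, theta_l)
           - attractive_potential((p_r[0], p_r[1] - h), p_t, theta_l)) / (2 * h)
    return fx, fy

# Robot due east of a target facing east: the force pulls it west,
# back toward the target along the polar axis.
fx, fy = attractive_force((2.0, 0.0), (0.0, 0.0), theta_l=0.0)
```

Because δ grows whenever the robot sits off the target's facing direction, descending this potential draws the robot onto the polar axis, which is exactly the head-on-approach behaviour the text describes.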
Preferably, the calculation of the repulsive force specifically comprises the following steps:
step S81: passing the position of the pedestrian at the last momentAnd the position of the pedestrian at that timePresume the pedestrian's position at the next moment
Step S82: recording the displacement of the pedestrians at two adjacent moments asWherein Performing probability analysis on the obstacle to obtain an expected value mu of the obstacle x,i 、μ y,i Variance σ x,i 、σ y,i Covariance σ xy,i And correlation coefficient ρ xy,i ;
Step S83: dividing the coordinate system into a plurality of grids to obtain the probability density function U of each grid op (m,n):
Step S84: and (3) solving the partial derivatives in the X-axis and Y-axis directions of the coordinate system to obtain a potential field gradient:
step S85: according to the potential field gradient, compute the repulsive force generated by the probability potential field as a column vector F_p, whose component forces along the X-axis and Y-axis directions are:
where F_p^X represents the component force of the probability potential field in the X-axis direction, and F_p^Y represents its component force in the Y-axis direction;
Rotating the repulsive force generated by the probability potential field through a certain angle yields the rotational repulsive force, defined as a column vector F_r, which can be expressed as:
F_r = R_v F_p (16)
wherein R is v Is a rotation matrix, which is defined as follows:
where β is a deflection coefficient with value range [0, π/2];
rotational repulsive force F r The component forces in the directions of the X axis and the Y axis are as follows:
According to the force synthesis principle, the attractive force and the two repulsive forces are added to obtain the formula for the resultant force F:
F = F_g + F_p + F_r (20)
the component force of F in the X-axis and Y-axis directions is as follows:
The speed v planned for the mobile service robot by the potential field method and the expected heading angle θ at the next moment are calculated from the magnitude and direction of the resultant force:
where v_0 and d_0 are constants whose values are chosen according to the actual situation.
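Steps S81 to S85, the rightward rotation, and the force-to-command mapping can be sketched end to end. The patent's grid density U_op(m, n) and the v/θ formulas were images lost in extraction, so a standard bivariate Gaussian density, a clockwise rotation convention for "rightward", and the mapping v = v_0 · min(1, ||F||/d_0) are all assumed stand-ins.

```python
# Sketch of the repulsion pipeline: bivariate Gaussian obstacle
# potential (assumed form of U_op), repulsion as its negative gradient
# (steps S84/S85), clockwise rotation R_v for right-side avoidance,
# and an assumed mapping from resultant force to (speed, heading).
import math

def gaussian_potential(x, y, mu_x, mu_y, sig_x, sig_y, rho):
    """Bivariate normal density, used here as the probability potential."""
    zx = (x - mu_x) / sig_x
    zy = (y - mu_y) / sig_y
    z = zx * zx - 2 * rho * zx * zy + zy * zy
    norm = 2 * math.pi * sig_x * sig_y * math.sqrt(1 - rho * rho)
    return math.exp(-z / (2 * (1 - rho * rho))) / norm

def repulsive_force(x, y, params, h=1e-5):
    """F_p = -grad U_op, by central differences."""
    fx = -(gaussian_potential(x + h, y, *params)
           - gaussian_potential(x - h, y, *params)) / (2 * h)
    fy = -(gaussian_potential(x, y + h, *params)
           - gaussian_potential(x, y - h, *params)) / (2 * h)
    return fx, fy

def rotate_right(fx, fy, beta):
    """Clockwise rotation by beta (sign convention assumed) -- the
    rotational repulsion F_r = R_v F_p that biases avoidance rightward."""
    return (math.cos(beta) * fx + math.sin(beta) * fy,
            -math.sin(beta) * fx + math.cos(beta) * fy)

def command(fx, fy, v0=0.8, d0=1.0):
    """Assumed force-to-command rule: heading follows the resultant
    force, speed saturates at v0."""
    mag = math.hypot(fx, fy)
    return v0 * min(1.0, mag / d0), math.atan2(fy, fx)

params = (0.0, 0.0, 0.5, 0.5, 0.0)          # pedestrian predicted at origin
fx, fy = repulsive_force(1.0, 0.0, params)  # robot 1 m east: pushed east
frx, fry = rotate_right(fx, fy, beta=math.pi / 4)
v, theta = command(frx, fry)
```

With the robot heading roughly east past the pedestrian, the rotated force acquires a southward (rightward) component, which is the right-side avoidance behaviour; larger β gives a stronger rightward bias, matching the text.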
Specifically, the attractive potential field is a polar potential field whose polar-axis direction is the orientation θ_L of the target object. The polar potential field guides the robot to approach the target point along the polar axis while avoiding obstacles, i.e. to approach the target object head-on. In addition, the probability potential energy function is obtained by statistical analysis of the obstacle positions at historical moments, and its gradient produces the repulsive potential field, so that the robot's obstacle avoidance behavior gains predictive capability and the motion intention of the avoided object is taken into account during avoidance. Further, the rotation processing of the probability potential field provides the driving force for the robot to move to the right when avoiding obstacles, so that the avoidance behavior follows the right-side avoidance norm. The rightward driving force is obtained by rotating the repulsive force generated by the probability potential field to the right through a certain angle; the larger the value of β, the greater the rightward deviation.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention and that variations, modifications, substitutions and alterations in the above embodiments may be made by those of ordinary skill in the art within the scope of the present invention.
Claims (8)
1. A mobile service robot path planning system with social awareness, characterized in that: it comprises a human body detection module and a motion planning module;
the human body detection module is used for reading picture information containing depth, detecting a human body, distinguishing a target object from a non-target object, and calculating and acquiring position and orientation information of all human bodies;
the motion planning module is used for planning a socially aware motion path for the mobile service robot, the motion path planning simultaneously conforming to three social behavior criteria: a) approaching the target head-on when driving towards the target object; b) predicting and considering the motion intentions of other people during avoidance; c) following the right-side avoidance norm during movement.
2. The mobile service robot path planning system with social awareness according to claim 1, characterized in that: the motion planning module comprises a first motion planning submodule and a second motion planning submodule;
the first motion planning submodule is used for driving the mobile service robot to a target object through a gravitational potential field; and the second motion planning submodule is used for enabling the mobile service robot to avoid non-target objects according to a social criterion through a repulsive potential field.
3. A mobile service robot path planning method with social awareness, characterized in that: it is applied to the mobile service robot path planning system with social awareness according to any one of claims 1 to 2, the system comprising a mobile service robot, and the method comprises the following steps:
step S1: acquiring picture information, wherein the picture information comprises human body information and object information;
step S2: receiving and analyzing the picture information, detecting a human body through bone identification, and acquiring position and orientation information of the human body;
step S3: distinguishing a target object from a non-target object through face recognition;
step S4: planning a motion path of the mobile service robot by using an artificial potential field algorithm according to the position and orientation information of the human body, so that the robot approaches the target object and reasonably avoids non-target objects according to the social criteria.
4. The mobile service robot path planning method with social awareness according to claim 3, wherein: in step S2, the bone identification specifically includes the following steps:
step S21: sending infrared rays and receiving reflected infrared rays, calculating the time difference of the infrared rays to and fro, and collecting a 3D depth image;
step S22: segmenting the 3D depth image, removing background images except for the human body, and converting the residual images into depth values to be used as training samples;
step S23: correspondingly separating training samples by taking the body part as a label and training a classifier until the classifier can identify the class of the body part corresponding to the specified 3D depth image;
step S24: joints of the body part are determined, corresponding joint points are tracked and bones are generated.
5. The mobile service robot path planning method with social awareness according to claim 4, wherein: in step S24, during skeleton generation, the pose information of the human body in the global coordinate system can be obtained from the current pose information of the robot, as shown in fig. 3. The pose of the mobile service robot is expressed as P_F = [x_F, y_F, θ_F]^T, where x_F, y_F and θ_F respectively denote the X-axis coordinate, the Y-axis coordinate and the heading angle of the robot in the global coordinate system; the motion state of the robot is expressed as Q = [v_F, ω_F]^T, where v_F and ω_F respectively denote the linear velocity and the angular velocity of the mobile service robot; the pose of the target human body is expressed as P_L = [x_L, y_L, θ_L]^T, where x_L, y_L and θ_L respectively denote the X-axis coordinate, the Y-axis coordinate and the orientation angle of the target human body in the global coordinate system;
calculating the position of the target human body according to formulas (1) and (2):
x_L = x_F + l_LF · cos(θ_F + θ_K) + d · cos θ_F (1)
y_L = y_F + l_LF · sin(θ_F + θ_K) + d · sin θ_F (2)
where l_LF is the horizontal distance between the target human body and the camera, which, together with x_c and y_c, can be obtained from the depth information of the target detection module's depth camera; θ_K is the azimuth angle of the target human body in the camera coordinate system; and d is the distance from the center of the wheelchair to the camera; the current orientation angle θ_L of the target human body is obtained by detecting human joint points with the bone recognition algorithm.
6. The mobile service robot path planning method with social awareness according to claim 3, wherein: in step S2, the face recognition specifically includes the following steps:
step S31: collecting image flow, selecting useful characteristics in the image flow, and accurately positioning the position of a human face in the image;
step S32: preprocessing a face image, wherein the preprocessing mainly comprises light compensation, gray level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening of the face image;
step S33: obtaining feature data of face image classification according to the shape description of the faces and the distance characteristics among the faces, wherein the feature data comprises Euclidean distance, angle and curvature among feature points;
step S34: and searching and matching the feature data of the extracted face image with a feature template stored in a face database, calculating a similarity value, and outputting a matching result if the similarity value is greater than or equal to a threshold value set by a system.
7. The mobile service robot path planning method with social awareness according to claim 3, wherein: in step S4, the artificial potential field algorithm includes an attractive potential field and a repulsive potential field, the attractive potential field being generated by an attractive potential energy function U_t, expressed as:
where r = ||l||_2, l = P_R − P_T, P_R = [x_R, y_R]^T denotes the position of the robot's center point, P_T = [x_T, y_T]^T denotes the position of the target point, r denotes the distance between the robot's center point and the target point, and n and μ are adjustable parameters. α_o = ε + δ, where ε is a constant and δ is an adjustment term that corrects the robot's attitude while guiding it towards the target object, ensuring that the attitude converges to alignment with the target object's orientation at the moment the robot reaches the target, i.e. the robot approaches the target object head-on. δ = ||l^T B||_2, where B = [cos θ_L, sin θ_L]^T is the direction vector of the target point and θ_L is the orientation angle of the target object in the global coordinate system; ρ_w is the operating-space radius;
the component forces of the gravitational force F_g generated by the gravitational potential field along the X-axis and Y-axis directions can be obtained by computing partial derivatives, with the specific formula as follows:
8. The mobile service robot path planning method with social awareness according to claim 3, wherein: the calculation of the repulsive force specifically comprises the following steps:
step S81: passing the position of the pedestrian at the last momentAnd the position of the pedestrian at that timePresume the pedestrian's position at the next moment
Step S82: recording the displacement of the pedestrian at two adjacent moments asWherein Performing probability analysis on the obstacle to obtain the expected value mu of the obstacle x,i 、μ y,i Variance σ x,i 、σ y,i Covariance σ xy,i And correlation coefficient ρ xy,i ;
Step S83: dividing the coordinate system into a plurality of grids to obtain the probability density function U of each grid op (m,n):
Step S84: and (3) solving partial derivatives in the X-axis and Y-axis directions of the coordinate system to obtain a gradient field, namely a probability potential field:
step S85: according to the potential field gradient, compute the repulsive force generated by the probability potential field as a column vector F_p, whose component forces along the X-axis and Y-axis directions are:
where F_p^X denotes the component force of F_p in the X-axis direction, and F_p^Y denotes the component force of F_p in the Y-axis direction;
Rotating the repulsive force generated by the probability potential field through a certain angle yields the rotational repulsive force, defined as a column vector F_r, which can be expressed as:
F_r = R_v F_p (16)
wherein R is v Is a rotation matrix, which is defined as follows:
where β is a deflection coefficient with value range [0, π/2];
rotational repulsive force F r The component forces in the directions of the X axis and the Y axis are as follows:
According to the force synthesis principle, the attractive force and the two repulsive forces are added to obtain the formula for the resultant force F:
F = F_g + F_p + F_r (20)
the component force of F in the X-axis and Y-axis directions is as follows:
The speed v planned for the mobile service robot by the potential field method and the expected heading angle θ at the next moment are calculated from the magnitude and direction of the resultant force:
where v_0 and d_0 are constants whose values are chosen according to the actual situation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210689092.3A CN115129049B (en) | 2022-06-17 | 2022-06-17 | Mobile service robot path planning system and method with social awareness |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115129049A true CN115129049A (en) | 2022-09-30 |
CN115129049B CN115129049B (en) | 2023-03-28 |
Family
ID=83377835
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210689092.3A Active CN115129049B (en) | 2022-06-17 | 2022-06-17 | Mobile service robot path planning system and method with social awareness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115129049B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104570731A (en) * | 2014-12-04 | 2015-04-29 | 重庆邮电大学 | Uncalibrated human-computer interaction control system and method based on Kinect |
CN106981075A (en) * | 2017-05-31 | 2017-07-25 | 江西制造职业技术学院 | The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods |
CN110853099A (en) * | 2019-11-19 | 2020-02-28 | 福州大学 | Man-machine interaction method and system based on double Kinect cameras |
CN111823228A (en) * | 2020-06-08 | 2020-10-27 | 中国人民解放军战略支援部队航天工程大学 | Indoor following robot system and operation method |
CN112965081A (en) * | 2021-02-05 | 2021-06-15 | 浙江大学 | Simulated learning social navigation method based on feature map fused with pedestrian information |
CN113721633A (en) * | 2021-09-09 | 2021-11-30 | 南京工业大学 | Mobile robot path planning method based on pedestrian trajectory prediction |
EP3929803A1 (en) * | 2020-06-24 | 2021-12-29 | Tata Consultancy Services Limited | System and method for enabling robot to perceive and detect socially interacting groups |
CN113985897A (en) * | 2021-12-15 | 2022-01-28 | 北京工业大学 | Mobile robot path planning method based on pedestrian trajectory prediction and social constraint |
Non-Patent Citations (2)
Title |
---|
PATOMPAK P等: "Mobile robot navigation for human-robot social interaction", 《2016 16TH INTERNATIONAL CONFERENCE ON CONTROL,AUTOMATION AND SYSTEMS(ICCAS)》 * |
何丽等: "服务机器人社会意识导航方法综述", 《计算机工程与应用》 * |
Also Published As
Publication number | Publication date |
---|---|
CN115129049B (en) | 2023-03-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |