Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art and provides a mobile robot autonomous following method based on multi-sensor fusion, so that a mobile robot can autonomously follow a target person.
In order to solve the above technical problems, the invention adopts the following technical scheme. In one aspect, the invention provides a mobile robot autonomous following system based on multi-sensor fusion, which comprises an upper navigation unit, a bottom-layer motion control unit and a power supply unit. The upper navigation unit comprises a two-dimensional laser radar, a router, an AOA beacon system, a camera, an industrial personal computer and a TTL-to-USB module; the bottom-layer motion control unit comprises a robot body, an embedded development board and a photoelectric encoder.
The two-dimensional laser radar detects planar position information within a fixed range and is connected to the industrial personal computer through a LAN (local area network) port of the router, ensuring stable, secure and real-time data transmission between the laser radar and the industrial personal computer. The embedded development board implements the motion control of the robot and is connected to a second LAN port of the router. The industrial personal computer implements the upper-layer navigation planning and is connected by wire to a third LAN port of the router, ensuring that it can send control instructions to the bottom-layer motion control unit and thereby control the speed and direction of the robot. The camera is mounted at the front top of the robot, acquires image information in the current field of view, and is connected to the industrial personal computer, ensuring real-time and effective image transmission. The AOA beacon system comprises an AOA beacon base station and an AOA handheld beacon: the base station is mounted on the robot, the handheld beacon is held by the target person to be followed, and the base station obtains its pose relative to the handheld beacon. The base station is connected to the industrial personal computer through the TTL-to-USB module, so that the industrial personal computer receives the handheld beacon's information in real time and the laser radar and AOA beacon system information can be fused. The wheels of the robot are driven by DC gear motors; the embedded development board is connected to a motor driving module, which drives the DC gear motors, and a photoelectric encoder mounted on the wheel shaft of the robot is connected to the embedded development board to obtain the wheel speed. The industrial personal computer runs a built-in program that implements target-person detection and following-path planning. The power supply unit is connected to the upper navigation unit and the bottom-layer control unit, respectively, and supplies power to the whole system.
Preferably, the power supply unit includes a vehicle-mounted battery and a power management module; the power management module is connected to the vehicle-mounted battery, converts its voltage into the voltages required by the components of the system, and is connected to each component to supply power to the whole system.
Preferably, the robot body adopts a double-wheel differential trolley.
Preferably, the built-in program of the industrial personal computer comprises a personnel detection unit and a following navigation unit and implements the following functions:
(1) processing the person-detection data acquired by the two-dimensional laser radar and the camera;
(2) fusing information from the two-dimensional laser radar, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
(3) assigning a corresponding ID to each target person, storing the data processed from the two-dimensional laser radar and the camera by category, and creating and storing each newly matched object as a tracking object;
(4) planning and calculating the trajectory of the robot toward the target person, and selecting the locally optimal planned trajectory;
(5) sending control instructions to the bottom-layer motion control unit so as to control the speed and direction of the robot.
On the other hand, the invention also provides a mobile robot autonomous following method based on multi-sensor fusion, which comprises the following steps:
step 1, collecting data on persons to be detected through the two-dimensional laser radar and the camera, processing the collected data, and identifying the target person;
the data acquired by the two-dimensional laser radar are processed as follows: first, the laser return points are clustered, points whose mutual distance is smaller than a threshold being grouped into one cluster, and geometric features are generated for each cluster; the geometric features include the number of laser points, the width and length of the cluster, and its distance and angle relative to the laser; a random forest classifier is trained on these geometric features to learn the adaptive features of a human-leg model; features are then extracted from the laser clusters in the laser data and compared with the adaptive features learned by the random forest to detect human legs in the surrounding environment; when the detected distance between two legs is less than 0.4 m, the average of the two leg positions is taken as one merged leg position;
the data collected by the camera are processed as follows: for each frame, gradient values of different pixel blocks are computed with the HOG feature-extraction method, and the extracted features are fed into a support vector machine (SVM) classifier for training on the computed gradient values, yielding adaptive features of the human body; features extracted from the visual data are then compared with the features learned by the classifier to identify the target person in the field of view, thereby realizing person identification;
step 2, matching the pedestrian leg information obtained by the laser and the pedestrian image information obtained by the camera to the corresponding persons with the Hungarian algorithm, the detections of the two sensors being matched according to corresponding rules, so as to obtain a set of fused positions pairing visually recognized and laser-recognized persons; then fusing the information from the two-dimensional laser radar, the camera and the AOA beacon system with an interacting multiple model (IMM) filter based on the Kalman filter (KF), obtaining more accurate positioning information of the target person;
step 3, assigning corresponding IDs to target persons, storing the data processed from the two-dimensional laser radar and the camera by category, creating and storing each newly matched object as a tracking object, and removing targets whose tracking has failed, so as to distinguish different target persons;
step 4, generating a target potential field with the fast marching method (FMM), then adding a directional-gradient-field metric to an improved dynamic window approach (DWA) to constrain the planned trajectory of the robot, so as to select the locally optimal planned trajectory; the specific method is as follows:
first, environmental information is sensed with the laser sensor and a rolling grid map centred on the robot is built; in order to measure the time T for each point of the map to reach the target point, a target potential field is established on the rolling grid map with the FMM algorithm, T(x, y) denoting the time for coordinate point (x, y) to reach the target position; taking the gradient of this potential field yields a directional gradient field, which provides a reference azimuth θ(x, y) for the robot at every coordinate of the map;
in order to select the optimal track of the robot moving to the target, the following evaluation method is adopted:
first, in order to drive the robot effectively toward the target point, a target cost function for evaluating the effectiveness of the robot's motion is introduced:
goal_cost = T(x_e, y_e)·(1 + β·|θ_e − θ_r(x_e, y_e)|)
where goal_cost is the trajectory-effectiveness cost, used to evaluate whether the trajectory moves toward a position with a low arrival-time value; β is the influence factor of the robot's azimuth; (x_e, y_e) are the coordinates of the end position of the robot trajectory; θ_e is the azimuth at the trajectory end point; θ_r(x_e, y_e) is the reference azimuth provided by the directional gradient field at the trajectory end position; and T(x_e, y_e) is the arrival time of the robot at the trajectory end point;
when the difference between the trajectory end-point heading and the reference azimuth provided by the directional gradient field increases, goal_cost is amplified by a corresponding factor, so that trajectories conforming to the reference direction of the vector field are more likely to be selected during trajectory evaluation;
in order to evaluate the cost of steering the trajectory end point toward the target, an angle cost function for evaluating the effectiveness of the robot's heading is introduced:
angle_cost = |θ_s − θ_r(x_s, y_s)| / (T(x_s, y_s) + α·d(x_s, y_s))
where T(x_s, y_s) is the arrival time of the initial point of the robot trajectory; d(x_s, y_s) is the distance from the initial point of the trajectory to the nearest obstacle; and α is an obstacle influence factor used to evaluate the influence of obstacles on the planned path;
the trajectories of the robot moving toward the target point are evaluated by taking the sum of the target cost function and the angle cost function as an overall cost function, and the locally optimal planned trajectory is selected by minimising the overall cost function;
the overall cost function is shown in the following formula:
total_cost = goal_cost + angle_cost
where goal_cost is the target cost function and angle_cost is the angle cost function.
The beneficial effects produced by the above technical scheme are as follows. The invention provides a mobile robot autonomous following method based on multi-sensor fusion that uses a Kalman filtering algorithm to correct the AOA information with the detection information of the laser and vision sensors, eliminating transient oscillations and obtaining a smooth trajectory of the person's movement. Meanwhile, an improved DWA algorithm generates a directional gradient field on top of the target potential field; the gradient field provides a reference azimuth for the robot at every coordinate of the map and is used to measure the validity of the robot's heading. This prevents the robot from moving blindly toward the target without adjusting its orientation angle. The mobile robot can thus follow a person stably in dynamic occlusion environments, including when the target person is occluded by an obstacle. The method is applicable to various robot motion models and different working scenarios, giving it a wider application range and stronger applicability.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
A mobile robot autonomous following system based on multi-sensor fusion is shown in figure 1 and comprises an upper navigation unit, a bottom motion control unit and a power supply unit; the upper navigation unit comprises a two-dimensional laser radar, a router, an AOA beacon system, a camera, an industrial personal computer and a TTL-to-USB module; the bottom layer motion control unit comprises a robot body, an embedded development board and a photoelectric encoder; the robot body adopts a double-wheel differential trolley.
The two-dimensional laser radar detects planar position information within a fixed range and is connected to the industrial personal computer through a LAN (local area network) port of the router, ensuring stable, secure and real-time data transmission between the laser radar and the industrial personal computer. The embedded development board implements the motion control of the robot and is connected to a second LAN port of the router. The industrial personal computer implements the upper-layer navigation planning and is connected by wire to a third LAN port of the router, ensuring that it can send control instructions to the bottom-layer motion control unit and thereby control the speed and direction of the robot. The camera is mounted at the front top of the robot, acquires image information in the current field of view, and is connected to the industrial personal computer, ensuring real-time and effective image transmission. The AOA beacon system comprises an AOA beacon base station and an AOA handheld beacon: the base station is mounted on the robot, the handheld beacon is held by the target person to be followed, and the base station obtains its pose relative to the handheld beacon. The base station is connected to the industrial personal computer through the TTL-to-USB module, so that the industrial personal computer receives the handheld beacon's information in real time and the laser radar and AOA beacon system information can be fused. The wheels of the robot are driven by DC gear motors; the embedded development board is connected to a motor driving module, which drives the DC gear motors, and a photoelectric encoder mounted on the wheel shaft of the robot is connected to the embedded development board to obtain the wheel speed. The power supply unit is connected to the upper navigation unit and the bottom-layer control unit, respectively, and supplies power to the whole system. The power supply unit comprises a vehicle-mounted battery and a power management module; the power management module is connected to the vehicle-mounted battery, converts its voltage into the voltages required by the components of the system, and is connected to each component to supply power to the whole system.
The industrial personal computer runs a built-in program for target-person detection and following-path planning, which comprises a personnel detection unit and a following navigation unit and implements the following functions:
(1) processing the person-detection data acquired by the two-dimensional laser radar and the camera;
(2) fusing information from the two-dimensional laser radar, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
(3) assigning a corresponding ID to each target person, storing the data processed from the two-dimensional laser radar and the camera by category, and creating and storing each newly matched object as a tracking object;
(4) planning and calculating the trajectory of the robot toward the target person, and selecting the locally optimal planned trajectory;
(5) sending control instructions to the bottom-layer motion control unit so as to control the speed and direction of the robot.
In this embodiment, the model of the embedded control board is STM32F407VET6; the industrial personal computer is a GK400; the TTL-to-USB module is a CH340C; the 2D laser radar is a PEPPERL+FUCHS unit; the camera is a color monocular camera of model X_Pro; the base operating system of the industrial personal computer is Ubuntu 16.04 LTS and the secondary operating system is ROS; the vehicle-mounted battery is a Kaimeiwei 12 V 100 Ah lithium battery; the power management module is an SD-50B-12; the motor driving module is a ZLAC706; the router is a NETGEAR R6020; the beacon base station and handheld beacon are AOA devices.
A mobile robot autonomous following method based on multi-sensor fusion is disclosed, as shown in FIG. 2, and comprises the following steps:
step 1, collecting data on persons to be detected through the two-dimensional laser radar and the camera, processing the collected data, and identifying the target person;
the data acquired by the two-dimensional laser radar are processed as follows: first, the laser return points are clustered, points whose mutual distance is smaller than a threshold being grouped into one cluster, and geometric features are generated for each cluster; the geometric features include the number of laser points, the width and length of the cluster, and its distance and angle relative to the laser; a random forest classifier is trained on these geometric features to learn the adaptive features of a human-leg model; features are then extracted from the laser clusters in the laser data and compared with the adaptive features learned by the random forest to detect human legs in the surrounding environment; when the detected distance between two legs is less than 0.4 m, the average of the two leg positions is taken as one merged leg position;
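The clustering and leg-pair merging steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0.15 m clustering gap is an assumed value, while the 0.4 m leg-merging threshold comes from the text.

```python
import numpy as np

def cluster_scan(points, gap=0.15):
    """Split an ordered 2-D scan into clusters wherever the gap between
    consecutive points exceeds a threshold (0.15 m is an assumed value)."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) < gap:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters

def geometric_features(cluster):
    """Features named in the text: point count, cluster width,
    and range/bearing of the cluster centre relative to the laser."""
    centre = cluster.mean(axis=0)
    return {"n_points": len(cluster),
            "width": np.linalg.norm(cluster[-1] - cluster[0]),
            "range": np.linalg.norm(centre),
            "bearing": np.arctan2(centre[1], centre[0])}

def merge_leg_pair(leg_a, leg_b, max_gap=0.4):
    """If two detected legs are closer than 0.4 m, return their mean
    position as one merged detection; otherwise keep them separate."""
    if np.linalg.norm(leg_a - leg_b) < max_gap:
        return [(leg_a + leg_b) / 2.0]
    return [leg_a, leg_b]
```

The feature dictionaries produced here would be the input to the random forest classifier; the classifier itself is omitted.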
the data collected by the camera are processed as follows: for each frame, gradient values of different pixel blocks are computed with the HOG feature-extraction method, and the extracted features are fed into a support vector machine (SVM) classifier for training on the computed gradient values, yielding adaptive features of the human body; features extracted from the visual data are then compared with the features learned by the classifier to identify the target person in the field of view, thereby realizing person identification;
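The core of the HOG step, a gradient-orientation histogram over one pixel block, can be sketched as below. This is only the descriptor-building part, under the conventional choice of 9 unsigned-orientation bins (an assumption; the patent does not state its bin count), and omits the SVM training stage.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Gradient-orientation histogram for one pixel block: the building
    block of the HOG descriptor mentioned in the text."""
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]   # horizontal gradient
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]   # vertical gradient
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-6)   # L2 normalisation
```

In the full pipeline, histograms from all blocks of a detection window are concatenated into one feature vector and fed to the SVM.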
step 2, matching the pedestrian leg information obtained by the laser and the pedestrian image information obtained by the camera to the corresponding persons with the Hungarian algorithm, the detections of the two sensors being matched according to corresponding rules, so as to obtain a set of fused positions pairing visually recognized and laser-recognized persons; then fusing the information from the two-dimensional laser radar, the camera and the AOA beacon system with an interacting multiple model (IMM) filter based on the Kalman filter (KF), obtaining more accurate positioning information of the target person;
the Hungarian algorithm is a combinatorial optimization algorithm for solving a task allocation problem in polynomial time. The algorithm matches the information identified by the two according to the corresponding rules, so as to obtain a group of fusion positions of the corresponding visual identification personnel and the laser identification personnel. The obtained fusion data is shown in the following formula, wherein n is the number of people detected at the time t. The different person positions detected at time t are denoted as
The IMM algorithm introduces multiple target motion models and is adaptive: it can effectively adjust the probability of each model and weights each model's state estimate by the corresponding probability, realizing tracking of the moving target. The IMM algorithm comprises several filters, an interactor, a model-probability updater and an estimate mixer; the multiple models track the target's maneuvering through interaction, and transitions between models are governed by a Markov probability transition matrix. A KF is used as the filter inside the IMM algorithm, several motion models are established for the target person, and the KF-IMM algorithm fuses the information from the laser, vision and AOA tag.
One key factor of the IMM algorithm is determining the target motion model, which should reflect the actual motion of the target as faithfully as possible. The invention studies the motion model of the target person by taking the person's common movements as examples.
A general motion model is divided into a prediction process and an update process. For the target person they are expressed as:
X(k)=F(k-1)X(k-1)+W(k-1)
Z(k)=hX(k)+V(k)
where k denotes the sampling instant; X(k) ∈ R^n is the state vector of the prediction process; F is the n-dimensional system transfer matrix; Z(k) ∈ R^m is the measurement vector of the update process; W(k) ~ N(0, Q) and V(k) ~ N(0, R) are Gaussian process noise and measurement noise, respectively. The state vector and the measurement vector differ according to the selected model.
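The prediction and update equations above correspond to the two standard Kalman filter steps, sketched below with the measurement matrix written as H (the text writes it h):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step of X(k) = F X(k-1) + W(k-1):
    propagate the state and inflate the covariance by the process noise."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Update step of Z(k) = H X(k) + V(k):
    correct the prediction with the measurement z via the Kalman gain."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Running predict/update repeatedly against a constant measurement drives the state estimate toward that measurement, which is the behaviour the IMM filters below rely on.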
In this embodiment, the motion of the followed target person is modeled with three motion models: the constant-velocity model (CV), the constant-acceleration model (CA) and the constant-turn model (CT).
In the dynamic models of the CV, CA and CT motion models, x and y denote the position of the target person, ẋ and ẏ the velocity, ẍ and ÿ the acceleration, ω the turn rate, w(t) Gaussian white noise, and T the sampling period.
(1) CV model. The state variables X = [x, ẋ, y, ẏ]^T are selected, and ẍ and ÿ are treated as random noise, i.e. ẍ = w_x(t), ÿ = w_y(t). The prediction-process state equation of the CV model is:
X(k) = F_CV(k-1)X(k-1) + W(k-1)
where F_CV = diag{A, A}, A is the 2×2 Newton matrix, and W(k-1) = [W_x, W_y]_{k-1} is zero-mean Gaussian white noise.
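The CV transition matrix F_CV = diag{A, A} can be built as below; the sampling period of 0.1 s is an assumed value, and the "Newton matrix" is taken here in its usual sense of an upper-triangular integrator matrix with T^k/k! terms.

```python
import math
import numpy as np

def newton_matrix(n, T):
    """n-by-n upper-triangular Newton (integrator) matrix with T^k/k!
    terms, so position integrates velocity (and, for n=3, acceleration)."""
    A = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = T ** (j - i) / math.factorial(j - i)
    return A

# CV model: state [x, x_dot, y, y_dot]; F_CV = diag{A, A} with A 2x2.
T = 0.1  # sampling period (assumed)
A2 = newton_matrix(2, T)
F_CV = np.block([[A2, np.zeros((2, 2))],
                 [np.zeros((2, 2)), A2]])
```

One step of F_CV advances each position by velocity times T while leaving the velocities unchanged, which is exactly constant-velocity motion.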
(2) CT model. The state variables X = [x, ẋ, y, ẏ, ω]^T are selected, and ẍ and ÿ are treated as random noise, i.e. ẍ = w_x(t), ÿ = w_y(t), where w(t) is a white-noise process. The prediction-process state equation of the CT model is:
X(k) = F_CT(k-1)X(k-1) + W(k-1)
where W(k-1) = [W_x, W_y, W_ω] is zero-mean Gaussian white noise.
(3) CA model. The state variables X = [x, ẋ, ẍ, y, ẏ, ÿ]^T are selected, and the derivatives of the accelerations are treated as random noise, i.e. ẍ̇ = w_x(t), ÿ̇ = w_y(t), where w(t) is a white-noise process. The prediction-process state equation of the CA model is:
X(k) = F_CA(k-1)X(k-1) + W(k-1)
where F_CA = diag{A, A}, A is the 3×3 Newton matrix, and W(k-1) = [W_x, W_y]_{k-1} is zero-mean Gaussian white noise.
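The CA transition matrix uses the 3×3 Newton matrix, so each position also picks up the ½T² acceleration term. A minimal sketch (sampling period assumed):

```python
import numpy as np

T = 0.1  # sampling period (assumed)
# CA model: state [x, x_dot, x_ddot, y, y_dot, y_ddot]
A3 = np.array([[1.0, T, T**2 / 2.0],
               [0.0, 1.0, T],
               [0.0, 0.0, 1.0]])   # 3x3 Newton matrix
F_CA = np.kron(np.eye(2), A3)      # block-diagonal diag{A, A}

x = np.array([0.0, 1.0, 2.0, 0.0, 0.0, 0.0])  # moving and accelerating in x
x_next = F_CA @ x
```

After one step, the x-position advances by v·T + ½a·T² = 0.1 + 0.01 = 0.11 and the x-velocity by a·T = 0.2.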
In the prediction phase, each motion model is assigned an initial probability value. The initial probability of model i is denoted μ_i(0), where 0 denotes the initial time and μ_i(t-1) denotes the probability of model i at time t-1; i = 0, 1, 2 corresponds in order to the three motion models (CA, CT, CV), and Σ_i μ_i = 1.
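One cycle of the IMM model-probability update can be sketched as follows. The transition matrix values and equal initial probabilities are assumptions for illustration; the patent only specifies that transitions are governed by a Markov matrix and that the probabilities sum to 1.

```python
import numpy as np

def imm_model_probabilities(mu, Pi, likelihood):
    """One IMM cycle for the model probabilities: mix the previous
    probabilities through the Markov transition matrix Pi, weight by each
    filter's measurement likelihood, and renormalise to sum to 1."""
    predicted = Pi.T @ mu              # mixing via the transition matrix
    mu_new = predicted * likelihood    # weight by filter likelihoods
    return mu_new / mu_new.sum()

mu0 = np.array([1/3, 1/3, 1/3])        # equal initial probabilities (CA, CT, CV)
Pi = np.array([[0.90, 0.05, 0.05],     # assumed Markov transition matrix
               [0.05, 0.90, 0.05],
               [0.05, 0.05, 0.90]])
# If the CV filter explains the measurement best, its probability grows:
mu1 = imm_model_probabilities(mu0, Pi, likelihood=np.array([0.1, 0.2, 0.7]))
```

The updated probabilities then weight the per-model state estimates in the estimate mixer.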
The procedure for locating the target person in this embodiment is as follows. The position obtained from the AOA tag is used as the initial value of the person's state, which is represented by the coordinates (x, y) of the person in the global coordinate system. The prediction step of the Kalman filter estimates the three person-motion models; from each model the corresponding state-transition matrix F ∈ {F_CA, F_CV, F_CT} and motion noise W are obtained. The information from the two-dimensional laser radar, the camera and the AOA beacon system is fused in two steps to obtain more accurate positioning information of the target person. In the first step, the filter is updated with the AOA tag information: the position obtained from the AOA tag fluctuates considerably, so the data are smoothed by sliding filtering before being used for the first update, and the corresponding observation noise matrix is generally set small, because although the raw AOA information fluctuates greatly, the filtered value does not deviate far from the true value. In the second step, the laser and vision sensor data are fused with the Hungarian algorithm, and the fused positions z_t are used as observations of the target person: the value in z_t closest to the filtered AOA position is selected, and if its distance to that position is less than 0.8 m, a second update is performed; the size of the second measurement noise matrix is related to the distance between the AOA position and the closest position datum in z_t. Taking uniform linear motion of the robot as an example, the pseudo-code of the Kalman filter fusing the AOA information with the laser and vision information is shown in Table 1.
TABLE 1 Kalman Filter Algorithm pseudo-code
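Table 1's pseudo-code is not reproduced in this text. A minimal Python sketch of the two-stage update it describes, under assumed noise settings and a simplified 2-D position-only state (the patent's filter uses the full CA/CV/CT states), is:

```python
import numpy as np

def fuse_step(x, P, F, Q, z_aoa, R_aoa, z_fused_list, R_base, gate=0.8):
    """One cycle of the two-stage update: predict, correct with the
    sliding-filtered AOA position, then, if a fused laser/vision position
    lies within the 0.8 m gate of the AOA estimate, correct again."""
    H = np.eye(2)
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # First update: AOA tag position (noise matrix set small).
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_aoa)
    x = x + K @ (z_aoa - H @ x)
    P = (np.eye(2) - K @ H) @ P
    # Second update: nearest fused laser/vision position inside the gate.
    if len(z_fused_list):
        z_arr = np.asarray(z_fused_list)
        d = np.linalg.norm(z_arr - z_aoa, axis=1)
        if d.min() < gate:
            z2 = z_arr[d.argmin()]
            R2 = R_base * (1.0 + d.min())  # assumed distance-scaled noise
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R2)
            x = x + K @ (z2 - H @ x)
            P = (np.eye(2) - K @ H) @ P
    return x, P
```

The distance scaling of the second noise matrix is one plausible reading of "related to the distance"; the patent does not give the exact function.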
Step 3, assigning corresponding IDs to target persons, storing the data processed from the two-dimensional laser radar and the camera by category, creating and storing each newly matched object as a tracking object, and removing targets whose tracking has failed, so as to distinguish different target persons;
step 4, generating a target potential field with the fast marching method (FMM), then adding a directional-gradient-field metric to an improved dynamic window approach (DWA) to constrain the planned trajectory of the robot, so as to select the locally optimal planned trajectory, as shown in fig. 3; the specific method is as follows:
first, environmental information is sensed with the laser sensor and a rolling grid map centred on the robot is built; in order to measure the time T for each point of the map to reach the target point, a target potential field is established on the rolling grid map with the FMM algorithm, T(x, y) denoting the time for coordinate point (x, y) to reach the target position; taking the gradient of this potential field yields a directional gradient field, which provides a reference azimuth θ(x, y) for the robot at every coordinate of the map, as shown in fig. 4;
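A minimal sketch of the arrival-time field and its reference azimuths follows. For simplicity it uses Dijkstra wavefront expansion on a 4-connected grid as a stand-in for the FMM solution of the Eikonal equation; the grid encoding (1 = obstacle) is an assumption.

```python
import heapq
import numpy as np

def travel_time_field(grid, goal, speed=1.0):
    """Approximate arrival-time field T(x, y) on a rolling grid map via
    Dijkstra wavefront expansion from the goal cell."""
    T = np.full(grid.shape, np.inf)
    T[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > T[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]
                    and grid[ni, nj] == 0):
                nt = t + 1.0 / speed
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(pq, (nt, (ni, nj)))
    return T

def reference_azimuth(T):
    """Directional gradient field: the reference azimuth theta(x, y)
    points down the gradient of T, i.e. toward the target."""
    gy, gx = np.gradient(T)          # derivatives along rows, columns
    return np.arctan2(-gy, -gx)      # downhill direction
```

On an obstacle-free map the field is simply grid distance to the goal, and the azimuths all point back toward it.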
in order to select the optimal track of the robot moving to the target, the following evaluation method is adopted:
first, in order to drive the robot effectively toward the target point, a target cost function for evaluating the effectiveness of the robot's motion is introduced:
goal_cost = T(x_e, y_e)·(1 + β·|θ_e − θ_r(x_e, y_e)|)
where goal_cost is the trajectory-effectiveness cost, used to evaluate whether the trajectory moves toward a position with a low arrival-time value; β is the influence factor of the robot's azimuth; (x_e, y_e) are the coordinates of the end position of the robot trajectory; θ_e is the azimuth at the trajectory end point; θ_r(x_e, y_e) is the reference azimuth provided by the directional gradient field at the trajectory end position; and T(x_e, y_e) is the arrival time of the robot at the trajectory end point;
when the difference between the trajectory end-point heading and the reference azimuth provided by the directional gradient field increases, goal_cost is amplified by a corresponding factor, so that trajectories conforming to the reference direction of the vector field are more likely to be selected during trajectory evaluation;
the essence of the introduced target cost function for evaluating the effectiveness of the robot's motion is to model the motion process of the robot, as shown in fig. 5.
In order to evaluate the cost of the motion track endpoint towards the target, an angle cost function for evaluating the effectiveness of the motion direction of the robot is introduced, and the following formula is shown:
wherein, T (x)s,ys) The arrival time of the initial point of the motion track of the robot is T (x)s,ys) When small, the angle _ cost is rapidly increased, so that the robot is more inclined to select a track conforming to the reference direction of the directional gradient field; d (x)s,ys) The distance d (x) of the nearest obstacle at the initial point of the motion track of the robots,ys) Very muchWhen the robot is small, the robot quickly adjusts the movement direction to the reference direction of the directional gradient field to avoid falling into the predicament of the obstacle, and alpha is an obstacle influence factor and is used for evaluating the influence of the obstacle on the planned path;
The trajectories of the robot moving toward the target point are evaluated by taking the sum of the target cost function and the angle cost function as the overall cost function, and the locally optimal planned trajectory is selected by minimising it, as shown in fig. 6. In fig. 6(a), the direction indicated by the middle arrow differs from the reference direction provided by the gradient field, so the cost of that trajectory is amplified by a corresponding factor. Although the direction indicated by the lower arrow leads away from the target point, the final heading of that trajectory differs little from the reference direction of the gradient field, so its cost is judged lower than that of the trajectory of the middle arrow. When the robot falls into a local optimum and all sampled simulated trajectories collide with the obstacle, the proposed method can escape from this situation. In fig. 6(b), when the robot is too close to the obstacle, the three forward simulated trajectories all hit the obstacle in front; as for the backward simulated trajectory, the conventional DWA algorithm considers the current position closer to the target point, with a lower potential field, and therefore does not adopt backward motion. In the proposed method, when the cost is evaluated, the larger difference between the heading at the current position and the reference direction of the gradient field increases the cost value, while the lower-left backward trajectory is consistent with the reference direction of the gradient field; although its distance to the target point increases, its cost is still lower than that of the current position, so the robot chooses to escape backward.
When the robot's azimuth has been adjusted to be consistent with the reference direction of the gradient field, as shown in fig. 6(c), the robot advances toward the target point along the direction indicated by the middle arrow.
The overall cost function is shown in the following formula:
total_cost = goal_cost + angle_cost
where goal_cost is the target cost function and angle_cost is the angle cost function.
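The trajectory evaluation can be sketched as follows. The exact cost formulas in the patent are supplied as figures, so the expressions below are plausible reconstructions from the surrounding definitions (arrival time amplified by heading deviation at the end point; heading deviation at the start point divided by arrival time and obstacle distance), with β = α = 1 as assumed defaults.

```python
def goal_cost(T_e, theta_e, theta_ref_e, beta=1.0):
    """Trajectory-effectiveness cost (sketch): arrival time at the
    trajectory end point, amplified when the end-point heading deviates
    from the reference azimuth of the directional gradient field."""
    return T_e * (1.0 + beta * abs(theta_e - theta_ref_e))

def angle_cost(theta_s, theta_ref_s, T_s, d_s, alpha=1.0):
    """Heading-effectiveness cost (sketch): grows quickly when the start
    point is near the goal (small T_s) or near an obstacle (small d_s)."""
    return abs(theta_s - theta_ref_s) / (T_s + alpha * d_s + 1e-6)

def total_cost(traj, beta=1.0, alpha=1.0):
    """total_cost = goal_cost + angle_cost; the trajectory minimising
    this sum is chosen as the locally optimal plan."""
    return (goal_cost(traj["T_e"], traj["theta_e"], traj["theta_ref_e"], beta)
            + angle_cost(traj["theta_s"], traj["theta_ref_s"],
                         traj["T_s"], traj["d_s"], alpha))
```

Under these definitions a trajectory aligned with the gradient-field reference direction always scores lower than an otherwise identical misaligned one, which is the selection behaviour the text describes.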
The FMM algorithm solves the interface propagation problem by computing, with a numerical method, the viscosity solution of the Eikonal equation. The Eikonal equation is shown as follows:
|∇T(x)|*W(x)=1
where x denotes a point in the search space; in the two-dimensional case, x=(x, y). T(x) is the arrival time of the interface from the start to point x, and W(x) is the local propagation speed of the interface at point x. By discretizing the gradient of T(x), the equation can be solved at every point x in space, where x corresponds to the grid cell in row i and column j of the grid-represented planning space.
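A minimal first-order Fast Marching Method on a unit grid can be sketched as below. The grid size, unit speed field, and function names are illustrative assumptions, not the invention's implementation; the update rule is the standard first-order upwind discretization of the Eikonal equation.

```python
import heapq

def fmm(grid_w, grid_h, speed, sources):
    """First-order FMM: speed[i][j] = W(x); sources = list of (row, col)."""
    INF = float("inf")
    T = [[INF] * grid_w for _ in range(grid_h)]
    frozen = [[False] * grid_w for _ in range(grid_h)]
    heap = []
    for (i, j) in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid_h and 0 <= nj < grid_w and not frozen[ni][nj]:
                # Upwind neighbours along each axis
                tx = min(T[ni][nj - 1] if nj > 0 else INF,
                         T[ni][nj + 1] if nj < grid_w - 1 else INF)
                ty = min(T[ni - 1][nj] if ni > 0 else INF,
                         T[ni + 1][nj] if ni < grid_h - 1 else INF)
                h = 1.0 / speed[ni][nj]  # local slowness (grid spacing = 1)
                a, b = sorted((tx, ty))
                # One-sided update when the quadratic has no valid root
                if b - a >= h:
                    new_t = a + h
                else:
                    new_t = 0.5 * (a + b + (2 * h * h - (a - b) ** 2) ** 0.5)
                if new_t < T[ni][nj]:
                    T[ni][nj] = new_t
                    heapq.heappush(heap, (new_t, ni, nj))
    return T

# With unit speed, the arrival time approximates Euclidean distance
# from the source cell, which is what the gradient field is built on.
T = fmm(5, 5, [[1.0] * 5 for _ in range(5)], [(0, 0)])
```

The negative gradient of the resulting T field supplies the reference direction used by the angle cost function described above.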
The DWA algorithm is a classic online local path planning method and works well in dynamic, uncertain environments. Its main idea is to sample a plurality of groups of velocities in the velocity space (v, w) and simulate the trajectories of the robot at those velocities over a certain time. After a plurality of groups of trajectories are obtained, the trajectories are evaluated, and the velocity corresponding to the optimal trajectory is selected to drive the robot. The algorithm is characterized by the dynamic window, meaning that the velocities are limited to the dynamically feasible range according to the acceleration and deceleration performance of the mobile robot. To simulate the trajectory of the robot, the motion model of the robot needs to be known. The two-wheel differential mobile robot adopted in this embodiment can only advance, retreat, and rotate. Considering two adjacent moments, the motion distance is short, and the trajectory between two adjacent points can be regarded as a straight line, namely the robot moves v_t*Δt along its own coordinate system; projecting this onto the world coordinate system yields the coordinate change in the world frame. Suppose the robot has pose (x_t, y_t, θ_t) at time t; the pose of the robot at time t+1 is calculated according to the following formulas:
x_{t+1}=x_t+v_t*Δt*cosθ_t
y_{t+1}=y_t+v_t*Δt*sinθ_t
θ_{t+1}=θ_t+w_t*Δt
A plurality of velocity groups are sampled in the velocity space of the robot, the expected poses of the robot at the different velocities are calculated, and the simulated trajectories of the robot are generated.
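The motion model and the dynamic-window sampling can be sketched as follows. The time step, limits, and sample count are illustrative assumptions; the update equations are exactly those given above.

```python
import math

def simulate_trajectory(pose, v, w, dt=0.1, steps=10):
    """Roll the pose forward with x_{t+1}=x_t+v*dt*cos(theta_t), etc."""
    x, y, th = pose
    traj = [(x, y, th)]
    for _ in range(steps):
        x += v * dt * math.cos(th)
        y += v * dt * math.sin(th)
        th += w * dt
        traj.append((x, y, th))
    return traj

def sample_velocities(v_cur, w_cur, a_max, aw_max, dt,
                      v_lim=(0.0, 1.0), w_lim=(-1.0, 1.0), n=5):
    """Dynamic window: only velocities reachable within one control period
    given the robot's acceleration limits are sampled."""
    v_lo = max(v_lim[0], v_cur - a_max * dt)
    v_hi = min(v_lim[1], v_cur + a_max * dt)
    w_lo = max(w_lim[0], w_cur - aw_max * dt)
    w_hi = min(w_lim[1], w_cur + aw_max * dt)
    vs = [v_lo + i * (v_hi - v_lo) / (n - 1) for i in range(n)]
    ws = [w_lo + i * (w_hi - w_lo) / (n - 1) for i in range(n)]
    return [(v, w) for v in vs for w in ws]

# Straight-line rollout: 0.5 m/s for 1 s moves the robot 0.5 m forward.
traj = simulate_trajectory((0.0, 0.0, 0.0), v=0.5, w=0.0, dt=0.1, steps=10)
vels = sample_velocities(v_cur=0.5, w_cur=0.0, a_max=0.2, aw_max=1.0, dt=0.1)
```

Each sampled (v, w) pair yields one simulated trajectory, which is then scored by the overall cost function described earlier.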
In this embodiment, when the target person is not occluded by an obstacle, a comparison experiment is performed between the trajectory obtained by fusing the AOA, laser, and camera information through Kalman filtering and the person trajectory obtained using only AOA information. When the robot follows the target person, the person position trajectory obtained using only the AOA tag and that obtained by Kalman filter fusion are shown in fig. 7. It can be seen from the figure that when only AOA information is used, the estimate of the person's pose is inaccurate and sometimes jitters severely. The Kalman filtering algorithm corrects the AOA information with the detection information of the laser and the camera, eliminating the excessive oscillations and yielding a smooth person trajectory.
In this embodiment, the result of person position detection using only the laser and the camera is compared with the result of Kalman-filter-fused person position detection, as shown in fig. 8. It can be seen from the figure that the trajectory obtained by laser and camera detection is substantially the same as that obtained by Kalman filtering, but the Kalman-filtered trajectory is smoother.
When the target person is occluded by an obstacle, tracking that relies purely on laser and visual information often fails. According to the invention, fusing the laser and visual information through Kalman filtering can effectively alleviate the severe fluctuation of the AOA signal values. The AOA tag has a unique ID, which provides an initial value for laser identification, avoids misidentification of other pedestrians, and offers high reliability. By fusing the laser, vision, and AOA information through Kalman filtering to obtain the trajectory, and then using the AOA information to match the human legs detected by the laser with the tracked person, the method can effectively handle the case where the laser and vision cannot detect the person because the target person is occluded by an obstacle. Even in a multi-obstacle environment, the system and method of the invention can effectively detect the person and achieve a smooth tracking trajectory, as shown in fig. 9, wherein the black squares in the figure are obstacles.
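The fusion idea can be sketched with a minimal constant-position Kalman filter that weights each sensor by its measurement noise. The class name, noise values, and diagonal-covariance simplification are illustrative assumptions, not the patent's tuned filter: the point is only that the low-noise laser/camera fix corrects the jittery AOA estimate while the AOA ID keeps the track anchored to the right person.

```python
class PositionKF:
    """Minimal constant-position Kalman filter for a 2-D person position.
    Noise values are illustrative, not the invention's parameters."""

    def __init__(self, x0, y0, q=0.05, r_aoa=0.5, r_laser=0.1):
        self.x = [x0, y0]               # state estimate (px, py)
        self.p = [1.0, 1.0]             # diagonal covariance
        self.q = q                      # process noise
        self.r = {"aoa": r_aoa, "laser": r_laser}  # per-sensor noise

    def update(self, zx, zy, sensor):
        """Predict with a random-walk model, then correct with measurement z."""
        r = self.r[sensor]
        out = []
        for i, z in enumerate((zx, zy)):
            p = self.p[i] + self.q      # predict: covariance grows
            k = p / (p + r)             # Kalman gain: low r -> strong pull
            self.x[i] += k * (z - self.x[i])
            self.p[i] = (1 - k) * p
            out.append(self.x[i])
        return tuple(out)

kf = PositionKF(0.0, 0.0)
est = kf.update(1.0, 1.0, "laser")      # precise laser/camera fix
est2 = kf.update(1.0, 1.0, "aoa")       # noisier AOA fix pulls more gently
```

Because r_laser < r_aoa, a laser/camera detection moves the estimate strongly toward the measurement, while a noisy AOA reading is only partially trusted, which is what smooths the jitter seen in fig. 7.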
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.