CN109947119B - Mobile robot autonomous following method based on multi-sensor fusion - Google Patents

Mobile robot autonomous following method based on multi-sensor fusion

Publication number: CN109947119B
Application number: CN201910326362.2A
Authority: CN (China)
Prior art keywords: robot, target, track, aoa, laser
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109947119A
Inventors: 方正, 周思帆, 曾杰鑫, 张伟义
Current assignee: Ruige Intelligent Technology Shenyang Co., Ltd.
Original assignee: Northeastern University China
Priority date / filing date: 2019-04-23; application filed by Northeastern University China
Publication of CN109947119A: 2019-06-28; application granted and publication of CN109947119B: 2021-06-29
Abstract

The invention provides a mobile robot autonomous following method based on multi-sensor fusion, and relates to the technical field of industrial automation. The system comprises an upper-layer navigation unit, a bottom-layer motion control unit and a power supply unit. The upper-layer navigation unit obtains positioning information of a target person through its sensors, plans candidate motion trajectories of the robot toward the target person, selects the planned trajectory with the locally optimal path, and sends control instructions to the bottom-layer motion control unit; the bottom-layer motion control unit drives the robot toward the target person according to these instructions; the power supply unit supplies power to the whole system. A specific autonomous following method based on this system is also provided. The mobile robot autonomous following method based on multi-sensor fusion achieves stable autonomous person following in dynamic occlusion environments, including the case where the target person is occluded by obstacles.

Description

Mobile robot autonomous following method based on multi-sensor fusion
Technical Field
The invention relates to the technical field of industrial automation, in particular to a mobile robot autonomous following method based on multi-sensor fusion.
Background
With the gradual transition of robots from industrial environments to home or personal applications, direct human-robot interaction has become a new area of research in which detecting and following a human is a necessary skill. In recent years, following robots have been the subject of continuous in-depth research and application, and are widely used in environments such as hospitals, shopping malls and battlefields to help the general public complete simple tasks in daily life or working environments. To achieve fast and stable pedestrian following, the robot must be able to obtain accurate position information of the target person.
Commonly used positioning technologies cover both indoor and outdoor settings: positioning based on simultaneous localization and mapping (SLAM), indoor and outdoor positioning based on GPS, positioning based on indoor beacon systems, and so on.
In the present application scenario of small-range person identification and detection, GPS is too costly and insufficiently accurate, so localization is usually performed by SLAM or a beacon base-station system. Within SLAM, vision-based methods locate a person by detecting pedestrians in the image; the cost is low and the related algorithms are mature. However, the camera's limited field of view means a pedestrian can easily leave the view and cause tracking failure, and visual sensors share a common weakness: they are sensitive to lighting conditions and poorly suited to outdoor environments. Laser sensors used in SLAM have the advantage of a wide field of view and can scan the full 360-degree range, so they have many applications in person detection. However, laser-based person detection uses only contour information and cannot distinguish different people, so false identification easily occurs in crowded scenes with strong interference; nor does it handle the case where the followed person is occluded by an obstacle. A beacon system has the advantage that a person can hold a beacon in hand and relative position information can be obtained through an onboard beacon base station; it remains applicable when the person is occluded by obstacles and cannot be identified by visual or laser sensors, but the beacon signal sometimes drifts considerably and the data are not stable enough.
Disclosure of Invention
Aiming at the shortcomings of the prior art, the invention provides a mobile robot autonomous following method based on multi-sensor fusion, so that a mobile robot can autonomously follow a target person.
In order to solve the above technical problems, the technical scheme adopted by the invention is as follows. In one aspect, the invention provides a mobile robot autonomous following system based on multi-sensor fusion, which comprises an upper-layer navigation unit, a bottom-layer motion control unit and a power supply unit. The upper-layer navigation unit comprises a two-dimensional laser radar, a router, an AOA beacon system, a camera, an industrial personal computer and a TTL-to-USB module; the bottom-layer motion control unit comprises a robot body, an embedded development board and a photoelectric encoder;
the two-dimensional laser radar is used for detecting planar contour position information within a fixed range and is connected to a LAN (local area network) port of the router, ensuring stable, safe and real-time data transmission between the laser radar and the industrial personal computer; the embedded development board is used for realizing the motion control of the robot and is connected to a second LAN port of the router; the industrial personal computer is used for realizing the upper-layer navigation planning and is connected by wire to a third LAN port of the router, ensuring that the industrial personal computer can send control instructions to the bottom-layer motion control unit and thereby control the speed and direction of the robot's motion; the camera is mounted at the top front of the robot, is used for acquiring image information in the current field of view, and is connected to the industrial personal computer, ensuring real-time and effective image transmission; the AOA beacon system comprises an AOA beacon base station and an AOA handheld beacon, the AOA beacon base station being installed on the robot and the AOA handheld beacon being held by the target person to be followed, so that the AOA beacon base station obtains the pose of the handheld beacon relative to itself; the AOA beacon base station is connected to the industrial personal computer through the TTL-to-USB module, so that the industrial personal computer receives the AOA handheld beacon information in real time and data fusion of the laser radar and AOA beacon system information can be realized; the robot's drive wheels are driven by DC gear motors; the embedded development board is connected to a motor driver module, the motor driver module is connected to the DC gear motors, and a photoelectric encoder is installed at each wheel axle of the robot and connected to the embedded development board to obtain the wheel speeds of the robot; the industrial personal computer runs a built-in program for realizing target person detection and following-path planning; and the power supply unit is connected to the upper-layer navigation unit and the bottom-layer control unit respectively and supplies power to the whole system.
Preferably, the power supply unit comprises a vehicle-mounted battery and a power management module; the power management module is connected to the vehicle-mounted battery, converts the battery voltage into the voltages required by the components of the system, and is connected to each component to supply power to the whole system.
Preferably, the robot body adopts a double-wheel differential trolley.
Preferably, the built-in program of the industrial personal computer comprises a person detection unit and a following navigation unit, which realize the following functions:
(1) processing the person-detection data acquired by the two-dimensional laser radar and the camera;
(2) fusing information from the two-dimensional laser, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
(3) assigning a corresponding ID to each target person, storing the processed two-dimensional laser and camera data by category, and creating and storing each newly matched object as a tracking object;
(4) planning candidate motion trajectories of the robot toward the target person and selecting the planned trajectory with the locally optimal path;
(5) sending control instructions to the bottom-layer motion control unit to control the speed and direction of the robot's motion.
On the other hand, the invention also provides a mobile robot autonomous following method based on multi-sensor fusion, which comprises the following steps:
step 1, collecting data of persons to be detected with the two-dimensional laser radar and the camera, processing the collected data, and identifying the target person;
the method for processing the data acquired by the two-dimensional laser radar is as follows: first, the points returned by the laser are clustered, points whose spacing is smaller than a certain threshold are grouped into clusters, and geometric features are generated for each cluster; the geometric features comprise the number of laser points, the width of the laser cluster, and its distance and angle relative to the laser; a random forest classifier is trained on these geometric features to obtain the adapted features of a human-leg model; then features are extracted from the laser clusters in the laser data and compared with the random-forest-trained adapted features to detect human legs in the surrounding environment; when the detected distance between two legs is less than 0.4 m, the average of the two leg positions is taken as the position of a new merged leg;
the method for processing the data collected by the camera is as follows: an HOG feature extraction method is used to calculate gradient values of different pixel blocks in each image frame, and the extracted features are then fed, according to the calculated gradient values, into a support vector machine (SVM) classifier for training, yielding the adapted features of the human body; then the features extracted from the visual data are compared with the classifier-trained features to identify the target person in the field of view, thereby realizing person identification;
step 2, performing information matching of corresponding persons between the pedestrian leg information and pedestrian image information obtained by the laser and the camera using the Hungarian algorithm, which matches the detections recognized by the two sensors according to the corresponding rules, thereby obtaining a group of fused positions of persons recognized both visually and by laser; then, an Interactive Multiple Model (IMM) filter based on the Kalman Filter (KF) is used to fuse the information from the two-dimensional laser, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
step 3, assigning a corresponding ID to each target person, storing the processed two-dimensional laser and camera data by category, creating each newly matched object as a tracking object and storing it, and removing targets whose tracking has failed, so as to distinguish different target persons;
step 4, generating a target potential field using the Fast Marching Method (FMM), and then adding a directional-gradient-field measurement index to an improved Dynamic Window Algorithm (DWA) to constrain the planned trajectories of the robot and thereby select the planned trajectory with the locally optimal path; the specific method is as follows:
first, environmental information is sensed with the laser sensor and a rolling grid map centered on the robot is established; to measure the time T for each point in the map to reach the target point, a target potential field is established on the rolling grid map with the FMM algorithm, with T(x, y) denoting the time for coordinate point (x, y) to reach the target position; taking the gradient of the potential field yields a directional gradient field, which provides a reference azimuth θ(x, y) for the robot at each coordinate on the map;
in order to select the optimal trajectory for the robot to move toward the target, the following evaluation method is adopted:
firstly, in order to make the robot move effectively toward the target point, a goal cost function for evaluating the validity of the robot's motion is introduced, as shown in the following formula:
goal_cost = T(x_e, y_e)·(1 + β·|θ_e − θ_r(x_e, y_e)|)
wherein goal_cost is the trajectory-validity cost, used to evaluate whether the trajectory moves to a position with a low value of the arrival-time field; β is the robot azimuth influence factor; (x_e, y_e) are the coordinates of the end position of the robot trajectory; θ_e is the azimuth of the robot at the trajectory end point; θ_r(x_e, y_e) is the reference azimuth provided by the directional gradient field at the trajectory end position; and T(x_e, y_e) is the time for the robot to reach the trajectory end point;
when the difference between the trajectory end direction and the reference azimuth provided by the directional gradient field increases, goal_cost is amplified by a corresponding factor, so that trajectories conforming to the reference direction of the vector field are preferentially selected during trajectory evaluation;
in order to evaluate the cost of the motion track endpoint towards the target, an angle cost function for evaluating the effectiveness of the motion direction of the robot is introduced, and the following formula is shown:
Figure GDA0002985916140000041
wherein, T (x)s,ys) Is the arrival time of the initial point of the motion track of the robot, d (x)s,ys) The distance of the nearest barrier of the initial point of the motion track of the robot is taken as alpha, which is a barrier influence factor and is used for evaluating the influence of the barrier on the planned path;
the quality of candidate trajectories of the robot moving toward the target point is evaluated with the sum of the goal cost function and the angle cost function as the overall cost function, and the planned trajectory with the locally optimal path is selected by minimizing the overall cost function;
the overall cost function is shown in the following formula:
total_cost = goal_cost + angle_cost
wherein goal_cost is the goal cost function and angle_cost is the angle cost function.
The beneficial effects of the above technical scheme are as follows. The invention provides a mobile robot autonomous following method based on multi-sensor fusion that uses a Kalman filtering algorithm to correct the AOA information with the detection information of the laser and vision sensors, eliminating some transient oscillations and obtaining a smooth person motion trajectory. Meanwhile, an improved DWA algorithm is adopted, in which a directional gradient field is generated on the basis of the target potential field; the gradient field provides a reference azimuth for the robot at each coordinate on the map and is used to measure the validity of the robot's heading. This prevents the robot from moving blindly toward the target without adjusting its orientation angle. The mobile robot can thus achieve stable autonomous person following in dynamic occlusion environments, including the case where the target person is occluded by obstacles. The method is applicable to various robot motion models and different working scenarios, giving it a wide application range and strong applicability.
Drawings
Fig. 1 is a structural block diagram of a mobile robot autonomous following system based on multi-sensor fusion according to an embodiment of the present invention;
fig. 2 is a flowchart of an autonomous following method of a mobile robot based on multi-sensor fusion according to an embodiment of the present invention;
FIG. 3 is a following-navigation flowchart of the mobile robot autonomous following method based on multi-sensor fusion according to the present invention;
FIG. 4 is a schematic diagram of a direction gradient field established by the FMM algorithm provided by the embodiment of the invention;
FIG. 5 is a schematic diagram of modeling a robot motion process according to an embodiment of the present invention;
fig. 6 is a schematic diagram of the improved DWA algorithm provided by the embodiment of the invention, wherein (a) is a diagram of a moving direction given by the improved DWA algorithm when the robot moves normally, (b) is a diagram of a moving direction given by the improved DWA algorithm when the robot encounters an obstacle, and (c) is a diagram of the moving direction of the robot being the same as a reference direction given by the improved DWA algorithm;
fig. 7 is a comparison diagram of a person position track obtained by using only AOA tags and a person position track obtained by kalman filtering fusion according to the embodiment of the present invention;
fig. 8 is a comparison graph of a person position detection result obtained only by a laser and a camera and a person position detection result fused with kalman filtering provided by the embodiment of the present invention;
fig. 9 is a schematic diagram of a movement trajectory following a target person in an obstacle environment according to an embodiment of the present invention.
In the figures: 1, person track using only AOA information; 2, person track obtained by fusing AOA, laser and camera information through Kalman filtering; 3, person track obtained by joint laser and camera detection; 4, motion track of the robot following the moving target; 5, actual motion track of the target person.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
A mobile robot autonomous following system based on multi-sensor fusion, shown in figure 1, comprises an upper-layer navigation unit, a bottom-layer motion control unit and a power supply unit. The upper-layer navigation unit comprises a two-dimensional laser radar, a router, an AOA beacon system, a camera, an industrial personal computer and a TTL-to-USB module; the bottom-layer motion control unit comprises a robot body, an embedded development board and a photoelectric encoder; the robot body adopts a two-wheel differential chassis.
The two-dimensional laser radar is used for detecting planar contour position information within a fixed range and is connected to a LAN (local area network) port of the router, ensuring stable, safe and real-time data transmission between the laser radar and the industrial personal computer; the embedded development board is used for realizing the motion control of the robot and is connected to a second LAN port of the router; the industrial personal computer is used for realizing the upper-layer navigation planning and is connected by wire to a third LAN port of the router, ensuring that the industrial personal computer can send control instructions to the bottom-layer motion control unit and thereby control the speed and direction of the robot's motion; the camera is mounted at the top front of the robot, is used for acquiring image information in the current field of view, and is connected to the industrial personal computer, ensuring real-time and effective image transmission; the AOA beacon system comprises an AOA beacon base station and an AOA handheld beacon, the AOA beacon base station being installed on the robot and the AOA handheld beacon being held by the target person to be followed, so that the AOA beacon base station obtains the pose of the handheld beacon relative to itself; the AOA beacon base station is connected to the industrial personal computer through the TTL-to-USB module, so that the industrial personal computer receives the AOA handheld beacon information in real time and data fusion of the laser radar and AOA beacon system information can be realized; the robot's drive wheels are driven by DC gear motors; the embedded development board is connected to a motor driver module, the motor driver module is connected to the DC gear motors, and a photoelectric encoder is installed at each wheel axle of the robot and connected to the embedded development board to obtain the wheel speeds of the robot; and the power supply unit is connected to the upper-layer navigation unit and the bottom-layer control unit respectively and supplies power to the whole system. The power supply unit comprises a vehicle-mounted battery and a power management module; the power management module is connected to the vehicle-mounted battery, converts the battery voltage into the voltages required by the components of the system, and is connected to each component to supply power to the whole system.
The industrial personal computer runs a built-in program for realizing target person detection and following-path planning, comprising a person detection unit and a following navigation unit, which realize the following functions:
(1) processing the person-detection data acquired by the two-dimensional laser radar and the camera;
(2) fusing information from the two-dimensional laser, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
(3) assigning a corresponding ID to each target person, storing the processed two-dimensional laser and camera data by category, and creating and storing each newly matched object as a tracking object;
(4) planning candidate motion trajectories of the robot toward the target person and selecting the planned trajectory with the locally optimal path;
(5) sending control instructions to the bottom-layer motion control unit to control the speed and direction of the robot's motion.
In this embodiment, the model of the embedded control board is STM32F407VET6; the model of the industrial personal computer is GK400; the model of the TTL-to-USB module is CH340C; the 2D laser radar is a Pepperl+Fuchs unit; the camera is a color monocular camera, model X Pro; the base operating system of the industrial personal computer is Ubuntu 16.04 LTS, and the secondary operating system is ROS; the vehicle-mounted battery is a Kaimeiwei 12 V 100 Ah lithium battery; the model of the power management module is SD-50B-12; the model of the motor driver module is ZLAC706; the router model is NETGEAR R6020; the beacon base station and handheld beacon are AOA devices.
A mobile robot autonomous following method based on multi-sensor fusion is disclosed, as shown in FIG. 2, and comprises the following steps:
step 1, collecting data of persons to be detected with the two-dimensional laser radar and the camera, processing the collected data, and identifying the target person;
the method for processing the data acquired by the two-dimensional laser radar is as follows: first, the points returned by the laser are clustered, points whose spacing is smaller than a certain threshold are grouped into clusters, and geometric features are generated for each cluster; the geometric features comprise the number of laser points, the width of the laser cluster, and its distance and angle relative to the laser; a random forest classifier is trained on these geometric features to obtain the adapted features of a human-leg model; then features are extracted from the laser clusters in the laser data and compared with the random-forest-trained adapted features to detect human legs in the surrounding environment; when the detected distance between two legs is less than 0.4 m, the average of the two leg positions is taken as the position of a new merged leg;
the method for processing the data collected by the camera is as follows: an HOG feature extraction method is used to calculate gradient values of different pixel blocks in each image frame, and the extracted features are then fed, according to the calculated gradient values, into a support vector machine (SVM) classifier for training, yielding the adapted features of the human body; then the features extracted from the visual data are compared with the classifier-trained features to identify the target person in the field of view, thereby realizing person identification;
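For concreteness, the following Python sketch illustrates a simplified version of the two detectors of step 1: distance-threshold clustering of 2D laser points with the 0.4 m leg-merging rule described above, and pedestrian detection with OpenCV's pretrained HOG+SVM people detector standing in for the classifier trained in this embodiment; the function names, the 0.1 m cluster gap and the image source are illustrative assumptions.

import numpy as np
import cv2

def cluster_scan(points, gap=0.1):
    """Group consecutive 2D scan points whose spacing is below `gap` metres.
    Assumes a non-empty Nx2 array ordered by scan angle."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) < gap:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters

def merge_legs(leg_positions, pair_dist=0.4):
    """Merge leg detections closer than 0.4 m into one averaged position."""
    persons, used = [], set()
    for i, a in enumerate(leg_positions):
        if i in used:
            continue
        for j in range(i + 1, len(leg_positions)):
            if j not in used and np.linalg.norm(a - leg_positions[j]) < pair_dist:
                persons.append((a + leg_positions[j]) / 2.0)  # average of the two legs
                used.update({i, j})
                break
        else:
            persons.append(a)  # unpaired leg kept as a single candidate
    return persons

# HOG + SVM pedestrian detection on one camera frame
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
frame = cv2.imread("frame.png")                    # one frame from the robot camera
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))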
step 2, performing information matching of corresponding persons between the pedestrian leg information and pedestrian image information obtained by the laser and the camera using the Hungarian algorithm, which matches the detections recognized by the two sensors according to the corresponding rules, thereby obtaining a group of fused positions of persons recognized both visually and by laser; then, an Interactive Multiple Model (IMM) filter based on the Kalman Filter (KF) is used to fuse the information from the two-dimensional laser, the camera and the AOA beacon system to obtain more accurate positioning information of the target person;
the Hungarian algorithm is a combinatorial optimization algorithm for solving a task allocation problem in polynomial time. The algorithm matches the information identified by the two according to the corresponding rules, so as to obtain a group of fusion positions of the corresponding visual identification personnel and the laser identification personnel. The obtained fusion data is shown in the following formula, wherein n is the number of people detected at the time t. The different person positions detected at time t are denoted as
Figure GDA0002985916140000071
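A minimal sketch of this assignment step, assuming both detection sets are already expressed in the same robot-centered metric frame; scipy's linear_sum_assignment implements the Hungarian algorithm, and the 1.0 m gating threshold used to reject implausible pairs is an illustrative assumption.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_laser_vision(laser_pos, vision_pos, gate=1.0):
    """Match laser leg detections to visual detections (both Nx2 arrays, metres).
    Returns the fused positions (midpoints) of the accepted pairs."""
    cost = np.linalg.norm(laser_pos[:, None, :] - vision_pos[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    fused = [(laser_pos[r] + vision_pos[c]) / 2.0
             for r, c in zip(rows, cols) if cost[r, c] < gate]
    return np.array(fused)                            # Z_t: one fused position per person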
The IMM algorithm introduces multiple target motion models and has an adaptive character: it can effectively adjust the probability of each model and weight each model's state estimate by the corresponding probability, realizing tracking of a maneuvering target. The IMM algorithm comprises several filters, an interactor, a model probability updater and an estimate mixer; the multiple models track the target's maneuvering motion through interaction, and transitions between models are governed by a Markov probability transition matrix. The KF is used as the filter inside the IMM algorithm, several motion models are established for the target person, and the KF-IMM algorithm fuses the information from the laser, vision and the AOA tag.
One key factor of the IMM algorithm is the determination of the target motion models, which should reflect the actual motion of the target as faithfully as possible. The invention studies the motion models of the target person, taking the person's common motions as examples.
A general motion model is divided into a prediction process and an update process. For the target person these are expressed as:
X(k) = F(k−1)X(k−1) + W(k−1)
Z(k) = HX(k) + V(k)
wherein k denotes the sampling instant; X(k) ∈ R^n is the state vector of the prediction process; F is the n-dimensional system transition matrix; Z(k) ∈ R^m is the measurement vector of the update process; H is the measurement matrix; W(k) ∼ N(0, Q) and V(k) ∼ N(0, R) are the Gaussian process noise and measurement noise, respectively; the state vector and the measurement vector differ according to the selected model.
In this embodiment, the motion process of the followed target person is modeled, and the motion models of the target person are divided into three types: the constant velocity model (CV), the constant acceleration model (CA) and the constant turn model (CT).
In the dynamic models of the CA, CT and CV motions, x and y denote the position of the target person, ẋ and ẏ the velocity, and ẍ and ÿ the acceleration; ω denotes the turn rate, w(t) denotes Gaussian white noise, and T denotes the sampling period.
(1) CV model: the state variables are selected as X = [x, ẋ, y, ẏ]^T, and ẍ and ÿ are treated as random noise, i.e. ẍ = w_x(t) and ÿ = w_y(t). The prediction-process state equation of the CV model is:
X(k) = F_CV(k−1)X(k−1) + W(k−1)
wherein F_CV = diag{A, A}, A is the 2×2 Newton matrix, and W(k−1) = [W_x, W_y]_{k−1} is zero-mean Gaussian white noise.
(2) CT model: the state variables are selected as X = [x, ẋ, y, ẏ, ω]^T, and ẍ and ÿ are treated as random noise, i.e. ẍ = w_x(t) and ÿ = w_y(t), where w(t) is a white-noise process. The prediction-process state equation of the CT model is:
X(k) = F_CT(k−1)X(k−1) + W(k−1)
wherein W(k−1) = [W_x, W_y, W_ω] is zero-mean Gaussian white noise and F_CT is the coordinated-turn transition matrix, which in its standard form is
F_CT = [1, sin(ωT)/ω, 0, −(1 − cos(ωT))/ω, 0;
        0, cos(ωT),   0, −sin(ωT),          0;
        0, (1 − cos(ωT))/ω, 1, sin(ωT)/ω,   0;
        0, sin(ωT),   0, cos(ωT),           0;
        0, 0,         0, 0,                 1].
(3) CA model: the state variables are selected as X = [x, ẋ, ẍ, y, ẏ, ÿ]^T, and the derivatives of ẍ and ÿ are treated as random noise, i.e. dẍ/dt = w_x(t) and dÿ/dt = w_y(t), where w(t) is a white-noise process. The prediction-process state equation of the CA model is:
X(k) = F_CA(k−1)X(k−1) + W(k−1)
wherein F_CA = diag{A, A}, A is the 3×3 Newton matrix, and W(k−1) = [W_x, W_y]'_{k−1} is zero-mean Gaussian white noise.
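For concreteness, the following sketch constructs the three transition matrices with numpy for a sampling period T; the Newton blocks and the coordinated-turn form follow the standard discrete-time models, and the state orderings shown in the comments are assumptions consistent with the block-diagonal structure above.

import math
import numpy as np
from scipy.linalg import block_diag

def newton_block(order, T):
    """Newton matrix: [[1, T], [0, 1]] for order 2; adds the T^2/2 term for order 3."""
    A = np.eye(order)
    for i in range(order):
        for j in range(i + 1, order):
            A[i, j] = T ** (j - i) / math.factorial(j - i)
    return A

def F_CV(T):
    return block_diag(newton_block(2, T), newton_block(2, T))  # state [x, x', y, y']

def F_CA(T):
    return block_diag(newton_block(3, T), newton_block(3, T))  # state [x, x', x'', y, y', y'']

def F_CT(T, w):
    """Coordinated-turn matrix for state [x, x', y, y', w], turn rate w != 0."""
    s, c = math.sin(w * T), math.cos(w * T)
    return np.array([[1, s / w,       0, -(1 - c) / w, 0],
                     [0, c,           0, -s,           0],
                     [0, (1 - c) / w, 1, s / w,        0],
                     [0, s,           0, c,            0],
                     [0, 0,           0, 0,            1]])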
In the prediction phase, an initial probability value is first assigned to each motion model. The initial probabilities are set as W_i(0) for i = 0, 1, 2, corresponding in order to the three motion models (CA, CT, CV), with ΣW_i = 1; here W_i(0) denotes the probability of model i at the initial time and W_i(t−1) its probability at time t−1.
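The model-probability update that follows these initial values can be sketched in a few lines; the Markov transition matrix P and the per-model measurement likelihoods are illustrative assumptions supplied by the surrounding IMM machinery.

import numpy as np

def imm_model_probabilities(W_prev, P, likelihoods):
    """One IMM model-probability update.
    W_prev: model probabilities at time t-1 (sum to 1), ordered (CA, CT, CV).
    P: Markov transition matrix, P[i, j] = Pr(model j at t | model i at t-1).
    likelihoods: measurement likelihood of each model's KF at time t."""
    predicted = P.T @ W_prev            # mixing: model probabilities before the update
    W = predicted * likelihoods         # weight by how well each model explains the measurement
    return W / W.sum()                  # renormalize so that sum(W_i) = 1

# e.g. start from equal weights and a mildly sticky transition matrix
W0 = np.full(3, 1 / 3)
P = np.full((3, 3), 0.05) + 0.85 * np.eye(3)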
The procedure for locating the target person in this embodiment is as follows. The position obtained from the AOA tag is used as the initial value X of the person state, which is represented by the coordinates (x, y) of the person in the global coordinate system. The prediction process of the Kalman filter estimates the three person motion models, and the corresponding state transition matrix F and motion noise W are obtained from the motion model, where F ∈ {F_CA, F_CV, F_CT}. Fusing the information from the two-dimensional laser, the camera and the AOA beacon system to obtain more accurate positioning information of the target person comprises the following two steps. The first step is an update with the AOA tag information: the position information obtained from the AOA tag fluctuates considerably, so the data are passed through a sliding filter to obtain z_t^aoa; for this first update the observation noise matrix R_t^aoa is generally set small. In the second step, the laser and vision data are fused with the Hungarian algorithm, and the fused data are then used as the observations z_t of the target person. The information obtained from the AOA tag fluctuates greatly, but z_t^aoa does not deviate far from the true value. The value z_t^* in z_t closest to z_t^aoa is selected, and if the distance between z_t^* and z_t^aoa is less than 0.8 m, the second update is performed. The size of the second measurement noise matrix R_t is related to the distance between z_t^aoa and the closest position datum in z_t. Taking uniform linear motion of the robot as an example, pseudocode for the Kalman-filter fusion of AOA information with laser and visual information is shown in Table 1.
TABLE 1 Kalman filter algorithm pseudocode
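Since the pseudocode of Table 1 survives only as an image, the following Python sketch reconstructs the two-step update described above under stated assumptions: a constant-velocity Kalman filter with state [x, y, vx, vy], a small fixed noise for the AOA update, and the 0.8 m gate for the laser/vision update; all matrix values are illustrative.

import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def fuse_step(x, P, F, Q, z_aoa, z_fused):
    """One cycle of the two-step fusion: predict, AOA update, gated laser/vision update.
    x: state [x, y, vx, vy]; z_aoa: sliding-filtered AOA position; z_fused: Nx2 array z_t."""
    H = np.hstack([np.eye(2), np.zeros((2, 2))])          # only position is observed
    x, P = F @ x, F @ P @ F.T + Q                         # prediction
    x, P = kf_update(x, P, z_aoa, H, 0.05 * np.eye(2))    # update 1: AOA, small fixed noise
    if len(z_fused):
        d = np.linalg.norm(z_fused - z_aoa, axis=1)
        if d.min() < 0.8:                                 # 0.8 m gate from the text
            R2 = 0.01 * (1 + d.min()) * np.eye(2)         # noise grows with the AOA distance
            x, P = kf_update(x, P, z_fused[np.argmin(d)], H, R2)
    return x, P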
Step 3, assigning a corresponding ID to each target person, storing the processed two-dimensional laser and camera data by category, creating each newly matched object as a tracking object and storing it, and removing targets whose tracking has failed, so as to distinguish different target persons;
step 4, generating a target potential field using the Fast Marching Method (FMM), and then adding a directional-gradient-field measurement index to an improved Dynamic Window Algorithm (DWA) to constrain the planned trajectories of the robot and thereby select the planned trajectory with the locally optimal path, as shown in fig. 3; the specific method is as follows:
first, environmental information is sensed with the laser sensor and a rolling grid map centered on the robot is established; to measure the time T for each point in the map to reach the target point, a target potential field is established on the rolling grid map with the FMM algorithm, with T(x, y) denoting the time for coordinate point (x, y) to reach the target position; taking the gradient of the potential field yields a directional gradient field, which provides a reference azimuth θ(x, y) for the robot at each coordinate on the map, as shown in fig. 4;
in order to select the optimal trajectory for the robot to move toward the target, the following evaluation method is adopted:
firstly, in order to make the robot move effectively toward the target point, a goal cost function for evaluating the validity of the robot's motion is introduced, as shown in the following formula:
goal_cost = T(x_e, y_e)·(1 + β·|θ_e − θ_r(x_e, y_e)|)
wherein goal_cost is the trajectory-validity cost, used to evaluate whether the trajectory moves to a position with a low value of the arrival-time field; β is the robot azimuth influence factor; (x_e, y_e) are the coordinates of the end position of the robot trajectory; θ_e is the azimuth of the robot at the trajectory end point; θ_r(x_e, y_e) is the reference azimuth provided by the directional gradient field at the trajectory end position; and T(x_e, y_e) is the time for the robot to reach the trajectory end point;
when the difference between the trajectory end direction and the reference azimuth provided by the directional gradient field increases, goal_cost is amplified by a corresponding factor, so that trajectories conforming to the reference direction of the vector field are preferentially selected during trajectory evaluation;
the essence of the introduced objective cost function for evaluating the effectiveness of robot motion is to model the motion process of the robot, as shown in fig. 5.
In order to evaluate the cost of the motion-trajectory end point moving toward the target, an angle cost function for evaluating the validity of the robot's motion direction is introduced, as shown in the following formula:
angle_cost = |θ_e − θ_r(x_s, y_s)|·(1/T(x_s, y_s) + α/d(x_s, y_s))
wherein T(x_s, y_s) is the arrival time of the initial point (x_s, y_s) of the robot's motion trajectory; when T(x_s, y_s) is small, angle_cost increases rapidly, so the robot is more inclined to select a trajectory conforming to the reference direction of the directional gradient field; d(x_s, y_s) is the distance from the initial point of the motion trajectory to the nearest obstacle; when d(x_s, y_s) is very small, the robot quickly adjusts its motion direction toward the reference direction of the directional gradient field to avoid becoming trapped by the obstacle; and α is the obstacle influence factor, used to evaluate the influence of obstacles on the planned path;
The quality of candidate trajectories of the robot moving toward the target point is evaluated with the sum of the goal cost function and the angle cost function as the overall cost function, and the planned trajectory with the locally optimal path is selected by minimizing the overall cost function, as shown in fig. 6. In fig. 6(a), the direction indicated by the middle arrow differs from the reference direction provided by the gradient field, so the cost of that trajectory is amplified by a corresponding factor. Although the direction indicated by the lower arrow leads away from the target point, the final direction of that trajectory differs little from the reference direction of the gradient field, so its cost is judged lower than that of the trajectory corresponding to the middle arrow. When the robot falls into a local optimum, all simulated trajectories obtained by trajectory sampling collide with the obstacle; the method of the invention can escape from this situation. In fig. 6(b), when the robot is too close to the obstacle, the three forward simulated trajectories all hit the obstacle in front; as for the backward simulated trajectory, the conventional DWA algorithm considers the current position closer to the target point with a lower potential field, and therefore does not adopt backward motion. In the method of the invention, when the robot's cost is evaluated, the larger difference between the direction at the current position and the reference direction of the gradient field increases the cost value; the direction of the lower-left backward arrow is consistent with the reference direction of the gradient field, and although its distance to the target point increases, its cost is still lower than that of the current position, so the robot can choose to escape backward. When the robot's azimuth has been adjusted to be consistent with the reference direction of the gradient field, as shown in fig. 6(c), the robot advances toward the target point along the direction indicated by the middle arrow.
The overall cost function is shown in the following formula:
total_cost = goal_cost + angle_cost
wherein goal_cost is the goal cost function and angle_cost is the angle cost function.
The FMM algorithm solves the interface-propagation problem by computing the viscosity solution of the Eikonal equation numerically. The Eikonal equation is:
|∇T(x)|·W(x) = 1
where x denotes a point in the search space (in two dimensions x = (x, y)); T(x) is the time for the interface to travel from the start to point x, and W(x) is the local propagation speed of the interface at point x. By discretizing the gradient ∇T(x), the equation can be solved at each point x in space, where x corresponds to the cell in row i and column j of the grid-represented planning space.
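A minimal sketch of this construction, assuming the scikit-fmm package (whose travel_time function solves exactly this Eikonal problem on a grid); the map size, resolution and unit speed field are illustrative.

import numpy as np
import skfmm

res = 0.05                                  # grid resolution in m/cell (illustrative)
phi = np.ones((200, 200))                   # rolling grid map centred on the robot
phi[120, 150] = -1                          # zero level set around the target cell
speed = np.ones_like(phi)                   # W(x): unit speed; set to 0 inside obstacles

T = skfmm.travel_time(phi, speed, dx=res)   # T(x, y): target potential field
gy, gx = np.gradient(T, res)                # gradient of the arrival-time field
theta_ref = np.arctan2(-gy, -gx)            # reference azimuth: steepest descent of T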
The DWA algorithm is a classic online local path planning method that works well in dynamic, uncertain environments. Its main idea is to sample multiple groups of velocities in the velocity space (v, w) and simulate the robot's trajectory over a certain time at each sampled velocity. After the groups of trajectories are obtained, they are evaluated, and the velocity corresponding to the optimal trajectory is selected to drive the robot. The "dynamic" in the name means that the sampled velocities are limited to the dynamically feasible range determined by the acceleration and deceleration capabilities of the mobile robot. To simulate the robot's trajectory, its motion model must be known. The two-wheel differential mobile robot adopted in this embodiment can only move forward and backward and rotate. Considering two adjacent instants, the distance moved is short, and the trajectory between two adjacent points can be treated as a straight line, i.e. a displacement of v_t·Δt along the robot coordinate system; projecting this onto the world coordinate system gives the change of coordinates in the world frame. Suppose the robot pose at time t is (x_t, y_t, θ_t); the pose at time t+1 is calculated as:
x_{t+1} = x_t + v_t·Δt·cos θ_t
y_{t+1} = y_t + v_t·Δt·sin θ_t
θ_{t+1} = θ_t + w_t·Δt
Multiple groups of velocities are sampled in the robot's velocity space, the expected poses of the robot at the different velocities are calculated, and the simulated trajectories of the robot are generated.
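Putting the motion model together with the two cost terms gives the improved DWA evaluation loop sketched below; the velocity window bounds, sample counts, rollout horizon and grid lookup are illustrative assumptions, with T and theta_ref taken from the FMM sketch above and the costs in the reconstructed forms given earlier (collision checking and angle wrap-around handling are omitted for brevity).

import numpy as np

def rollout(x, y, th, v, w, dt=0.1, steps=15):
    """Forward-simulate the differential-drive model for one sampled (v, w)."""
    for _ in range(steps):
        x += v * dt * np.cos(th)
        y += v * dt * np.sin(th)
        th += w * dt
    return x, y, th

def cell(xq, yq, res=0.05, size=200):
    """Nearest cell of a metric point in the robot-centred rolling grid map."""
    r = min(max(int(yq / res + size / 2), 0), size - 1)
    c = min(max(int(xq / res + size / 2), 0), size - 1)
    return r, c

def best_velocity(pose, T, theta_ref, d_obs, beta=1.0, alpha=0.5):
    """Pick the (v, w) sample whose simulated trajectory minimizes total_cost."""
    best, best_cost = (0.0, 0.0), np.inf
    i0 = cell(pose[0], pose[1])
    for v in np.linspace(-0.2, 0.8, 11):       # dynamic window (illustrative bounds)
        for w in np.linspace(-1.0, 1.0, 21):
            xe, ye, the = rollout(*pose, v, w)
            ie = cell(xe, ye)
            goal_cost = T[ie] * (1 + beta * abs(the - theta_ref[ie]))
            angle_cost = abs(the - theta_ref[i0]) * (1 / T[i0] + alpha / d_obs)
            if goal_cost + angle_cost < best_cost:
                best, best_cost = (v, w), goal_cost + angle_cost
    return best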
In this embodiment, when the target person is not occluded by obstacles, a comparison experiment was performed between the trajectory obtained by fusing AOA, laser and camera information through Kalman filtering and the person trajectory using only AOA information. When the robot follows the target person, the person position trajectory obtained using only the AOA tag and the trajectory obtained by Kalman-filter fusion are shown in fig. 7. It can be seen that when only AOA information is used, the estimate of the person's pose is inaccurate and sometimes jitters severely. The Kalman filtering algorithm corrects the AOA information with the detection information of the laser and the camera, eliminating some transient oscillations and yielding a smooth person trajectory.
In this embodiment, the person-position detection result obtained only by the laser and the camera is also compared with the Kalman-filter-fused result, as shown in fig. 8. The trajectory detected by the laser and camera is substantially the same as the Kalman-filtered trajectory, but the latter is smoother.
When the target person is occluded by an obstacle, relying purely on laser and visual information often causes tracking failure. By fusing laser and visual information through Kalman filtering, the invention effectively alleviates the severe fluctuation of the AOA signal values. The AOA tag has a unique ID, which provides an initial value for laser identification without the problem of falsely identifying other pedestrians, and is therefore highly reliable. Using the trajectory obtained by Kalman-filter fusion of laser, vision and AOA information, and then matching the laser-detected legs to the tracked person with the AOA information, the system can effectively handle the case where laser and vision cannot detect the person because the target is occluded by an obstacle. Even in a multi-obstacle environment, the system and method of the invention can effectively detect the person and achieve a smooth tracking trajectory, as shown in fig. 9, where the black squares are obstacles.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (2)

1. A mobile robot autonomous following method based on multi-sensor fusion, which realizes autonomous following of a mobile robot using a mobile robot autonomous following system based on multi-sensor fusion, wherein the system comprises an upper-layer navigation unit, a bottom-layer motion control unit and a power supply unit; the upper-layer navigation unit comprises a two-dimensional laser radar, a router, an AOA beacon system, a camera, an industrial personal computer and a TTL-to-USB module; the bottom-layer motion control unit comprises a robot body, an embedded development board and a photoelectric encoder;
the two-dimensional laser radar is used for detecting planar contour position information within a fixed range and is connected to a LAN (local area network) port of the router, ensuring stable, safe and real-time data transmission between the laser radar and the industrial personal computer; the embedded development board is used for realizing the motion control of the robot and is connected to a second LAN port of the router; the industrial personal computer is used for realizing the upper-layer navigation planning and is connected by wire to a third LAN port of the router, ensuring that the industrial personal computer can send control instructions to the bottom-layer motion control unit and thereby control the speed and direction of the robot's motion; the camera is mounted at the top front of the robot, is used for acquiring image information in the current field of view, and is connected to the industrial personal computer; the AOA beacon system comprises an AOA beacon base station and an AOA handheld beacon, the AOA beacon base station being installed on the robot and the AOA handheld beacon being held by the target person to be followed, so that the AOA beacon base station obtains the pose of the handheld beacon relative to itself; the AOA beacon base station is connected to the industrial personal computer through the TTL-to-USB module, so that the industrial personal computer receives the AOA handheld beacon information in real time and data fusion of the laser radar and AOA beacon system information can be realized; the robot's drive wheels are driven by DC gear motors; the embedded development board is connected to a motor driver module, the motor driver module is connected to the DC gear motors, and a photoelectric encoder is installed at each wheel axle of the robot and connected to the embedded development board to obtain the wheel speeds of the robot; the industrial personal computer runs a built-in program for realizing target person detection and following-path planning; and the power supply unit is connected to the upper-layer navigation unit and the bottom-layer control unit respectively and supplies power to the whole system;
the method is characterized in that: the mobile robot autonomous following method comprises the following steps:
step 1, collecting data of persons to be detected with the two-dimensional laser radar and the camera, processing the collected data, and identifying the target person;
step 2, performing information matching of corresponding persons between the pedestrian leg information and pedestrian image information obtained by the laser and the camera using the Hungarian algorithm, which matches the detections recognized by the two sensors according to the corresponding rules, thereby obtaining a group of fused positions of persons recognized both visually and by laser; then fusing the information from the two-dimensional laser, the camera and the AOA beacon system with an interactive multiple-model filter based on a Kalman filter to obtain more accurate positioning information of the target person;
step 3, assigning a corresponding ID to each target person, storing the processed two-dimensional laser and camera data by category, creating each newly matched object as a tracking object and storing it, and removing targets whose tracking has failed, so as to distinguish different target persons;
step 4, generating a target potential field using a fast marching algorithm, and then adding a directional-gradient-field measurement index to an improved dynamic window algorithm to constrain the planned trajectories of the robot and thereby select the planned trajectory with the locally optimal path;
the specific method in the step 4 comprises the following steps:
firstly, environmental information is sensed with the laser sensor and a rolling grid map centered on the robot is established; to measure the time T for each point in the map to reach the target point, a target potential field is established on the rolling grid map with the fast marching algorithm, with T(x, y) denoting the time for coordinate point (x, y) to reach the target position; taking the gradient of the potential field yields a directional gradient field, which provides a reference azimuth θ(x, y) for the robot at each coordinate on the map;
in order to select the optimal trajectory for the robot to move toward the target, the following evaluation method is adopted:
firstly, in order to make the robot move effectively toward the target point, a goal cost function for evaluating the validity of the robot's motion is introduced, as shown in the following formula:
goal_cost = T(x_e, y_e)·(1 + β·|θ_e − θ_r(x_e, y_e)|)
wherein goal_cost is the trajectory-validity cost, used to evaluate whether the trajectory moves to a position with a low value of the arrival-time field; β is the robot azimuth influence factor; (x_e, y_e) are the coordinates of the end position of the robot trajectory; θ_e is the azimuth of the robot at the trajectory end point; θ_r(x_e, y_e) is the reference azimuth provided by the directional gradient field at the trajectory end position; and T(x_e, y_e) is the time for the robot to reach the trajectory end point;
when the difference between the trajectory end direction and the reference azimuth provided by the directional gradient field increases, goal_cost is amplified by a corresponding factor, so that trajectories conforming to the reference direction of the vector field are preferentially selected during trajectory evaluation;
in order to evaluate the cost of the motion-trajectory end point moving toward the target, an angle cost function for evaluating the validity of the robot's motion direction is introduced, as shown in the following formula:
angle_cost = |θ_e − θ_r(x_s, y_s)|·(1/T(x_s, y_s) + α/d(x_s, y_s))
wherein T(x_s, y_s) is the arrival time of the initial point (x_s, y_s) of the robot's motion trajectory, d(x_s, y_s) is the distance from that initial point to the nearest obstacle, and α is the obstacle influence factor, used to evaluate the influence of obstacles on the planned path;
the quality of candidate trajectories of the robot moving toward the target point is evaluated with the sum of the goal cost function and the angle cost function as the overall cost function, and the planned trajectory with the locally optimal path is selected by minimizing the overall cost function;
the overall cost function is shown in the following formula:
total_cost = goal_cost + angle_cost
wherein goal_cost is the goal cost function and angle_cost is the angle cost function.
2. The mobile robot autonomous following method based on multi-sensor fusion according to claim 1, characterized in that the method for processing the data acquired by the two-dimensional laser radar in step 1 is as follows: first, the points returned by the laser are clustered, points whose spacing is smaller than a certain threshold are grouped into clusters, and geometric features are generated for each cluster; the geometric features comprise the number of laser points, the width of the laser cluster, and its distance and angle relative to the laser; a random forest classifier is trained on these geometric features to obtain the adapted features of a human-leg model; then features are extracted from the laser clusters in the laser data and compared with the random-forest-trained adapted features to detect human legs in the surrounding environment; when the detected distance between two legs is less than 0.4 m, the average of the two leg positions is taken as the position of a new merged leg;
the method for processing the data collected by the camera is as follows: an HOG feature extraction method is used to calculate gradient values of different pixel blocks in each image frame, and the extracted features are then fed, according to the calculated gradient values, into a support vector machine classifier for training, yielding the adapted features of the human body; then the features extracted from the visual data are compared with the classifier-trained features to identify the target person in the field of view, thereby realizing person identification.
CN201910326362.2A 2019-04-23 2019-04-23 Mobile robot autonomous following method based on multi-sensor fusion Active CN109947119B (en)

Priority Applications (1)

Application Number: CN201910326362.2A; Priority Date: 2019-04-23; Filing Date: 2019-04-23; Title: Mobile robot autonomous following method based on multi-sensor fusion

Publications (2)

Publication Number / Publication Date: CN109947119A (en), 2019-06-28; CN109947119B (en), 2021-06-29

Family ID: 67015959

Country Status (1): CN (1) CN109947119B (en)
Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427034B (en) * 2019-08-13 2022-09-02 浙江吉利汽车研究院有限公司 Target tracking system and method based on vehicle-road cooperation
CN110879592B (en) * 2019-11-08 2020-11-20 南京航空航天大学 Artificial potential field path planning method based on escape force fuzzy control
CN110865534A (en) * 2019-11-15 2020-03-06 合肥工业大学 Intelligent following system with improved Kalman filtering for navigation positioning
CN110609561A (en) * 2019-11-18 2019-12-24 深圳市优必选科技股份有限公司 Pedestrian tracking method and device, computer readable storage medium and robot
CN117398023A (en) * 2019-11-19 2024-01-16 科沃斯机器人股份有限公司 Self-moving robot following method and self-moving robot
CN111089590B (en) * 2019-12-09 2021-10-15 泉州装备制造研究所 Method for tracking human leg by mobile robot through fusion of vision and laser
CN113126600A (en) * 2019-12-26 2021-07-16 沈阳新松机器人自动化股份有限公司 Follow system and article transfer cart based on UWB
CN111103887B (en) * 2020-01-14 2021-11-12 大连理工大学 Multi-sensor-based multi-mobile-robot scheduling system design method
TWI742644B (en) * 2020-05-06 2021-10-11 東元電機股份有限公司 Following mobile platform and method thereof
CN111708042B (en) * 2020-05-09 2023-05-02 汕头大学 Robot method and system for predicting and following pedestrian track
CN113671940A (en) * 2020-05-14 2021-11-19 东元电机股份有限公司 Following mobile platform and method thereof
CN113741550B (en) * 2020-05-15 2024-02-02 北京机械设备研究所 Mobile robot following method and system
CN113960996A (en) * 2020-07-20 2022-01-21 华为技术有限公司 Planning method and device for obstacle avoidance path of driving device
TWI780468B (en) 2020-08-13 2022-10-11 國立陽明交通大學 Method and system of robot for human following
CN112237400B (en) * 2020-09-04 2022-07-01 安克创新科技股份有限公司 Method for area division, self-moving robot and computer storage medium
CN112015186A (en) * 2020-09-09 2020-12-01 上海有个机器人有限公司 Robot path planning method and device with social attributes and robot
CN112148011B (en) * 2020-09-24 2022-04-15 东南大学 Electroencephalogram mobile robot sharing control method under unknown environment
CN112509264B (en) * 2020-11-19 2022-11-18 深圳市欧瑞博科技股份有限公司 Abnormal intrusion intelligent shooting method and device, electronic equipment and storage medium
CN112488068B (en) * 2020-12-21 2022-01-11 重庆紫光华山智安科技有限公司 Method, device and equipment for searching monitoring target and computer storage medium
CN112834764A (en) * 2020-12-28 2021-05-25 深圳市人工智能与机器人研究院 Sampling control method and device of mechanical arm and sampling system
CN113156933B (en) * 2020-12-30 2022-05-03 徐宁 Robot traveling control system and method
CN112904855B (en) * 2021-01-19 2022-08-16 四川阿泰因机器人智能装备有限公司 Follow-up robot local path planning method based on improved dynamic window
WO2022166067A1 (en) * 2021-02-04 2022-08-11 武汉工程大学 System and method for coordinated traction of multi-machine heavy-duty handling robot
CN112965081B (en) * 2021-02-05 2023-08-01 浙江大学 Simulated learning social navigation method based on feature map fused with pedestrian information
US20220350342A1 (en) * 2021-04-25 2022-11-03 Ubtech North America Research And Development Center Corp Moving target following method, robot and computer-readable storage medium
CN113222122A (en) * 2021-06-01 2021-08-06 重庆大学 High-quality neural network system suitable for singlechip
CN113504777B (en) * 2021-06-16 2024-04-16 新疆美特智能安全工程股份有限公司 Automatic following method and system for artificial intelligence AGV trolley
CN113535861B (en) * 2021-07-16 2023-08-11 子亥科技(成都)有限公司 Track prediction method for multi-scale feature fusion and self-adaptive clustering
US20230091806A1 (en) * 2021-09-23 2023-03-23 Honda Motor Co., Ltd. Inverse optimal control for human approach
CN114216463A (en) * 2021-11-04 2022-03-22 国家电网有限公司 Path optimization target positioning method and device, storage medium and unmanned equipment
CN114061590A (en) * 2021-11-18 2022-02-18 北京仙宇科技有限公司 Method for dynamically creating robot cruise coordinate and robot navigation method
CN114237256B (en) * 2021-12-20 2023-07-04 东北大学 Three-dimensional path planning and navigation method suitable for under-actuated robot
CN114326732A (en) * 2021-12-28 2022-04-12 无锡笠泽智能科技有限公司 Robot autonomous following system and autonomous following control method
CN115437299A (en) * 2022-10-10 2022-12-06 北京凌天智能装备集团股份有限公司 Accompanying transportation robot advancing control method and system


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10962647B2 (en) * 2016-11-30 2021-03-30 Yujin Robot Co., Ltd. Lidar apparatus based on time of flight and moving object
CN107765220B (en) * 2017-09-20 2020-10-23 武汉木神机器人有限责任公司 Pedestrian following system and method based on UWB and laser radar hybrid positioning
CN109129507B (en) * 2018-09-10 2022-04-19 北京联合大学 Intelligent explaining robot and explaining method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9889566B2 (en) * 2015-05-01 2018-02-13 General Electric Company Systems and methods for control of robotic manipulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Amr Mohamed; Jing Ren; Haoxiang Lang; Moustafa El-Gindy. "Optimal collision free path planning for an autonomous articulated vehicle with two trailers." 2017 IEEE International Conference on Industrial Technology (ICIT), 2017-05-04. Full text. *

Also Published As

Publication number Publication date
CN109947119A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109947119B (en) Mobile robot autonomous following method based on multi-sensor fusion
Chung et al. The detection and following of human legs through inductive approaches for a mobile robot with a single laser range finder
Dong et al. Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2d lidar for mobile robots
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN112762928B (en) ODOM and DM landmark combined mobile robot containing laser SLAM and navigation method
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
Wu et al. Vision-based target detection and tracking system for a quadcopter
Navarro-Serment et al. LADAR-based pedestrian detection and tracking
Lu et al. Perception and avoidance of multiple small fast moving objects for quadrotors with only low-cost RGBD camera
Basit et al. Joint localization of pursuit quadcopters and target using monocular cues
CN113158779A (en) Walking method and device and computer storage medium
Catalano et al. Uav tracking with solid-state lidars: dynamic multi-frequency scan integration
Nomatsu et al. Development of an autonomous mobile robot with self-localization and searching target in a real environment
Gebregziabher Multi Object Tracking for Predictive Collision Avoidance
CN114636422A (en) Positioning and navigation method for information machine room scene
Dong et al. Path Planning Research for Outdoor Mobile Robot
Marchant et al. Cooperative global tracking using multiple sensors
Huang et al. Multi-object Detection, Tracking and Prediction in Rugged Dynamic Environments
Chang Application of multi-information fusion positioning technology in robot positioning system
Zhao et al. People following system based on lrf
Huang et al. Human-Following Strategy for Orchard Mobile Robot Based on the KCF-YOLO Algorithm
US11693416B2 (en) Route determination method
Chen et al. Global Visual And Semantic Observations for Outdoor Robot Localization
Sun et al. HPPS: A Hierarchical Progressive Perception System for Luggage Trolley Detection and Localization at Airports
Wu et al. Developing a dynamic obstacle avoidance system for autonomous mobile robots using Bayesian optimization and object tracking: Implementation and testing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231218

Address after: Room 4X-139, No. 96 Sanhao Street, Heping District, Shenyang City, Liaoning Province, 110057

Patentee after: Shenyang Ruige Holdings Co.,Ltd.

Address before: No.11, Wenhua Road, Sanxiang, Heping District, Shenyang City, Liaoning Province

Patentee before: Fang Zheng

Patentee before: Shenyang Ruige Holdings Co.,Ltd.

Effective date of registration: 20231218

Address after: No.11, Wenhua Road, Sanxiang, Heping District, Shenyang City, Liaoning Province

Patentee after: Fang Zheng

Patentee after: Shenyang Ruige Holdings Co.,Ltd.

Address before: No. 3-11, Wenhua Road, Heping District, Shenyang City, Liaoning Province, 110819

Patentee before: Northeastern University

TR01 Transfer of patent right

Effective date of registration: 20240116

Address after: No. 94-2 Sanhao Street, Heping District, Shenyang City, Liaoning Province, 110057 (3008)

Patentee after: Ruige Intelligent Technology (Shenyang) Co.,Ltd.

Address before: Room 4X-139, No. 96 Sanhao Street, Heping District, Shenyang City, Liaoning Province, 110057

Patentee before: Shenyang Ruige Holdings Co.,Ltd.
