CN108646761B - ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method - Google Patents

Info

Publication number: CN108646761B
Application number: CN201810764178.1A
Authority: CN (China)
Prior art keywords: target, ROS, tracking, robot, mobile robot
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN108646761A
Inventors: 姚利娜, 王继玉, 吴巍, 陈文浩, 李丰哲
Current assignee: Zhengzhou University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Zhengzhou University
Application filed by Zhengzhou University; priority to CN201810764178.1A; published as CN108646761A; application granted and published as CN108646761B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 ... with means for defining a desired trajectory
    • G05D1/0214 ... with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 ... with means for defining a desired trajectory involving a learning process
    • G05D1/0231 ... using optical position detecting means
    • G05D1/0234 ... using optical markers or beacons
    • G05D1/0236 ... using optical markers or beacons in combination with a laser
    • G05D1/0238 ... using obstacle or wall sensors
    • G05D1/024 ... using obstacle or wall sensors in combination with a laser
    • G05D1/0246 ... using a video camera in combination with image processing means
    • G05D1/0257 ... using a radar
    • G05D1/0276 ... using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking. A grid map is built from laser radar information, and an autonomous exploration strategy combining local map deduction with global boundary search is designed, so that the mobile robot is prevented from falling into a local exploration dead loop and exploration of the whole indoor environment is guaranteed. In the ROS system, a real-time tracking node and an occlusion tracking node are designed using an improved Kalman filtering method and the MeanShift method, solving the real-time and complete-occlusion problems of mobile robot target tracking; the computation speed of the system is increased, the target search time is shortened, and the real-time requirement of tracking is met. When the target is completely occluded, the occlusion tracking node predicts and tracks the target from its previous state information; after the occlusion ends, the target is locked again, the tracking mode switches automatically, and the target is tracked in the real-time tracking mode.

Description

ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method
Technical Field
The invention relates to the technical field of robot motion, in particular to a ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method.
Background
With the development of science and technology and the progress of society, mobile robot technology has advanced rapidly. Robots are now applied in many fields, including sanitation, medical care, tour guiding, education, entertainment, security and daily life, and can work in various environments, even dangerous, dirty or tedious ones. In many cases the workspace is unknown: when a robot enters an unknown environment, it must survey the working environment effectively and construct a map of it, since navigation, path planning, obstacle avoidance and other operations can only be performed on the basis of a constructed map. Exploring an unknown environment and building the corresponding map from laser radar data is therefore a necessary basic capability of a mobile robot. In many settings the robot cannot obtain a map of the working environment in advance; for example, in mine exploration, deep-sea exploration and rescue in dangerous environments, humans cannot enter the site to gather environmental information, and a mobile robot must perform the exploration and modeling. When the robot tracks a target, the robot's own motion, camera shake, irregular motion of the tracked target, lighting changes and other factors can increase the difficulty of tracking. The mobile robot completes its various tasks through an operating system together with motion control, path planning, obstacle avoidance, tracking, environment mapping and localization.
With the rapid development and growing sophistication of robotics, the need for code reuse and modularity has become ever stronger. ROS is a meta-operating system that runs on a host operating system such as Ubuntu. It has a distributed open-source software architecture, abstracts the robot's underlying hardware, improves code reuse, manages low-level drivers and common functions, and provides many capabilities of a traditional operating system, including common function implementations, inter-process message passing and package management. It further provides tools and libraries for obtaining, building and editing code and for running programs across multiple computers to perform distributed computing. ROS supports a variety of robot configurations and sensors, enabling researchers to develop and simulate quickly.
The ROS system supports multiple programming languages such as C++ and Python, integrates the OpenCV library for robot vision development, and offers SLAM map-construction and navigation packages. ROS can build a robot model using the Unified Robot Description Format (URDF), and can also build an idealized simulation environment in the Gazebo simulator, in which the robot can be driven through simulation experiments such as obstacle avoidance, path planning, map construction and navigation.
Target tracking methods are mainly divided into region-based, feature-based, model-based and active-contour-based tracking. Most target tracking with vision sensors is based on color features; for example, Comaniciu et al. applied the MeanShift algorithm to target tracking in the literature, after which MeanShift became widely used in the field. When a mobile robot tracks a target with the traditional MeanShift method, the tracking window cannot adapt its size, the motion of the target is not reflected, and the tracking effect is poor under interference from similarly colored objects, rapid movement or occlusion of the target. Bradski proposed the Camshift algorithm in the literature, which uses a color histogram to compute the color probability distribution of the target window to realize tracking. When a mobile robot tracks with the Camshift method, it loses the target when the target is completely occluded or the target's speed changes too quickly.
Disclosure of Invention
The invention provides a ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method, addressing the technical problem that an existing mobile robot loses the target during tracking when the target is completely occluded or the target's speed changes too quickly.
In order to achieve the purpose, the technical scheme of the invention is realized as follows: a robot indoor environment exploration, obstacle avoidance and target tracking method based on ROS comprises the following steps:
Step one: build the hardware platform of the ROS mobile robot: a motion control module is mounted at the bottom of the mobile robot, a sensor suite is fixed on its upper part, and a controller and a wireless communication module are installed in its middle; the ROS system is installed on the controller; the motion control module, sensors and wireless communication module are all connected to the controller, and the wireless communication module is connected to an upper computer;
Step two: place the ROS mobile robot in the room to be explored; scan the indoor environment with the laser radar in the sensor suite, and collect the robot's position and heading information with the odometer; the ROS mobile robot explores the indoor environment along a square-wave path, the upper computer acquires the laser radar's scan data through the wireless communication module, and the ROS system on the upper computer constructs a grid map with the map-construction package;
Step three: import the constructed grid map into the ROS system of the ROS mobile robot, navigate the robot to a specified position in the map using the grid map, and realize target tracking with the vision sensor based on a Kalman filtering method and the MeanShift tracking method.
The motion control module is mainly a Kobuki mobile base; the sensors comprise an odometer, a Kinect2.0 depth vision sensor and an Rplidar A1 laser radar; the controller is a Jetson TK1 development board running Ubuntu 14.04 and ROS Indigo; the wireless communication module is an Intel 7260AC HMW wireless network card, which transmits data over wifi. The controller is configured with ROS-compatible drivers for the Kinect2.0 depth vision sensor, the Rplidar A1 laser radar and the Intel 7260AC HMW wireless network card, together with the Kobuki base's software. Ubuntu 14.04 and ROS Indigo are also installed on the upper computer, which logs in remotely over SSH to the controller's Ubuntu system and starts the robot's mobile base; using ROS communication over the wifi module, the upper computer publishes velocity messages to the mobile base to change the robot's linear and angular velocity and control its motion.
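As a minimal sketch of this velocity interface (assuming the mobile base subscribes to the /cmd_vel_mux/input/navi topic named later in this document; the node name is illustrative), the upper computer can publish geometry_msgs/Twist messages from a rospy node:

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('velocity_commander')
    pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)
    rate = rospy.Rate(10)  # the Kobuki base expects a steady command stream

    cmd = Twist()
    cmd.linear.x = 0.2   # forward speed, m/s (illustrative)
    cmd.angular.z = 0.5  # yaw rate, rad/s (illustrative)

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()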
The square-wave path exploration method is as follows: at start-up, the ROS mobile robot is placed at an indoor spot with no obstacles within 1 m, and it begins moving toward unknown area. When the laser radar detects a wall ahead and the robot comes within 0.8 m of it, the controller commands the motion control module to turn away from the wall. During the turn, once the laser radar detects no obstacle ahead, the robot stops rotating, starts moving forward again and continues exploring; at this moment a timer in the controller is triggered. After moving for 10 seconds, the robot stops, rotates 90 degrees in the same direction as the preceding obstacle-avoidance turn, and then continues moving. When it next meets a wall it rotates again, this time in the direction opposite to the previous avoidance turn. These steps repeat until the exploration of the indoor environment is complete.
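The loop below is a simplified sketch of that square-wave pattern, under assumed topic names (/scan for the laser radar, /cmd_vel_mux/input/navi for the base) and illustrative speeds; the actual node also folds in the artificial potential field avoidance described next:

    import rospy
    from geometry_msgs.msg import Twist
    from sensor_msgs.msg import LaserScan

    WALL_DIST = 0.8   # turn when closer than 0.8 m to the wall (from the text)
    LEG_TIME = 10.0   # seconds to drive before the next 90-degree turn

    class SquareWaveExplorer:
        def __init__(self):
            self.front_dist = float('inf')
            self.turn_sign = 1  # alternates after each wall encounter
            rospy.Subscriber('/scan', LaserScan, self.scan_cb)
            self.pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist,
                                       queue_size=1)

        def scan_cb(self, msg):
            # distance straight ahead (centre beam of the Rplidar scan)
            self.front_dist = msg.ranges[len(msg.ranges) // 2]

        def drive(self, lin, ang, duration):
            cmd = Twist()
            cmd.linear.x, cmd.angular.z = lin, ang
            end = rospy.Time.now() + rospy.Duration(duration)
            rate = rospy.Rate(10)
            while rospy.Time.now() < end and not rospy.is_shutdown():
                self.pub.publish(cmd)
                rate.sleep()

        def run(self):
            while not rospy.is_shutdown():
                # move forward until a wall is closer than WALL_DIST
                while self.front_dist > WALL_DIST and not rospy.is_shutdown():
                    self.drive(0.2, 0.0, 0.1)
                self.drive(0.0, self.turn_sign * 0.5, 3.14)  # ~90 deg turn
                self.drive(0.2, 0.0, LEG_TIME)               # 10 s leg
                self.drive(0.0, self.turn_sign * 0.5, 3.14)  # second 90 deg turn
                self.turn_sign = -self.turn_sign             # alternate direction

    if __name__ == '__main__':
        rospy.init_node('square_wave_explorer')
        SquareWaveExplorer().run()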
During exploration, the ROS mobile robot avoids obstacles with an artificial potential field method, which comprises the following steps:

(1) Set the starting position to $p_s = [x_s, y_s]^T$ and the target position to $p_t = [x_t, y_t]^T$; the ROS mobile robot is treated as a point mass moving in a two-dimensional space.

(2) The upper computer obtains the robot's position in the global coordinate system, $p_c = [x_c, y_c]^T$, through the tf coordinate transformation of the ROS system; the position of the obstacle detected by the laser radar is $p_o = [x_o, y_o]^T$.

(3) Compute the resultant of the total repulsive force and the attractive force of the virtual potential field acting on the ROS mobile robot:

$$F(p_c) = -\nabla U(p_c) = F_a(p_c) + F_{re}(p_c),$$

where $U(p_c) = U_a(p_c) + U_{re}(p_c)$ is the sum of the attractive and repulsive potentials,

$$U_a(p_c) = \frac{1}{2}\,\lambda\, d^2(p_c, p_t)$$

is the attractive potential formed by the target acting on the ROS mobile robot,

$$U_{re}(p_c) = \begin{cases} \dfrac{1}{2}\,k\left(\dfrac{1}{d(p_c, p_o)} - \dfrac{1}{d_0}\right)^2, & d(p_c, p_o) \le d_0 \\ 0, & d(p_c, p_o) > d_0 \end{cases}$$

is the repulsive potential formed by the obstacle acting on the ROS mobile robot, with $\lambda$, $k$, $d_0$ all constants,

$$d(p_c, p_t) = \sqrt{(x_c - x_t)^2 + (y_c - y_t)^2}$$

is the Euclidean distance between the robot and the target, and

$$d(p_c, p_o) = \sqrt{(x_c - x_o)^2 + (y_c - y_o)^2}$$

is the Euclidean distance between the robot and the obstacle;

$$F_a(p_c) = -\nabla U_a(p_c) = -\lambda\,(p_c - p_t)$$

is the attractive force generated by the attractive field, and

$$F_{re}(p_c) = -\nabla U_{re}(p_c) = \begin{cases} k\left(\dfrac{1}{d(p_c, p_o)} - \dfrac{1}{d_0}\right)\dfrac{p_c - p_o}{d^3(p_c, p_o)}, & d(p_c, p_o) \le d_0 \\ 0, & d(p_c, p_o) > d_0 \end{cases}$$

is the repulsion of the obstacle on the ROS mobile robot. When several obstacles are present, the repulsive force each obstacle exerts on the robot is computed and the individual repulsions are combined into one total repulsive force.

(4) Take the direction of the resultant force as the robot's obstacle-avoidance direction and rotate the ROS mobile robot toward it before moving, realizing local obstacle avoidance; a sketch of this computation follows.
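A minimal numeric sketch of steps (3) and (4); the constants lambda, k and d0 take illustrative values here, as the patent does not specify them:

    import numpy as np

    LAM, K, D0 = 1.0, 1.0, 1.5  # illustrative values for lambda, k, d0

    def apf_direction(p_c, p_t, obstacles):
        """Resultant of the attraction toward target p_t and the repulsions
        from each detected obstacle; the direction of the resultant is the
        obstacle-avoidance heading."""
        p_c, p_t = np.asarray(p_c, float), np.asarray(p_t, float)
        f = -LAM * (p_c - p_t)              # attraction: -grad of 0.5*lam*d^2
        for p_o in obstacles:
            p_o = np.asarray(p_o, float)
            d = np.linalg.norm(p_c - p_o)
            if 0.0 < d <= D0:               # repulsion acts only inside d0
                f += K * (1.0 / d - 1.0 / D0) * (p_c - p_o) / d**3
        return np.arctan2(f[1], f[0])       # heading angle of the resultant

    # e.g. heading = apf_direction([0, 0], [3, 2], [[1.0, 0.5]])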
The laser radar of the ROS mobile robot scans the indoor environment, and the upper computer receives the radar's real-time data over the wifi module and calls the ROS map-construction package. The rviz visualization tool on the upper computer displays the grid map: detected obstacle regions are shown in black, detected obstacle-free regions in light gray, and unexplored regions in dark gray.
The navigation method of step three is as follows: the upper computer runs the rplidar_amcl.launch start file with the rviz visualization tool and imports the constructed grid map into the ROS mobile robot through the map_file argument or the TURTLEBOT_MAP_FILE environment variable of the .bashrc file. After the robot's two-dimensional pose in the grid map is estimated, the robot's initial heading is set: the robot rotates, stops when it reaches the set heading, and thereby fixes its direction in the actual environment. The navigation pose is then set through a two-dimensional navigation goal. Once a goal is set, the robot plans a path and then moves along it toward the goal, avoiding obstacles with the artificial potential field method; on reaching the goal position it stops moving, rotates to the goal heading and stops, so that its pose matches the specified position in the grid map.
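Goal dispatch can also be done programmatically. The sketch below assumes the standard ROS navigation stack's move_base action interface is available (the patent describes goal-setting through rviz, so this is an illustrative alternative with made-up goal coordinates):

    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('goto_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'   # pose expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0     # illustrative goal coordinates
    goal.target_pose.pose.position.y = 1.0
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()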
The vision sensor realizes target tracking with a real-time target tracking node, comprising the following steps:

Step (a): initialize the parameters of the Kalman filter: the state transition matrix A, observation matrix H, process noise covariance matrix Q, measurement noise covariance matrix R and state error covariance matrix P, and create the Kalman tracking object.

Step (b): from the target's state in the previous frame, perform the Kalman prediction $X(k/k-1) = A\,X(k-1/k-1)$ to obtain the predicted position $(x_1, y_1)$ of the target in the current frame, and update the state error covariance $P(k/k-1)$; here $X(k/k-1)$ is the result of predicting the state at time k from the state at time k-1, and $X(k-1/k-1)$ is the optimal estimate at time k-1.

The state in the target state equation $X(k) = A\,X(k-1) + W(k)$ is set to

$$X(k) = [\,x(k)\ \ y(k)\ \ v_x(k)\ \ v_y(k)\,]^T,$$

where $X(k)$ is the state of the system at time k, $(x(k-1), y(k-1))$ is the target's position at time k-1, and $v_x(k-1)$ and $v_y(k-1)$ are its velocities.

Step (c): using the window width w and height h of the previous frame and the predicted position $(x_1, y_1)$ of the current frame as the window center, obtain the actual position $(x_2, y_2)$ of the target in the current frame with the MeanShift tracking method.

Step (d): using the actual position $(x_2, y_2)$, compute the Kalman filter's observation from the target's measurement equation $Z(k) = H\,X(k) + V(k)$, compute the Kalman gain $K(k) = P(k/k-1)\,H'\left[H\,P(k/k-1)\,H' + R\right]^{-1}$, and obtain the target position $(x_3, y_3)$ through the Kalman state update correction $X(k/k) = X(k/k-1) + K(k)\left(Z(k) - H\,X(k/k-1)\right)$ as the accurate position of the target; simultaneously update the state error covariance matrix $P(k/k) = (I - K(k)\,H)\,P(k/k-1)$, where $P(k/k-1) = A\,P(k-1/k-1)\,A' + Q$ is the prediction of the state error covariance at time k from time k-1.

Step (e): take the target position $(x_3, y_3)$ as the predicted position for the next frame and repeat steps (b) to (d) to realize real-time tracking; if the process is closed the algorithm terminates, otherwise return to step (b).
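A hedged OpenCV sketch of steps (a) to (e); the noise covariances and frame interval are illustrative, not values fixed by the patent, and the back-projection image fed to cv2.meanShift is built from the target histogram as sketched later:

    import cv2
    import numpy as np

    dt = 1.0  # frame interval (illustrative)
    kf = cv2.KalmanFilter(4, 2)  # state [x y vx vy], measurement [x y]
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)   # A
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)  # H
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)      # Q (illustrative)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)  # R (illustrative)
    kf.errorCovPost = np.eye(4, dtype=np.float32)                # P

    def track_frame(back_proj, window):
        """One predict-search-correct cycle.
        back_proj: back-projection of the target histogram for this frame;
        window: (x, y, w, h) MeanShift window from the previous frame."""
        pred = kf.predict()                               # step (b): (x1, y1)
        x1, y1 = int(pred[0, 0]), int(pred[1, 0])
        w, h = window[2], window[3]
        seed = (max(x1 - w // 2, 0), max(y1 - h // 2, 0), w, h)
        crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        _, window = cv2.meanShift(back_proj, seed, crit)  # step (c): (x2, y2)
        x2, y2 = window[0] + w // 2, window[1] + h // 2
        meas = np.array([[np.float32(x2)], [np.float32(y2)]])
        post = kf.correct(meas)                           # step (d): (x3, y3)
        return (float(post[0, 0]), float(post[1, 0])), window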
The MeanShift tracking method tracks the target object as follows.

The target model is

$$\hat q_u = C \sum_{i=1}^{n} k\!\left(\|x_i\|^2\right)\delta\!\left[b(x_i) - u\right],$$

where $\delta$ is the Kronecker delta function, h is the bandwidth matrix of the window, $k(\|x\|^2)$ is the kernel function, and $b(x_i)$ is the quantization function mapping the image feature value computed at sample point $x_i$ to its corresponding bin.

Let y be the image coordinate of the center of the candidate target in the current frame; the model of the candidate target located at y is

$$\hat p_u(y) = C_u \sum_{i=1}^{m} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)\delta\!\left[b(x_i) - u\right],$$

where m and n denote the numbers of sampled data points and $C_u$ is the normalization coefficient

$$C_u = \frac{1}{\sum_{i=1}^{m} k\!\left(\left\|\dfrac{y - x_i}{h}\right\|^2\right)}.$$

The degree of similarity between the target object model and the candidate region is measured with the Bhattacharyya coefficient

$$\rho(y) \equiv \rho\!\left[\hat p(y), \hat q\right] = \sum_{u} \sqrt{\hat p_u(y)\,\hat q_u}.$$

Minimizing the distance

$$d(y) = \sqrt{1 - \rho\!\left[\hat p(y), \hat q\right]}$$

between the target object and the candidate in the space of the selected features is equivalent to maximizing the Bhattacharyya coefficient.

Given the initial position $y_0$ of the target object in the current image frame, expanding $\rho[\hat p(y), \hat q]$ in a first-order Taylor series gives

$$\rho\!\left[\hat p(y), \hat q\right] \approx \frac{1}{2}\sum_{u}\sqrt{\hat p_u(y_0)\,\hat q_u} + \frac{C_u}{2}\sum_{i=1}^{m} w_i\, k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right),$$

with the weight coefficients defined as

$$w_i = \sum_{u} \sqrt{\frac{\hat q_u}{\hat p_u(y_0)}}\;\delta\!\left[b(x_i) - u\right].$$

The iteration position in the current frame is then

$$y_1 = \frac{\sum_{i=1}^{m} x_i\, w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{m} w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}, \qquad g(x) = -k'(x).$$

The target object is searched for in every frame: the MeanShift tracking method iterates continuously to find the region of maximum similarity and computes the target's new position $y_1$ in the current frame, until $\|y_1 - y_0\|$ falls below the iteration stopping threshold or the number of iterations reaches its maximum; $y_1$ then becomes the starting position for the next frame's iteration.
the real-time target tracking node calculates the distance between the target and the ROS mobile robot according to the searched target area and the depth information of the target area, adjusts the linear speed of the ROS mobile robot for tracking the target, and adjusts the rotation angular speed of the ROS mobile robot for tracking the target according to the deviation between the target and the center of the image window of the visual sensor in the upper computer.
When no occlusion occurs the ROS mobile robot tracks the target with the real-time tracking node; when the target becomes completely occluded it tracks with the occlusion tracking node. The occlusion tracking node is executed when the Bhattacharyya distance between the target object model and the candidate region exceeds 0.6. The occlusion tracking node is designed as follows: let the pixel coordinates of the moving target in the video frame be (x, y), its velocities $v_x$ and $v_y$, and the image frame update time dt; the kinematic equations of the target are

$$\begin{cases} x(k) = x(k-1) + v_x(k-1)\,dt + \tfrac{1}{2}\,a_x(k-1)\,dt^2 \\ y(k) = y(k-1) + v_y(k-1)\,dt + \tfrac{1}{2}\,a_y(k-1)\,dt^2 \\ v_x(k) = v_x(k-1) + a_x(k-1)\,dt \\ v_y(k) = v_y(k-1) + a_y(k-1)\,dt \end{cases}$$

where $a_x(k-1)$ and $a_y(k-1)$ are the accelerations in the x and y directions at time k-1. This converts to

$$X(k) = A\,X(k-1) + W(k-1),$$

where

$$A = \begin{bmatrix} 1 & 0 & dt & 0 \\ 0 & 1 & 0 & dt \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad W(k-1) = \begin{bmatrix} \tfrac{1}{2}\,dt^2\,a_x(k-1) \\ \tfrac{1}{2}\,dt^2\,a_y(k-1) \\ dt\,a_x(k-1) \\ dt\,a_y(k-1) \end{bmatrix}.$$

This establishes the Kalman linear state equation of the moving target; the measurement equation is established as

$$\begin{cases} x_m(k) = x(k) + v_1(k) \\ y_m(k) = y(k) + v_2(k), \end{cases}$$

which converts to

$$Z(k) = H\,X(k) + V(k),$$

where

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$

When occlusion occurs, the Kalman filter continuously predicts and corrects the target's position from the motion state and measurement of the previous frame, realizing predictive tracking during the occlusion. The state error covariance matrix P and the process noise error covariance matrix Q of the Kalman filter are set to the fixed values given as figures in the original patent.

In the image processing function process_image(self, image_color), the occlusion tracking node uses the velocities $v_x$ and $v_y$ from before the target was lost to compute the distances $v_x\,dt$ and $v_y\,dt$ the target moves in the x and y directions within the frame update time dt; then, from the Kalman filter's corrected position $(x_3, y_3)$ in the previous frame, it obtains the target's state in the current frame, $X(k) = [\,x\ \ y\ \ v_x\ \ v_y\,]^T$ with $x = x_3 + v_x\,dt$ and $y = y_3 + v_y\,dt$, derives the measurement from the measurement equation, and corrects with the Kalman filter to obtain the target's position in the current frame. A sketch of this update follows.
The invention has the following beneficial effects. Oriented toward unknown-environment exploration tasks, it realizes autonomous exploration and obstacle avoidance of the mobile robot on the ROS system. The 2D grid map built from Rplidar A1 laser radar data, combined with local map deduction and global boundary search in the designed autonomous exploration strategy, prevents the mobile robot from falling into a local exploration dead loop and guarantees that exploration of the whole indoor environment is completed. Without user intervention, autonomous exploration and map construction of an unknown indoor environment are achieved, and the map-building process is displayed in real time on the upper computer; compared with a two-dimensional map built by a traditional autonomous exploration method, the result is intuitive, easy to recognize and convenient for the user to observe. In the ROS system, a real-time tracking node and an occlusion tracking node are designed using an improved Kalman filtering method and the MeanShift method, solving the real-time and complete-occlusion problems of mobile robot target tracking. Using the Kalman filter's prediction of the target position to seed the MeanShift tracking node raises the computation speed of the system, shortens the target search time and meets the real-time requirement of tracking. A state equation and an observation equation of the target are established; when the target is completely occluded, the occlusion tracking node predicts and tracks the target from its previous state information, and after the occlusion ends the target is locked again, the tracking mode switches automatically, and tracking continues in the real-time mode.
Drawings
In order to illustrate more clearly the embodiments of the present invention and the technical solutions in the prior art, the drawings used in their description are briefly introduced below. It is obvious that the following drawings show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a control structure diagram of a mobile robot according to the present invention.
Fig. 2 is a flowchart of an artificial potential field obstacle avoidance method in the ROS system.
Fig. 3 is an obstacle avoidance experiment performed by the ROS mobile robot in a simulated ideal indoor environment.
FIG. 4 is a flow chart of the present invention for real-time tracking of nodes.
FIG. 5 is a flow chart of the object tracking system of the present invention.
FIG. 6 shows test results of the real-time tracking node displayed on the upper computer, where (a), (b), (c) and (d) show the tracking results when the moving target is far from the mobile robot, close to it, moving left and moving right, respectively; (e) and (f) show the node's tracking results when the moving target moves quickly to the left and to the right.
FIG. 7 shows results of an actual test of the real-time target tracking node, where (a), (b) and (c) show the mobile robot rotating synchronously to follow the target in real time; (d), (e) and (f) show the mobile robot tracking the target's forward motion in real time; and (g), (h) and (i) show the moving target approaching the mobile robot.
FIG. 8 shows the upper computer's display for the occlusion tracking node, where (a) shows the real-time tracking mode before the target is completely occluded; FIGS. 8(b) and 8(c) show the occlusion tracking node's results while the target is completely occluded; and FIG. 8(d) shows the automatic switch back to the real-time tracking mode after the occlusion ends.
FIG. 9 shows test results of the occlusion tracking node on the mobile robot platform, where (a) shows the mobile robot tracking the moving target in the real-time tracking mode, (b) and (c) show tracking in the occlusion tracking mode while the target is completely occluded, and (d) shows the switch back to the real-time tracking mode after the occlusion ends and the moving target is re-locked.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
A robot indoor environment exploration, obstacle avoidance and target tracking method based on ROS comprises the following steps:
Step one: build the hardware platform of the ROS mobile robot: a motion control module is mounted at the bottom of the mobile robot, a sensor suite is fixed on its upper part, and a controller and a wireless communication module are installed in its middle; the ROS system is installed on the controller; the motion control module, sensors and wireless communication module are all connected to the controller, and the wireless communication module is connected to the upper computer.
The control structure of the mobile robot is shown in FIG. 1, and the hardware platform is built accordingly: the motion control module sits at the bottom of the mobile robot, the sensors are fixed on its upper part, the controller and the wireless communication module sit in its middle, the motion control module, sensors and wireless communication module are all connected to the controller, and the wireless communication module is connected to the upper computer. The motion control module is mainly a Kobuki mobile base; the sensors comprise a high-precision odometer, a Kinect2.0 depth vision sensor and an Rplidar A1 laser radar; the controller is a Jetson TK1 development board running Ubuntu 14.04 and ROS Indigo. The wireless communication module is an Intel 7260AC HMW wireless network card, i.e. a wifi module. After assembly, the controller is configured with ROS-compatible drivers for the Kinect2.0 depth vision sensor, the Rplidar A1 laser radar and the Intel 7260AC HMW wireless network card, together with the Kobuki base's software. Ubuntu 14.04 and ROS Indigo are installed on the upper computer, with an automatically connecting wifi module configured in the operating system and the environment variables of the .bashrc file set to ease development and testing of the mobile robot. The upper computer logs in remotely over SSH to the Jetson TK1's Ubuntu system, starts the robot's mobile base, and publishes speed messages to the base over the wifi module using ROS communication, changing the robot's linear and angular velocity and controlling its motion.
Step two: the ROS mobile robot is placed in the room to be explored; the laser radar in the sensor suite scans the indoor environment and the odometer collects the robot's position and heading information; the ROS mobile robot explores the indoor environment along a square-wave path, the upper computer acquires the laser radar's scan data through the wireless communication module, and the ROS system on the upper computer constructs a grid map with the map-construction package.
The autonomous exploration program of the ROS system uses the Rplidar A1 laser radar and odometer data; the ROS system captures the messages published by the laser radar and odometer through subscribed topics, processes them, adjusts the Twist message to change the robot's linear or angular velocity, publishes it to the mobile base of the ROS mobile robot, and controls the robot to move in the designed pattern.
A topological map of the globally explored indoor environment reduces the amount of stored data: each scanned position need not be stored, only positions a certain distance from the last stored one. A topological map is too abstract, however, and the key task of indoor autonomous exploration and map construction is scanning the indoor structure; mapping the environment in 3D demands high memory and computational cost, so to simplify map construction a grid map is built with the grid method. The ROS mobile robot scans the indoor environment with the Rplidar A1 laser radar, the upper computer receives the radar's real-time data over the wifi module, and the grid map is built through the ROS map-construction package. When an obstacle appears in the data returned by the laser radar, the rviz visualization tool on the upper computer shows the detected obstacle region on the grid map in black; regions detected to be obstacle-free are drawn in light gray, and unexplored regions remain dark gray.
Most commonly a roaming mode is used: the robot moves toward unknown indoor areas, searching the environment with the laser radar, and the process repeats until the whole indoor environment has been explored. On the assembled ROS mobile robot platform, indoor exploration and map building were first performed in roaming mode. Roaming exploration of a specific room can complete the map, but it takes too long and tends to fall into a local loop. To solve this problem, the invention designs a new exploration mode and performs square-wave exploration in an idealized indoor environment.
At start-up, the ROS mobile robot is placed in a room with no obstacles within 1 m, and the square-wave exploration program of the ROS system is started. The robot begins moving toward unknown area; after the Rplidar A1 laser radar detects a wall ahead and the robot comes within 0.8 m of it, the controller commands the motion control module to turn away from the wall. During the turn, once the laser radar detects no obstacle ahead, the robot stops rotating, starts moving forward again and continues exploring; at this moment a timer in the controller is triggered. After moving for 10 seconds the robot stops, rotates 90 degrees in the same direction as the preceding obstacle-avoidance turn, then continues moving; when it next meets a wall it rotates again, this time opposite to the previous avoidance direction. These steps repeat until the exploration of the indoor environment is complete.
During exploration, the upper computer communicates with the operating system of the ROS mobile robot through the wireless communication module, and the mapping package in the ROS system builds the SLAM grid map.
Having the ROS mobile robot autonomously explore the indoor environment to build the map avoids tedious remote-control operation. When the map is built by remote control, human factors bring the robot too close to or too far from obstacles; there is no clear avoidance standard, and everything depends on human judgment, so the resulting map differs visibly from one built by autonomous exploration with a defined obstacle-avoidance standard. Under remote control the commanded speed may also be set unreasonably, leaving the robot too close to an obstacle to turn in time, so that it touches the obstacle and degrades the map.
In this work the ROS mobile robot avoids obstacles with the artificial potential field method: the positions of the ROS mobile robot in the global coordinate system and of the obstacle are obtained through the tf coordinate transformation of the ROS system, the resultant of the total repulsive force and the attractive force is computed from the artificial potential field function, and the direction of the resultant is taken as the robot's obstacle-avoidance direction.
The artificial potential field divides into an attractive field and a repulsive field: the attractive field is the target object's attraction on the mobile robot, drawing the robot toward the target's position; the repulsive field, the obstacle's repulsion on the robot, pushes the mobile robot away from the obstacle.
To derive the artificial potential field function, the robot is first treated as a point mass moving in two dimensions. Let the robot's current position be $p_c = [x_c, y_c]^T$, the starting position $p_s = [x_s, y_s]^T$, the target position $p_t = [x_t, y_t]^T$, and the position of the obstacle detected by the laser radar $p_o = [x_o, y_o]^T$.

The attractive potential formed by the target acting on the ROS mobile robot is

$$U_a(p_c) = \frac{1}{2}\,\lambda\, d^2(p_c, p_t);$$

the repulsive potential formed by the obstacle acting on the ROS mobile robot is

$$U_{re}(p_c) = \begin{cases} \dfrac{1}{2}\,k\left(\dfrac{1}{d(p_c, p_o)} - \dfrac{1}{d_0}\right)^2, & d(p_c, p_o) \le d_0 \\ 0, & d(p_c, p_o) > d_0, \end{cases}$$

where $\lambda$, $k$ and $d_0$ are all constants, the Euclidean distance between the robot and the target is

$$d(p_c, p_t) = \sqrt{(x_c - x_t)^2 + (y_c - y_t)^2},$$

and the Euclidean distance between the robot and the obstacle is

$$d(p_c, p_o) = \sqrt{(x_c - x_o)^2 + (y_c - y_o)^2}.$$

The sum of the attractive and repulsive potentials is

$$U(p_c) = U_a(p_c) + U_{re}(p_c).$$

The attractive force generated by the attractive field is

$$F_a(p_c) = -\nabla U_a(p_c) = -\lambda\,(p_c - p_t),$$

and the repulsion of the obstacle on the ROS mobile robot is

$$F_{re}(p_c) = -\nabla U_{re}(p_c) = \begin{cases} k\left(\dfrac{1}{d(p_c, p_o)} - \dfrac{1}{d_0}\right)\dfrac{p_c - p_o}{d^3(p_c, p_o)}, & d(p_c, p_o) \le d_0 \\ 0, & d(p_c, p_o) > d_0. \end{cases}$$

The resultant force of the virtual potential field acting on the ROS mobile robot is

$$F(p_c) = -\nabla U(p_c) = F_a(p_c) + F_{re}(p_c).$$
when aiming at a plurality of obstacles, the repulsive force generated by each obstacle to the robot needs to be calculated, and the repulsive forces generated by the plurality of obstacles are combined into a total repulsive force. In practical application, when the robot meets an obstacle, the resultant force direction is taken as the motion direction of the robot, and the local obstacle avoidance of the robot can be realized. The flow chart of the artificial potential field obstacle avoidance method in the ROS system is shown in FIG. 2, the artificial potential field obstacle avoidance node obtains scanning information of a laser radar by using the publishing and subscribing functions of the ROS system, when the laser radar detects an obstacle, the robot stops moving, an artificial potential field obstacle avoidance function is called, the resultant force of the target attraction force and the obstacle repulsion force is calculated according to the position information of a odometer, and the direction of the resultant force is used as the obstacle avoidance direction of the mobile robot. The mobile robot rotates to the obstacle avoidance direction in situ and then starts to move again, so that the obstacle avoidance is realized.
FIG. 3 shows the obstacle-avoidance experiment of the ROS mobile robot in an idealized indoor environment simulated in Gazebo. FIG. 3(a) shows the robot moving toward the white box; FIG. 3(b) shows that when the robot's laser radar detects an obstacle ahead closer than the set avoidance distance, the robot stops, calls the artificial potential field function, computes the avoidance direction, rotates to it and continues forward. FIGS. 3(c) and 3(d) show the robot moving in the avoidance direction while avoiding the obstacle. The experimental results show that the robot avoids the obstacle well using the artificial potential field algorithm.
The mobile robot designed on the ROS system, combined with the artificial potential field obstacle-avoidance algorithm, autonomously explores the unknown indoor environment to build the indoor grid map. Compared with traditional manual exploration and mapping, this is more efficient, and the constructed grid map is closer to the real indoor environment with better results. Compared with a traditional obstacle-avoidance experiment, which requires obstacle and target coordinates to be set in advance, the artificial potential field avoidance combined with autonomous exploration is flexible, prevents the mobile robot from falling into a local exploration dead loop, and completes the exploration of the whole indoor environment. Compared with traditional mobile robot target tracking, the constructed map can be used to navigate the mobile robot to the target position for tracking.
Step three: import the constructed grid map into the ROS system of the ROS mobile robot, navigate the robot to a specified position in the map using the grid map, and realize target tracking with the vision sensor based on a Kalman filtering method and the MeanShift tracking method.
After the map is built, the upper computer runs the rplidar_amcl.launch start file of the Rplidar A1 laser radar with the rviz visualization tool and imports the constructed grid map into the ROS mobile robot through the map_file argument or the TURTLEBOT_MAP_FILE environment variable of the .bashrc file, for target navigation and obstacle avoidance.
Mobile robot target tracking mainly uses the vision sensor, combining robot motion control and image processing through the designed real-time target tracking node to track the selected target accurately and quickly. The real-time target tracking node target_tracking.py designed in this invention implements an improved target tracking algorithm that reduces the time needed to locate the target and improves the real-time performance of mobile robot tracking. The node is based on the Kalman filtering method and the MeanShift tracking method and is a reasonable improvement of both; its principle is shown in FIG. 4. First a Kalman object is created and its parameters are initialized; the Kalman filter then predicts the target's position in the current frame from its position in the previous frame, and the prediction is corrected with the result of the MeanShift search in the current frame. The specific steps are as follows:
(1) First initialize the Kalman filter parameters: the state transition matrix A, observation matrix H, process noise covariance matrix Q, measurement noise covariance matrix R and state error covariance matrix P, and create the Kalman tracking object.

(2) From the target's state in the previous frame, perform the Kalman prediction $X(k/k-1) = A\,X(k-1/k-1)$ to obtain the predicted position $(x_1, y_1)$ of the target in the current frame and update the state error covariance $P(k/k-1)$; here $X(k/k-1)$ is the result of predicting the state at time k from the state at time k-1, and $X(k-1/k-1)$ is the optimal estimate at time k-1.

The state in the target state equation $X(k) = A\,X(k-1) + W(k)$ is set to

$$X(k) = [\,x(k)\ \ y(k)\ \ v_x(k)\ \ v_y(k)\,]^T,$$

where $X(k)$ is the state of the system at time k, $(x(k-1), y(k-1))$ is the target's position at time k-1, and $v_x(k-1)$ and $v_y(k-1)$ are its velocities.

(3) Using the window width w and height h of the previous frame and the predicted position $(x_1, y_1)$ of the current frame as the window center, obtain the actual position $(x_2, y_2)$ of the target in the current frame with the MeanShift tracking method.

(4) Using the actual position $(x_2, y_2)$, compute the Kalman filter's observation from the target's measurement equation $Z(k) = H\,X(k) + V(k)$, compute the Kalman gain $K(k) = P(k/k-1)\,H'\left[H\,P(k/k-1)\,H' + R\right]^{-1}$, and obtain the target position $(x_3, y_3)$ through the Kalman state update correction $X(k/k) = X(k/k-1) + K(k)\left(Z(k) - H\,X(k/k-1)\right)$ as the accurate position of the target; simultaneously update the state error covariance matrix $P(k/k) = (I - K(k)\,H)\,P(k/k-1)$, where $P(k/k-1) = A\,P(k-1/k-1)\,A' + Q$ is the prediction of the state error covariance at time k from time k-1.

(5) Take the target position $(x_3, y_3)$ as the predicted position of the next frame and repeat steps (2) to (4) to realize real-time tracking; if the process is closed the algorithm terminates, otherwise return to step (2).
The real-time target tracking node target_tracking.py creates the Kalman parameter object in the class TargetTracking and initializes the Kalman and Camshift parameters. Through the mouse callback function mouse_cb(self, event, x, y, flags, param), registered with the cv2.setMouseCallback(self.node_name, self.mouse_cb) mouse response function, the tracking target can be selected manually in the image window of the upper computer. After the tracking target is determined, a histogram of the target window is built in the image processing function process_image(self, image_color), the target's initial state is initialized, and the functions of steps (2) to (4) are implemented in code. The code uses random process noise and the target information obtained by Camshift in the current frame, updates the target state with the target's state equation $X(k) = A\,X(k-1) + W(k)$, generates random measurement noise, obtains the target measurement with the measurement equation $Z(k) = H\,X(k) + V(k)$, and corrects the target position from the measurement. A sketch of the histogram set-up follows.
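The set-up below is a hedged sketch using standard OpenCV calls; the hue range and mask thresholds are illustrative, not taken from the patent:

    import cv2

    def make_target_hist(frame_bgr, roi):  # roi = (x, y, w, h), user-selected
        """Build an HSV hue histogram of the selected target window."""
        x, y, w, h = roi
        hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0., 60., 32.), (180., 255., 255.))
        hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return hist

    def back_project(frame_bgr, hist):
        """Back-projection image fed to the MeanShift search each frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)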
After the robot is driven to the designated position in the grid map from the upper computer, the robot's target tracking node is started and a target is selected on the upper computer's interactive interface for the robot to track. Robot tracking uses the MeanShift target tracking algorithm: first the target model is determined, then a candidate model is built, the target object is searched for in each frame, and the similarity of the target and candidate models is judged with the Bhattacharyya coefficient; the greater the similarity, the closer the candidate model is to the target model and the closer the searched region is to the target region. From the searched target region and its depth information the distance between target and robot is computed and the robot's linear tracking velocity is adjusted; from the deviation between the target and the center of the Kinect2.0 vision sensor's image window on the upper computer, the robot's angular tracking velocity is adjusted. The depth information represents the pixel-wise distances of objects in the image and, after conversion, is used to compute the distance between the robot and the target.
Tracking the target object with the MeanShift method, the target model is

$$\hat q_u = C \sum_{i=1}^{n} k\!\left(\|x_i\|^2\right)\delta\!\left[b(x_i) - u\right],$$

where $\delta$ is the Kronecker delta function, h is the bandwidth matrix of the window, which limits the number of pixels of the candidate target object, $k(\|x\|^2)$ is the kernel function, and $b(x_i)$ is the quantization function mapping the image feature value computed at sample point $x_i$ to its corresponding bin. The target model expresses the visual characteristics of the target object; target models differ when the features in the images differ, and their corresponding feature spaces are likewise distinct.

Let y be the image coordinate of the center of the candidate target in the current frame; the model of the candidate target located at y is

$$\hat p_u(y) = C_u \sum_{i=1}^{m} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)\delta\!\left[b(x_i) - u\right],$$

where $C_u$ is the normalization coefficient

$$C_u = \frac{1}{\sum_{i=1}^{m} k\!\left(\left\|\dfrac{y - x_i}{h}\right\|^2\right)}.$$

The degree of similarity between the target object model and the candidate region is measured with the Bhattacharyya coefficient

$$\rho(y) \equiv \rho\!\left[\hat p(y), \hat q\right] = \sum_{u} \sqrt{\hat p_u(y)\,\hat q_u}.$$

For MeanShift to track the target, the essential step is to find the position y in the image plane at which the target object and the candidate attain the minimum distance

$$d(y) = \sqrt{1 - \rho\!\left[\hat p(y), \hat q\right]}$$

in the space of the selected features, which is equivalent to the Bhattacharyya coefficient taking its maximum value.

Given the initial position $y_0$ of the target object in the current image frame, expanding $\rho[\hat p(y), \hat q]$ in a first-order Taylor series gives

$$\rho\!\left[\hat p(y), \hat q\right] \approx \frac{1}{2}\sum_{u}\sqrt{\hat p_u(y_0)\,\hat q_u} + \frac{C_u}{2}\sum_{i=1}^{m} w_i\, k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right),$$

with the weight coefficients defined as

$$w_i = \sum_{u} \sqrt{\frac{\hat q_u}{\hat p_u(y_0)}}\;\delta\!\left[b(x_i) - u\right].$$

The MeanShift algorithm obtains the iteration position in the current frame as

$$y_1 = \frac{\sum_{i=1}^{m} x_i\, w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{m} w_i\, g\!\left(\left\|\frac{y_0 - x_i}{h}\right\|^2\right)}, \qquad g(x) = -k'(x).$$

The target object is searched for in each frame: the MeanShift algorithm iterates continuously to find the region of maximum similarity and computes the target's new position $y_1$ in the current frame, until $\|y_1 - y_0\|$ falls below the iteration stopping threshold or the number of iterations reaches its maximum; $y_1$ then becomes the starting position for the next frame's iteration. The similarity of the target and candidate models is judged with the Bhattacharyya coefficient: the greater the similarity, the closer the candidate model is to the target model and the closer the searched region is to the target region.
On the basis of the real-time target tracking node, an occluded-target tracking node is designed to solve the complete-occlusion problem in mobile robot target tracking. The motion state of the target tracked by the mobile robot is stable most of the time, so the state information from before the target is lost can be used to estimate its motion during the occlusion for predictive tracking, and the moving target is locked again once the occlusion ends. Designing the occlusion tracking node requires a condition for judging that occlusion has occurred: when no occlusion occurs the real-time tracking node tracks the target, and when complete occlusion occurs the occlusion tracking node takes over. Experimental tests show that when the target is occluded, the Bhattacharyya distance between the color histogram of the predicted target region and that of the pre-occlusion target region changes, ranging from 0 to 1. During normal tracking the Bhattacharyya distance is very small, approaching 0; under complete occlusion it is very large, approaching 1. A threshold of 0.6 is therefore chosen: when the Bhattacharyya distance exceeds 0.6, the occlusion tracking node is executed, as sketched below.
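A minimal sketch of that mode-switching test; cv2.compareHist with the Bhattacharyya metric returns a distance near 0 for a good match and near 1 under full occlusion, matching the 0.6 threshold above:

    import cv2

    OCCLUSION_THRESHOLD = 0.6  # from the text

    def is_occluded(hist_target, hist_candidate):
        """Bhattacharyya distance between the pre-occlusion target histogram
        and the histogram of the currently predicted region."""
        d = cv2.compareHist(hist_target, hist_candidate,
                            cv2.HISTCMP_BHATTACHARYYA)
        return d > OCCLUSION_THRESHOLD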
The design principle of the occlusion tracking node occlusion_tracking is as follows: suppose the pixel coordinates of the moving target in a video frame are (x, y), its motion speeds are v_x and v_y, and the image frame update time is dt. The kinematic equations of the target are established as:
x(k) = x(k−1) + v_x(k−1)·dt + (1/2)·a_x(k−1)·dt²
y(k) = y(k−1) + v_y(k−1)·dt + (1/2)·a_y(k−1)·dt²
v_x(k) = v_x(k−1) + a_x(k−1)·dt
v_y(k) = v_y(k−1) + a_y(k−1)·dt
where a_x(k−1) and a_y(k−1) are the accelerations in the x and y directions at time k−1; in matrix form this becomes:
X(k)=AX(k-1)+W(k-1);
where:

X(k) = [x(k) y(k) v_x(k) v_y(k)]^T,

A =
| 1  0  dt  0 |
| 0  1  0  dt |
| 0  0  1   0 |
| 0  0  0   1 |,

and W(k−1) is the process noise vector, which absorbs the acceleration terms a_x(k−1) and a_y(k−1).
The above establishes a linear Kalman state equation for the moving target. To apply the Kalman filtering method, a measurement equation is also required; assume it is:
z_1(k) = x(k) + v_1(k)
z_2(k) = y(k) + v_2(k)
which in matrix form is:
Z(k)=HX(k)+V(k),
where:

H =
| 1  0  0  0 |
| 0  1  0  0 |,

and V(k) is the measurement noise.
After the state equation and measurement equation of the moving target are established, a Kalman filter can be used: when occlusion occurs, the position of the target is continuously predicted and corrected from the motion state and measurement of the previous frame, realizing predictive tracking during occlusion.
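A minimal sketch of this filter using OpenCV's cv2.KalmanFilter, with the A and H matrices exactly as derived above; the noise covariances here are illustrative placeholders, since the patent fixes P and Q experimentally (see below):

    import numpy as np
    import cv2

    def make_kalman(dt):
        # Constant-velocity model over state X = [x, y, vx, vy]^T with
        # measurement Z = [x, y]^T, matching A and H above.
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                        [0, 1, 0, dt],
                                        [0, 0, 1,  0],
                                        [0, 0, 0,  1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                         [0, 1, 0, 0]], np.float32)
        # Illustrative values only; the patent tunes P and Q by debugging
        # on the robot platform.
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
        kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        kf.errorCovPost = np.eye(4, dtype=np.float32)
        return kf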
In actual tests, the covariance matrix of the system's initial state error influences the occlusion tracking performance of the mobile robot. The initial value is difficult to measure, and no empirical value is available because mobile robot platforms differ; the occlusion tracking node occlusion_tracking updates this value as it runs in a loop. Through debugging, the following fixed initial values were found to give an ideal tracking effect:
state error covariance matrix:
(fixed 4×4 values given as a figure in the original document)
process noise error covariance matrix:
(fixed 4×4 values given as a figure in the original document)
the occlusion tracking node oclusion _ tracking _ py is in the image processing function process _ image (self, image _ color) according to the motion speed v before the target is lostxAnd vyThe calculation target is within the video frame update time dt,moving distance v of target in x direction and y directionxDt and vyDt; then according to the correction position (x) of the Kalman filter in the previous frame3,y3) Using x ═ x3+vx*dt,y=y3+vyDt, obtaining the state of the target in the current frame X (k) ([ x y vx vy ]]TThen, the measurement equation z (k) ═ hx (k) + v (k) is used to obtain the measurement value of the target. And correcting by using a Kalman filter according to the measured value to obtain the position of the target in the current frame.
The target position is continuously predicted and updated through the ROS system's cyclic execution command, i.e. the spin() loop, to realize predictive tracking of the target. The histogram of the predicted target window is updated at intervals, and the Bhattacharyya distance is used to judge whether the occlusion has ended; if the occlusion is over and the moving target reappears in the field of view of the vision sensor, the target is locked again, the tracking mode switches automatically, the real-time tracking node is executed, and the normal tracking state is restored.
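The loop structure of the node might look like the following rospy sketch; the class name and the hand-over of the Kalman object kf, the speeds vx and vy, and the last corrected position from the real-time node are assumptions for illustration:

    import numpy as np
    import rospy
    from sensor_msgs.msg import Image

    class OcclusionTracker(object):
        def __init__(self, kf, x3, y3, vx, vy, dt):
            # kf and the pre-occlusion state come from the real-time node.
            self.kf, self.x3, self.y3 = kf, x3, y3
            self.vx, self.vy, self.dt = vx, vy, dt
            rospy.Subscriber('/kinect2/qhd/image_color', Image,
                             self.process_image)

        def process_image(self, image_color):
            self.kf.predict()  # time update X(k/k-1)
            # Synthesize a measurement from the previous corrected position
            # and the pre-occlusion velocity: x = x3 + vx*dt, y = y3 + vy*dt.
            z = np.array([[self.x3 + self.vx * self.dt],
                          [self.y3 + self.vy * self.dt]], np.float32)
            est = self.kf.correct(z)  # measurement update X(k/k)
            self.x3, self.y3 = float(est[0]), float(est[1])

    # In the node's main: rospy.init_node('occlusion_tracking'), construct
    # the tracker, then rospy.spin() keeps this predict/correct cycle
    # running for every incoming frame until the occlusion ends.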
A flow chart of the target tracking system is shown in fig. 5. When the mobile robot tracks a moving target, its motion node obtains the target window by subscribing to the /roi topic, sets the robot's angular velocity from the offset between the centre of the target window and the centre of the image window, and controls the robot's rotation so that it follows the target. The real-time image collected by the vision sensor is obtained by subscribing to the /kinect2/qhd/image_color topic defined in the ROS system architecture, and the acquired image data is processed by the image callback function imageCb(self). The depth image is obtained by subscribing to the /kinect2/qhd/image_depth_rect topic and is used to compute the distance between the robot and the target. If this distance falls outside the set threshold range, the linear velocity of the mobile robot is adjusted automatically according to the deviation between the actual and preset distances, and the motion node publishes to the /cmd_vel_mux/input/navi topic to control the robot's movement: when the deviation is large the robot moves fast, but never above the set maximum speed; when the deviation is small it moves slowly, but never below the set minimum speed. This realizes automatic adjustment of the mobile robot's speed, and the target tracking status can be observed in the image window of the upper computer. Under normal conditions the mobile robot tracks the target in real-time tracking mode; when occlusion occurs it switches to occlusion tracking mode for predictive tracking, and after the occlusion ends it re-captures the moving target and switches back to real-time tracking mode. In both modes the robot automatically adjusts to keep a safe tracking distance from the moving target; when the target stops, the robot automatically fine-tunes its position and stops within the safe distance range.
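A sketch of the speed regulation in this motion node, using the threshold values quoted for the real-time tracking test below; the proportional gains and dead-band widths are illustrative assumptions:

    import rospy
    from geometry_msgs.msg import Twist

    SAFE_DIST = 0.65                 # m, safe-distance threshold
    MAX_LIN, MIN_LIN = 0.5, 0.05     # m/s
    MAX_ANG, MIN_ANG = 1.2, 0.2      # rad/s

    def clamp(v, lo, hi):
        # Bound |v| to [lo, hi] while keeping its sign.
        s = 1.0 if v >= 0 else -1.0
        return s * min(max(abs(v), lo), hi)

    def velocity_command(distance, pixel_offset, k_lin=0.8, k_ang=0.005):
        cmd = Twist()
        err = distance - SAFE_DIST
        if abs(err) > 0.05:          # outside the threshold band: drive
            cmd.linear.x = clamp(k_lin * err, MIN_LIN, MAX_LIN)
        if abs(pixel_offset) > 10:   # target window off image centre: turn
            cmd.angular.z = clamp(-k_ang * pixel_offset, MIN_ANG, MAX_ANG)
        return cmd

    # Usage inside the node, after rospy.init_node():
    # pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)
    # pub.publish(velocity_command(depth_distance, roi_cx - image_cx))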
To verify and demonstrate the tracking effect of the real-time tracking node, the display results on the upper computer and the test results on the mobile robot platform are analyzed separately.
The test results of the real-time tracking node as displayed on the upper computer are shown in fig. 6. Figs. 6(a), 6(b), 6(c) and 6(d) show the tracking results when the moving target is far from the mobile robot, close to the mobile robot, moving left, and moving right, respectively; the tilted rectangle is the actual target area found by the real-time tracking node, and the non-tilted rectangle is the predicted target area. The results show that the Kalman filter of the real-time tracking node accurately predicts the target's direction of motion, so MeanShift can find the moving target quickly, and the target window is adjusted adaptively to realize real-time tracking. When the moving target moves rapidly, only the tilted rectangle is used to mark the target area. Figs. 6(e) and 6(f) are the tracking results of the real-time tracking node when the moving target moves rapidly left and right. They show that when the instantaneous speed of the moving target changes sharply, the target image in the video frame becomes blurred and the target's shape changes, but the real-time tracking node can still capture the moving target in real time.
When the mobile robot is tested in practice, the relevant parameters need to be set manually according to the experimental environment. In the real-time tracking test, the safe distance threshold is 0.65 m, the maximum rotation speed is 1.2 rad/s, the minimum rotation speed is 0.2 rad/s, the maximum linear velocity is 0.5 m/s, and the minimum linear velocity is 0.05 m/s.
Fig. 7 shows the results of an actual test using the real-time target tracking node. Figs. 7(a), 7(b) and 7(c) show the mobile robot rotating synchronously in real time to follow the target. Figs. 7(d), 7(e) and 7(f) show the robot tracking the target's forward motion in real time: when the target moves away beyond the set safe distance, the mobile robot starts tracking it forward, and after the moving target stops, the robot automatically adjusts and stops within the safe distance range. Figs. 7(g), 7(h) and 7(i) show the moving target approaching the mobile robot: when the target comes closer than the set safe distance, the robot backs up while tracking it, automatically keeping the safe tracking distance until the target stops moving, and then stops after adjusting itself to a suitable position. Throughout the test, whether the moving target moved normally or changed speed suddenly, the mobile robot tracked it stably, showing that the designed real-time tracking node meets the mobile robot's requirement for real-time target tracking.
To better verify the performance of the real-time tracking node, the traditional MeanShift tracking method was implemented as a MeanShift tracking node and compared with the real-time tracking node. The runtime performance of the two nodes was verified by the time each takes to find the moving target in a video frame, as shown in Table 1. Although the real-time tracking node adds program steps, its Kalman prediction reduces the number of iterations and shortens the search time. As Table 1 shows, the real-time tracking node finds the moving target and ends the iterative search faster, averaging about 0.001 s, roughly 0.008 s faster than the traditional MeanShift tracking node. This demonstrates the performance advantage of the real-time tracking node designed on the ROS system and meets the real-time requirement of mobile robot target tracking.
TABLE 1 comparison of conventional MeanShift trace and real-time trace arithmetic Performance
(the comparison data were provided as an image in the original document)
The occlusion tracking node adds an occlusion tracking mode on top of the real-time tracking mode. The upper-computer display results of the occlusion tracking node are shown in fig. 8.
Fig. 8(a) shows the tracking result of the real-time tracking mode when the target is not completely occluded; experiments show that under partial occlusion the real-time tracking mode still tracks the target well. Figs. 8(b) and 8(c) show that when the target is completely occluded, the Bhattacharyya distance computed by the occlusion tracking node exceeds the set threshold of 0.6, the system switches automatically to occlusion tracking mode, and the target's direction of motion is predicted to realize predictive tracking, with the non-tilted rectangle marking the predicted target position; experiments show that the occlusion tracking mode accurately predicts the target's direction of motion from the prior information of the target state before occlusion. Fig. 8(d) shows that when the occlusion ends, the moving target is locked again, occlusion tracking mode ends, and the system switches automatically back to real-time tracking mode. The experiments show that the designed occlusion tracking node handles occlusion well and improves the robustness of the tracking system.
Fig. 9 shows the test results of the mobile robot platform using the occlusion tracking node: fig. 9(a) shows the mobile robot tracking the moving target in real-time tracking mode; figs. 9(b) and 9(c) show the occlusion tracking mode tracking the moving target while it is completely occluded; and fig. 9(d) shows the system automatically switching back to real-time tracking mode after the occlusion ends and the moving target is locked again. In fig. 9, as the moving target moves toward the white baffle, the real-time tracking mode drives the mobile robot to track it synchronously; when the target is completely occluded, the occlusion tracking mode predicts its direction of motion and the robot moves accordingly; after the occlusion ends, the target is locked again, the system switches back to real-time tracking mode, and the robot continues tracking in real time. The experiments show that the designed occlusion tracking node solves the occlusion problem of mobile robot target tracking well under certain conditions and achieves an ideal tracking effect.
The invention implements a mobile robot target tracking system based on the ROS (Robot Operating System). Aiming at the poor real-time performance of mobile robot target tracking, a real-time target tracking node is designed: Kalman filtering predicts the target position from the target's prior information, and MeanShift searches for the target starting from the prediction, which raises the system's operating speed, shortens the target search time, and meets the system's real-time requirement. Aiming at the problem of complete occlusion during target tracking, the state equation and observation equation of the target are established, and the designed occlusion tracking node tracks the target predictively from its state information before occlusion; after the occlusion ends, the target is locked again and the system switches automatically back to real-time tracking mode. The experimental results verify the real-time performance and robustness of the tracking system.
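Putting the pieces together, the following is a hedged sketch of the real-time tracking loop, reusing the make_kalman, make_target_model and track_one_frame sketches above; the window-clamping detail is an assumption added to keep the seeded window inside the frame:

    import numpy as np

    def realtime_track(frames, kf, hist, window, frame_w, frame_h):
        for frame in frames:
            pred = kf.predict()                 # Kalman predicts (x1, y1)
            w, h = window[2], window[3]
            x1 = int(pred[0]) - w // 2
            y1 = int(pred[1]) - h // 2
            # Keep the seeded window inside the frame before searching.
            x1 = min(max(x1, 0), frame_w - w)
            y1 = min(max(y1, 0), frame_h - h)
            # MeanShift refines the prediction to the actual position.
            window = track_one_frame(frame, hist, (x1, y1, w, h))
            z = np.array([[window[0] + w / 2.0],
                          [window[1] + h / 2.0]], np.float32)
            kf.correct(z)                       # corrected (x3, y3)
            yield window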
Using the ROS system, the invention combines the prediction capability of the Kalman filter with the colour-feature-based MeanShift tracking method, applies the tracking methods flexibly, integrates hardware and software, and designs a real-time tracking node and an occlusion tracking node, solving the real-time and complete-occlusion problems in target tracking for the modified Turtlebot2 mobile robot. The experimental results and data show that the designed tracking nodes realize real-time target tracking by the mobile robot, meet the mobile robot's target tracking requirements under certain conditions, and improve the robustness and stability of the target tracking system.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A robot indoor environment exploration, obstacle avoidance and target tracking method based on ROS is characterized by comprising the following steps:
the method comprises the following steps: building a hardware platform of the ROS mobile robot: the bottom of the mobile robot is provided with a motion control module, the upper part of the mobile robot is fixed with a sensor, the middle part of the mobile robot is provided with a controller and a wireless communication module, the controller is provided with an ROS system, the motion control module, the sensor and the wireless communication module are all connected with the controller, and the wireless communication module is connected with an upper computer;
step two: the ROS mobile robot is placed in the room to be explored; the laser radar in the sensors scans the indoor environment, the odometer in the sensors collects the position and orientation information of the mobile robot, and the ROS mobile robot explores the indoor environment along a square-wave path; the upper computer acquires the laser radar's scan information through the wireless communication module, and the ROS system in the upper computer constructs a grid map using the map-construction function package;
step three: the constructed grid map is imported into the ROS system of the ROS mobile robot, the robot is navigated to a specified position in the map using the grid map, and target tracking is realized by the vision sensor based on the Kalman filtering method and the MeanShift tracking method;
the square-wave path exploration method is as follows: at start-up, the ROS mobile robot is placed at an indoor spot with no obstacles within 1 m around it, and it begins moving toward the unknown area; after the laser radar detects a wall ahead and the robot is less than 0.8 m from the wall, the controller commands the motion control module to start turning so as to avoid the wall ahead; during the turn, once the laser radar detects no obstacle ahead, the robot stops rotating, moves forward again and continues exploring, at which point a timer in the controller is triggered and starts timing; after moving for 10 seconds the robot stops, rotates 90 degrees again in the direction it turned during obstacle avoidance, and continues moving after the rotation is complete, rotating again when it meets a wall; the preceding steps are repeated until the indoor environment exploration is finished, with each obstacle-avoidance rotation direction opposite to the previous rotation direction;
in exploration, the ROS mobile robot avoids the barrier by using an artificial potential field barrier avoiding method, wherein the artificial potential field barrier avoiding method comprises the following steps:
(1) set the starting point position p_s = [x_s, y_s]^T and the target point position p_t = [x_t, y_t]^T; the ROS mobile robot is regarded as a mass point moving in a two-dimensional space;
(2) the upper computer obtains the position of the ROS mobile robot in the global coordinate system, p_c = [x_c, y_c]^T, through the tf coordinate transformation of the ROS system; the position of the obstacle detected by the laser radar is p_o = [x_o, y_o]^T;
(3) calculate the resultant of the attractive force and the total repulsive force of the virtual potential field acting on the ROS mobile robot:

F(p_c) = −∇U(p_c) = F_a(p_c) + F_re(p_c);

where U(p_c) = U_a(p_c) + U_re(p_c) is the sum of the attractive potential and the repulsive potential,

U_a(p_c) = (1/2)·λ·ρ²(p_c, p_t)

is the attractive potential created by the target for the ROS mobile robot,

U_re(p_c) = (1/2)·k·(1/ρ(p_c, p_o) − 1/d_0)² when ρ(p_c, p_o) ≤ d_0, and 0 otherwise,

is the repulsive potential formed by the obstacle for the ROS mobile robot; λ, k and d_0 are all constants,

ρ(p_c, p_t) = ||p_t − p_c||

is the Euclidean distance between the robot and the target, and

ρ(p_c, p_o) = ||p_o − p_c||

is the Euclidean distance between the robot and the obstacle;

F_a(p_c) = −∇U_a(p_c) = λ·(p_t − p_c)

is the attractive force generated by the attractive field;

F_re(p_c) = −∇U_re(p_c) = k·(1/ρ(p_c, p_o) − 1/d_0)·(1/ρ²(p_c, p_o))·(p_c − p_o)/ρ(p_c, p_o) when ρ(p_c, p_o) ≤ d_0, and 0 otherwise,

is the repulsive force of the obstacle on the ROS mobile robot; when multiple obstacles exist, the repulsive force generated by each obstacle on the robot is calculated, and the individual repulsive forces are combined into a total repulsive force;
(4) take the direction of the resultant force as the robot's obstacle avoidance direction; the ROS mobile robot rotates to the obstacle avoidance direction and moves, realizing local obstacle avoidance of the robot;
when no occlusion occurs, the ROS mobile robot uses the real-time tracking node to track the target, and when complete occlusion occurs, it uses the occlusion tracking node to track the target; when the Bhattacharyya distance measuring the similarity between the target object model and the candidate object region is larger than 0.6, the occlusion tracking node is executed;
in the image processing function process_image(self, image_color), the occlusion tracking node uses the motion speeds v_x and v_y recorded before the target was lost to compute the distances v_x·dt and v_y·dt moved by the target in the x and y directions within the video frame update time dt; then, from the Kalman filter's corrected position (x3, y3) in the previous frame, it uses x = x3 + v_x·dt and y = y3 + v_y·dt to obtain the state of the target in the current frame, X(k) = [x y v_x v_y]^T, and then obtains the target's measurement from the measurement equation; finally, the Kalman filter corrects with this measurement to obtain the position of the target in the current frame.
2. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method of claim 1, wherein the motion control module is mainly a Kobuki mobile chassis; the sensors comprise an odometer, a Kinect2.0 depth vision sensor and an Rplidar A1 laser radar; the controller is a Jetson TK1 development board running Ubuntu 14.04 and the ROS Indigo system; the wireless communication module is an Intel 7260AC HMW wireless network card, which realizes data transmission over wifi; the controller is installed with drivers, suitable for the ROS system, for the Kinect2.0 depth vision sensor, the Rplidar A1 laser radar and the Intel 7260AC HMW wireless network card, and with the Kobuki software system; Ubuntu 14.04 and the ROS Indigo operating system are set up on the upper computer, which logs in remotely to the controller's Ubuntu system via SSH, starts the robot's mobile chassis, and, using the ROS communication mechanism over the wireless communication module, publishes velocity messages to the mobile chassis to change the robot's linear and angular velocities and control its motion.
3. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method of claim 1, wherein the laser radar of the ROS mobile robot scans the indoor environment information, the upper computer updates the data returned by the laser radar in real time through the wifi module and calls the map-construction function package of the ROS system; when an obstacle appears in the data returned by the laser radar, the rviz visualization tool in the upper computer displays the detected obstacle area on the grid map in black, areas detected to be free of obstacles are drawn in light gray, and unexplored areas are displayed in dark gray.
4. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method of claim 1, wherein the navigation method of step three is: the upper computer uses the rviz visualization tool, runs the laser radar's rplidar_amcl.launch start file, and imports the constructed grid map into the ROS mobile robot via the map_file or TURTLEBOT_MAP_FILE environment variable of the .bashrc file; when the robot performs two-dimensional pose estimation in the grid map, the initial pose direction of the robot is set, and the robot starts rotating and stops after rotating to the set direction, which designates the robot's direction in the actual environment; the navigation pose of the robot is set with a two-dimensional navigation goal, and once the navigation goal is set, the robot plans a path and, when planning is finished, starts moving toward the goal along the planned path, avoiding obstacles met on the way using the artificial potential field obstacle avoidance method; on reaching the target position in the grid map, the robot stops moving and rotates to the designated pose.
5. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method of claim 1, wherein the vision sensor utilizes a real-time target tracking node to realize target tracking, and the steps are as follows:
step (a): initialize the Kalman filtering parameters: the state transition matrix A, observation matrix H, process noise covariance matrix Q, measurement noise covariance matrix R and state error covariance matrix P, and create the Kalman tracking object; step (b): according to the target state position of the previous frame, perform Kalman prediction on the target's tracking state with X(k/k−1) = AX(k−1/k−1) to obtain the position (x1, y1) of the target in the current frame, and update the state error covariance P(k/k−1); where X(k/k−1) is the prediction of the state at time k from the state at time k−1, and X(k−1/k−1) is the optimal estimate at time k−1;
the state in the target state equation X(k) = AX(k−1) + W(k) is set to:
X(k−1) = [x(k−1) y(k−1) v_x(k−1) v_y(k−1)]^T;
where X(k) is the state of the system at time k, (x(k−1), y(k−1)) is the position of the target at time k−1, and v_x(k−1) and v_y(k−1) are its motion speeds;
step (c): using the window width w and height h of the previous frame and taking the predicted position (x1, y1) of the current frame as the window centre, obtain the actual position (x2, y2) of the target in the current frame in combination with the MeanShift tracking method;
step (d): using the actual position (x2, y2) of the target in the current frame, compute the observation of the Kalman filter from the target's measurement equation Z(k) = HX(k) + V(k), compute the Kalman gain K(k) = P(k/k−1)H′[HP(k/k−1)H′ + R]^(−1), and obtain the target position (x3, y3) as the exact position through the Kalman state update correction X(k/k) = X(k/k−1) + K(k)(Z(k) − HX(k/k−1)); simultaneously update the state error covariance matrix P(k/k) = (I − K(k)H)P(k/k−1), where P(k/k−1) is the prediction at time k−1 of the state error covariance at time k: P(k/k−1) = AP(k−1/k−1)A′ + Q;
step (e): take the target position (x3, y3) as the predicted position of the next frame and repeat steps (b) to (d) to realize real-time tracking of the target; if the process is closed, the algorithm terminates; otherwise, return to step (b).
6. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method according to claim 5, wherein the MeanShift tracking method is used for tracking a target object:
the established target model is as follows:
q_u = C · Σ_{i=1..n} k(||x_i||²) · δ[b(x_i) − u];
where δ is the Kronecker delta function, h is the bandwidth matrix of the window, k(||x||²) is the kernel function, and b(x_i) is the quantization function mapping the image feature value computed at the sampling point x_i to its corresponding bin value;
assuming that y is the image coordinate of the center of the candidate target in the current frame, the model of the candidate target located at y is:
p_u(y) = C_u · Σ_{i=1..n_h} k(||(y − x_i)/h||²) · δ[b(x_i) − u];
where m and n both represent numbers of sampled data points, and C_u is a normalization coefficient:
C_u = 1 / Σ_{i=1..n_h} k(||(y − x_i)/h||²);
the degree of similarity between the target object model and the candidate object region is measured with the Bhattacharyya coefficient:
ρ(y) = ρ[p(y), q] = Σ_{u=1..m} √(p_u(y) · q_u);
finding the position y at which the target object and the candidate target attain the minimum distance d(y) = √(1 − ρ[p(y), q]) in the feature space of the selected features is equivalent to the Bhattacharyya coefficient ρ[p(y), q] taking its maximum value;
given the initial position of the target object in the current image frame as y0, expanding ρ[p(y), q] in a first-order Taylor series yields:
ρ[p(y), q] ≈ (1/2) Σ_{u=1..m} √(p_u(y0) · q_u) + (1/2) Σ_{u=1..m} p_u(y) · √(q_u / p_u(y0));
defining a weight coefficient:
w_i = Σ_{u=1..m} √(q_u / p_u(y0)) · δ[b(x_i) − u];
the iterated position in the current frame is obtained as:
y1 = [Σ_{i=1..n_h} x_i · w_i · g(||(y0 − x_i)/h||²)] / [Σ_{i=1..n_h} w_i · g(||(y0 − x_i)/h||²)], where g(x) = −k′(x);
when searching for the target object in each frame, the MeanShift tracking method iterates continuously to find the region with the maximum similarity value and computes the new position y1 of the target in the current frame, until ||y1 − y0|| < ε (the iteration stop threshold) or the number of iterations reaches the maximum; y1 then becomes the starting position for the next frame's iteration;
the real-time target tracking node calculates the distance between the target and the ROS mobile robot according to the searched target area and the depth information of the target area, adjusts the linear speed of the ROS mobile robot for tracking the target, and adjusts the rotation angular speed of the ROS mobile robot for tracking the target according to the deviation between the target and the center of the image window of the visual sensor in the upper computer.
7. The ROS-based robot indoor environment exploring, obstacle avoiding and target tracking method according to claim 6, wherein the design method of the occlusion tracking node is as follows: suppose the pixel coordinates of the moving target in a video frame are (x, y), its motion speeds are v_x and v_y, and the image frame update time is dt; the kinematic equations of the target are established as:
x(k) = x(k−1) + v_x(k−1)·dt + (1/2)·a_x(k−1)·dt²
y(k) = y(k−1) + v_y(k−1)·dt + (1/2)·a_y(k−1)·dt²
v_x(k) = v_x(k−1) + a_x(k−1)·dt
v_y(k) = v_y(k−1) + a_y(k−1)·dt
where a_x(k−1) and a_y(k−1) are the accelerations in the x and y directions at time k−1; in matrix form this becomes:
X(k)=AX(k-1)+W(k-1);
where:
X(k) = [x(k) y(k) v_x(k) v_y(k)]^T,
A =
| 1  0  dt  0 |
| 0  1  0  dt |
| 0  0  1   0 |
| 0  0  0   1 |,
and W(k−1) is the process noise vector absorbing the acceleration terms;
this establishes the linear Kalman state equation of the moving target; the measurement equation is established as:
z_1(k) = x(k) + v_1(k),
z_2(k) = y(k) + v_2(k),
which in matrix form is:
Z(k)=HX(k)+V(k),
where:
H =
| 1  0  0  0 |
| 0  1  0  0 |,
and V(k) is the measurement noise;
when occlusion occurs, the Kalman filter continuously predicts and corrects the position of the target from the motion state and the measurement of the previous frame, realizing predictive tracking during occlusion; the state error covariance matrix in the Kalman filter is:
(fixed 4×4 values given as a figure in the original document)
process noise error covariance matrix:
(fixed 4×4 values given as a figure in the original document)
CN201810764178.1A 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method Active CN108646761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810764178.1A CN108646761B (en) 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810764178.1A CN108646761B (en) 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method

Publications (2)

Publication Number Publication Date
CN108646761A CN108646761A (en) 2018-10-12
CN108646761B (en) 2020-07-31

Family

ID=63751133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810764178.1A Active CN108646761B (en) 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method

Country Status (1)

Country Link
CN (1) CN108646761B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111123732B (en) * 2018-10-31 2023-06-27 百度在线网络技术(北京)有限公司 Method and device for simulating automatic driving vehicle, storage medium and terminal equipment
CN109544472B (en) * 2018-11-08 2022-06-21 苏州佳世达光电有限公司 Object driving device and object driving method
CN109213201B (en) * 2018-11-30 2021-08-24 北京润科通用技术有限公司 Obstacle avoidance method and device
CN109917818B (en) * 2019-01-31 2021-08-13 天津大学 Collaborative search containment method based on ground robot
CN111650928B (en) * 2019-02-18 2024-03-05 北京奇虎科技有限公司 Autonomous exploration method and device for sweeping robot
CN110221613B (en) * 2019-06-12 2020-04-17 北京洛必德科技有限公司 Robot path planning method and device based on improved artificial potential field method
TWI743519B (en) * 2019-07-18 2021-10-21 萬潤科技股份有限公司 Self-propelled device and method for establishing map
CN110509271A (en) * 2019-07-23 2019-11-29 国营芜湖机械厂 It is a kind of that robot control method is followed based on laser radar
WO2021068150A1 (en) * 2019-10-10 2021-04-15 Huawei Technologies Co., Ltd. Controlling method of mobile apparatus and computer program thereof
CN110887489A (en) * 2019-11-22 2020-03-17 深圳晨芯时代科技有限公司 AR robot-based SLAM algorithm experimental method
CN111006652B (en) * 2019-12-20 2023-08-01 深圳市飞瑶电机科技有限公司 Robot side-by-side operation method
CN113093176B (en) * 2019-12-23 2022-05-17 北京三快在线科技有限公司 Linear obstacle detection method, linear obstacle detection device, electronic apparatus, and storage medium
CN111308993B (en) * 2020-02-13 2022-04-01 青岛联合创智科技有限公司 Human body target following method based on monocular vision
CN111360841B (en) * 2020-05-27 2020-08-18 北京云迹科技有限公司 Robot monitoring method and device, storage medium and electronic equipment
CN111805535B (en) * 2020-06-11 2022-06-07 浙江大华技术股份有限公司 Positioning navigation method, device and computer storage medium
CN112130565B (en) * 2020-09-14 2023-06-23 贵州翰凯斯智能技术有限公司 Self-propelled robot platform control system and communication method thereof
CN112270076B (en) * 2020-10-15 2022-10-28 同济大学 Environment model construction method and system based on intelligent agent active perception
CN112738022B (en) * 2020-12-07 2022-05-03 浙江工业大学 Attack method for ROS message of robot operating system
CN112698629A (en) * 2020-12-23 2021-04-23 江苏睿科大器机器人有限公司 AGV (automatic guided vehicle) scheduling method and system suitable for hospital scene
CN113029143B (en) * 2021-02-24 2023-06-02 同济大学 Indoor navigation method suitable for pepper robot
CN113093729A (en) * 2021-03-10 2021-07-09 上海工程技术大学 Intelligent shopping trolley based on vision and laser radar and control method
CN113313151A (en) * 2021-04-28 2021-08-27 上海有个机器人有限公司 Laser dynamic matching method, electronic equipment and storage medium
CN113052152B (en) * 2021-06-02 2021-07-30 中国人民解放军国防科技大学 Indoor semantic map construction method, device and equipment based on vision
CN113612920A (en) * 2021-06-23 2021-11-05 广西电网有限责任公司电力科学研究院 Method and device for shooting power equipment image by unmanned aerial vehicle
CN114200471B (en) * 2021-12-07 2022-08-23 杭州电子科技大学信息工程学院 Forest fire source detection system and method based on unmanned aerial vehicle, storage medium and equipment
CN114373329A (en) * 2021-12-31 2022-04-19 广东奥博信息产业股份有限公司 Vehicle searching method for indoor parking lot, electronic equipment and readable storage medium
CN114460939B (en) * 2022-01-22 2024-09-20 贺晓转 Autonomous navigation improvement method for intelligent walking robot in complex environment
CN115648221A (en) * 2022-11-22 2023-01-31 福州大学 Education robot based on ROS system
CN116382310B (en) * 2023-06-06 2023-08-18 南京理工大学 Artificial potential field path planning method and system
CN116578101B (en) * 2023-07-12 2023-09-12 季华实验室 AGV pose adjustment method based on two-dimensional code, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325126A (en) * 2013-07-09 2013-09-25 中国石油大学(华东) Video target tracking method under circumstance of scale change and shielding

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914068A (en) * 2013-01-07 2014-07-09 中国人民解放军第二炮兵工程大学 Service robot autonomous navigation method based on raster maps
CN103559725B (en) * 2013-08-09 2016-01-06 中国地质大学(武汉) A kind of wireless sensor node optimum choice method of following the tracks of towards vision
CN105487535A (en) * 2014-10-09 2016-04-13 东北大学 Mobile robot indoor environment exploration system and control method based on ROS
CN104992451A (en) * 2015-06-25 2015-10-21 河海大学 Improved target tracking method
CN105466421B (en) * 2015-12-16 2018-07-17 东南大学 Mobile robot autonomous cruise method towards reliable WIFI connections
CN105955262A (en) * 2016-05-09 2016-09-21 哈尔滨理工大学 Mobile robot real-time layered path planning method based on grid map

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325126A (en) * 2013-07-09 2013-09-25 中国石油大学(华东) Video target tracking method under circumstance of scale change and shielding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ROS based stereo vision system for autonomous vehicle; B Abhishek et al.; 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI); 2017-09-22; pp. 2269-2273 *
An anti-occlusion moving target tracking algorithm; Sun Zhongsen et al.; China Academic Journal Abstracts; 2007-12-31; Vol. 13, No. 18, p. 165 *
Implementation of mobile robot SLAM and path planning under the ROS framework; Chen Zhuo et al.; Chinese Medical Equipment Journal; 2017-02; Vol. 38, No. 2, pp. 109-113 *

Also Published As

Publication number Publication date
CN108646761A (en) 2018-10-12

Similar Documents

Publication Publication Date Title
CN108646761B (en) ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method
Thorpe et al. Vision and navigation for the Carnegie Mellon Navlab
US8576235B1 (en) Visibility transition planning for dynamic camera control
EP1504277B1 (en) Real-time target tracking of an unpredictable target amid unknown obstacles
JP5881743B2 (en) Self-position estimation of mobile camera using depth map
Wurm et al. Bridging the gap between feature-and grid-based SLAM
Yang et al. Real-time optimal navigation planning using learned motion costs
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
Yokoyama et al. Autonomous mobile robot with simple navigation system based on deep reinforcement learning and a monocular camera
Yokoyama et al. Success weighted by completion time: A dynamics-aware evaluation criteria for embodied navigation
CN113110455A (en) Multi-robot collaborative exploration method, device and system for unknown initial state
Holz et al. Continuous 3D sensing for navigation and SLAM in cluttered and dynamic environments
CN107728612A (en) Identify that different crowd carries out method, storage device and the mobile terminal of advertisement pushing
Yang et al. Vision-based localization and mapping for an autonomous mower
Fan et al. A nonlinear optimization-based monocular dense mapping system of visual-inertial odometry
Martín et al. Octree-based localization using RGB-D data for indoor robots
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
Kumar et al. Periodic SLAM: Using cyclic constraints to improve the performance of visual-inertial SLAM on legged robots
Cui et al. Simulation and Implementation of Slam Drawing Based on Ros Wheeled Mobile Robot
Gui et al. Robust direct visual inertial odometry via entropy-based relative pose estimation
Belter et al. Keyframe-Based local normal distribution transform occupancy maps for environment mapping
Tahara et al. Ex-dof: Expansion of action degree-of-freedom with virtual camera rotation for omnidirectional image
Pfaff et al. Navigation in combined outdoor and indoor environments using multi-level surface maps
Hornung Humanoid robot navigation in complex indoor environments
Jianjun et al. A direct visual-inertial sensor fusion approach in multi-state constraint Kalman filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant