CN108646761A - ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking - Google Patents

ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking

Info

Publication number
CN108646761A
CN108646761A (application CN201810764178.1A)
Authority
CN
China
Prior art keywords
target
ros
robot
tracking
avoidance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810764178.1A
Other languages
Chinese (zh)
Other versions
CN108646761B (en)
Inventor
姚利娜
王继玉
吴巍
陈文浩
李丰哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN201810764178.1A priority Critical patent/CN108646761B/en
Publication of CN108646761A publication Critical patent/CN108646761A/en
Application granted granted Critical
Publication of CN108646761B publication Critical patent/CN108646761B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
        • G05: CONTROLLING; REGULATING
            • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
                • G05D 1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
                    • G05D 1/02: Control of position or course in two dimensions
                        • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
                            • G05D 1/0212: with means for defining a desired trajectory
                                • G05D 1/0214: in accordance with safety or protection criteria, e.g. avoiding hazardous areas
                                • G05D 1/0221: involving a learning process
                            • G05D 1/0231: using optical position detecting means
                                • G05D 1/0234: using optical markers or beacons
                                    • G05D 1/0236: optical markers or beacons in combination with a laser
                                • G05D 1/0238: using obstacle or wall sensors
                                    • G05D 1/024: obstacle or wall sensors in combination with a laser
                                • G05D 1/0246: using a video camera in combination with image processing means
                            • G05D 1/0257: using a radar
                            • G05D 1/0276: using signals provided by a source external to the vehicle

Abstract

The present invention provides a ROS-based method for indoor environment exploration, obstacle avoidance and target tracking by a mobile robot. Based on a grid map built from laser radar data, and combining local map inference with global boundary search, the designed autonomous exploration strategy keeps the mobile robot from getting trapped in a local exploration loop and guarantees that the entire indoor environment is explored. Using an improved Kalman filtering method together with the MeanShift method, the invention designs a real-time tracking node and an occlusion tracking node in the ROS system, solving the real-time and full-occlusion problems of mobile-robot target tracking, raising the system's computation speed, shortening the target search time, and meeting the real-time requirement of tracking. When the target is fully occluded, the occlusion tracking node predicts the target's trajectory from its state information before the occlusion; once the occlusion ends, the target is re-locked and the method automatically switches back to the real-time tracking mode to continue tracking the target.

Description

ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking
Technical field
The present invention relates to the technical field of robot motion, and in particular to a ROS-based method for indoor environment exploration, obstacle avoidance and target tracking by a robot.
Background technology
With the progress of science, technology and society, mobile robot technology has developed rapidly. Robots are now applied in many fields, including health care, medical treatment, guided tours, education, entertainment, security and daily life, and can operate in a wide variety of working environments, even dangerous, dirty or tedious ones. In many cases the information about the workspace is unknown: when a robot enters an unknown environment, it must survey the working environment effectively and construct a map of it. Only on the basis of the constructed map can navigation, path planning, obstacle avoidance and other operations be carried out. Exploring an unknown environment and building the corresponding environment map from the information obtained by a laser radar is therefore an indispensable basic capability of a mobile robot. In many settings, however, the robot cannot obtain a map of the working environment in advance; in working environments such as mine surveying, deep-sea exploration and rescue in hazardous environments, humans usually cannot enter the scene to gather environmental information, so the detection of the environment and the creation of its model must rely on the mobile robot. When a robot tracks a target, factors such as the robot's own motion, camera shake, the irregular movement of the tracked target and changes in illumination all increase the complexity of target tracking. Through its operating system, a mobile robot performs motion control, path planning, obstacle avoidance, tracking, environment mapping, localization and other functions to complete a wide variety of tasks.
With the rapid development and growing complexity of the robotics field, the demand for code reuse and modularity keeps increasing. ROS is a secondary operating system that runs on top of a host operating system such as Ubuntu. It has a distributed open-source software architecture, abstracts the robot's low-level hardware and improves the reusability of code. It offers hardware driver management and common function execution, and can provide many functions of a traditional operating system, including common function implementations, inter-process message passing and package management. In addition, it provides related tools and libraries for obtaining, compiling and editing code and for running programs across multiple computers to perform distributed computation. ROS supports many robot models and sensors, enabling researchers to develop and simulate rapidly.
The ROS system supports several programming languages such as C++ and Python, integrates the OpenCV library developed for robot vision, and offers SLAM map-building and navigation packages. ROS can describe a robot model in the standardized Unified Robot Description Format (URDF), and the Gazebo simulation software can be used to build an idealized simulation environment in which the robot is driven through obstacle avoidance, path planning, map building, navigation and other simulation experiments.
Target tracking methods are broadly divided into region-based tracking, feature-based tracking, model-based tracking and active-contour-based tracking. Most target tracking with vision sensors is based on colour features. For example, Comaniciu et al. applied the MeanShift algorithm to target tracking, after which the algorithm became widely used in the target tracking field. When a mobile robot tracks a target with the traditional MeanShift method, the tracking window cannot adapt its size and therefore cannot reflect the target's motion; in the presence of same-colour interference, fast target motion or occlusion, the method's tracking performance is unsatisfactory. Bradski proposed the Camshift algorithm, which computes the colour probability distribution of the target window from a colour histogram to realize target tracking. When a mobile robot tracks with the Camshift method, it loses the target when the target is fully occluded or the target's speed changes too quickly.
Summary of the invention
To address the technical problem that an existing mobile robot loses the target during target tracking when the target is fully occluded or its speed changes too quickly, the present invention proposes a ROS-based method for indoor environment exploration, obstacle avoidance and target tracking. On a mobile robot platform based on the ROS system, a laser radar sensor and remote control from a host computer realize the mobile robot's autonomous exploration and obstacle avoidance; without user intervention, autonomous exploration of an unknown indoor environment and map building are achieved, and the real-time and full-occlusion problems of mobile-robot target tracking are solved.
To achieve the above objectives, the technical solution of the invention is realized as a ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking, whose steps are as follows:
Step 1: Build the hardware platform of the ROS mobile robot. A motion control module is arranged at the bottom of the mobile robot, a sensor suite is fixed on its top, and a controller and a wireless communication module are mounted in its middle. The ROS system is installed on the controller; the motion control module, sensors and wireless communication module are connected to the controller, and the wireless communication module is connected to the host computer.
Step 2: Place the ROS mobile robot in the indoor environment to be surveyed. The laser radar in the sensor suite scans the indoor environment, the odometer collects the mobile robot's position and heading information, and the robot explores the indoor environment along a square-wave trajectory. The host computer receives the laser radar's scan information through the wireless communication module, and the ROS system on the host computer builds a grid map with the map-building package.
Step 3: Import the built grid map into the ROS system of the ROS mobile robot and use it to navigate the robot to a designated position on the map; the vision sensor then realizes target tracking based on the Kalman filtering and MeanShift tracking methods.
The motion control module is mainly a Kobuki mobile base. The sensors comprise an odometer, a Kinect 2.0 depth vision sensor and an Rplidar A1 laser radar. The controller is a Jetson TK1 development board running Ubuntu 14.04 and ROS Indigo, and the wireless communication module is an Intel 7260AC HMW wireless network card that transmits data over wifi. The controller is configured with the ROS-compatible drivers for the Kinect 2.0 depth vision sensor, the Rplidar A1 laser radar and the Intel 7260AC HMW network card, together with the software for the depth vision sensor. Ubuntu 14.04 and ROS Indigo are also installed on the host computer, which logs into the controller's Ubuntu system remotely via SSH and starts the robot's mobile base. Over the wifi link, the host computer uses the ROS communication mechanism to publish velocity messages to the mobile base, changing the robot's linear and angular velocity and thereby controlling its motion.
The square-wave trajectory exploration works as follows. At start-up, the ROS mobile robot is placed at an indoor spot with no obstacles within 1 metre, and begins moving toward the unknown region. When the laser radar detects a wall ahead and the robot comes within 0.8 metres of it, the controller commands the motion control module to turn and avoid the wall. During the turn, once the laser radar reports that the way ahead is clear, the robot stops rotating, resumes moving forward and continues exploring; at this moment a timer in the controller is triggered and starts counting. After 10 seconds of motion the robot stops, rotates 90 degrees in the same direction as the previous avoidance turn, and then continues moving. When it meets the next wall it turns again, this time in the direction opposite to the previous avoidance turn, and the preceding steps repeat until the indoor environment has been fully explored.
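The square-wave exploration logic above can be sketched as a small state machine. This is a minimal illustrative sketch, not the patent's code: the 0.8 m wall threshold and 10 s leg timer come from the text, while the state representation, function name and command strings are assumptions.

```python
WALL_STOP_DIST = 0.8   # start turning when a wall is closer than this (m)
LEG_TIME = 10.0        # seconds to drive before the next 90-degree turn

def square_wave_step(state, front_dist, front_clear, elapsed):
    """Return (new_state, command) for one control tick.

    state: dict with 'mode' in {'forward', 'turn'} and 'turn_dir' in {+1, -1}
    front_dist: distance to the nearest obstacle ahead (m)
    front_clear: True when the lidar reports no obstacle ahead
    elapsed: seconds spent in the current forward leg
    """
    if state['mode'] == 'forward':
        if front_dist < WALL_STOP_DIST:
            # Wall ahead: turn, alternating the turn direction each wall hit
            return {'mode': 'turn', 'turn_dir': -state['turn_dir']}, 'rotate'
        if elapsed >= LEG_TIME:
            # Timed leg finished: rotate 90 degrees in the same direction as
            # the last avoidance turn, producing the square-wave pattern
            return {'mode': 'turn', 'turn_dir': state['turn_dir']}, 'rotate90'
        return state, 'drive'
    # turning: keep rotating until the way ahead is clear, then drive on
    if front_clear:
        return {'mode': 'forward', 'turn_dir': state['turn_dir']}, 'drive'
    return state, 'rotate'
```

In a real node the returned command would be translated into a Twist velocity message for the mobile base.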
During exploration, the ROS mobile robot avoids obstacles with an artificial potential field method, whose steps are:
(1) Set the start position ps = [xs, ys]T and the goal position pt = [xt, yt]T; the ROS mobile robot is treated as a point mass moving in a two-dimensional space.
(2) The host computer obtains the robot's position in the global coordinate frame, pc = [xc, yc]T, through the tf coordinate transforms of the ROS system; the obstacle position measured by the laser radar is po = [xo, yo]T.
(3) Compute the resultant of the attractive force and the total repulsive force exerted on the ROS mobile robot by the virtual potential field, F(pc) = -grad U(pc), where U(pc) = Ua(pc) + Ure(pc) is the sum of the attractive and repulsive potentials. The attractive potential formed by the goal is Ua(pc) = (1/2) * lambda * rho^2(pc, pt), and the repulsive potential formed by an obstacle is Ure(pc) = (1/2) * k * (1/rho(pc, po) - 1/d0)^2 when rho(pc, po) <= d0, and 0 otherwise, where lambda, k and d0 are constants, rho(pc, pt) is the Euclidean distance between the robot and the goal, and rho(pc, po) is the Euclidean distance between the robot and the obstacle. The attraction generated by the attractive field is Fa(pc) = -grad Ua(pc) = lambda * (pt - pc); the repulsion exerted on the ROS mobile robot by an obstacle is Fre(pc) = -grad Ure(pc) = k * (1/rho(pc, po) - 1/d0) * (1/rho^2(pc, po)) * (pc - po)/rho(pc, po) when rho(pc, po) <= d0, and 0 otherwise. When there are several obstacles, the repulsion each obstacle exerts on the robot is computed and the individual repulsions are combined into one total repulsion.
(4) Take the direction of the resultant force as the avoidance direction: the ROS mobile robot rotates to this direction and moves along it, realizing local obstacle avoidance.
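The resultant force of steps (1) to (4) can be sketched as follows. This is a hedged illustration of the standard potential-field formulas given above: the gain values lambda, k and d0 are illustrative, and the function name is an assumption.

```python
import math

LAM, K, D0 = 1.0, 0.5, 1.0   # lambda, k, d0 from the potential definitions

def apf_force(pc, pt, obstacles):
    """Resultant (Fx, Fy) of attraction toward pt and repulsion from obstacles."""
    fx = LAM * (pt[0] - pc[0])          # attraction F_a = -grad U_a
    fy = LAM * (pt[1] - pc[1])
    for po in obstacles:
        rho = math.hypot(pc[0] - po[0], pc[1] - po[1])
        if 0.0 < rho <= D0:
            # repulsion magnitude k*(1/rho - 1/d0)/rho^2, pointing away
            # from the obstacle along (pc - po)/rho
            mag = K * (1.0 / rho - 1.0 / D0) / rho ** 2
            fx += mag * (pc[0] - po[0]) / rho
            fy += mag * (pc[1] - po[1]) / rho
    return fx, fy
```

The robot's avoidance heading is then simply `math.atan2(fy, fx)`.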
As the ROS mobile robot's laser radar scans the indoor environment, the host computer updates the data returned by the laser radar in real time over wifi and calls the map-building package of the ROS system. When an obstacle appears in the laser radar data, the rviz visualization tool on the host computer displays the detected obstacle region on the grid map in black, detected obstacle-free regions in light grey, and unexplored regions in dark grey.
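The three-state grid behind this display can be sketched minimally. The value encoding below mimics the convention of ROS occupancy grids (-1 unknown, 0 free, 100 occupied); the function names and the dictionary-based grid are illustrative assumptions, not the package's internals.

```python
UNKNOWN, FREE, OCCUPIED = -1, 0, 100

def cell_color(value):
    """Map an occupancy value to the display colour named in the text."""
    if value == OCCUPIED:
        return 'black'
    if value == FREE:
        return 'light grey'
    return 'dark grey'

def mark_scan(grid, free_cells, hit_cell):
    """Mark cells traversed by a laser beam as free and its endpoint as occupied."""
    for c in free_cells:
        grid[c] = FREE
    if hit_cell is not None:
        grid[hit_cell] = OCCUPIED
    return grid
```

Cells never touched by a beam keep the UNKNOWN value and are rendered dark grey.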
The navigation method in Step 3 is as follows. Using the rviz visualization tool, the host computer runs the laser radar's rplidar_amcl.launch start-up file and imports the built grid map into the ROS mobile robot through the map_file argument or the TURTLEBOT_MAP_FILE environment variable in the .bashrc file. On the grid map, a two-dimensional pose estimate sets the robot's initial pose and heading: the robot rotates until it reaches the set heading and then stops rotating. The 2D navigation goal tool then specifies the robot's heading in the real environment and sets the robot's navigation pose. Once a navigation goal is set, the robot begins planning a path; when planning is complete, it moves toward the goal along the planned path, avoiding obstacles with the artificial potential field method. On reaching the goal position the robot stops moving, rotates to the goal heading and then stops rotating, arriving at the designated position on the grid map.
The vision sensor realizes target tracking with the real-time tracking node, whose steps are:
Step (a): Initialize the parameters of the Kalman filter, namely the state transition matrix A, observation matrix H, process noise covariance matrix Q, measurement noise covariance matrix R and state error covariance matrix P, establishing the Kalman tracking object.
Step (b): From the target's state in the previous frame, perform the Kalman prediction X(k/k-1) = AX(k-1/k-1) to obtain the target's predicted position (x1, y1) in the current frame, and update the state error covariance P(k/k-1). Here X(k/k-1) is the prediction of the state at time k from the state at time k-1, and X(k-1/k-1) is the optimal estimate at time k-1.
In the state equation X(k) = AX(k-1) + W(k), the state is set to X(k) = [x(k) y(k) vx(k) vy(k)]T, where X(k) is the system state at time k, (x(k-1), y(k-1)) is the target's position at time k-1, and vx(k-1) and vy(k-1) are its velocities in the x and y directions.
Step (c): Keep the window width w and height h of the previous frame, take the predicted position (x1, y1) of the current frame as the window centre, and obtain the target's actual position (x2, y2) in the current frame with the MeanShift tracking method.
Step (d): Using the target's actual position (x2, y2) in the current frame, compute the Kalman filter's observation from the measurement equation Z(k) = HX(k) + V(k), compute the Kalman gain K(k) = P(k/k-1)H'[HP(k/k-1)H' + R]^-1, and update the state with the correction X(k/k) = X(k/k-1) + K(k)(Z(k) - HX(k/k-1)). The resulting position (x3, y3) is taken as the target's accurate position, and the state error covariance matrix is updated as P(k/k) = (I - K(k)H)P(k/k-1), where P(k/k-1) = AP(k-1/k-1)A' + Q is the prediction of the state error covariance at time k from time k-1.
Step (e): Take the target position (x3, y3) as the predicted position for the next frame and repeat steps (b) to (d) to realize real-time tracking of the target. If the process is closed, the algorithm terminates; otherwise, return to step (b).
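The predict/correct cycle of steps (a) to (e) can be sketched in pure Python as a constant-velocity Kalman filter whose predicted position would seed the MeanShift search and whose measurement would be the MeanShift result. The matrix helpers and the noise values are illustrative assumptions, not the patent's implementation.

```python
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(a):
    return [list(r) for r in zip(*a)]

def inv2(m):  # inverse of a 2x2 matrix
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def kalman_step(x, p, z, dt=1.0, q=1e-2, r=1.0):
    """One predict+correct cycle. x: 4x1 state [x, y, vx, vy]; z: 2x1 position."""
    A = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]]
    H = [[1, 0, 0, 0], [0, 1, 0, 0]]
    I = [[float(i == j) for j in range(4)] for i in range(4)]
    Q = [[q * (i == j) for j in range(4)] for i in range(4)]
    R = [[r, 0], [0, r]]
    # predict: X(k/k-1) = A X(k-1/k-1),  P(k/k-1) = A P A' + Q
    xp = matmul(A, x)
    pp = [[pe + qe for pe, qe in zip(pr, qr)]
          for pr, qr in zip(matmul(matmul(A, p), transpose(A)), Q)]
    # gain: K = P(k/k-1) H' [H P(k/k-1) H' + R]^-1
    s = [[se + re for se, re in zip(sr, rr)]
         for sr, rr in zip(matmul(matmul(H, pp), transpose(H)), R)]
    K = matmul(matmul(pp, transpose(H)), inv2(s))
    # correct: X(k/k) = X(k/k-1) + K (Z - H X(k/k-1)),  P(k/k) = (I - K H) P(k/k-1)
    hx = matmul(H, xp)
    innov = [[z[i][0] - hx[i][0]] for i in range(2)]
    xn = [[xp[i][0] + sum(K[i][j] * innov[j][0] for j in range(2))]
          for i in range(4)]
    kh = matmul(K, H)
    pn = matmul([[I[i][j] - kh[i][j] for j in range(4)] for i in range(4)], pp)
    return xn, pn
```

In the tracking node, `z` would be the window centre (x2, y2) returned by MeanShift, and the corrected position (x3, y3) would seed the next frame's search.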
The MeanShift tracking method tracks the target object as follows.
The target model built at the initial centre x0 is:
q_u = C * sum_{i=1..n} k(||(x0 - xi)/h||^2) * delta[b(xi) - u], u = 1, ..., m
where delta is the Kronecker delta function, h is the bandwidth of the window, k(||x||^2) is the kernel profile, b(xi) is the quantization function that maps the image feature value computed at sample point xi to its corresponding bin, and C is a normalization coefficient.
Let y be the image coordinate of the candidate target centre in the current frame; the model of the candidate target located at y is:
p_u(y) = C_u * sum_{i=1..n} k(||(y - xi)/h||^2) * delta[b(xi) - u]
where m is the number of histogram bins, n is the number of sampled data points, and C_u is a normalization coefficient.
The similarity between the target object model and the candidate object region is measured with the Bhattacharyya coefficient:
rho[p(y), q] = sum_{u=1..m} sqrt(p_u(y) * q_u)
Minimizing the Bhattacharyya distance d(y) = sqrt(1 - rho[p(y), q]) in the chosen feature space makes the candidate target most similar to the target object.
Let y0 be the target object's initial position in the current image frame. Expanding rho[p(y), q] in a first-order Taylor series around p_u(y0) gives:
rho[p(y), q] ~ (1/2) * sum_{u=1..m} sqrt(p_u(y0) * q_u) + (1/2) * sum_{u=1..m} p_u(y) * sqrt(q_u / p_u(y0))
Define the weight coefficients:
w_i = sum_{u=1..m} sqrt(q_u / p_u(y0)) * delta[b(xi) - u]
The iterated position in the current frame is then:
y1 = [sum_{i=1..n} xi * w_i * g(||(y0 - xi)/h||^2)] / [sum_{i=1..n} w_i * g(||(y0 - xi)/h||^2)]
where g = -k'. The target object is sought in every frame: MeanShift iterates continuously to find the region of maximum similarity and computes the target's new position y1 in the current frame, stopping when ||y1 - y0|| < epsilon or when the number of iterations reaches its maximum; y1 then serves as the starting position for the next frame.
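One MeanShift iteration from the equations above can be illustrated with toy histograms. This is a minimal sketch under simplifying assumptions (a uniform kernel, so the g(.) factors drop out, and tiny feature distributions standing in for real colour histograms); the function names are illustrative.

```python
import math

def bhattacharyya(p, q):
    """Similarity rho[p, q] = sum_u sqrt(p_u * q_u)."""
    return sum(math.sqrt(pu * qu) for pu, qu in zip(p, q))

def meanshift_iter(points, bins, q, p0):
    """One iteration: new centre y1 = sum(x_i * w_i) / sum(w_i).

    points: sample coordinates x_i; bins: bin index b(x_i) of each sample;
    q: target model; p0: candidate model at the current centre y0.
    """
    w = [math.sqrt(q[b] / p0[b]) if p0[b] > 0 else 0.0 for b in bins]
    return sum(x * wi for x, wi in zip(points, w)) / sum(w)
```

In a full tracker this step is repeated until the centre moves less than epsilon or the iteration limit is reached.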
According to the target region found and the depth information of that region, the real-time tracking node computes the distance between the target and the ROS mobile robot and adjusts the robot's linear velocity when tracking the target; from the target's deviation from the centre of the vision sensor's image window, the host computer adjusts the robot's angular velocity of rotation when tracking the target.
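This control law can be sketched as a simple proportional controller. The gains, the 1 m standoff distance and the 640-pixel image width are illustrative assumptions; the actual values and sign conventions depend on the camera and the base frame.

```python
IMG_WIDTH = 640          # pixels; depends on the camera
STANDOFF = 1.0           # desired following distance (m), illustrative
KV, KW = 0.5, 2.0        # linear / angular gains, illustrative

def follow_cmd(target_dist, target_cx):
    """Return (linear, angular) velocity commands for the mobile base.

    target_dist: depth-measured distance to the target (m)
    target_cx: horizontal pixel coordinate of the target centre
    """
    linear = KV * (target_dist - STANDOFF)                  # close the range gap
    offset = (target_cx - IMG_WIDTH / 2) / (IMG_WIDTH / 2)  # normalized, -1..1
    angular = -KW * offset   # turn toward the target (sign per base frame)
    return linear, angular
```

The two values would be published to the mobile base as the linear and angular components of a velocity message.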
When no occlusion occurs, the ROS mobile robot tracks the target with the real-time tracking node; when full occlusion occurs, it tracks with the occlusion tracking node. The occlusion tracking node is executed when the distance measure based on the Bhattacharyya coefficient between the target object model and the candidate object region exceeds 0.6. The occlusion tracking node is designed as follows. Let the pixel coordinates of the moving target in the video frame be (x, y), its velocities vx and vy, and the image frame update time dt. The kinematic equations of the target are:
x(k) = x(k-1) + vx(k-1)*dt + (1/2)*ax(k-1)*dt^2
y(k) = y(k-1) + vy(k-1)*dt + (1/2)*ay(k-1)*dt^2
vx(k) = vx(k-1) + ax(k-1)*dt
vy(k) = vy(k-1) + ay(k-1)*dt
where ax(k-1) and ay(k-1) are the accelerations in the x and y directions at time k-1. In matrix form this becomes:
X(k) = AX(k-1) + W(k-1)
where X(k) = [x(k) y(k) vx(k) vy(k)]T, A is the constant-velocity transition matrix
A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1]
and the acceleration terms are absorbed into the process noise W(k-1).
The Kalman linear state equation of the moving target is thus established. The measurement equation is:
Z(k) = HX(k) + V(k)
where the observation matrix
H = [1 0 0 0; 0 1 0 0]
extracts the target's pixel position from the state.
During occlusion, the Kalman filter uses the motion state and measurement of the previous frame to predict and correct the target position continuously, realizing predictive tracking while the target is hidden. The state error covariance matrix and the process noise covariance matrix of the Kalman filter are initialized as in step (a).
In its image processing function process_image(self, image_color), the occlusion tracking node uses the velocities vx and vy before the target was lost to compute the target's displacements vx*dt and vy*dt in the x and y directions over the video frame update time dt. From the previous frame's Kalman-corrected position (x3, y3) it computes x = x3 + vx*dt and y = y3 + vy*dt, obtaining the target's current-frame state X(k) = [x y vx vy]T. The measurement equation then yields the target's measurement value, with which the Kalman filter correction produces the target's position in the current frame.
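The dead-reckoning part of this occlusion handler can be sketched minimally. This only illustrates the x = x3 + vx*dt update described above; the function name and the frame loop are assumptions, and the subsequent Kalman correction is omitted.

```python
def occluded_predict(x3, y3, vx, vy, dt, frames):
    """Dead-reckon the target through `frames` occluded video frames.

    (x3, y3): last Kalman-corrected position before the target was lost
    (vx, vy): last known velocities; dt: video frame update time
    Returns the list of predicted positions, one per frame.
    """
    track = []
    x, y = x3, y3
    for _ in range(frames):
        x += vx * dt          # x = x3 + vx*dt, advanced frame by frame
        y += vy * dt
        track.append((x, y))
    return track
```

Each predicted position would then serve as the pseudo-measurement fed to the Kalman correction until the target reappears and the real-time tracking mode is restored.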
Beneficial effects of the invention: oriented to unknown-environment exploration tasks, the invention realizes autonomous exploration and obstacle avoidance for a mobile robot based on the ROS system. Built on the 2D grid map established from Rplidar A1 laser radar data, and combining local map inference with global boundary search, the designed autonomous exploration strategy keeps the mobile robot from getting trapped in a local exploration loop and guarantees that the entire indoor environment is explored. Without user intervention, autonomous exploration of the unknown indoor environment and map building are achieved, and the map-building process is displayed on the host computer in real time; compared with the two-dimensional maps built by traditional autonomous exploration methods, the map is intuitive, easy to interpret and convenient for the user to observe. Using an improved Kalman filtering method and the MeanShift method, the invention designs a real-time tracking node and an occlusion tracking node in the ROS system, solving the real-time and full-occlusion problems of mobile-robot target tracking. The prediction capability of the Kalman filter is used to predict the target position first, after which the MeanShift node tracks from the predicted information; this raises the system's computation speed, shortens the target search time and meets the real-time requirement of tracking. The target's state equation and observation equation are established; when the target is fully occluded, the occlusion tracking node predicts the target's trajectory from its state information before the occlusion, and once the occlusion ends the target is re-locked and the method automatically switches back to the real-time tracking mode to continue tracking the target.
Description of the drawings
To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the control structure diagram of the mobile robot of the present invention.
Fig. 2 is the flow chart of Artificial Potential Field barrier-avoiding method in ROS systems.
Fig. 3 shows the obstacle avoidance experiment performed by the ROS mobile robot in a simulated ideal indoor environment.
Fig. 4 is the flow chart of real-time tracking node of the present invention.
Fig. 5 is the flow chart of Target Tracking System of the present invention.
Fig. 6 shows the test results displayed on the host computer for the real-time tracking node, where (a), (b), (c) and (d) show the tracking results as the moving target moves away from the mobile robot, approaches it, moves left, and moves right, respectively; (e) and (f) show the tracking results of the real-time tracking node as the moving target moves quickly left and right.
Fig. 7 shows actual test results of the real-time tracking node, where (a), (b) and (c) show the mobile robot rotating synchronously to follow the target in real time; (d), (e) and (f) show the robot tracking the target in real time as it moves forward; (g), (h) and (i) show the motion as the target approaches the robot.
Fig. 8 shows the results displayed on the host computer for the occlusion tracking node, where (a) shows the tracking result of the real-time tracking mode before the target is fully occluded; (b) and (c) show the tracking results of the occlusion tracking node while the target is fully occluded; (d) shows the tracking after the occlusion ends, when the method automatically switches back to the real-time tracking mode.
Fig. 9 shows test results of the occlusion tracking node on the mobile robot platform, where (a) shows the mobile robot tracking the moving target in the real-time tracking mode; (b) and (c) show the robot tracking the moving target in the occlusion tracking mode while the target is fully occluded; (d) shows the result after the occlusion ends, when the moving target is re-locked and tracking automatically switches back to the real-time tracking mode.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
A ROS-based method for robot indoor environment exploration, obstacle avoidance and target tracking comprises the following steps:
Step 1: Build the hardware platform of the ROS mobile robot. A motion control module is arranged at the bottom of the mobile robot, a sensor suite is fixed on its top, and a controller and a wireless communication module are mounted in its middle. The ROS system is installed on the controller; the motion control module, sensors and wireless communication module are connected to the controller, and the wireless communication module is connected to the host computer.
The control structure of the mobile robot is shown in Fig. 1. According to this structure, the hardware platform of the mobile robot is built from a motion control module, sensors, a controller, a wireless communication module and a host computer. The motion control module is arranged at the bottom of the mobile robot, the sensors are fixed on its top, and the controller and wireless communication module are mounted in its middle; the motion control module, sensors and wireless communication module are connected to the controller, and the wireless communication module is connected to the host computer. The motion control module is mainly a Kobuki mobile base; the sensors comprise a high-precision odometer, a Kinect 2.0 depth vision sensor and an Rplidar A1 laser radar; the controller is a Jetson TK1 development board running Ubuntu 14.04 and ROS Indigo; the wireless communication module is an Intel 7260 AC HMW wireless network card operating as a wifi module. After assembly, the controller is configured with the ROS-compatible drivers for the Kinect 2.0 depth vision sensor, the Rplidar A1 laser radar and the Intel 7260 AC HMW network card, together with the software for the depth vision sensor. Ubuntu 14.04 and ROS Indigo are installed on the host computer, where the wifi module is set to connect automatically and the environment variables in the .bashrc file are configured to ease development and testing of the mobile robot. The host computer logs into the Ubuntu system of the Jetson TK1 remotely via SSH and starts the mobile base; over the wifi link it uses the ROS communication mechanism to publish velocity messages to the mobile base, changing the robot's linear and angular velocity and thereby controlling its motion.
Step 2: Place the ROS mobile robot in the indoor area to be explored. The laser radar in the sensor suite scans the indoor environment, while the odometer acquires the position and heading of the mobile robot. The robot explores the indoor environment along a square-wave path; the host computer receives the laser-radar scan data through the wireless communication module, and the ROS system on the host computer builds an occupancy grid map with the map-building package.
The autonomous exploration program uses the data of the Rplidar A1 laser radar and the odometer; the ROS system obtains the messages published by both by subscribing to their topics. It processes these messages, adjusts the linear or angular velocity of the robot by modifying a Twist message, and publishes the message to the mobile chassis so that the robot moves as designed.
A topological map of the global indoor environment would reduce the amount of stored data, since only positions a certain distance from the last stored position need be kept rather than every scanned position; but a topological map is too abstract, the key task of autonomous indoor exploration and map building is scanning the indoor structure, and mapping a full 3D environment demands high memory and computation. To simplify map building, the present invention builds the map with the occupancy-grid method. The ROS mobile robot scans the indoor environment with the Rplidar A1 laser radar, the host computer updates the data returned by the laser radar in real time over the Wi-Fi module, and the map-building package of the ROS system constructs the grid map. In the rviz visualization tool on the host computer, regions where the laser data contain an obstacle are displayed in black, detected obstacle-free regions in light gray, and unexplored regions in dark gray.
The most common approach is a roaming mode: the robot moves toward unknown indoor regions and explores the environment with the laser radar, repeating until the whole indoor environment has been explored. On the assembled ROS mobile-robot platform, roaming mode was used first for indoor exploration and map building. In a given room, roaming exploration can complete the map, but it takes too long and easily falls into a local endless loop. To solve this problem, the present invention designs a new exploration mode: in an ideal indoor environment, square-wave exploration is performed.
At start-up, the ROS mobile robot is placed at an indoor spot with no obstacle within 1 meter, and the square-wave exploration program of the ROS system is started. The robot moves toward the unknown region; after the Rplidar A1 laser radar detects a wall ahead and the distance to it falls below 0.8 m, the controller commands the motion-control module to begin turning to avoid the wall. During the turn, once the laser radar detects that the way ahead is clear, the robot stops rotating, resumes moving forward and continues exploring, and a timer in the controller is triggered. After 10 seconds of movement the robot stops and rotates 90° in the same direction as the previous avoidance turn, then moves on after the rotation completes; at the next wall it turns again, this time in the direction opposite to the previous turn. These steps repeat until the exploration of the indoor environment is complete.
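The square-wave decision logic above can be sketched as a small state machine. This is a hypothetical simplification, not the patent's code: each 90° turn is emitted as a single command rather than a closed-loop rotation, and the thresholds follow the text (turn when an obstacle is closer than 0.8 m, drive straight for 10 s between timed turns, alternate the turn direction at each wall).

```python
class SquareWaveExplorer:
    """Simplified square-wave exploration policy (illustrative sketch)."""

    def __init__(self, obstacle_dist=0.8, leg_time=10.0):
        self.obstacle_dist = obstacle_dist  # wall-avoidance distance, meters
        self.leg_time = leg_time            # straight-leg duration, seconds
        self.turn_sign = 1                  # +1 / -1; alternates per wall hit
        self.elapsed = 0.0                  # timer for the current leg

    def step(self, front_range, dt):
        """One control tick: return ('forward', 0) or ('turn', sign)."""
        self.elapsed += dt
        if front_range < self.obstacle_dist:
            cmd = ('turn', self.turn_sign)  # wall ahead: avoidance turn
            self.turn_sign = -self.turn_sign  # next wall: opposite direction
            self.elapsed = 0.0
            return cmd
        if self.elapsed >= self.leg_time:   # timed 90-degree turn
            self.elapsed = 0.0
            return ('turn', self.turn_sign)
        return ('forward', 0)
```

In the real robot the returned command would be translated into Twist messages for the chassis; here it only captures the alternating turn pattern that produces the square-wave path.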
During exploration, the host computer communicates with the operating system of the ROS mobile robot through the wireless communication module and builds the SLAM grid map with the gmapping package of the ROS system. The odometer computes the position of the robot, accurate localization is achieved by matching feature points of the environment map, and a local map is built from the scan data of the laser radar while the global map is updated. Starting from an unknown indoor position, the robot estimates its pose with the odometer during exploration, localizes itself with the laser-radar data, and builds the grid map at the same time.
Autonomous exploration and map building avoid tedious remote operation of the robot. With remote-controlled map building, human factors can bring the robot too close to, or too far from, obstacles: there is no concrete standard for when to avoid, the judgment is entirely manual, and the resulting map differs visibly from a map built by autonomous exploration with an avoidance standard. Unreasonable speed settings during remote control can also leave the robot unable to turn in time, so that it hits obstacles and degrades the map.
During exploration, the ROS mobile robot avoids obstacles with the artificial-potential-field method: through the tf coordinate transforms of the ROS system, the positions of the robot and of the obstacles in the global coordinate system are found; from the artificial potential functions, the resultant of the total repulsive force and the attractive force is computed, and the direction of the resultant is taken as the avoidance direction of the robot.
The artificial potential field is divided into an attractive field and a repulsive field: the attractive potential generated by the target draws the mobile robot toward the target, and the repulsive potential generated by obstacles drives the robot away from them.
To derive the artificial potential functions, the robot is first treated as a particle moving in two-dimensional space. Let the current position of the robot be p_c = [x_c, y_c]^T, the initial position p_s = [x_s, y_s]^T, the target position p_t = [x_t, y_t]^T, and the position of an obstacle detected by the laser radar p_o = [x_o, y_o]^T.
The attractive potential U_a(p_c) that the target exerts on the ROS mobile robot is

U_a(p_c) = (1/2) \lambda \rho^2(p_c, p_t).

The repulsive potential U_re(p_c) that an obstacle exerts on the ROS mobile robot is

U_re(p_c) = (1/2) k (1/\rho(p_c, p_o) - 1/d_0)^2 when \rho(p_c, p_o) \le d_0, and 0 otherwise.

In these formulas \lambda, k and d_0 are constants; the Euclidean distance between the robot and the target is \rho(p_c, p_t) = \sqrt{(x_c - x_t)^2 + (y_c - y_t)^2}, and the Euclidean distance between the robot and the obstacle is \rho(p_c, p_o) = \sqrt{(x_c - x_o)^2 + (y_c - y_o)^2}.

The sum of the attractive and repulsive potentials is U(p_c):

U(p_c) = U_a(p_c) + U_re(p_c).

The attractive force generated by the attractive field is

F_a(p_c) = -\nabla U_a(p_c) = \lambda (p_t - p_c).

The repulsive force of the obstacle on the ROS mobile robot is

F_re(p_c) = k (1/\rho(p_c, p_o) - 1/d_0) (1/\rho^2(p_c, p_o)) (p_c - p_o)/\rho(p_c, p_o) when \rho(p_c, p_o) \le d_0, and 0 otherwise.

The resultant of the virtual potential field acting on the ROS mobile robot is

F(p_c) = F_a(p_c) + F_re(p_c).
With multiple obstacles, the repulsive force that each obstacle exerts on the robot is computed and the repulsions are summed into one total repulsion. In practice, taking the direction of the resultant force as the direction of motion whenever the robot meets an obstacle realizes local avoidance. The flowchart of the artificial-potential-field avoidance method in the ROS system is shown in Figure 2: the avoidance node obtains the laser-radar scan through the publish/subscribe mechanism of the ROS system; when the laser radar detects an obstacle, the robot stops, the artificial-potential-field avoidance function is called, and the resultant of the target attraction and the obstacle repulsion is computed from the odometer position; the direction of the resultant is taken as the avoidance direction. The mobile robot rotates in place to the avoidance direction and then resumes moving, thereby avoiding the obstacle.
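The force computation described above can be written in a few lines. The following is a minimal sketch assuming the standard quadratic attractive potential and the bounded-influence repulsive potential given earlier; `apf_force` is a hypothetical helper, and the default values of lambda_, k and d0 are illustrative only.

```python
import math

def apf_force(robot, target, obstacles, lambda_=1.0, k=1.0, d0=1.0):
    """Resultant artificial-potential-field force on the robot.

    robot, target: (x, y); obstacles: list of (x, y).
    Returns (fx, fy, heading) where heading is the avoidance direction.
    """
    fx = lambda_ * (target[0] - robot[0])   # attraction toward the target
    fy = lambda_ * (target[1] - robot[1])
    for ox, oy in obstacles:
        rho = math.hypot(robot[0] - ox, robot[1] - oy)
        if 0.0 < rho < d0:                  # obstacle within influence range d0
            mag = k * (1.0 / rho - 1.0 / d0) / rho**2
            fx += mag * (robot[0] - ox) / rho   # push away from the obstacle
            fy += mag * (robot[1] - oy) / rho
    return fx, fy, math.atan2(fy, fx)
```

With no obstacle in range the force points straight at the target; a close obstacle between robot and target can reverse the heading, which is exactly the in-place rotation case described above.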
Figure 3 shows an obstacle-avoidance experiment with the ROS mobile robot in an ideal indoor environment simulated in the gazebo software. In Fig. 3(a) the robot moves toward a white box; in Fig. 3(b) the laser radar detects the obstacle ahead and, once the distance to it falls below the set avoidance distance, the robot stops, calls the artificial-potential-field avoidance function, computes the avoidance direction, rotates to it and continues forward. Figs. 3(c) and 3(d) show the robot after avoiding the obstacle, moving along the avoidance direction. The experimental results show that, with the artificial-potential-field algorithm, the robot avoids obstacles well.
The present invention combines the ROS system with the artificial-potential-field avoidance algorithm: the designed mobile robot explores an unknown indoor environment autonomously and builds an indoor grid map, which is more efficient than traditional manual exploration and yields a grid map closer to the real indoor environment. Compared with traditional avoidance experiments, which require obstacle and target coordinates to be set in advance, the artificial-potential-field avoidance realized together with autonomous exploration is flexible, keeps the mobile robot from falling into local exploration loops, and can complete the exploration of the whole indoor environment. Compared with traditional mobile-robot target tracking, the built map can be used to navigate the mobile robot to the target position for target tracking.
Step 3: Import the built grid map into the ROS system of the ROS mobile robot, use the built grid map to navigate the robot to a designated position on the map, and realize target tracking with the visual sensor based on the Kalman filter method and the MeanShift tracking method.
After the map is built, the host computer runs the rviz visualization tool and the rplidar_amcl.launch start-up file of the Rplidar A1 laser radar, and imports the built grid map into the ROS mobile robot — either through the map_file argument or through the TURTLEBOT_MAP_FILE environment variable in the .bashrc file — for target navigation and avoidance. On the grid map, a two-dimensional pose estimate sets the initial pose and heading of the robot: the robot rotates to the set heading, stops rotating and waits for a task instruction. The heading of the robot in the actual environment is specified with a 2D navigation goal, which sets the navigation pose; once the goal is set, the robot plans a path and, when planning completes, moves toward the goal along the planned path while avoiding obstacles. On reaching the goal position it stops moving, rotates to the goal heading and stops rotating, completing the navigation to the designated position on the map.
Mobile-robot target tracking mainly uses the visual sensor: the designed real-time target-tracking node combines robot motion control with image processing to track the selected target quickly and accurately. The real-time tracking node target_tracking.py designed in the present invention implements an improved target-tracking algorithm that reduces the time needed to locate the target and improves the real-time performance of target tracking. The principle of the node rests on the Kalman filter method and the MeanShift tracking method, a rational modification of both, and is shown in Figure 4. A Kalman object is first created and its parameters initialized; the Kalman filter estimates the predicted position of the target in the current frame from its position in the previous frame, CamShift searches the current frame starting from that predicted position, and the CamShift result is then used as the measurement that corrects the filter estimate. The specific steps are:
(1) Initialize the state-transition matrix A, the observation matrix H, the process-noise covariance matrix Q, the measurement-noise covariance matrix R and the state-error covariance matrix P of the Kalman filter, and create the Kalman tracking object.
(2) From the target state of the previous frame, predict the target state with X(k|k-1) = A X(k-1|k-1), obtaining the predicted position (x1, y1) of the target in the current frame, and update the state-error covariance P(k|k-1). Here X(k|k-1) is the prediction of the state at time k from the state at time k-1, and X(k-1|k-1) is the optimal estimate at time k-1.
In the state equation X(k) = A X(k-1) + W(k), the state is set to

X(k) = [x(k)  y(k)  vx(k)  vy(k)]^T,

where X(k) is the system state at time k, (x(k-1), y(k-1)) is the position of the target at time k-1, and vx(k-1) and vy(k-1) are its velocities in the x and y directions.
(3) Using the window width w and height h of the previous frame, and taking the predicted position (x1, y1) of the current frame as the window center, obtain the actual position (x2, y2) of the target in the current frame with the MeanShift tracking method.
(4) Using the actual position (x2, y2) of the target in the current frame, compute the observation of the Kalman filter from the measurement equation Z(k) = H X(k) + V(k), compute the Kalman gain K(k) = P(k|k-1) H' [H P(k|k-1) H' + R]^-1, and correct the state with X(k|k) = X(k|k-1) + K(k) (Z(k) - H X(k|k-1)), obtaining the position (x3, y3) as the accurate position of the target. At the same time update the state-error covariance matrix P(k|k) = (I - K(k) H) P(k|k-1), where P(k|k-1) = A P(k-1|k-1) A' + Q is the prediction at time k-1 of the state-error covariance at time k.
Taking the target position (x3, y3) as the predicted position for the next frame, repeat steps (2) to (4) to track the target in real time; if the process is closed, the algorithm terminates, otherwise return to step (2).
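The predict/correct cycle of steps (2) to (4) can be sketched without any dependencies for a single axis. This is a deliberately simplified illustration, not the patent's code: the state is [position, velocity], the covariance is collapsed to a scalar, H = [1, 0], and `z` stands in for the position MeanShift actually found.

```python
def kalman_step(x, v, p, z, dt=1.0, q=0.01, r=0.1):
    """One predict/correct cycle of a 1-D constant-velocity Kalman filter.

    x, v: previous position and velocity estimate; p: scalar covariance;
    z: measured position from the search step. Returns (x_new, v_new, p_new).
    """
    # Predict: X(k|k-1) = A X(k-1|k-1) with A = [[1, dt], [0, 1]]
    x_pred = x + v * dt
    p_pred = p + q                        # simplified P(k|k-1) = A P A' + Q
    # Correct with the measurement z:
    gain = p_pred / (p_pred + r)          # K = P H' (H P H' + R)^-1
    innov = z - x_pred
    x_new = x_pred + gain * innov
    v_new = v + gain * innov / dt         # fold the innovation into velocity
    p_new = (1.0 - gain) * p_pred         # P(k|k) = (I - K H) P(k|k-1)
    return x_new, v_new, p_new
```

In the full node this runs once per frame and per axis, with the corrected position (x3, y3) seeding the next frame's prediction exactly as step (2) describes.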
In the class TargetTracking, the real-time tracking node target_tracking.py creates the Kalman parameter object and initializes the Kalman and CamShift parameters. The mouse callback mouse_cb(self, event, x, y, flags, param), registered with cv2.setMouseCallback(self.node_name, self.mouse_cb), lets the user select the tracking target manually in the image window of the host computer. Once the target is chosen, the histogram of the target window is built in the image-processing function process_image(self, image_color), the initial state of the target is initialized, and steps (2) to (4) are implemented in code: the target state is updated through the state equation X(k) = A X(k-1) + W(k) using random process noise and the target information from the current-frame CamShift, random measurement noise is generated, the measurement Z(k) = H X(k) + V(k) is obtained, and the target position is corrected according to this measured value.
After the robot reaches the designated position in the grid map under host-computer control, the target-tracking node of the robot is started and the target is selected in the interactive interface of the host computer. The robot tracks the target with the MeanShift tracking algorithm: the target model is determined first, then a candidate model is built and the target is searched for in every frame; the Bhattacharyya coefficient judges the similarity between the target model and the candidate model — the greater the similarity, the closer the candidate model is to the target model and the closer the found region is to the target region. From the found target region and its depth information, the distance between the target and the robot is computed to adjust the linear velocity of the tracking robot, and the deviation of the target from the center of the Kinect 2.0 visual-sensor image window on the host computer adjusts the angular velocity of rotation during tracking. The depth information is the pixel-wise range of the target in the image; after conversion, it gives the distance between the robot and the target.
Target tracking is carried out with the MeanShift method. The target model built is

q_u = C \sum_{i=1}^{n} k(\|x_i\|^2) \delta[b(x_i) - u],

where \delta is the Kronecker delta, h is the bandwidth matrix of the window, which limits the number of pixels of the candidate target, k(\|x\|^2) is the kernel function, b(x_i) is the quantization function that maps the image feature value computed at sample point x_i to the corresponding histogram bin, and C is a normalization coefficient. The target model describes the visual features of the target object: different image features give different target models, and the corresponding feature spaces also differ from each other.
Let y be the image coordinate of the candidate-target center in the current frame. The candidate model located at y is

p_u(y) = C_u \sum_{i=1}^{n_h} k(\|(y - x_i)/h\|^2) \delta[b(x_i) - u],

where C_u is the normalization coefficient

C_u = 1 / \sum_{i=1}^{n_h} k(\|(y - x_i)/h\|^2).
The similarity between the target model and the candidate region is measured with the Bhattacharyya coefficient

\rho[p(y), q] = \sum_{u=1}^{m} \sqrt{p_u(y) q_u}.
For MeanShift to track the target, the essential step is to find the position y in the image plane at which the distance between the target object and the candidate in the chosen feature space is minimal; this is equivalent to maximizing the Bhattacharyya coefficient in the similarity measure d(y) = \sqrt{1 - \rho[p(y), q]}.
Given the initial position y_0 of the target in the current image frame, expanding \rho[p(y), q] in a first-order Taylor series around p(y_0) gives

\rho[p(y), q] \approx (1/2) \sum_{u=1}^{m} \sqrt{p_u(y_0) q_u} + (C_u/2) \sum_{i=1}^{n_h} w_i k(\|(y - x_i)/h\|^2),

with the weight coefficients defined as

w_i = \sum_{u=1}^{m} \sqrt{q_u / p_u(y_0)} \delta[b(x_i) - u].

The MeanShift algorithm then obtains the iterative position in the current frame:

y_1 = \sum_{i=1}^{n_h} x_i w_i g(\|(y_0 - x_i)/h\|^2) / \sum_{i=1}^{n_h} w_i g(\|(y_0 - x_i)/h\|^2),

where g(x) = -k'(x).
The target is searched for in every frame: MeanShift iterates to the region of maximum similarity and computes the new target position y_1 in the current frame, stopping when \|y_1 - y_0\| < \epsilon or when the number of iterations reaches its maximum; y_1 then serves as the starting position for the next frame. The Bhattacharyya coefficient judges the similarity between the target model and the candidate model: the greater the similarity, the closer the candidate model is to the target model and the closer the found region is to the target region.
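The Bhattacharyya computations above reduce to a few lines of code. A minimal sketch, with `bhattacharyya` as a hypothetical helper and the histograms assumed to be already normalized:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient and distance between two normalized histograms.

    Returns (rho, d): rho near 1 means the candidate matches the model well;
    d = sqrt(1 - rho) is the distance used later as the occlusion criterion.
    """
    rho = sum(math.sqrt(pu * qu) for pu, qu in zip(p, q))
    return rho, math.sqrt(max(0.0, 1.0 - rho))  # clamp against rounding
```

Identical histograms give rho = 1 and distance 0; fully disjoint histograms give rho = 0 and distance 1, which matches the 0-to-1 range the occlusion test relies on.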
On the basis of the real-time tracking node, an occlusion-tracking node was designed to solve the complete-occlusion problem during mobile-robot target tracking. The motion of the tracked target is stable most of the time, so the state information recorded before the target is lost can be used to estimate its motion while occluded and to track it predictively; after the occlusion ends, the moving target is locked onto again. Designing the occlusion-tracking node requires a condition for judging that an occlusion has occurred: without occlusion, the real-time tracking node performs the tracking, and under complete occlusion, the occlusion-tracking node takes over. Experimental tests show that when the target becomes occluded, the Bhattacharyya distance between the color histogram of the predicted target region and that of the target region before occlusion changes, with a range between 0 and 1: during normal tracking the Bhattacharyya distance is very small, approaching 0, while under complete occlusion it is very large, approaching 1. The present invention therefore selects 0.6 as the judgment threshold: when the Bhattacharyya distance exceeds 0.6, the occlusion-tracking node is executed.
The design principle of the occlusion-tracking node occlusion_tracking.py is as follows. Let the pixel coordinates of the moving target in the video frame be (x, y), its velocities vx and vy, and the frame-update interval dt. The kinematic equations of the target are

x(k) = x(k-1) + vx(k-1) dt + (1/2) ax(k-1) dt^2,
y(k) = y(k-1) + vy(k-1) dt + (1/2) ay(k-1) dt^2,
vx(k) = vx(k-1) + ax(k-1) dt,
vy(k) = vy(k-1) + ay(k-1) dt,

where ax(k-1) and ay(k-1) are the accelerations in the x and y directions at time k-1. With the state X(k) = [x(k)  y(k)  vx(k)  vy(k)]^T, and the acceleration terms absorbed into the process noise, this is written as

X(k) = A X(k-1) + W(k-1),

where

A = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]].

This establishes the linear Kalman state equation of the moving target. To use the Kalman filter method, a measurement equation must also be established; taking the measured pixel position of the target as the observation, it is written as

Z(k) = H X(k) + V(k),

where

H = [[1, 0, 0, 0], [0, 1, 0, 0]].
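The constant-velocity model can be written out directly in code. This is a sketch under the assumption stated above that the acceleration terms are treated as process noise; `make_model` and `predict` are illustrative helpers, not the patent's code.

```python
def make_model(dt):
    """State-transition matrix A and measurement matrix H for interval dt."""
    A = [[1, 0, dt, 0],     # x  <- x + vx*dt
         [0, 1, 0, dt],     # y  <- y + vy*dt
         [0, 0, 1,  0],     # vx <- vx
         [0, 0, 0,  1]]     # vy <- vy
    H = [[1, 0, 0, 0],      # only the pixel position (x, y) is measured
         [0, 1, 0, 0]]
    return A, H

def predict(A, x):
    """Matrix-vector product A @ x for the prediction step."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

During an occlusion, `predict` is applied once per frame to carry the last known state forward, which is exactly the predictive tracking described below.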
With the state and measurement equations of the moving target established, the Kalman filter can be used: during an occlusion, the positions of the target are continually predicted and corrected from the motion state and measured values of the previous frame, realizing predictive tracking while the target is occluded.
In actual testing, the initial state-error covariance matrix of the system affects the occlusion-tracking performance of the mobile robot. Because the initial value is difficult to measure and differs between mobile-robot platforms, no empirical value is available; occlusion_tracking.py updates this matrix while running in its loop, and debugging shows that assigning the following fixed values achieves an ideal tracking effect:
State error covariance matrix:
Process noise error co-variance matrix:
In the image-processing function process_image(self, image_color) of the occlusion-tracking node occlusion_tracking.py, the velocities vx and vy recorded before the target was lost give the distances vx·dt and vy·dt moved by the target in the x and y directions during the frame-update interval dt. From the corrected position (x3, y3) of the previous Kalman frame, x = x3 + vx·dt and y = y3 + vy·dt yield the current-frame state X(k) = [x  y  vx  vy]^T; the measurement equation Z(k) = H X(k) + V(k) then gives the measured value of the target, with which the Kalman filter corrects the position of the target in the current frame.
Through the ROS command rospy.spin(), the program keeps running, continually predicting and updating the target position to realize predictive tracking. At regular intervals, the histogram of the predicted target window is updated and the Bhattacharyya distance is used to judge whether the occlusion has ended; if it has, and the moving target reappears in the field of view of the visual sensor, the target is locked onto again, the tracking mode switches automatically, the real-time tracking node is executed, and normal tracking resumes.
The flowchart of the target-tracking system is shown in Figure 5. While tracking a moving target, the motion node of the mobile robot obtains the target window by subscribing to the /roi_zone topic, computes the distance between the target-window center and the image-window center, sets the angular velocity of the robot accordingly, and controls the rotation of the robot so that it follows the target. It obtains the real-time image acquired by the visual sensor by subscribing to the /kinect2/qhd/image_color topic and processes the image data in the image callback imageCb(self, image_color), and obtains the depth image by subscribing to /kinect2/qhd/image_depth_rect to compute the distance between the robot and the target. If this distance is outside the set threshold range, the linear velocity of the robot is adjusted automatically according to the deviation between the actual distance and the preset distance, and the motion node publishes to the /cmd_vel_mux/input/navi topic to drive the robot: when the deviation is large, the robot moves fast, but never above the set maximum speed; when the deviation is small, it moves slowly, but never below the set minimum speed. The target-tracking behavior of the mobile robot can be observed in the image window of the host computer. Normally the robot tracks the target in real-time mode; when a complete occlusion occurs, it switches to occlusion-tracking mode for predictive tracking, and after the occlusion ends and the moving target is captured again, it switches back to real-time mode. In both modes the robot automatically maintains a safe tracking distance from the moving target, and when the target stops, the robot fine-tunes its position automatically and parks within the safe-distance range.
To verify and demonstrate the tracking performance of the real-time tracking node, the display results on the host computer and the test results on the mobile-robot platform are analyzed separately.
The host-computer display of the real-time tracking node is shown in Figure 6. Figures 6(a) to 6(d) show the tracking results when the moving target moves away from the robot, toward the robot, to the left and to the right; the tilted rectangle is the actual target region found by the real-time tracking node, and the upright rectangle is the predicted target region. The experimental results show that the Kalman filter of the real-time tracking node accurately predicts the direction of target motion, so that MeanShift quickly finds the moving target and adaptively adjusts the target window, realizing real-time tracking. In the fast-motion test, only the tilted rectangle marks the target region: Figures 6(e) and 6(f) show the tracking results when the target moves quickly to the left and to the right. The results show that even when the instantaneous speed of the target changes greatly — the target image blurs in the video frame and the target shape changes — the real-time tracking node still captures the moving target in real time.
In actual tests with the mobile robot, the relevant parameters must be set manually according to the experimental environment. In the real-time tracking test, the safety-distance threshold is 0.65 m, the maximum rotation speed 1.2 rad/s, the minimum rotation speed 0.2 rad/s, the maximum linear velocity 0.5 m/s and the minimum linear velocity 0.05 m/s.
Figure 7 shows the actual test results of the real-time tracking node. Figures 7(a) to 7(c) show the mobile robot rotating synchronously with the target in real time. Figures 7(d) to 7(f) show the robot tracking the target as it moves forward: when the target moves away beyond the set safe distance, the robot starts tracking it forward, and after the moving target stops, the robot adjusts automatically and parks within the safe-distance range. Figures 7(g) to 7(i) show the moving target approaching the robot: when the target comes closer than the set safe distance, the robot backs away while tracking, automatically keeping a safe tracking distance, until the target stops moving and the robot, after adjusting to a suitable position, stops as well. Throughout the actual tests, the robot tracked stably whether the target moved normally or changed speed suddenly, showing that the designed real-time tracking node meets the requirement of real-time target tracking by a mobile robot.
To further verify the performance of the real-time tracking node, the traditional MeanShift tracking method was implemented as a MeanShift tracking node and compared with the real-time tracking node, measuring the time each node needs to find the moving target in a video frame, as shown in Table 1. Although the real-time tracking node executes more program steps, the Kalman prediction reduces the number of iterations and shortens the search time. Table 1 shows that the real-time tracking node finds the moving target and ends the iteration faster, with an average time of about 0.001 s — about 0.008 s faster than the traditional MeanShift tracking node. This demonstrates the superior performance of the real-time tracking node designed with the ROS system, meeting the real-time requirement of mobile-robot target tracking.
Table 1: Runtime comparison of traditional MeanShift tracking and real-time tracking
The occlusion-tracking node adds an occlusion-tracking mode on the basis of the real-time tracking mode. Its host-computer display results are shown in Figure 8.
When Fig. 8 (a) indicates that target is not blocked completely, the tracking result of real-time tracking pattern, experiment shows partly to block When, real-time tracking pattern still can be good at carrying out target following;When Fig. 8 (b) and Fig. 8 (c) indicates that target is blocked completely, The Pasteur's distance for blocking tracking node is more than the threshold value 0.6 of setting, automatically switches to and blocks tracing mode, carries out occlusion prediction, It predicts the direction of motion of target, realizes predicting tracing, indicate that the target location of prediction, experiment show to hide without inclined rectangle Tracing mode is kept off according to the prior information of dbjective state before blocking, can accurately predict target direction of motion;Fig. 8 (d) It indicates after blocking, relock moving target, tracing mode is blocked in end, automatically switches to the progress of real-time tracking pattern Target following.Experiment shows that the tracking node that blocks of design can be good at carrying out target blocking tracking, and improving tracking is The robustness of system.
Fig. 9 is that tracking node is blocked in use, and in the test result of mobile robot platform, Fig. 9 (a) indicates mobile robot The case where using real-time tracking mode tracking moving target, Fig. 9 (b) and Fig. 9 (c) indicate, when target generation is blocked completely, to make With blocking tracing mode pursuit movement target as a result, Fig. 9 (d) indicates after it block, to relock moving target, automatically It is switched to the result that real-time tracking pattern carries out target following.In Fig. 9 when moving target is moved towards white baffle, real-time tracking Mode activated mobile robot synchronized tracking moving target when moving target is blocked completely, blocks tracing mode prediction target The direction of motion, mobile robot is moved according to the target direction of prediction, after blocking, is relocked moving target, is cut Real-time tracking pattern is changed to, mobile robot continues to carry out real-time tracking to target.What experiment showed design blocks tracking node The occlusion issue that can be good at solving mobile robot target following under certain condition, obtains ideal tracking effect.
The present invention realizes a mobile-robot target-tracking system based on ROS (Robot Operating System). Against the poor real-time performance of mobile-robot target tracking, the designed real-time tracking node uses the prior information of the target to predict its position with a Kalman filter and lets MeanShift search for the target on the basis of that prediction, which raises the computation speed of the system, shortens the target-search time and meets the real-time requirement. Against the complete-occlusion problem of target tracking, the state and observation equations of the target are established and an occlusion-tracking node is designed that tracks the target predictively from the state information recorded before the occlusion and, after the occlusion ends, locks onto the target again and switches back automatically to real-time mode. Experiments verify the real-time performance and robustness of the tracking system.
The present invention uses the ROS system, flexibly combines the prediction capability of the Kalman filter with the color-feature-based tracking capability of the MeanShift method, integrates software and hardware, and designs a real-time tracking node and an occlusion tracking node that solve the real-time and complete-occlusion problems in target tracking on the modified Turtlebot2 mobile robot. The experimental results and data show that the designed tracking nodes realize real-time tracking of the target by the mobile robot, can meet the target-tracking needs of a mobile robot under the stated conditions, and improve the robustness and stability of the target tracking system.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method, characterized in that its steps are as follows:
Step 1: Build the hardware platform of the ROS mobile robot: a motion control module is provided at the bottom of the mobile robot, sensors are fixed on top of the mobile robot, and a controller and a wireless communication module are mounted in the middle of the mobile robot; the ROS system is installed on the controller; the motion control module, the sensors and the wireless communication module are connected with the controller, and the wireless communication module is connected with a host computer;
Step 2: Place the ROS mobile robot in the indoor space to be explored; scan the indoor environment with the laser radar among the sensors, and acquire the position and heading information of the mobile robot with the odometer among the sensors; the ROS mobile robot explores the indoor environment along a square-wave path; the host computer obtains the laser radar scan information through the wireless communication module, and the ROS system on the host computer builds a grid map using a map-building function package;
Step 3: Import the built grid map into the ROS system of the ROS mobile robot; use the built grid map to navigate the robot to a designated position on the map; the vision sensor realizes target tracking based on the Kalman filter method and the MeanShift tracking method.
2. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 1, characterized in that the motion control module is mainly a Kobuki mobile chassis; the sensors include an odometer, a Kinect2.0 depth vision sensor and an Rplidar A1 laser radar; the controller is a Jetson TK1 development board on which Ubuntu 14.04 and ROS Indigo are installed; the wireless communication module is an Intel 7260 AC HMW wireless network card, and the wireless communication module transmits data through a wifi module; the controller is configured with the driver for the Kinect2.0 depth vision sensor suited to the ROS system, the drivers for the Rplidar A1 laser radar and the Intel 7260 AC HMW wireless network card, and the software system of the Kinect2.0 depth vision sensor; Ubuntu 14.04 and ROS Indigo are set up on the host computer; the host computer logs into the controller's Ubuntu system remotely via SSH and starts the robot's mobile chassis; through the wireless communication module, the host computer publishes velocity messages to the mobile chassis using the ROS communication mechanism, changing the linear and angular velocity of the robot and controlling its motion.
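As an illustration only, the velocity-message mechanism of claim 2 (the host computer publishing speed messages to the mobile chassis) can be sketched without a ROS installation; the `make_twist` helper is a hypothetical stand-in for constructing a `geometry_msgs/Twist` message, and the topic name in the comment is an assumption (the Kobuki chassis topic may differ):

```python
def make_twist(linear_x=0.0, angular_z=0.0):
    """Build a dict mirroring the geometry_msgs/Twist layout that the host
    computer would publish on the chassis velocity topic (e.g. /cmd_vel)."""
    return {
        "linear":  {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }

# With ROS present this would be roughly:
#   pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
#   pub.publish(twist_msg)
msg = make_twist(linear_x=0.2, angular_z=0.5)  # 0.2 m/s forward, 0.5 rad/s turn
```

Only the two fields set here matter for a differential-drive chassis: forward linear velocity and yaw angular velocity.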
3. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 1, characterized in that the square-wave path search method is: at start-up, the ROS mobile robot is placed in an indoor spot with no obstacle within 1 meter; the ROS mobile robot starts moving into the unknown region; after the laser radar detects a wall ahead and the ROS mobile robot is less than 0.8 meter from the wall, the controller commands the motion control module to begin turning so as to avoid the wall ahead; during the turn, once the laser radar detects that the way ahead is clear, the ROS mobile robot stops rotating, resumes moving forward and continues exploring; at that moment a timer in the controller is triggered and starts timing; after moving for 10 seconds, the ROS mobile robot stops, rotates a further 90° in the same direction as the preceding avoidance turn, and continues moving after the rotation is complete; when it encounters a wall again it turns again, this time in the direction opposite to the previous avoidance turn; the preceding steps are repeated until the indoor environment has been explored.
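As an illustration only (not part of the claims), the square-wave logic of claim 3 can be sketched as a small state machine. The class name, the single forward range reading, and the simplified turn handling (the direction flip is applied after each timed 90° turn) are assumptions of this sketch:

```python
class SquareWaveExplorer:
    """Minimal sketch of the claim-3 square-wave exploration decisions:
    turn away when a wall is closer than 0.8 m, drive for 10 s after the wall
    is cleared, then rotate 90 deg, alternating the turn direction."""

    def __init__(self, wall_stop=0.8, leg_duration=10.0):
        self.wall_stop = wall_stop        # start turning when wall < 0.8 m
        self.leg_duration = leg_duration  # drive 10 s, then turn 90 deg
        self.turn_sign = 1                # +1 = left, -1 = right
        self.leg_start = None             # timer started once the wall is cleared

    def step(self, front_dist, now):
        """Return the action for the current range reading and time."""
        # 1) wall ahead: turn away from it
        if front_dist < self.wall_stop:
            self.leg_start = None
            return "turn_left" if self.turn_sign > 0 else "turn_right"
        # 2) wall cleared: start the leg timer
        if self.leg_start is None:
            self.leg_start = now
        # 3) leg finished: 90-deg turn, then flip direction for the next wall
        if now - self.leg_start >= self.leg_duration:
            action = "turn90_left" if self.turn_sign > 0 else "turn90_right"
            self.turn_sign = -self.turn_sign
            self.leg_start = now
            return action
        return "forward"
```

In a real node the returned action would be translated into the velocity messages of claim 2 at the sensor update rate.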
4. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 3, characterized in that during exploration the ROS mobile robot performs obstacle avoidance using the artificial potential field method, whose steps are:
(1) set the initial position ps = [xs, ys]T and the target position pt = [xt, yt]T; the ROS mobile robot is regarded as a particle moving in the two-dimensional plane;
(2) the host computer obtains the position pc = [xc, yc]T of the ROS mobile robot in the global coordinate system through the tf coordinate transforms of the ROS system, and the obstacle position measured by the laser radar is po = [xo, yo]T;
(3) compute the resultant of the virtual-potential-field total repulsion and the attraction acting on the ROS mobile robot: F(pc) = -∇U(pc), where U(pc) = Ua(pc) + Ure(pc) is the sum of the attractive and repulsive potentials; the attractive potential that the target forms on the ROS mobile robot is Ua(pc) = (1/2)·λ·ρt²(pc); the repulsive potential that an obstacle forms on the ROS mobile robot is Ure(pc) = (1/2)·k·(1/ρo(pc) − 1/d0)² when ρo(pc) ≤ d0, and 0 otherwise; λ, k and d0 are constants, ρt(pc) = ||pc − pt|| is the Euclidean distance between the robot and the target, and ρo(pc) = ||pc − po|| is the Euclidean distance between the robot and the obstacle; the attraction generated by the attractive field is Fa(pc) = −∇Ua(pc) = −λ·(pc − pt); the repulsion of an obstacle on the ROS mobile robot is Fre(pc) = −∇Ure(pc) = k·(1/ρo(pc) − 1/d0)·(1/ρo²(pc))·∇ρo(pc) when ρo(pc) ≤ d0, and 0 otherwise; when there are multiple obstacles, the repulsion that each obstacle generates on the robot is computed and the repulsions of all obstacles are combined into one total repulsion;
(4) take the direction of the resultant force as the avoidance direction of the robot; the ROS mobile robot rotates toward the avoidance direction and moves along it, realizing local obstacle avoidance of the robot.
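The force computation of claim 4 can be sketched as follows, using the standard attractive/repulsive potential-field forms; the parameters `lam`, `k` and the influence radius `d0` correspond to the constants λ, k, d0 of the claim, and the default values are purely illustrative:

```python
import numpy as np

def apf_force(pc, pt, obstacles, lam=1.0, k=1.0, d0=1.0):
    """Resultant force on a robot at pc: attraction toward the target pt plus
    repulsion from each obstacle within the influence radius d0."""
    pc, pt = np.asarray(pc, float), np.asarray(pt, float)
    # attraction: negative gradient of (1/2)*lam*||pc - pt||^2
    f = -lam * (pc - pt)
    for po in obstacles:
        diff = pc - np.asarray(po, float)
        rho = np.linalg.norm(diff)
        if 0 < rho <= d0:
            # repulsion: negative gradient of (1/2)*k*(1/rho - 1/d0)^2
            f += k * (1.0 / rho - 1.0 / d0) * (1.0 / rho**2) * (diff / rho)
    return f
```

The robot then rotates toward `atan2(f[1], f[0])` and moves, as in step (4).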
5. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 4, characterized in that the laser radar of the ROS mobile robot scans the indoor environment information; the host computer updates the data returned by the laser radar in real time through the wifi module and calls the map-building function package of the ROS system; when an obstacle appears in the data returned by the laser radar, the rviz visualization tool on the host computer displays the detected obstacle region of the grid map in black, the detected obstacle-free region in light grey, and the unexplored region in dark grey.
6. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 1, characterized in that the navigation method in step 3 is: the host computer uses the rviz visualization tool to run the laser radar startup file rplidar_amcl.launch, and imports the built grid map into the ROS mobile robot via map_file or the TURTLEBOT_MAP_FILE environment variable in the .bashrc file; on the grid map, when the robot performs two-dimensional pose estimation, the initial pose and heading of the robot are set; the robot begins rotating and stops once it reaches the set heading; the heading of the robot in the real environment is specified, and the navigation pose of the robot is set using the 2D navigation goal; once the navigation goal is set, the robot starts planning a path, and when path planning is complete the robot moves toward the goal along the planned path, avoiding obstacles with the artificial potential field method; after the robot reaches the goal position it stops moving, rotates to the goal pose heading, stops rotating, and thus reaches the designated position on the grid map.
7. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 1, characterized in that the vision sensor realizes target tracking using the real-time tracking node, whose steps are:
Step (a): initialize the parameters of the Kalman filter — the state transition matrix A, the observation matrix H, the process noise covariance matrix Q, the measurement noise covariance matrix R and the state error covariance matrix P — and establish the Kalman tracking target parameters;
Step (b): from the target state of the previous frame, perform the Kalman prediction X(k/k-1) = A·X(k-1/k-1) on the tracked state of the target to obtain the predicted position (x1, y1) of the target in the current frame, and update the state error covariance P(k/k-1); here X(k/k-1) is the result of predicting the state at time k from the state at time k-1, and X(k-1/k-1) is the optimal estimate at time k-1;
The state in the state equation X(k) = A·X(k-1) + W(k) is set to X(k) = [x(k), y(k), vx(k), vy(k)]T, where X(k) is the state of the system at time k, (x(k-1), y(k-1)) is the position of the target at time k-1, and its velocity components are vx(k-1) and vy(k-1);
Step (c): using the window width w and height h of the previous frame, and taking the predicted current-frame position (x1, y1) as the window center, obtain the actual position (x2, y2) of the target in the current frame with the MeanShift tracking method;
Step (d): using the actual position (x2, y2) of the target in the current frame, compute the observation of the Kalman filter from the measurement equation Z(k) = H·X(k) + V(k); compute the Kalman gain K(k) = P(k/k-1)·H'·[H·P(k/k-1)·H' + R]^(-1); after the Kalman state update correction X(k/k) = X(k/k-1) + K(k)·(Z(k) − H·X(k/k-1)), obtain the target position (x3, y3) as the accurate position of the target, and at the same time update the state error covariance matrix P(k/k) = (I − K(k)·H)·P(k/k-1), where P(k/k-1) is the prediction at time k-1 of the state error covariance at time k: P(k/k-1) = A·P(k-1/k-1)·A' + Q;
Step (e): take the target position (x3, y3) as the prediction seed for the next frame and repeat steps (b) to (d) to realize real-time tracking of the target; if the process is terminated, the algorithm ends; otherwise, return to step (b).
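Steps (b) to (d) of claim 7 amount to one Kalman predict/correct cycle per frame. A minimal numpy sketch, assuming a constant-velocity model for the state [x, y, vx, vy] and letting the measurement z stand in for the MeanShift result (x2, y2) of step (c):

```python
import numpy as np

def kalman_step(X, P, z, A, H, Q, R):
    """One predict/correct cycle of claim 7, steps (b)-(d)."""
    Xp = A @ X                                        # step (b): state prediction
    Pp = A @ P @ A.T + Q                              #           covariance prediction
    K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)    # step (d): Kalman gain
    Xn = Xp + K @ (z - H @ Xp)                        # state correction with z
    Pn = (np.eye(len(X)) - K @ H) @ Pp                # covariance correction
    return Xn, Pn

# constant-velocity model for [x, y, vx, vy], frame period dt = 1
dt = 1.0
A = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
```

Each frame, the corrected position from `Xn` seeds the next prediction, exactly as step (e) describes.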
8. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 7, characterized in that the method by which the MeanShift tracking method tracks the target object is:
The target model established is: qu = C · Σi k(||xi||²) · δ[b(xi) − u], where δ is the Kronecker function, h is the bandwidth matrix of the window, k(||x||²) is the kernel function, b(xi) is the quantization function mapping the image feature value computed at sample point xi to its corresponding bin value, and C is a normalization coefficient;
Let y be the image coordinate of the candidate target center in the current frame; the model of the candidate target located at y is: pu(y) = Ch · Σi k(||(y − xi)/h||²) · δ[b(xi) − u], where M and n denote the numbers of sample points of the target model and the candidate region respectively, and Ch is a normalization coefficient;
The similarity between the target object model and the candidate target region is measured with the Bhattacharyya coefficient: ρ[p(y), q] = Σu sqrt(pu(y) · qu); making the target object and the candidate target attain the minimum distance d(y) = sqrt(1 − ρ[p(y), q]) in the chosen feature space is equivalent to maximizing the Bhattacharyya coefficient ρ[p(y), q];
Let y0 be the initial position of the target object in the current image frame; expanding ρ[p(y), q] in a first-order Taylor series about pu(y0) gives ρ[p(y), q] ≈ (1/2)·Σu sqrt(pu(y0)·qu) + (1/2)·Σi wi·k(||(y − xi)/h||²), with the weight coefficients defined as wi = Σu sqrt(qu / pu(y0)) · δ[b(xi) − u]; the iterative position obtained in the current frame is y1 = [Σi xi·wi·g(||(y0 − xi)/h||²)] / [Σi wi·g(||(y0 − xi)/h||²)], where g(x) = −k'(x);
The target object is searched for in every frame: by iterating the MeanShift tracking method, the region of maximum similarity is found and the new position y1 of the target in the current frame is computed, iterating until ||y1 − y0|| < ε or the number of iterations reaches its maximum; y1 then serves as the starting position for the next frame;
From the target region found and the depth information of that region, the real-time tracking node computes the distance between the target and the ROS mobile robot and adjusts the linear velocity with which the ROS mobile robot tracks the target; from the deviation of the target from the center of the vision sensor's image window on the host computer, it adjusts the angular velocity with which the ROS mobile robot rotates while tracking the target.
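A compact sketch of the two computations in claim 8: one MeanShift update of the window centre, simplified to a flat kernel so that the kernel-derivative terms g(·) drop out, and the mapping from target depth and image offset to the robot's linear and angular velocity. The gains and the 1 m following distance in `tracking_cmd` are illustrative assumptions, not values from the patent:

```python
import numpy as np

def meanshift_step(coords, pixels, q, bins):
    """One MeanShift update: weight each sample by sqrt(q_u / p_u) for its bin u
    and move the window centre to the weighted mean of the sample coordinates."""
    p = np.bincount(pixels, minlength=bins).astype(float)
    p /= p.sum()                                            # candidate model p_u(y0)
    w = np.sqrt(np.divide(q, p, out=np.zeros(bins), where=p > 0))[pixels]
    return (coords * w[:, None]).sum(axis=0) / w.sum()      # new centre y1

def tracking_cmd(depth_m, target_px, img_cx, width_px,
                 goal_dist=1.0, k_lin=0.5, k_ang=2.0):
    """Map target depth and horizontal image offset to (linear, angular) velocity."""
    linear = k_lin * (depth_m - goal_dist)                  # hold the following distance
    offset = (target_px - img_cx) / float(width_px)         # normalized deviation
    angular = -k_ang * offset                               # rotate to re-centre target
    return linear, angular
```

`coords` are the pixel coordinates in the search window, `pixels` their quantized bin indices b(xi), and `q` the target histogram; iterating `meanshift_step` until the centre stops moving reproduces the search loop described above.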
9. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 8, characterized in that, when no occlusion occurs, the ROS mobile robot performs target tracking with the real-time tracking node, and when complete occlusion occurs, the ROS mobile robot performs target tracking with the occlusion tracking node; when the similarity measure between the target object model and the candidate target region, i.e. the Bhattacharyya distance, exceeds 0.6, the occlusion tracking node is executed; the design of the occlusion tracking node is: let the pixel coordinates of the moving target in the video frame be (x, y), its velocities vx and vy, and the image frame update time dt; the kinematic equations of the target are established as: x(k) = x(k-1) + vx(k-1)·dt + (1/2)·ax(k-1)·dt², y(k) = y(k-1) + vy(k-1)·dt + (1/2)·ay(k-1)·dt², vx(k) = vx(k-1) + ax(k-1)·dt, vy(k) = vy(k-1) + ay(k-1)·dt,
where ax(k-1) and ay(k-1) are the accelerations in the x and y directions at time k-1; in matrix form this is X(k) = A·X(k-1) + W(k-1), with state X(k) = [x(k), y(k), vx(k), vy(k)]T, state transition matrix A = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], and process noise W(k-1) driven by the accelerations;
The Kalman linear state equation of the moving target is thus established; the measurement equation is established as Z(k) = H·X(k) + V(k), where H = [[1, 0, 0, 0], [0, 1, 0, 0]] and V(k) is the measurement noise;
During occlusion, the Kalman filter continually predicts and corrects the position of the target from the motion state and measured values of the previous frame, realizing predictive tracking through the occlusion; the state error covariance matrix P and the process noise error covariance matrix Q of the Kalman filter are initialized accordingly.
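While the target is fully occluded no measurement is available, so the occlusion tracking node of claim 9 effectively runs only the prediction stage of the filter. A sketch under the stated state model, with the accelerations folded into the process noise W as the claim's W(k-1) term suggests:

```python
import numpy as np

def motion_matrix(dt):
    """Claim-9 state transition for X = [x, y, vx, vy]:
    position advances by velocity * dt each frame."""
    return np.array([[1, 0, dt, 0],
                     [0, 1, 0, dt],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], float)

def occlusion_predict(X, P, A, Q, steps):
    """Propagate the state through `steps` occluded frames; with no measurement
    to correct against, the uncertainty P grows with every skipped frame."""
    for _ in range(steps):
        X = A @ X
        P = A @ P @ A.T + Q
    return X, P
```

Once the target reappears and is re-locked, the normal correct stage resumes and the system switches back to real-time tracking mode.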
10. The ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method according to claim 9, characterized in that, in the image processing function process_image(self, image_color), the occlusion tracking node computes, from the velocities vx and vy before the target was lost, the distances vx·dt and vy·dt that the target moves in the x and y directions within the video frame update time dt; then, from the corrected position (x3, y3) of the previous frame's Kalman filter, it uses x = x3 + vx·dt and y = y3 + vy·dt to obtain the current-frame state of the target X(k) = [x y vx vy]T; it then uses the measurement equation to obtain the measured value of the target and, from the measured value, performs the Kalman filter correction to obtain the position of the target in the current frame.
CN201810764178.1A 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method Active CN108646761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810764178.1A CN108646761B (en) 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method


Publications (2)

Publication Number Publication Date
CN108646761A true CN108646761A (en) 2018-10-12
CN108646761B CN108646761B (en) 2020-07-31

Family

ID=63751133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810764178.1A Active CN108646761B (en) 2018-07-12 2018-07-12 ROS-based robot indoor environment exploration, obstacle avoidance and target tracking method

Country Status (1)

Country Link
CN (1) CN108646761B (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213201A (en) * 2018-11-30 2019-01-15 北京润科通用技术有限公司 A kind of barrier-avoiding method and device
CN109544472A (en) * 2018-11-08 2019-03-29 苏州佳世达光电有限公司 Object drive device and object driving method
CN109917818A (en) * 2019-01-31 2019-06-21 天津大学 Collaboratively searching based on ground robot contains method
CN110221613A (en) * 2019-06-12 2019-09-10 北京洛必德科技有限公司 Robot path planning method and device based on modified Artificial Potential Field Method
CN110509271A (en) * 2019-07-23 2019-11-29 国营芜湖机械厂 It is a kind of that robot control method is followed based on laser radar
CN110887489A (en) * 2019-11-22 2020-03-17 深圳晨芯时代科技有限公司 AR robot-based SLAM algorithm experimental method
CN111006652A (en) * 2019-12-20 2020-04-14 深圳无境智能机器人有限公司 Method for running robot close to edge
CN111123732A (en) * 2018-10-31 2020-05-08 百度在线网络技术(北京)有限公司 Method, device, storage medium and terminal equipment for simulating automatic driving vehicle
CN111308993A (en) * 2020-02-13 2020-06-19 青岛联合创智科技有限公司 Human body target following method based on monocular vision
CN111360841A (en) * 2020-05-27 2020-07-03 北京云迹科技有限公司 Robot monitoring method and device, storage medium and electronic equipment
CN111650928A (en) * 2019-02-18 2020-09-11 北京奇虎科技有限公司 Autonomous exploration method and device for sweeping robot
CN111805535A (en) * 2020-06-11 2020-10-23 浙江大华技术股份有限公司 Positioning navigation method, device and computer storage medium
CN112130565A (en) * 2020-09-14 2020-12-25 贵州翰凯斯智能技术有限公司 Self-walking robot platform control system and communication method thereof
CN112241168A (en) * 2019-07-18 2021-01-19 万润科技股份有限公司 Self-propelled device and map building method thereof
CN112270076A (en) * 2020-10-15 2021-01-26 同济大学 Environment model construction method and system based on intelligent agent active perception
WO2021068150A1 (en) * 2019-10-10 2021-04-15 Huawei Technologies Co., Ltd. Controlling method of mobile apparatus and computer program thereof
CN112698629A (en) * 2020-12-23 2021-04-23 江苏睿科大器机器人有限公司 AGV (automatic guided vehicle) scheduling method and system suitable for hospital scene
CN112738022A (en) * 2020-12-07 2021-04-30 浙江工业大学 Attack method for ROS message of robot operating system
CN113029143A (en) * 2021-02-24 2021-06-25 同济大学 Indoor navigation method suitable for pepper robot
CN113052152A (en) * 2021-06-02 2021-06-29 中国人民解放军国防科技大学 Indoor semantic map construction method, device and equipment based on vision
CN113093729A (en) * 2021-03-10 2021-07-09 上海工程技术大学 Intelligent shopping trolley based on vision and laser radar and control method
CN113093176A (en) * 2019-12-23 2021-07-09 北京三快在线科技有限公司 Linear obstacle detection method, linear obstacle detection device, electronic apparatus, and storage medium
CN113612920A (en) * 2021-06-23 2021-11-05 广西电网有限责任公司电力科学研究院 Method and device for shooting power equipment image by unmanned aerial vehicle
CN114200471A (en) * 2021-12-07 2022-03-18 杭州电子科技大学信息工程学院 Forest fire source detection system and method based on unmanned aerial vehicle, storage medium and equipment
CN114373329A (en) * 2021-12-31 2022-04-19 广东奥博信息产业股份有限公司 Vehicle searching method for indoor parking lot, electronic equipment and readable storage medium
CN114460939A (en) * 2022-01-22 2022-05-10 贺晓转 Intelligent walking robot autonomous navigation improvement method under complex environment
CN116382310A (en) * 2023-06-06 2023-07-04 南京理工大学 Artificial potential field path planning method and system
CN116578101A (en) * 2023-07-12 2023-08-11 季华实验室 AGV pose adjustment method based on two-dimensional code, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325126A (en) * 2013-07-09 2013-09-25 中国石油大学(华东) Video target tracking method under circumstance of scale change and shielding
CN103559725A (en) * 2013-08-09 2014-02-05 中国地质大学(武汉) Wireless sensor node optimization selection method orientated at visual tracking
CN103914068A (en) * 2013-01-07 2014-07-09 中国人民解放军第二炮兵工程大学 Service robot autonomous navigation method based on raster maps
CN104992451A (en) * 2015-06-25 2015-10-21 河海大学 Improved target tracking method
CN105466421A (en) * 2015-12-16 2016-04-06 东南大学 Mobile robot autonomous cruise method for reliable WIFI connection
CN105487535A (en) * 2014-10-09 2016-04-13 东北大学 Mobile robot indoor environment exploration system and control method based on ROS
CN105955262A (en) * 2016-05-09 2016-09-21 哈尔滨理工大学 Mobile robot real-time layered path planning method based on grid map


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
B ABHISHEK 等: "ROS based stereo vision system for autonomous vehicle", 《2017 IEEE INTERNATIONAL CONFERENCE ON POWER, CONTROL, SIGNALS AND INSTRUMENTATION ENGINEERING (ICPCSI)》 *
孙中森等: "一种抗遮挡的运动目标跟踪算法", 《中国学术期刊文摘》 *
陈卓等: "移动机器人SLAM与路径规划在ROS框架下的实现", 《医疗卫生装备》 *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111123732A (en) * 2018-10-31 2020-05-08 百度在线网络技术(北京)有限公司 Method, device, storage medium and terminal equipment for simulating automatic driving vehicle
CN109544472A (en) * 2018-11-08 2019-03-29 苏州佳世达光电有限公司 Object drive device and object driving method
CN109213201A (en) * 2018-11-30 2019-01-15 北京润科通用技术有限公司 A kind of barrier-avoiding method and device
CN109213201B (en) * 2018-11-30 2021-08-24 北京润科通用技术有限公司 Obstacle avoidance method and device
CN109917818A (en) * 2019-01-31 2019-06-21 天津大学 Collaboratively searching based on ground robot contains method
CN109917818B (en) * 2019-01-31 2021-08-13 天津大学 Collaborative search containment method based on ground robot
CN111650928A (en) * 2019-02-18 2020-09-11 北京奇虎科技有限公司 Autonomous exploration method and device for sweeping robot
CN111650928B (en) * 2019-02-18 2024-03-05 北京奇虎科技有限公司 Autonomous exploration method and device for sweeping robot
CN110221613A (en) * 2019-06-12 2019-09-10 北京洛必德科技有限公司 Robot path planning method and device based on modified Artificial Potential Field Method
CN110221613B (en) * 2019-06-12 2020-04-17 北京洛必德科技有限公司 Robot path planning method and device based on improved artificial potential field method
CN112241168A (en) * 2019-07-18 2021-01-19 万润科技股份有限公司 Self-propelled device and map building method thereof
CN112241168B (en) * 2019-07-18 2024-02-02 万润科技股份有限公司 Self-propelled device and map building method thereof
CN110509271A (en) * 2019-07-23 2019-11-29 国营芜湖机械厂 It is a kind of that robot control method is followed based on laser radar
WO2021068150A1 (en) * 2019-10-10 2021-04-15 Huawei Technologies Co., Ltd. Controlling method of mobile apparatus and computer program thereof
CN112955842A (en) * 2019-10-10 2021-06-11 华为技术有限公司 Control method of mobile device and computer program thereof
CN110887489A (en) * 2019-11-22 2020-03-17 深圳晨芯时代科技有限公司 AR robot-based SLAM algorithm experimental method
CN111006652A (en) * 2019-12-20 2020-04-14 深圳无境智能机器人有限公司 Method for running robot close to edge
CN111006652B (en) * 2019-12-20 2023-08-01 深圳市飞瑶电机科技有限公司 Robot side-by-side operation method
CN113093176A (en) * 2019-12-23 2021-07-09 北京三快在线科技有限公司 Linear obstacle detection method, linear obstacle detection device, electronic apparatus, and storage medium
CN111308993A (en) * 2020-02-13 2020-06-19 青岛联合创智科技有限公司 Human body target following method based on monocular vision
CN111308993B (en) * 2020-02-13 2022-04-01 青岛联合创智科技有限公司 Human body target following method based on monocular vision
CN111360841A (en) * 2020-05-27 2020-07-03 北京云迹科技有限公司 Robot monitoring method and device, storage medium and electronic equipment
CN111360841B (en) * 2020-05-27 2020-08-18 北京云迹科技有限公司 Robot monitoring method and device, storage medium and electronic equipment
CN111805535A (en) * 2020-06-11 2020-10-23 浙江大华技术股份有限公司 Positioning navigation method, device and computer storage medium
CN112130565A (en) * 2020-09-14 2020-12-25 贵州翰凯斯智能技术有限公司 Self-walking robot platform control system and communication method thereof
CN112270076A (en) * 2020-10-15 2021-01-26 同济大学 Environment model construction method and system based on intelligent agent active perception
CN112738022A (en) * 2020-12-07 2021-04-30 浙江工业大学 Attack method for ROS message of robot operating system
CN112738022B (en) * 2020-12-07 2022-05-03 浙江工业大学 Attack method for ROS message of robot operating system
CN112698629A (en) * 2020-12-23 2021-04-23 江苏睿科大器机器人有限公司 AGV (automatic guided vehicle) scheduling method and system suitable for hospital scene
CN113029143A (en) * 2021-02-24 2021-06-25 同济大学 Indoor navigation method suitable for pepper robot
CN113093729A (en) * 2021-03-10 2021-07-09 上海工程技术大学 Intelligent shopping trolley based on vision and laser radar and control method
CN113052152A (en) * 2021-06-02 2021-06-29 中国人民解放军国防科技大学 Indoor semantic map construction method, device and equipment based on vision
CN113612920A (en) * 2021-06-23 2021-11-05 广西电网有限责任公司电力科学研究院 Method and device for shooting power equipment image by unmanned aerial vehicle
CN114200471A (en) * 2021-12-07 2022-03-18 杭州电子科技大学信息工程学院 Forest fire source detection system and method based on unmanned aerial vehicle, storage medium and equipment
CN114373329A (en) * 2021-12-31 2022-04-19 广东奥博信息产业股份有限公司 Vehicle searching method for indoor parking lot, electronic equipment and readable storage medium
CN114460939A (en) * 2022-01-22 2022-05-10 贺晓转 Intelligent walking robot autonomous navigation improvement method under complex environment
CN116382310A (en) * 2023-06-06 2023-07-04 南京理工大学 Artificial potential field path planning method and system
CN116382310B (en) * 2023-06-06 2023-08-18 南京理工大学 Artificial potential field path planning method and system
CN116578101A (en) * 2023-07-12 2023-08-11 季华实验室 AGV pose adjustment method based on two-dimensional code, electronic equipment and storage medium
CN116578101B (en) * 2023-07-12 2023-09-12 季华实验室 AGV pose adjustment method based on two-dimensional code, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108646761B (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN108646761A (en) Robot indoor environment exploration, avoidance and method for tracking target based on ROS
US10832056B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US10354396B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
US11948369B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous mapping
Thorpe et al. Vision and navigation for the Carnegie-Mellon Navlab
CN105043396B (en) The method and system of self-built map in a kind of mobile robot room
TWI467494B (en) Mobile camera localization using depth maps
CN109959377A (en) A kind of robot navigation's positioning system and method
EP3428760B1 (en) Mapping optimization in autonomous and non-autonomous platforms
Mueller et al. Ue4sim: A photo-realistic simulator for computer vision applications
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
Prieto et al. As-is building-structure reconstruction from a probabilistic next best scan approach
US20180315209A1 (en) Localizing and mapping platform
CN106708037A (en) Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
Bayer et al. Speeded up elevation map for exploration of large-scale subterranean environments
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
Kovács Visual monocular obstacle avoidance for small unmanned vehicles
Basit et al. Joint localization of pursuit quadcopters and target using monocular cues
De Silva et al. Comparative analysis of octomap and rtabmap for multi-robot disaster site mapping
Klaser et al. Simulation of an autonomous vehicle with a vision-based navigation system in unstructured terrains using OctoMap
Cui et al. Simulation and Implementation of Slam Drawing Based on Ros Wheeled Mobile Robot
Li et al. Auto-maps-generation through Self-path-generation in ROS-based Robot Navigation
Abdulov et al. AEROBOT-2020 UAV Challenge: A Report
Pittol et al. Monocular 3d exploration using lines-of-sight and local maps
Livatino et al. Automatic selection of visual landmark for mobile robot navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant