CN109102525B - Mobile robot following control method based on self-adaptive posture estimation - Google Patents
Mobile robot following control method based on self-adaptive posture estimation
- Publication number
- CN109102525B (application CN201810795013.0A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- camera
- points
- pixel
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Abstract
A mobile robot following control method based on self-adaptive attitude estimation comprises the following steps: 1) establishing a robot kinematic model; 2) tracking the characteristic region; 3) performing target tracking through autonomous online learning; 4) extracting the characteristic region, optimizing it with dilation, erosion, and filtering, then extracting feature points and adaptively matching them; 5) estimating the pose from the matched feature points; 6) designing a PID visual-servo following controller. The invention provides a PID mobile robot visual following control method that effectively handles self-adaptive pose estimation when feature points cannot be tracked or are missing.
Description
Technical Field
The invention relates to a mobile robot target tracking and following control system based on vision, in particular to a mobile robot following control method with self-adaptive posture estimation under the condition of input limitation.
Background
With the development of science and control technology, computer vision has been widely applied in many fields. The richness of visual data and of the available processing methods has made vision-based mobile robot control common in scientific research, military, industrial, and logistics applications. The pose of the robot is one of the basic problems in robot motion control and has long received wide attention. Research on vision-based target-following servo control for mobile robots not only enriches the theoretical results of mobile robot motion control and meets the increasingly demanding requirements that many fields place on motion control technology, but also has great theoretical and engineering significance. In addition, introducing visual information expands the capabilities of the mobile robot and effectively meets the needs of human-machine interaction.
However, in practical environments, especially against complex backgrounds, visual information inevitably suffers from interference such as lighting changes and jitter during motion, which poses new challenges for vision-based path-tracking control of mobile robots.
The mobile robot following control method based on self-adaptive pose estimation is a control strategy that combines a pose estimation system with a PID parameter-driven control system and designs a controller that makes the whole system rapidly and asymptotically stable. Compared with other control methods, adaptive pose estimation with online-learning target tracking lets the robot stably track the feature points while moving against a complex background and copes with uncertainties such as untrackable or missing feature points; it has attracted general attention in the field of mobile robot visual servo control in recent years. Zhujian chapter et al. adopted a variable-weight real-time compressive target tracking method under a co-training framework in the thesis "Research on real-time visual target tracking in complex scenes"; Nihon seal et al. adopted single-target long-term tracking with autonomous selective learning in the thesis "Research on human body detection and target tracking methods based on video"; and Wanglai et al. adopted a robot target tracking method with stereoscopic-vision online multiple-instance learning in the thesis "Robot target tracking based on an improved online multiple-instance learning algorithm". However, none of these results uses monocular-vision online autonomous-learning tracking of feature points together with pose estimation to design a PID servo following controller for a mobile robot. Moreover, in practical applications both gyroscopes and stereoscopic cameras face practical limitations in pose acquisition, so studying monocular visual target tracking with self-adaptive real-time pose estimation is necessary.
Disclosure of Invention
To overcome the inability of the prior art to solve monocular-camera pose estimation in a mobile robot visual servo control system, the invention provides a mobile robot following control method based on self-adaptive pose estimation.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a mobile robot following control method based on adaptive pose estimation, the method comprising the steps of:
1) establishing a vision-based mobile robot model: define x and y as the normalized horizontal and vertical image coordinates of the camera and z_c as the coordinate of the camera on the z-axis; the velocity vector of the mobile robot in the camera coordinate system is [v_c, ω_c]^T, where v_c and ω_c are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in the camera coordinate system; the velocity vector of the mobile robot in its own coordinate system is [v_r, ω_r]^T, where v_r and ω_r are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in its own coordinate system; the vision-based kinematic model of the mobile robot then follows as equation (1):
2) tracking the characteristic region and extracting feature points: track the characteristic region, then extract it by marking the blue region as 255 and all other regions as 0 in the HSV color space to binarize the image; optimize the binarized image with dilation, erosion, and filtering to obtain the white connected regions marked 255, and compute the four centroids of the connected regions, which serve as the four feature points;
define the centroids of the four connected regions as (u_i, v_i), i = 1, …, 4; the centroid of a connected region is computed as
$$ u_1 = \frac{\sum_{(u,v)\in\Omega} u\,f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)}, \qquad v_1 = \frac{\sum_{(u,v)\in\Omega} v\,f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)} \tag{2} $$
where f(u, v) is the pixel value and Ω is the connected region; equation (2) yields (u_1, v_1), and the other three centroids (u_2, v_2), (u_3, v_3), (u_4, v_4) are computed in the same way;
the pixel coordinates are converted to image coordinates as
$$ x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy \tag{3} $$
where dx and dy are the physical length of a pixel in the x and y directions and (u_0, v_0) is the number of horizontal and vertical pixels between the pixel coordinate of the image center and that of the image origin; equation (3) converts the pixel coordinate (u_1, v_1) into the image coordinate (x_1, y_1), and the image coordinates (x_2, y_2), (x_3, y_3), (x_4, y_4) of the other three points are computed in the same way;
the image coordinates are converted to camera coordinates through the projection relation
$$ x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \tag{4} $$
where f is the focal length; equation (4) converts the image coordinate (x_1, y_1) into the camera-frame coordinate (X_{c1}, Y_{c1}, Z_{c1}), and the camera-frame coordinates of the other three points are computed in the same way;
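The segmentation-and-centroid step above can be sketched as follows. This is a minimal pure-Python stand-in for what would normally be an OpenCV pipeline (`cv2.inRange`, `cv2.dilate`, `cv2.erode`, `cv2.connectedComponents`); the 6×6 mask is an illustrative stand-in for a real binarized frame.

```python
# Centroid of a white (255) connected region in a binary mask, weighted
# by the pixel value f(u, v) exactly as in equation (2).
def centroid(mask):
    """Return the centroid (u, v) of the nonzero pixels of `mask`."""
    num_u = num_v = den = 0
    for v, row in enumerate(mask):
        for u, f in enumerate(row):
            num_u += u * f
            num_v += v * f
            den += f
    return num_u / den, num_v / den

# Illustrative binarized frame: a 3x3 white block at rows 2..4, cols 1..3.
mask = [[0] * 6 for _ in range(6)]
for v in range(2, 5):
    for u in range(1, 4):
        mask[v][u] = 255

print(centroid(mask))  # -> (2.0, 3.0)
```

In the full method this is repeated for each of the four connected regions to obtain the four feature points.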
3) Pose estimation
step 2) yields the coordinates (X_{ci}, Y_{ci}, Z_{ci}), i = 1, …, 4, of the feature points in the camera coordinate system; the world coordinate system is fixed to the object coordinate system, with the first feature point taken as the origin of the object coordinate system and hence of the world coordinate system; the world coordinates (X_{wi}, Y_{wi}, Z_{wi}) of the four feature points on the target plate can therefore be obtained by direct measurement;
the conversion between the camera coordinate system and the world coordinate system is
$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \tag{5} $$
where R is the rotation matrix and t is the translation vector; putting the four camera-frame points in correspondence with the four world-frame points in equation (5) solves for the rotation matrix R and the translation vector t;
the rotation angles are recovered from the rotation matrix R = (r_{ij}) as
$$ \theta_x = \operatorname{atan2}(r_{32}, r_{33}), \quad \theta_y = \operatorname{atan2}\!\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right), \quad \theta_z = \operatorname{atan2}(r_{21}, r_{11}) \tag{6} $$
where θ_x is the rotation angle of the camera-frame X_c axis relative to the world-frame X_w axis, θ_y that of the Y_c axis relative to the Y_w axis, and θ_z that of the Z_c axis relative to the Z_w axis; these three angles constitute the pose of the camera;
the world coordinates of the camera are computed from the translation vector:
$$ \begin{bmatrix} X_{cam} & Y_{cam} & Z_{cam} \end{bmatrix}^{\mathsf T} = -R^{\mathsf T} t \tag{7} $$
where (X_{cam}, Y_{cam}, Z_{cam}) is the world-coordinate position of the camera; to verify that the pose is correct, the fifth feature point, whose coordinates are known in the world coordinate system, is re-projected into the pixel coordinate system:
$$ Z_{c5} \begin{bmatrix} u_5 \\ v_5 \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_{w5} \\ Y_{w5} \\ Z_{w5} \end{bmatrix} + t \right) \tag{8} $$
where (X_{w5}, Y_{w5}, Z_{w5}) is the world coordinate of the fifth feature point, (u_5, v_5) is the re-projected pixel coordinate, Z_{c5} is the depth of the fifth feature point in the camera coordinate system, and K is the camera intrinsic matrix;
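The pose-recovery formulas (6) and (7) can be sketched in plain Python; in a real implementation the rotation matrix and translation would come from OpenCV's `cv2.solvePnP` (followed by `cv2.Rodrigues`), and the 90° z-rotation below is an illustrative input, not a measured pose.

```python
import math

def euler_from_rotation(R):
    """Equation (6): recover (theta_x, theta_y, theta_z) in radians
    from a 3x3 rotation matrix R given as a list of rows r_ij."""
    theta_x = math.atan2(R[2][1], R[2][2])
    theta_y = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    theta_z = math.atan2(R[1][0], R[0][0])
    return theta_x, theta_y, theta_z

def camera_world_position(R, t):
    """Equation (7): camera position in the world frame, -R^T t."""
    return [-(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2])
            for i in range(3)]

# Illustrative pose: rotation of 90 degrees about the z-axis.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
print(euler_from_rotation(R))                       # theta_z is pi/2
print(camera_world_position(R, [1.0, 0.0, 0.0]))
```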
4) designing a PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the controlled output, and hence the feedback signal, is the abscissa u_5 of the fifth re-projected point. The incremental angular-velocity PID algorithm is
$$ \Delta\omega[k] = K_{\omega p}\{e_{pix}[k] - e_{pix}[k-1]\} + K_{\omega i}\,e_{pix}[k] + K_{\omega d}\{e_{pix}[k] - 2e_{pix}[k-1] + e_{pix}[k-2]\} \tag{9} $$
where K_{ωp} is the proportional gain, K_{ωi} the integral gain, K_{ωd} the derivative gain, and e_{pix}[k] the pixel error signal at time k;
the reference input of the linear-velocity PID controller is the depth value 500 mm; the controlled output, and hence the feedback signal, is the distance Z_{cam} from the camera to the target plate. The incremental linear-velocity PID algorithm is
$$ \Delta v[k] = K_{vp}\{e_d[k] - e_d[k-1]\} + K_{vi}\,e_d[k] + K_{vd}\{e_d[k] - 2e_d[k-1] + e_d[k-2]\} \tag{10} $$
where K_{vp} is the proportional gain, K_{vi} the integral gain, K_{vd} the derivative gain, and e_d[k] the depth-distance error signal at time k.
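The incremental PID of equations (9) and (10) can be sketched as a small class; the gains and the 20-pixel error below are illustrative values, not the patent's tuned parameters.

```python
# Incremental PID: delta_u[k] = Kp*(e[k]-e[k-1]) + Ki*e[k]
#                             + Kd*(e[k]-2*e[k-1]+e[k-2]),
# matching equations (9) (angular velocity) and (10) (linear velocity).
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e[k-1]
        self.e2 = 0.0  # e[k-2]

    def step(self, error):
        delta = (self.kp * (error - self.e1)
                 + self.ki * error
                 + self.kd * (error - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return delta

# Angular-velocity loop: setpoint is the pixel abscissa 320, feedback is
# the re-projected abscissa u5 of the fifth point.
pid = IncrementalPID(kp=0.5, ki=0.1, kd=0.05)
u5 = 300.0
d = pid.step(320.0 - u5)  # first increment for a 20-pixel error
print(d)                  # -> 13.0
```

The same class, fed the depth error 500 mm − Z_cam, serves as the linear-velocity controller of equation (10).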
Further, in step 2), the characteristic region is tracked as follows:
2.1: initialization: initializing a camera and starting the camera, manually or automatically selecting a tracking area with the number of pixel points larger than 10, and setting basic parameters of a tracking algorithm;
2.2: the iteration starts: and (3) taking a target area when the h frame is taken under a complex background, uniformly generating a plurality of points, tracking the points to the h +1 frame by adopting a Lucas-Kanade tracker, and obtaining the predicted positions of the points of the h frame by back tracking, wherein a deviation formula is calculated as follows:
wherein, Δ XhIs the Euclidean distance, XhIn order to be the initial position of the test,predicting position for back tracking, Δ XhAs one of the conditions for screening tracking points,. DELTA.XhLeave < 10, otherwise delete;
2.3: normalized cross-correlation: describing the correlation degree of the two targets by combining a normalized cross correlation method and deleting points with low correlation degree, wherein the algorithm is as follows:
wherein f (u, v) is a pixel value,is the pixel mean, g (x, y) is the template pixel value,the method comprises the following steps of taking a template pixel mean value, taking n as a tracking point number, taking NCC as correlation, reserving points when the NCC is larger and the correlation degree is higher, and otherwise, deleting points, and solving a translation scale median value and a scaling scale median value by using the tracking points left after deletion to obtain a new characteristic region;
2.4: generating positive and negative samples: to improve the recognition accuracy, online learning uses a nearest neighbor classifier to generate positive and negative samples:
positive nearest neighbor similarity:
negative nearest neighbor similarity:
relative similarity:
wherein, S (p)i,pj) Is (p)i,pj) Similarity of image elements, N is normalized correlation coefficient, M is target area, relative similarity SrThe larger the similarity, the higher the similarity, and the positive sample with the relative similarity greater than 0.2 and the negative sample with the relative similarity less than 0.2 are set;
2.5: Iterative update: let h = h + 1 and jump to step 2.2.
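The forward-backward screening of step 2.2 can be sketched as follows. In practice the forward and backward tracks would come from `cv2.calcOpticalFlowPyrLK`; the point coordinates here are illustrative numbers.

```python
import math

def screen_points(initial, back_predicted, threshold=10.0):
    """Keep a point only if the Euclidean distance between its initial
    position X_h and its back-tracked prediction is below the threshold,
    as in the Delta X_h < 10 condition of step 2.2."""
    kept = []
    for X, Xhat in zip(initial, back_predicted):
        dX = math.dist(X, Xhat)  # forward-backward deviation
        if dX < threshold:
            kept.append(X)
    return kept

initial = [(10.0, 10.0), (50.0, 50.0)]
back    = [(11.0, 12.0), (70.0, 50.0)]  # second point drifted by 20 px
print(screen_points(initial, back))     # -> [(10.0, 10.0)]
```

The surviving points would then be ranked by NCC in step 2.3 before the median translation and scale are computed.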
The technical conception of the invention is as follows: first, the mobile robot kinematic model and the pixel-coordinate conversions are established. Then, the mobile robot following control problem with target-tracking adaptive attitude estimation is formulated on this model. Feature points are adaptively assigned from the tracked target, and pose estimation is performed with solvePnP. Finally, an incremental PID control algorithm, combined with the pose feedback and re-projection information, is used to design the PID controller and realize real-time visual-servo following control of the robot.
The invention has the following beneficial effects: the target is tracked autonomously by online learning, so the target object is easy to track against a complex background; target tracking is not lost and the feature points can be obtained accurately against a complex background, effectively solving the problem of feature points that cannot be tracked or are lost during tracking; the four feature points of the target region are extracted, segmented, and adaptively matched, and the pose is estimated online in real time, providing effective distance and angle information for the mobile robot; specific parameters of the incremental PID controller are given, effectively solving the problem that the robot cannot follow with rapid asymptotic stability.
Drawings
Fig. 1 is a schematic diagram of mobile robot camera model coordinate system establishment.
Fig. 2 is a block diagram of a mobile robot following control method based on adaptive pose estimation.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a mobile robot following control method based on adaptive pose estimation includes the following steps:
1) establishing a vision-based mobile robot model: define x and y as the normalized horizontal and vertical image coordinates of the camera and z_c as the coordinate of the camera on the z-axis; the velocity vector of the mobile robot in the camera coordinate system is [v_c, ω_c]^T, where v_c and ω_c are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in the camera coordinate system; the velocity vector of the mobile robot in its own coordinate system is [v_r, ω_r]^T, where v_r and ω_r are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in its own coordinate system; the vision-based kinematic model of the mobile robot then follows as equation (1):
2) tracking the characteristic region and extracting feature points: track the characteristic region, then extract it by marking the blue region as 255 and all other regions as 0 in the HSV color space to binarize the image; optimize the binarized image with dilation, erosion, and filtering to obtain the white connected regions marked 255, and compute the four centroids of the connected regions, which serve as the four feature points;
define the centroids of the four connected regions as (u_i, v_i), i = 1, …, 4; the centroid of a connected region is computed as
$$ u_1 = \frac{\sum_{(u,v)\in\Omega_1} u\,f(u,v)}{\sum_{(u,v)\in\Omega_1} f(u,v)}, \qquad v_1 = \frac{\sum_{(u,v)\in\Omega_1} v\,f(u,v)}{\sum_{(u,v)\in\Omega_1} f(u,v)} \tag{2} $$
where f(u, v) is the pixel value and Ω_1 is the first connected region; equation (2) yields (u_1, v_1), and the other three centroids are computed in the same way: (u_2, v_2) from the second connected region Ω_2, (u_3, v_3) from the third connected region Ω_3, and (u_4, v_4) from the fourth connected region Ω_4;
the pixel coordinates are converted to image coordinates as
$$ x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy \tag{3} $$
where dx and dy are the physical length of a pixel in the x and y directions and (u_0, v_0) is the number of horizontal and vertical pixels between the pixel coordinate of the image center and that of the image origin; equation (3) converts (u_1, v_1) into the image coordinate (x_1, y_1), and the image coordinates (x_2, y_2), (x_3, y_3), (x_4, y_4) of the other three points are computed in the same way;
the image coordinates are converted to camera coordinates through the projection relation
$$ x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \tag{4} $$
where f is the focal length; equation (4) converts (x_1, y_1) into the camera-frame coordinate (X_{c1}, Y_{c1}, Z_{c1}), and the camera-frame coordinates (X_{c2}, Y_{c2}, Z_{c2}), (X_{c3}, Y_{c3}, Z_{c3}), (X_{c4}, Y_{c4}, Z_{c4}) of the other three points are computed in the same way;
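As a small worked example of the pixel-to-image conversion in equation (3) — the principal point (u_0, v_0) and the pixel pitches dx, dy below are assumed values for illustration, not the patent's calibration data:

```python
# Equation (3): x = (u - u0) * dx, y = (v - v0) * dy.
# Assumed calibration: 640x480 image, principal point at its center,
# and a square pixel pitch of 0.01 length units.
def pixel_to_image(u, v, u0=320.0, v0=240.0, dx=0.01, dy=0.01):
    """Convert a pixel coordinate (u, v) to an image coordinate (x, y)."""
    return (u - u0) * dx, (v - v0) * dy

print(pixel_to_image(420, 340))  # -> (1.0, 1.0)
print(pixel_to_image(320, 240))  # -> (0.0, 0.0), the principal point
```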
3) Pose estimation
step 2) yields the coordinates (X_{ci}, Y_{ci}, Z_{ci}), i = 1, …, 4, of the feature points in the camera coordinate system; the world coordinate system is fixed to the object coordinate system, with the first feature point taken as the origin of the object coordinate system and hence of the world coordinate system; the world coordinates (X_{wi}, Y_{wi}, Z_{wi}) of the four feature points on the target plate can therefore be obtained by direct measurement;
the conversion between the camera coordinate system and the world coordinate system is
$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \tag{5} $$
where R is the rotation matrix and t is the translation vector; putting the four camera-frame points in correspondence with the four world-frame points in equation (5) solves for the rotation matrix R and the translation vector t;
the rotation angles are recovered from the rotation matrix R = (r_{ij}) as
$$ \theta_x = \operatorname{atan2}(r_{32}, r_{33}), \quad \theta_y = \operatorname{atan2}\!\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right), \quad \theta_z = \operatorname{atan2}(r_{21}, r_{11}) \tag{6} $$
where θ_x is the rotation angle of the camera-frame X_c axis relative to the world-frame X_w axis, θ_y that of the Y_c axis relative to the Y_w axis, and θ_z that of the Z_c axis relative to the Z_w axis; these three angles constitute the pose of the camera;
the world coordinates of the camera are computed from the translation vector:
$$ \begin{bmatrix} X_{cam} & Y_{cam} & Z_{cam} \end{bmatrix}^{\mathsf T} = -R^{\mathsf T} t \tag{7} $$
where (X_{cam}, Y_{cam}, Z_{cam}) is the world-coordinate position of the camera; to verify that the pose is correct, the fifth feature point, whose coordinates are known in the world coordinate system, is re-projected into the pixel coordinate system:
$$ Z_{c5} \begin{bmatrix} u_5 \\ v_5 \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_{w5} \\ Y_{w5} \\ Z_{w5} \end{bmatrix} + t \right) \tag{8} $$
where (X_{w5}, Y_{w5}, Z_{w5}) is the world coordinate of the fifth feature point, (u_5, v_5) is the re-projected pixel coordinate, Z_{c5} is the depth of the fifth feature point in the camera coordinate system, and K is the camera intrinsic matrix;
4) designing a PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the controlled output, and hence the feedback signal, is the abscissa u_5 of the fifth re-projected point. The incremental angular-velocity PID algorithm is
$$ \Delta\omega[k] = K_{\omega p}\{e_{pix}[k] - e_{pix}[k-1]\} + K_{\omega i}\,e_{pix}[k] + K_{\omega d}\{e_{pix}[k] - 2e_{pix}[k-1] + e_{pix}[k-2]\} \tag{9} $$
where K_{ωp} is the proportional gain, K_{ωi} the integral gain, K_{ωd} the derivative gain, and e_{pix}[k] the pixel error signal at time k;
the reference input of the linear-velocity PID controller is the depth value 500 mm; the controlled output, and hence the feedback signal, is the distance Z_{cam} from the camera to the target plate. The incremental linear-velocity PID algorithm is
$$ \Delta v[k] = K_{vp}\{e_d[k] - e_d[k-1]\} + K_{vi}\,e_d[k] + K_{vd}\{e_d[k] - 2e_d[k-1] + e_d[k-2]\} \tag{10} $$
where K_{vp} is the proportional gain, K_{vi} the integral gain, K_{vd} the derivative gain, and e_d[k] the depth-distance error signal at time k.
Further, in step 2), the characteristic region is tracked as follows:
2.1: initialization: initializing a camera and starting the camera, manually or automatically selecting a tracking area with the number of pixel points larger than 10, and setting basic parameters of a tracking algorithm;
2.2: the iteration starts: and (3) taking a target area when the h frame is taken under a complex background, uniformly generating a plurality of points, tracking the points to the h +1 frame by adopting a Lucas-Kanade tracker, and obtaining the predicted positions of the points of the h frame by back tracking, wherein a deviation formula is calculated as follows:
wherein, Δ XhIs the Euclidean distance, XhIn order to be the initial position of the test,predicting position for back tracking, Δ XhAs one of the conditions for screening tracking points,. DELTA.XhLeave < 10, otherwise delete;
2.3: normalized cross-correlation: describing the correlation degree of the two targets by combining a normalized cross correlation method and deleting points with low correlation degree, wherein the algorithm is as follows:
wherein f (u, v) is a pixel value,is the pixel mean, g (x, y) is the template pixel value,the method comprises the following steps of taking a template pixel mean value, taking n as a tracking point number, taking NCC as correlation, reserving points when the NCC is larger and the correlation degree is higher, and otherwise, deleting points, and solving a translation scale median value and a scaling scale median value by using the tracking points left after deletion to obtain a new characteristic region;
2.4: generating positive and negative samples: to improve the recognition accuracy, online learning uses a nearest neighbor classifier to generate positive and negative samples:
positive nearest neighbor similarity:
negative nearest neighbor similarity:
relative similarity:
wherein, S (p)i,pj) Is (p)i,pj) Similarity of image elements, N is normalized correlation coefficient, M is target area, relative similarity SrThe larger the similarity, the higher the similarity, and the positive sample with the relative similarity greater than 0.2 and the negative sample with the relative similarity less than 0.2 are set;
2.5: Iterative update: let h = h + 1 and jump to step 2.2.
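The sample-labeling rule of step 2.4 can be sketched as follows; the patch-similarity scores are illustrative numbers standing in for the normalized-correlation values an actual TLD-style tracker would compute:

```python
# Relative similarity S_r = S+ / (S+ + S-), where S+ and S- are the
# positive and negative nearest-neighbor similarities of step 2.4.
def relative_similarity(pos_scores, neg_scores):
    s_plus = max(pos_scores)   # similarity to nearest positive sample
    s_minus = max(neg_scores)  # similarity to nearest negative sample
    return s_plus / (s_plus + s_minus)

# Illustrative similarity scores against stored positive/negative patches.
sr = relative_similarity([0.9, 0.6], [0.3, 0.1])
print(sr)  # -> 0.75

# The patent's 0.2 threshold decides the label of the candidate patch.
label = "positive" if sr > 0.2 else "negative"
print(label)  # -> positive
```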
Claims (2)
1. A mobile robot following control method based on adaptive pose estimation is characterized by comprising the following steps:
1) establishing a vision-based mobile robot model: define x and y as the normalized horizontal and vertical image coordinates of the camera and z_c as the coordinate of the camera on the z-axis; the velocity vector of the mobile robot in the camera coordinate system is [v_c, ω_c]^T, where v_c and ω_c are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in the camera coordinate system; the velocity vector of the mobile robot in its own coordinate system is [v_r, ω_r]^T, where v_r and ω_r are respectively the z-axis velocity and the x-z-plane angular velocity of the mobile robot in its own coordinate system; the vision-based kinematic model of the mobile robot then follows as equation (1):
2) tracking the characteristic region and extracting feature points: track the characteristic region, then extract it by marking the blue region as 255 and all other regions as 0 in the HSV color space to binarize the image; optimize the binarized image with dilation, erosion, and filtering to obtain the white connected regions marked 255, and compute the four centroids of the connected regions, which serve as the four feature points;
define the centroids of the four connected regions as (u_i, v_i), i = 1, …, 4; the centroid of a connected region is computed as
$$ u_1 = \frac{\sum_{(u,v)\in\Omega} u\,f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)}, \qquad v_1 = \frac{\sum_{(u,v)\in\Omega} v\,f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)} \tag{2} $$
where f(u, v) is the pixel value and Ω is the connected region; equation (2) yields (u_1, v_1), and the other three centroids (u_2, v_2), (u_3, v_3), (u_4, v_4) are computed in the same way;
the pixel coordinates are converted to image coordinates as
$$ x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy \tag{3} $$
where dx and dy are the physical length of a pixel in the x and y directions and (u_0, v_0) is the number of horizontal and vertical pixels between the pixel coordinate of the image center and that of the image origin; equation (3) converts (u_1, v_1) into the image coordinate (x_1, y_1), and the image coordinates (x_2, y_2), (x_3, y_3), (x_4, y_4) of the other three points are computed in the same way;
the image coordinates are converted to camera coordinates through the projection relation
$$ x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \tag{4} $$
where f is the focal length; equation (4) converts (x_1, y_1) into the camera-frame coordinate (X_{c1}, Y_{c1}, Z_{c1}), and the camera-frame coordinates of the other three points are computed in the same way;
3) Pose estimation
step 2) yields the coordinates (X_{ci}, Y_{ci}, Z_{ci}), i = 1, …, 4, of the feature points in the camera coordinate system; the world coordinate system is fixed to the object coordinate system, with the first feature point taken as the origin of the object coordinate system and hence of the world coordinate system; the world coordinates (X_{wi}, Y_{wi}, Z_{wi}) of the four feature points on the target plate can therefore be obtained by direct measurement;
the conversion between the camera coordinate system and the world coordinate system is
$$ \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t \tag{5} $$
where R is the rotation matrix and t is the translation vector; putting the four camera-frame points in correspondence with the four world-frame points in equation (5) solves for the rotation matrix R and the translation vector t;
the rotation angles are recovered from the rotation matrix R = (r_{ij}) as
$$ \theta_x = \operatorname{atan2}(r_{32}, r_{33}), \quad \theta_y = \operatorname{atan2}\!\left(-r_{31}, \sqrt{r_{32}^2 + r_{33}^2}\right), \quad \theta_z = \operatorname{atan2}(r_{21}, r_{11}) \tag{6} $$
where θ_x is the rotation angle of the camera-frame X_c axis relative to the world-frame X_w axis, θ_y that of the Y_c axis relative to the Y_w axis, and θ_z that of the Z_c axis relative to the Z_w axis; these three angles constitute the pose of the camera;
the world coordinates of the camera are computed from the translation vector:
$$ \begin{bmatrix} X_{cam} & Y_{cam} & Z_{cam} \end{bmatrix}^{\mathsf T} = -R^{\mathsf T} t \tag{7} $$
where (X_{cam}, Y_{cam}, Z_{cam}) is the world-coordinate position of the camera; to verify that the pose is correct, the fifth feature point, whose coordinates are known in the world coordinate system, is re-projected into the pixel coordinate system:
$$ Z_{c5} \begin{bmatrix} u_5 \\ v_5 \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_{w5} \\ Y_{w5} \\ Z_{w5} \end{bmatrix} + t \right) \tag{8} $$
where (X_{w5}, Y_{w5}, Z_{w5}) is the world coordinate of the fifth feature point, (u_5, v_5) is the re-projected pixel coordinate, Z_{c5} is the depth of the fifth feature point in the camera coordinate system, and K is the camera intrinsic matrix;
4) designing a PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the controlled output, and hence the feedback signal, is the abscissa u_5 of the fifth re-projected point. The incremental angular-velocity PID algorithm is
$$ \Delta\omega[k] = K_{\omega p}\{e_{pix}[k] - e_{pix}[k-1]\} + K_{\omega i}\,e_{pix}[k] + K_{\omega d}\{e_{pix}[k] - 2e_{pix}[k-1] + e_{pix}[k-2]\} \tag{9} $$
where K_{ωp} is the proportional gain, K_{ωi} the integral gain, K_{ωd} the derivative gain, and e_{pix}[k] the pixel error signal at time k;
the reference input of the linear-velocity PID controller is the depth value 500 mm; the controlled output, and hence the feedback signal, is the distance Z_{cam} from the camera to the target plate. The incremental linear-velocity PID algorithm is
$$ \Delta v[k] = K_{vp}\{e_d[k] - e_d[k-1]\} + K_{vi}\,e_d[k] + K_{vd}\{e_d[k] - 2e_d[k-1] + e_d[k-2]\} \tag{10} $$
where K_{vp} is the proportional gain, K_{vi} the integral gain, K_{vd} the derivative gain, and e_d[k] the depth-distance error signal at time k.
2. The mobile robot following control method based on adaptive pose estimation as claimed in claim 1, wherein: in the step 2), the step of tracking the characteristic region is as follows:
2.1: initialization: initializing a camera and starting the camera, manually or automatically selecting a tracking area with the number of pixel points larger than 10, and setting basic parameters of a tracking algorithm;
2.2: the iteration starts: and (3) taking a target area when the h frame is taken under a complex background, uniformly generating a plurality of points, tracking the points to the h +1 frame by adopting a Lucas-Kanade tracker, and obtaining the predicted positions of the points of the h frame by back tracking, wherein a deviation formula is calculated as follows:
wherein, Δ XhIs the Euclidean distance, XhIn order to be the initial position of the test,predicting position for back tracking, Δ XhAs one of the conditions for screening tracking points,. DELTA.XhLeave < 10, otherwise delete;
2.3: normalized cross-correlation: describing the correlation degree of the two targets by combining a normalized cross correlation method and deleting points with low correlation degree, wherein the algorithm is as follows:
wherein f (u, v) is a pixel value,is the pixel mean, g (x, y) is the template pixel value,the method comprises the following steps of taking a template pixel mean value, taking n as a tracking point number, taking NCC as correlation, reserving points when the NCC is larger and the correlation degree is higher, and otherwise, deleting points, and solving a translation scale median value and a scaling scale median value by using the tracking points left after deletion to obtain a new characteristic region;
2.4: Generating positive and negative samples: to improve recognition accuracy, online learning uses a nearest neighbor classifier to generate positive and negative samples:

positive nearest neighbor similarity: S⁺(p, M) = max over p_i⁺ in M of S(p, p_i⁺)

negative nearest neighbor similarity: S⁻(p, M) = max over p_i⁻ in M of S(p, p_i⁻)

relative similarity: S_r = S⁺ / (S⁺ + S⁻)

wherein S(p_i, p_j) = 0.5(N(p_i, p_j) + 1) is the similarity of the image patches (p_i, p_j), N is the normalized correlation coefficient, and M is the target region. The larger the relative similarity S_r, the higher the similarity; samples with relative similarity greater than 0.2 are set as positive samples, and those not greater than 0.2 as negative samples;
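The nearest-neighbor classifier of step 2.4 can be sketched as below. The mapping S = 0.5·(N + 1), which takes the normalized correlation coefficient N from [-1, 1] into a similarity in [0, 1], is an assumption borrowed from TLD-style trackers, since the patent text does not print the formulas; all names are illustrative:

```python
def patch_similarity(n_corr):
    """Assumed TLD-style mapping S = 0.5*(N + 1) of the normalized
    correlation coefficient N in [-1, 1] to a similarity in [0, 1]."""
    return 0.5 * (n_corr + 1.0)

def relative_similarity(pos_corrs, neg_corrs):
    """S_r = S+ / (S+ + S-), where S+ (S-) is the candidate patch's highest
    similarity to the positive (negative) samples stored in the target model.
    pos_corrs / neg_corrs are lists of N values against those samples."""
    s_plus = max(patch_similarity(n) for n in pos_corrs)
    s_minus = max(patch_similarity(n) for n in neg_corrs)
    return s_plus / (s_plus + s_minus)

# Per the claim, a candidate with relative similarity above 0.2 would be
# labelled a positive sample, otherwise a negative sample.
```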
2.5: Iterative update: let h = h + 1 and jump to step 2.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810795013.0A CN109102525B (en) | 2018-07-19 | 2018-07-19 | Mobile robot following control method based on self-adaptive posture estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109102525A CN109102525A (en) | 2018-12-28 |
CN109102525B true CN109102525B (en) | 2021-06-18 |
Family
ID=64846893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810795013.0A Active CN109102525B (en) | 2018-07-19 | 2018-07-19 | Mobile robot following control method based on self-adaptive posture estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102525B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109760050A (en) * | 2019-01-12 | 2019-05-17 | 鲁班嫡系机器人(深圳)有限公司 | Robot behavior training method, device, system, storage medium and equipment |
CN110470298B (en) * | 2019-07-04 | 2021-02-26 | 浙江工业大学 | Robot vision servo pose estimation method based on rolling time domain |
CN110490908B (en) * | 2019-08-26 | 2021-09-21 | 北京华捷艾米科技有限公司 | Pose tracking method and device for small object in dynamic scene |
CN110728715B (en) * | 2019-09-06 | 2023-04-25 | 南京工程学院 | Intelligent inspection robot camera angle self-adaptive adjustment method |
CN111267095B (en) * | 2020-01-14 | 2022-03-01 | 大连理工大学 | Mechanical arm grabbing control method based on binocular vision |
CN111552292B (en) * | 2020-05-09 | 2023-11-10 | 沈阳建筑大学 | Vision-based mobile robot path generation and dynamic target tracking method |
CN112184765B (en) * | 2020-09-18 | 2022-08-23 | 西北工业大学 | Autonomous tracking method for underwater vehicle |
CN113297997B (en) * | 2021-05-31 | 2022-08-02 | 合肥工业大学 | 6-freedom face tracking method and device of non-contact physiological detection robot |
CN113379850B (en) * | 2021-06-30 | 2024-01-30 | 深圳银星智能集团股份有限公司 | Mobile robot control method, device, mobile robot and storage medium |
CN114162127B (en) * | 2021-12-28 | 2023-06-27 | 华南农业大学 | Paddy field unmanned agricultural machinery path tracking control method based on machine pose estimation |
CN117097918B (en) * | 2023-10-19 | 2024-01-09 | 奥视(天津)科技有限公司 | Live broadcast display device and control method thereof |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104732518A (en) * | 2015-01-19 | 2015-06-24 | 北京工业大学 | PTAM improvement method based on ground characteristics of intelligent robot |
CN104732518B (en) * | 2015-01-19 | 2017-09-01 | 北京工业大学 | A kind of PTAM improved methods based on intelligent robot terrain surface specifications |
CN105488780A (en) * | 2015-03-25 | 2016-04-13 | 遨博(北京)智能科技有限公司 | Monocular vision ranging tracking device used for industrial production line, and tracking method thereof |
CN104881044A (en) * | 2015-06-11 | 2015-09-02 | 北京理工大学 | Adaptive tracking control method of multi-mobile-robot system under condition of attitude unknown |
CN205375196U (en) * | 2016-03-01 | 2016-07-06 | 河北工业大学 | A robot control of group device for wind -powered electricity generation field is patrolled and examined |
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information |
Also Published As
Publication number | Publication date |
---|---|
CN109102525A (en) | 2018-12-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||