CN109102525A - Mobile robot following control method based on adaptive pose estimation - Google Patents

Mobile robot following control method based on adaptive pose estimation

Info

Publication number
CN109102525A
Authority
CN
China
Prior art keywords
coordinate
camera
pixel
follows
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810795013.0A
Other languages
Chinese (zh)
Other versions
CN109102525B (en)
Inventor
俞立
陈旭
吴锦辉
刘安东
仇翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201810795013.0A
Publication of CN109102525A
Application granted
Publication of CN109102525B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Abstract

A mobile robot following control method based on adaptive pose estimation, comprising the following steps: 1) establish the robot kinematic model; 2) track the feature region; 3) perform target tracking by autonomous online learning; 4) extract the feature region, optimize it by dilation, erosion and filtering, and extract and adaptively match the feature points; 5) estimate the pose from the matched feature points; 6) design the PID visual servo following controller. The invention provides an adaptive pose estimation PID visual following control method for mobile robots that works under complex backgrounds where feature points cannot be tracked or go missing.

Description

Mobile robot following control method based on adaptive pose estimation
Technical field
The present invention relates to vision-based target-tracking and following control systems for mobile robots, and more particularly to a mobile robot following control method with adaptive pose estimation under input constraints.
Background technique
With the development of science and technology and of control technology, computer vision has found extensive use in many fields. Rich visual information and abundant processing methods have made vision-based mobile robot control widely applied in scientific research, military, industrial and logistics settings. The pose of a robot, as one of the basic problems in robot motion control, has long received extensive attention. Research on vision-based target-following servo control for mobile robots not only enriches the theory of mobile robot motion control, but also meets the ever-increasing demands on motion control technology in many fields, and is therefore of great theoretical and engineering significance. In addition, introducing visual information extends the capabilities of mobile robots and can effectively meet the needs of human-computer interaction.
In real environments, however, and especially against complex backgrounds, visual information inevitably suffers from lighting variations and from various disturbances such as jitter during motion, which poses new challenges for vision-based mobile robot control.
A mobile robot following control method based on adaptive pose estimation combines a pose estimation system with a PID parameter control system and designs a controller that makes the whole system rapidly and asymptotically stable. Compared with other control methods, an adaptive pose estimation method with online-learning target tracking allows the robot to track feature points stably even when moving under a complex background, and can handle uncertainties such as untrackable or missing feature points; in recent years it has received broad attention in the field of mobile robot visual servo control. Zhu Jianzhang et al. used target tracking with variable-weight real-time compression under a co-training framework ("Several studies on real-time visual target tracking in complex scenes"); Ni Hongyin et al. used single-target long-term tracking with autonomous selective learning ("Research on video-based human detection and target tracking methods"); Wang Jiali et al. used robot target tracking with online multi-instance learning of stereo vision ("Robot target tracking based on an improved online multi-instance learning algorithm"). None of these results, however, uses monocular vision with online autonomous learning of target feature points and pose estimation to design a PID servo tracking controller for a mobile robot. Moreover, in practical applications both gyroscopes and stereo vision cameras face practical limitations in acquiring the pose; research on adaptive real-time pose estimation for monocular visual target tracking is therefore necessary.
Summary of the invention
To overcome the inability of the prior art to solve pose estimation in a monocular-camera visual servo control system for a mobile robot, the present invention provides a mobile robot following control method based on adaptive pose estimation. Through robot kinematic modeling and pixel coordinate conversion, the method autonomously learns to track the object and extracts feature points under a complex background, performs adaptive matching for pose estimation, and gives the design of an incremental PID motion controller for the mobile robot based on the pose estimation result.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A mobile robot following control method based on adaptive pose estimation, the method comprising the following steps:
1) Establish the vision-based mobile robot model. Define x and y as the normalized horizontal and vertical camera coordinates and z_c as the camera coordinate along the z axis. The velocity vector of the camera in the camera coordinate system is (v_c, ω_c), where v_c and ω_c are respectively the linear velocity along the z axis and the angular velocity in the x-z plane of the mobile robot; the velocity vector of the robot in the robot coordinate system is (v_r, ω_r), where v_r and ω_r are respectively the reference linear velocity along the z axis and the reference angular velocity in the x-z plane of the mobile robot. The vision-based kinematic model of the mobile robot is then given by equation (1).
2) Extract feature points from the tracked feature region. Within the tracked feature region, extract the feature area: in the HSV color space, label the blue region 255 and all other regions 0 to binarize the image, and optimize the binary image by dilation, erosion and filtering. Obtain the white connected regions labeled 255 and compute the centroids of the four connected regions, i.e., the four feature points.
Define the centroids of the four connected regions as (u_1, v_1), (u_2, v_2), (u_3, v_3), (u_4, v_4). The centroid of a connected region is computed as
u_1 = Σ_{(u,v)∈Ω} u·f(u,v) / Σ_{(u,v)∈Ω} f(u,v),  v_1 = Σ_{(u,v)∈Ω} v·f(u,v) / Σ_{(u,v)∈Ω} f(u,v)   (2)
where f(u, v) is the pixel value and Ω is the connected region. Equation (2) gives (u_1, v_1); the other three centroids (u_2, v_2), (u_3, v_3), (u_4, v_4) are computed in the same way.
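As an illustration of this step, the following sketch binarizes the blue region, cleans the mask by dilation, erosion and filtering, and computes the connected-region centroids of equation (2). It is Python with OpenCV; the HSV bounds for blue and the kernel sizes are assumptions, not values from the patent.

import cv2
import numpy as np

def extract_feature_centroids(bgr_image):
    # Label the blue region 255 and everything else 0 (HSV bounds are assumed values).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 80, 50]), np.array([130, 255, 255]))
    # Optimize the binary image by dilation, erosion and filtering.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel)
    mask = cv2.erode(mask, kernel)
    mask = cv2.medianBlur(mask, 5)
    # Centroids of the white connected regions, equation (2): u = sum(u*f)/sum(f).
    count, labels = cv2.connectedComponents(mask)
    centroids = []
    for label in range(1, count):             # label 0 is the background
        vs, us = np.nonzero(labels == label)  # pixels of region Omega
        centroids.append((us.mean(), vs.mean()))
    return sorted(centroids)                  # the four feature points (u1,v1)..(u4,v4)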
The conversion from pixel coordinates to image coordinates is
x = (u - u_0)·dx,  y = (v - v_0)·dy   (3)
where dx and dy are the physical length of one pixel in the x and y directions, and u_0, v_0 are the horizontal and vertical pixel offsets between the image-center pixel coordinate and the image-origin pixel coordinate. Equation (3) converts the pixel coordinate (u_1, v_1) into the coordinate (x_1, y_1) in the image coordinate system; the coordinates of the other three points in the image coordinate system are computed in the same way.
The conversion from image coordinates to camera coordinates is given by equation (4), where f is the focal length. Equation (4) converts the image coordinate (x_1, y_1) into the coordinate (x_c1, y_c1, z_c1) in the camera coordinate system; the coordinates of the other three points in the camera coordinate system are computed in the same way.
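A minimal sketch of equations (3) and (4) in Python. Equation (4) appears only as an image in the source, so the pinhole back-projection form with an externally supplied depth z_c is an assumption:

def pixel_to_image(u, v, dx, dy, u0, v0):
    # Equation (3): shift by the center offset and scale by the pixel size.
    return (u - u0) * dx, (v - v0) * dy

def image_to_camera(x, y, f, z_c):
    # Assumed form of equation (4): pinhole back-projection with known depth z_c.
    return x * z_c / f, y * z_c / f, z_c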
3) Pose estimation
Step 2) gives the coordinates (x_c1, y_c1, z_c1), ..., (x_c4, y_c4, z_c4) of the feature points in the camera coordinate system. The world coordinate system is established on the object coordinate system, with the first feature point as the origin of the object coordinate system, which is also the origin of the world coordinate system. The world coordinates (x_w1, y_w1, z_w1), ..., (x_w4, y_w4, z_w4) of the four feature points on the target board can therefore be obtained by direct measurement.
The transformation between the camera coordinate system and the world coordinate system is
P_c = R·P_w + t   (5)
where R is the 3×3 rotation matrix and t is the translation vector. Using equation (5), the four points in the camera coordinate system are put in correspondence with the four points in the world coordinate system to solve for the rotation matrix R and the translation vector t.
The rotation angles are solved from the rotation matrix by equation (6), where θ_x is the rotation angle of the camera coordinate axis X_c relative to the world coordinate axis X_w, θ_y is the rotation angle of Y_c relative to Y_w, and θ_z is the rotation angle of Z_c relative to Z_w; together these angles give the attitude of the camera.
The world coordinates of the camera are computed from the translation by equation (7), where (x_w, y_w, z_w) is the world-coordinate position of the camera. To verify the pose, the fifth point is re-projected from the world coordinate system into the pixel coordinate system; the re-projection is computed as
z_c5·[u_5, v_5, 1]^T = K·(R·[x_w5, y_w5, z_w5]^T + t)   (8)
where (x_w5, y_w5, z_w5) is the world coordinate of the fifth feature point, (u_5, v_5) is the pixel coordinate after re-projection, z_c5 is the depth of the fifth feature point transformed into the camera coordinate system, and K is the camera intrinsic matrix.
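A sketch of step 3) using OpenCV's solvePnP, which the description later names as the pose solver. The zero-distortion assumption, the ZYX angle decomposition and the helper layout are mine:

import cv2
import numpy as np

def estimate_pose(world_pts, pixel_pts, K, fifth_world_pt):
    # Solve the R and t of equation (5) from the four point correspondences.
    ok, rvec, t = cv2.solvePnP(np.asarray(world_pts, np.float64),
                               np.asarray(pixel_pts, np.float64),
                               K, None)          # None: lens distortion neglected
    R, _ = cv2.Rodrigues(rvec)
    # Equation (6): rotation angles from R (standard ZYX decomposition assumed).
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    # Equation (7): camera position in the world frame, -R^T t.
    cam_world = (-R.T @ t).ravel()
    # Equation (8): re-project the fifth point to verify the pose.
    p_c = R @ np.asarray(fifth_world_pt, np.float64).reshape(3, 1) + t
    uv = (K @ p_c / p_c[2, 0]).ravel()
    return (theta_x, theta_y, theta_z), cam_world, (uv[0], uv[1])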
4) Design the PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the output is the abscissa u_5 of the fifth re-projected point, which is also the feedback signal. The incremental PID algorithm for the angular velocity is
Δω[k] = K_ωp{e_pix[k] - e_pix[k-1]} + K_ωi·e_pix[k] + K_ωd{e_pix[k] - 2e_pix[k-1] + e_pix[k-2]}   (9)
where K_ωp, K_ωi and K_ωd are the proportional, integral and derivative coefficients of the angular-velocity PID controller, and e_pix[k] is the pixel error signal at time k.
The reference input of the linear-velocity PID controller is the depth value 500 mm; the output is the distance d from the camera to the target board, which is also the feedback signal. The incremental PID algorithm for the linear velocity is
Δv[k] = K_vp{e_d[k] - e_d[k-1]} + K_vi·e_d[k] + K_vd{e_d[k] - 2e_d[k-1] + e_d[k-2]}   (10)
where K_vp, K_vi and K_vd are the proportional, integral and derivative coefficients of the linear-velocity PID controller, and e_d[k] is the depth-distance error signal at time k.
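A sketch of the incremental PID law shared by equations (9) and (10) in Python; the gains below are placeholders to be tuned, not values from the patent:

class IncrementalPID:
    # delta_u[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]

    def step(self, error):
        delta = (self.kp * (error - self.e1)
                 + self.ki * error
                 + self.kd * (error - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return delta

# Equation (9): reference column 320, feedback u5. Equation (10): reference 500 mm, feedback d.
omega_pid = IncrementalPID(kp=0.002, ki=0.0004, kd=0.001)   # assumed gains
v_pid = IncrementalPID(kp=0.8, ki=0.1, kd=0.2)              # assumed gains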
Further, in step 2), the feature-region tracking procedure is as follows:
2.1: Initialization: initialize and start the camera, manually or automatically select a tracking region of more than 10 pixels, and set the basic parameters of the tracking algorithm;
2.2: Start of iteration: take the target region in frame h under the complex background and generate points uniformly within it; track these points to frame h+1 with a Lucas-Kanade tracker, then track them backward to obtain the predicted positions of these points in frame h. The deviation is computed as
ΔX_h = ||X_h - X'_h||
where ΔX_h is the Euclidean distance, X_h is the initial position, and X'_h is the back-tracked predicted position. ΔX_h serves as one condition for screening tracked points: a point with ΔX_h < 10 is kept, otherwise it is deleted;
2.3: Normalized cross-correlation: normalized cross-correlation (NCC) describes the degree of correlation between the two targets, and weakly correlated points are deleted. The NCC is computed as
NCC = (1/(n-1)) Σ (f(u,v) - μ_f)(g(x,y) - μ_g) / (σ_f·σ_g)
where f(u, v) is the pixel value, μ_f is the pixel mean, g(x, y) is the template pixel value, μ_g is the template pixel mean, σ_f and σ_g are the corresponding standard deviations, and n is the number of tracked points. The larger the NCC, the higher the degree of correlation; a point with NCC > 0 is kept, otherwise it is deleted. The median translation and the median scale of the remaining tracked points give the new feature region;
2.4: Generate positive and negative samples: to improve recognition accuracy, online learning generates positive and negative samples using a nearest-neighbor classifier:
Positive nearest-neighbor similarity: S+(p, M) = max_{p_i+ ∈ M} S(p, p_i+)
Negative nearest-neighbor similarity: S-(p, M) = max_{p_i- ∈ M} S(p, p_i-)
Relative similarity: S_r = S+ / (S+ + S-)
where S(p_i, p_j) is the similarity of the image patches (p_i, p_j), N is the normalized correlation coefficient, M is the target region, and the larger the relative similarity S_r, the higher the similarity. A sample with relative similarity greater than 0.2 is set as a positive sample, and one below 0.2 as a negative sample;
2.5: Iteration update: let h = h + 1 and jump to 2.2.
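Steps 2.2 and 2.3 can be sketched with OpenCV's pyramidal Lucas-Kanade tracker as follows. The ΔX_h < 10 screening follows the text; the grid density and the omission of the scale update are simplifications of mine:

import cv2
import numpy as np

def track_region(prev_gray, next_gray, box, grid=10):
    # 2.2: generate points uniformly in the target region of frame h.
    x0, y0, w, h = box
    xs, ys = np.meshgrid(np.linspace(x0, x0 + w, grid), np.linspace(y0, y0 + h, grid))
    pts = np.float32(np.dstack([xs, ys]).reshape(-1, 1, 2))
    # Track to frame h+1, then back to frame h (forward-backward check).
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, fwd, None)
    fb_err = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)       # deviation dX_h
    keep = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 10.0)  # dX_h < 10
    p0 = pts.reshape(-1, 2)[keep]
    p1 = fwd.reshape(-1, 2)[keep]
    # 2.3 would further delete points whose patch NCC between frames is <= 0.
    # The median translation of the survivors shifts the region (median scale omitted here).
    shift = np.median(p1 - p0, axis=0)
    return (x0 + shift[0], y0 + shift[1], w, h), p0, p1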
The technical concept of the invention is as follows: first, establish the mobile robot kinematic model and the pixel coordinate conversion; then, based on this model, formulate the mobile robot following control problem with target-tracking adaptive pose estimation; adaptively match feature points on the tracked target and use solvePnP for pose estimation; finally, with an incremental PID algorithm, combine the pose feedback information and the re-projection information to design a PID controller that achieves real-time visual servo following control of the robot.
The beneficial effects of the present invention are mainly: the target is tracked through an online-learning autonomous tracking scheme, so the object remains easy to track under a complex background; target tracking is not lost and feature points are obtained accurately under a complex background, which effectively solves the problem of feature points that cannot be tracked or go missing; the target object region is segmented and the four feature points are adaptively matched, and the pose is estimated online in real time, providing the mobile robot with effective distance and angle information; and the design parameters of the incremental PID controller are given, which effectively solves the problem that the robot cannot follow with fast asymptotic stability.
Detailed description of the invention
Fig. 1 is a schematic diagram of the coordinate systems of the mobile robot camera model.
Fig. 2 is a flow chart of the mobile robot following control method based on adaptive pose estimation.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Figures 1 and 2, a mobile robot following control method based on adaptive pose estimation comprises the following steps:
1) Establish the vision-based mobile robot model. Define x and y as the normalized horizontal and vertical camera coordinates and z_c as the camera coordinate along the z axis. The velocity vector of the camera in the camera coordinate system is (v_c, ω_c), where v_c and ω_c are respectively the linear velocity along the z axis and the angular velocity in the x-z plane of the mobile robot; the velocity vector of the robot in the robot coordinate system is (v_r, ω_r), where v_r and ω_r are respectively the reference linear velocity along the z axis and the reference angular velocity in the x-z plane of the mobile robot. The vision-based kinematic model of the mobile robot is then given by equation (1).
2) Extract feature points from the tracked feature region. Within the tracked feature region, extract the feature area: in the HSV color space, label the blue region 255 and all other regions 0 to binarize the image, and optimize the binary image by dilation, erosion and filtering. Obtain the white connected regions labeled 255 and compute the centroids of the four connected regions, i.e., the four feature points.
Define the centroids of the four connected regions as (u_1, v_1), (u_2, v_2), (u_3, v_3), (u_4, v_4). The centroid of a connected region is computed as
u_1 = Σ_{(u,v)∈Ω_1} u·f(u,v) / Σ_{(u,v)∈Ω_1} f(u,v),  v_1 = Σ_{(u,v)∈Ω_1} v·f(u,v) / Σ_{(u,v)∈Ω_1} f(u,v)   (2)
where f(u, v) is the pixel value and Ω_1 is the first connected region; equation (2) gives (u_1, v_1). The other three centroids are computed in the same way: (u_2, v_2) over the second connected region Ω_2, (u_3, v_3) over the third connected region Ω_3, and (u_4, v_4) over the fourth connected region Ω_4.
The conversion from pixel coordinates to image coordinates is
x = (u - u_0)·dx,  y = (v - v_0)·dy   (3)
where dx and dy are the physical length of one pixel in the x and y directions, and u_0, v_0 are the horizontal and vertical pixel offsets between the image-center pixel coordinate and the image-origin pixel coordinate. Equation (3) converts the pixel coordinate (u_1, v_1) into the image coordinate (x_1, y_1); the image coordinates (x_2, y_2), (x_3, y_3) and (x_4, y_4) of the other three points are computed in the same way.
The conversion from image coordinates to camera coordinates is given by equation (4), where f is the focal length. Equation (4) converts the image coordinate (x_1, y_1) into the camera coordinate (x_c1, y_c1, z_c1); the camera coordinates (x_c2, y_c2, z_c2), (x_c3, y_c3, z_c3) and (x_c4, y_c4, z_c4) of the other three points are computed in the same way.
3) Pose estimation
Step 2) gives the coordinates (x_c1, y_c1, z_c1), ..., (x_c4, y_c4, z_c4) of the feature points in the camera coordinate system. The world coordinate system is established on the object coordinate system, with the first feature point as the origin of the object coordinate system, which is also the origin of the world coordinate system. The world coordinates (x_w1, y_w1, z_w1), ..., (x_w4, y_w4, z_w4) of the four feature points on the target board can therefore be obtained by direct measurement.
The transformation between the camera coordinate system and the world coordinate system is
P_c = R·P_w + t   (5)
where R is the 3×3 rotation matrix and t is the translation vector. Using equation (5), the four points in the camera coordinate system are put in correspondence with the four points in the world coordinate system to solve for the rotation matrix R and the translation vector t.
The rotation angles are solved from the rotation matrix by equation (6), where θ_x is the rotation angle of the camera coordinate axis X_c relative to the world coordinate axis X_w, θ_y is the rotation angle of Y_c relative to Y_w, and θ_z is the rotation angle of Z_c relative to Z_w; together these angles give the attitude of the camera.
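Equation (6) appears only as an image in the source. Assuming the common composition order R = R_z(θ_z) R_y(θ_y) R_x(θ_x) with entries r_ij (an assumption about the patent's convention), a standard decomposition consistent with the angle definitions above is:

\theta_x = \operatorname{atan2}(r_{32},\, r_{33}), \qquad
\theta_y = \operatorname{atan2}\!\left(-r_{31},\, \sqrt{r_{32}^{2} + r_{33}^{2}}\right), \qquad
\theta_z = \operatorname{atan2}(r_{21},\, r_{11})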
The world coordinates of the camera are computed from the translation by equation (7), where (x_w, y_w, z_w) is the world-coordinate position of the camera. To verify the pose, the fifth point is re-projected from the world coordinate system into the pixel coordinate system; the re-projection is computed as
z_c5·[u_5, v_5, 1]^T = K·(R·[x_w5, y_w5, z_w5]^T + t)   (8)
where (x_w5, y_w5, z_w5) is the world coordinate of the fifth feature point, (u_5, v_5) is the pixel coordinate after re-projection, z_c5 is the depth of the fifth feature point transformed into the camera coordinate system, and K is the camera intrinsic matrix.
4) Design the PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the output is the abscissa u_5 of the fifth re-projected point, which is also the feedback signal. The incremental PID algorithm for the angular velocity is
Δω[k] = K_ωp{e_pix[k] - e_pix[k-1]} + K_ωi·e_pix[k] + K_ωd{e_pix[k] - 2e_pix[k-1] + e_pix[k-2]}   (9)
where K_ωp, K_ωi and K_ωd are the proportional, integral and derivative coefficients of the angular-velocity PID controller, and e_pix[k] is the pixel error signal at time k.
The reference input of the linear-velocity PID controller is the depth value 500 mm; the output is the distance d from the camera to the target board, which is also the feedback signal. The incremental PID algorithm for the linear velocity is
Δv[k] = K_vp{e_d[k] - e_d[k-1]} + K_vi·e_d[k] + K_vd{e_d[k] - 2e_d[k-1] + e_d[k-2]}   (10)
where K_vp, K_vi and K_vd are the proportional, integral and derivative coefficients of the linear-velocity PID controller, and e_d[k] is the depth-distance error signal at time k.
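Combining steps 3) and 4), one servo cycle of the following controller might look like the sketch below. It is Python; get_frame, send_cmd, WORLD_PTS, FIFTH_WORLD_PT and K are hypothetical placeholders, the other helpers come from the earlier sketches, and the error sign conventions are assumptions:

import numpy as np

omega, v = 0.0, 0.0
while True:
    frame = get_frame()                         # hypothetical camera read
    pts = extract_feature_centroids(frame)      # step 2): the four feature points
    if len(pts) < 4:
        continue                                # feature points missing in this frame
    angles, cam_world, (u5, v5) = estimate_pose(WORLD_PTS, pts, K, FIFTH_WORLD_PT)
    d = float(np.linalg.norm(cam_world))        # camera-to-target-board distance
    omega += omega_pid.step(320.0 - u5)         # equation (9): keep the target centered
    v += v_pid.step(d - 500.0)                  # equation (10): hold the 500 mm depth
    send_cmd(v, omega)                          # hypothetical velocity command to the robot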
Further, in step 2), the feature-region tracking procedure is as follows:
2.1: Initialization: initialize and start the camera, manually or automatically select a tracking region of more than 10 pixels, and set the basic parameters of the tracking algorithm;
2.2: Start of iteration: take the target region in frame h under the complex background and generate points uniformly within it; track these points to frame h+1 with a Lucas-Kanade tracker, then track them backward to obtain the predicted positions of these points in frame h. The deviation is computed as
ΔX_h = ||X_h - X'_h||
where ΔX_h is the Euclidean distance, X_h is the initial position, and X'_h is the back-tracked predicted position. ΔX_h serves as one condition for screening tracked points: a point with ΔX_h < 10 is kept, otherwise it is deleted;
2.3: Normalized cross-correlation: normalized cross-correlation (NCC) describes the degree of correlation between the two targets, and weakly correlated points are deleted. The NCC is computed as
NCC = (1/(n-1)) Σ (f(u,v) - μ_f)(g(x,y) - μ_g) / (σ_f·σ_g)
where f(u, v) is the pixel value, μ_f is the pixel mean, g(x, y) is the template pixel value, μ_g is the template pixel mean, σ_f and σ_g are the corresponding standard deviations, and n is the number of tracked points. The larger the NCC, the higher the degree of correlation; a point with NCC > 0 is kept, otherwise it is deleted. The median translation and the median scale of the remaining tracked points give the new feature region;
2.4: Generate positive and negative samples: to improve recognition accuracy, online learning generates positive and negative samples using a nearest-neighbor classifier:
Positive nearest-neighbor similarity: S+(p, M) = max_{p_i+ ∈ M} S(p, p_i+)
Negative nearest-neighbor similarity: S-(p, M) = max_{p_i- ∈ M} S(p, p_i-)
Relative similarity: S_r = S+ / (S+ + S-)
where S(p_i, p_j) is the similarity of the image patches (p_i, p_j), N is the normalized correlation coefficient, M is the target region, and the larger the relative similarity S_r, the higher the similarity. A sample with relative similarity greater than 0.2 is set as a positive sample, and one below 0.2 as a negative sample;
2.5: Iteration update: let h = h + 1 and jump to 2.2.
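A sketch of the nearest-neighbor sample generation of step 2.4 in Python; the NCC-based patch similarity follows the text, while the mapping to [0, 1] and the helper names are mine:

import numpy as np

def ncc_similarity(p, q):
    # Normalized cross-correlation between two patches, mapped to [0, 1].
    a = (p - p.mean()) / (p.std() + 1e-9)
    b = (q - q.mean()) / (q.std() + 1e-9)
    return 0.5 * (float((a * b).mean()) + 1.0)

def relative_similarity(patch, pos_patches, neg_patches):
    s_pos = max(ncc_similarity(patch, p) for p in pos_patches)   # positive NN similarity
    s_neg = max(ncc_similarity(patch, q) for q in neg_patches)   # negative NN similarity
    return s_pos / (s_pos + s_neg + 1e-9)

# A sample with relative similarity above 0.2 is labeled positive, otherwise negative.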

Claims (2)

1. A mobile robot following control method based on adaptive pose estimation, characterized in that the method comprises the following steps:
1) Establish the vision-based mobile robot model. Define x and y as the normalized horizontal and vertical camera coordinates and z_c as the camera coordinate along the z axis. The velocity vector of the camera in the camera coordinate system is (v_c, ω_c), where v_c and ω_c are respectively the linear velocity along the z axis and the angular velocity in the x-z plane of the mobile robot; the velocity vector of the robot in the robot coordinate system is (v_r, ω_r), where v_r and ω_r are respectively the reference linear velocity along the z axis and the reference angular velocity in the x-z plane of the mobile robot; the vision-based kinematic model of the mobile robot is then given by equation (1);
2) Extract feature points from the tracked feature region. Within the tracked feature region, extract the feature area: in the HSV color space, label the blue region 255 and all other regions 0 to binarize the image, and optimize the binary image by dilation, erosion and filtering; obtain the white connected regions labeled 255 and compute the centroids of the four connected regions, i.e., the four feature points;
Define the centroids of the four connected regions as (u_1, v_1), (u_2, v_2), (u_3, v_3), (u_4, v_4); the centroid of a connected region is computed as
u_1 = Σ_{(u,v)∈Ω} u·f(u,v) / Σ_{(u,v)∈Ω} f(u,v),  v_1 = Σ_{(u,v)∈Ω} v·f(u,v) / Σ_{(u,v)∈Ω} f(u,v)   (2)
where f(u, v) is the pixel value and Ω is the connected region; equation (2) gives (u_1, v_1), and the other three centroids (u_2, v_2), (u_3, v_3), (u_4, v_4) are computed in the same way;
The conversion from pixel coordinates to image coordinates is
x = (u - u_0)·dx,  y = (v - v_0)·dy   (3)
where dx and dy are the physical length of one pixel in the x and y directions, and u_0, v_0 are the horizontal and vertical pixel offsets between the image-center pixel coordinate and the image-origin pixel coordinate; equation (3) converts the pixel coordinate (u_1, v_1) into the coordinate (x_1, y_1) in the image coordinate system, and the coordinates of the other three points in the image coordinate system are computed in the same way; the conversion from image coordinates to camera coordinates is given by equation (4), where f is the focal length; equation (4) converts the image coordinate (x_1, y_1) into the coordinate (x_c1, y_c1, z_c1) in the camera coordinate system, and the coordinates of the other three points in the camera coordinate system are computed in the same way;
3) Pose estimation
Step 2) gives the coordinates (x_c1, y_c1, z_c1), ..., (x_c4, y_c4, z_c4) of the feature points in the camera coordinate system; the world coordinate system is established on the object coordinate system, with the first feature point as the origin of the object coordinate system, which is also the origin of the world coordinate system; the world coordinates (x_w1, y_w1, z_w1), ..., (x_w4, y_w4, z_w4) of the four feature points on the target board can therefore be obtained by direct measurement;
The transformation between the camera coordinate system and the world coordinate system is
P_c = R·P_w + t   (5)
where R is the 3×3 rotation matrix and t is the translation vector; using equation (5), the four points in the camera coordinate system are put in correspondence with the four points in the world coordinate system to solve for the rotation matrix R and the translation vector t;
The rotation angles are solved from the rotation matrix by equation (6), where θ_x is the rotation angle of the camera coordinate axis X_c relative to the world coordinate axis X_w, θ_y is the rotation angle of Y_c relative to Y_w, and θ_z is the rotation angle of Z_c relative to Z_w, i.e., the attitude of the camera;
The world coordinates of the camera are computed from the translation by equation (7), where (x_w, y_w, z_w) is the world-coordinate position of the camera; to verify the pose, the fifth point is re-projected from the world coordinate system into the pixel coordinate system, the re-projection being computed as
z_c5·[u_5, v_5, 1]^T = K·(R·[x_w5, y_w5, z_w5]^T + t)   (8)
where (x_w5, y_w5, z_w5) is the world coordinate of the fifth feature point, (u_5, v_5) is the pixel coordinate after re-projection, z_c5 is the depth of the fifth feature point transformed into the camera coordinate system, and K is the camera intrinsic matrix;
4) Design the PID controller
The reference input of the angular-velocity PID controller is the pixel abscissa value 320; the output is the abscissa u_5 of the fifth re-projected point, which is also the feedback signal; the incremental PID algorithm for the angular velocity is
Δω[k] = K_ωp{e_pix[k] - e_pix[k-1]} + K_ωi·e_pix[k] + K_ωd{e_pix[k] - 2e_pix[k-1] + e_pix[k-2]}   (9)
where K_ωp, K_ωi and K_ωd are the proportional, integral and derivative coefficients of the angular-velocity PID controller, and e_pix[k] is the pixel error signal at time k;
The reference input of the linear-velocity PID controller is the depth value 500 mm; the output is the distance d from the camera to the target board, which is also the feedback signal; the incremental PID algorithm for the linear velocity is
Δv[k] = K_vp{e_d[k] - e_d[k-1]} + K_vi·e_d[k] + K_vd{e_d[k] - 2e_d[k-1] + e_d[k-2]}   (10)
where K_vp, K_vi and K_vd are the proportional, integral and derivative coefficients of the linear-velocity PID controller, and e_d[k] is the depth-distance error signal at time k.
2. The PID mobile robot following control method based on adaptive pose estimation according to claim 1, characterized in that in step 2) the feature-region tracking procedure is as follows:
2.1: Initialization: initialize and start the camera, manually or automatically select a tracking region of more than 10 pixels, and set the basic parameters of the tracking algorithm;
2.2: Start of iteration: take the target region in frame h under the complex background and generate points uniformly within it; track these points to frame h+1 with a Lucas-Kanade tracker, then track them backward to obtain the predicted positions of these points in frame h; the deviation is computed as
ΔX_h = ||X_h - X'_h||
where ΔX_h is the Euclidean distance, X_h is the initial position, and X'_h is the back-tracked predicted position; ΔX_h serves as one condition for screening tracked points: a point with ΔX_h < 10 is kept, otherwise it is deleted;
2.3: Normalized cross-correlation: normalized cross-correlation (NCC) describes the degree of correlation between the two targets, and weakly correlated points are deleted; the NCC is computed as
NCC = (1/(n-1)) Σ (f(u,v) - μ_f)(g(x,y) - μ_g) / (σ_f·σ_g)
where f(u, v) is the pixel value, μ_f is the pixel mean, g(x, y) is the template pixel value, μ_g is the template pixel mean, σ_f and σ_g are the corresponding standard deviations, and n is the number of tracked points; the larger the NCC, the higher the degree of correlation; a point with NCC > 0 is kept, otherwise it is deleted; the median translation and the median scale of the remaining tracked points give the new feature region;
2.4: Generate positive and negative samples: to improve recognition accuracy, online learning generates positive and negative samples using a nearest-neighbor classifier:
Positive nearest-neighbor similarity: S+(p, M) = max_{p_i+ ∈ M} S(p, p_i+)
Negative nearest-neighbor similarity: S-(p, M) = max_{p_i- ∈ M} S(p, p_i-)
Relative similarity: S_r = S+ / (S+ + S-)
where S(p_i, p_j) is the similarity of the image patches (p_i, p_j), N is the normalized correlation coefficient, M is the target region, and the larger the relative similarity S_r, the higher the similarity; a sample with relative similarity greater than 0.2 is set as a positive sample, and one below 0.2 as a negative sample;
2.5: Iteration update: let h = h + 1 and jump to 2.2.
CN201810795013.0A 2018-07-19 2018-07-19 Mobile robot following control method based on self-adaptive posture estimation Active CN109102525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810795013.0A CN109102525B (en) 2018-07-19 2018-07-19 Mobile robot following control method based on self-adaptive posture estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810795013.0A CN109102525B (en) 2018-07-19 2018-07-19 Mobile robot following control method based on self-adaptive posture estimation

Publications (2)

Publication Number Publication Date
CN109102525A true CN109102525A (en) 2018-12-28
CN109102525B CN109102525B (en) 2021-06-18

Family

ID=64846893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810795013.0A Active CN109102525B (en) 2018-07-19 2018-07-19 Mobile robot following control method based on self-adaptive posture estimation

Country Status (1)

Country Link
CN (1) CN109102525B (en)



Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN104732518A (en) * 2015-01-19 2015-06-24 北京工业大学 PTAM improvement method based on ground characteristics of intelligent robot
CN104732518B (en) * 2015-01-19 2017-09-01 北京工业大学 A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN105488780A (en) * 2015-03-25 2016-04-13 遨博(北京)智能科技有限公司 Monocular vision ranging tracking device used for industrial production line, and tracking method thereof
CN104881044A (en) * 2015-06-11 2015-09-02 北京理工大学 Adaptive tracking control method of multi-mobile-robot system under condition of attitude unknown
CN205375196U (en) * 2016-03-01 2016-07-06 河北工业大学 A robot control of group device for wind -powered electricity generation field is patrolled and examined
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information

Cited By (19)

Publication number Priority date Publication date Assignee Title
CN110293560A (en) * 2019-01-12 2019-10-01 鲁班嫡系机器人(深圳)有限公司 Robot behavior training, planing method, device, system, storage medium and equipment
CN110470298A (en) * 2019-07-04 2019-11-19 浙江工业大学 A kind of Robot Visual Servoing position and orientation estimation method based on rolling time horizon
CN110470298B (en) * 2019-07-04 2021-02-26 浙江工业大学 Robot vision servo pose estimation method based on rolling time domain
CN110490908A (en) * 2019-08-26 2019-11-22 北京华捷艾米科技有限公司 The pose method for tracing and device of wisp under a kind of dynamic scene
CN110490908B (en) * 2019-08-26 2021-09-21 北京华捷艾米科技有限公司 Pose tracking method and device for small object in dynamic scene
CN110728715A (en) * 2019-09-06 2020-01-24 南京工程学院 Camera angle self-adaptive adjusting method of intelligent inspection robot
CN111267095B (en) * 2020-01-14 2022-03-01 大连理工大学 Mechanical arm grabbing control method based on binocular vision
CN111267095A (en) * 2020-01-14 2020-06-12 大连理工大学 Mechanical arm grabbing control method based on binocular vision
CN111552292A (en) * 2020-05-09 2020-08-18 沈阳建筑大学 Vision-based mobile robot path generation and dynamic target tracking method
CN111552292B (en) * 2020-05-09 2023-11-10 沈阳建筑大学 Vision-based mobile robot path generation and dynamic target tracking method
CN112184765A (en) * 2020-09-18 2021-01-05 西北工业大学 Autonomous tracking method of underwater vehicle based on vision
CN113297997A (en) * 2021-05-31 2021-08-24 合肥工业大学 6-freedom face tracking method and device of non-contact physiological detection robot
CN113379850A (en) * 2021-06-30 2021-09-10 深圳市银星智能科技股份有限公司 Mobile robot control method, mobile robot control device, mobile robot, and storage medium
CN113379850B (en) * 2021-06-30 2024-01-30 深圳银星智能集团股份有限公司 Mobile robot control method, device, mobile robot and storage medium
CN114162127A (en) * 2021-12-28 2022-03-11 华南农业大学 Paddy field unmanned agricultural machine path tracking control method based on machine tool pose estimation
CN114162127B (en) * 2021-12-28 2023-06-27 华南农业大学 Paddy field unmanned agricultural machinery path tracking control method based on machine pose estimation
CN115435790A (en) * 2022-09-06 2022-12-06 视辰信息科技(上海)有限公司 Method and system for fusing visual positioning and visual odometer pose
CN117097918A (en) * 2023-10-19 2023-11-21 奥视(天津)科技有限公司 Live broadcast display device and control method thereof
CN117097918B (en) * 2023-10-19 2024-01-09 奥视(天津)科技有限公司 Live broadcast display device and control method thereof

Also Published As

Publication number Publication date
CN109102525B (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN109102525A (en) A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
Concha et al. Visual-inertial direct SLAM
CN102722697B (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
Fang et al. Homography-based visual servo regulation of mobile robots
Hutchinson et al. A tutorial on visual servo control
CN102638653B (en) Automatic face tracing method on basis of Kinect
CN109074083A (en) Control method for movement, mobile robot and computer storage medium
CN106055091A (en) Hand posture estimation method based on depth information and calibration method
CN107958479A (en) A kind of mobile terminal 3D faces augmented reality implementation method
CN104463859B (en) A kind of real-time video joining method based on tracking specified point
CN110163963B (en) Mapping device and mapping method based on SLAM
CN106371442B (en) A kind of mobile robot control method based on the transformation of tensor product model
CN110675453B (en) Self-positioning method for moving target in known scene
López-Nicolás et al. Switching visual control based on epipoles for mobile robots
WO2013125876A1 (en) Method and device for head tracking and computer-readable recording medium
Ma et al. Crlf: Automatic calibration and refinement based on line feature for lidar and camera in road scenes
CN117218210A (en) Binocular active vision semi-dense depth estimation method based on bionic eyes
CN104950893A (en) Homography matrix based visual servo control method for shortest path
Dang et al. Path-analysis-based reinforcement learning algorithm for imitation filming
Rougeaux et al. Robust tracking by a humanoid vision system
Kurmankhojayev et al. Monocular pose capture with a depth camera using a Sums-of-Gaussians body model
Gans et al. Visual servoing to an arbitrary pose with respect to an object given a single known length
Huang et al. MC-VEO: A Visual-Event Odometry With Accurate 6-DoF Motion Compensation
CN109542094A (en) Mobile robot visual point stabilization without desired image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant