CN103454919B - Control method of a motion control system for a mobile robot in an intelligent space - Google Patents


Info

Publication number
CN103454919B
CN103454919B CN201310361519.8A
Authority
CN
China
Prior art keywords
mobile robot
course
identification network
lateral distance
drift angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310361519.8A
Other languages
Chinese (zh)
Other versions
CN103454919A (en)
Inventor
袁明新
申燚
江亚峰
赵荣
孙小肖
刘萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN201310361519.8A priority Critical patent/CN103454919B/en
Publication of CN103454919A publication Critical patent/CN103454919A/en
Application granted granted Critical
Publication of CN103454919B publication Critical patent/CN103454919B/en


Abstract

The invention discloses a motion control system and method for a mobile robot in an intelligent space. The motion control system consists of the intelligent space and the mobile robot, where the intelligent space comprises a monitoring host, a distributed vision system, and a Zigbee-based wireless sensor network system. The method first acquires the pose information of the mobile robot; then obtains the control deviation e of the mobile robot; and finally performs multi-objective self-adjusting PID motion control of the mobile robot based on RBF identification networks. Compared with existing mobile robot control systems based on onboard video, the invention has the advantages of a small computational load, good real-time performance, and more accurate motion control.

Description

Control method of a motion control system for a mobile robot in an intelligent space
Technical Field
The present invention relates to the construction of a motion control system for a mobile robot in an intelligent space, and more particularly to a motion control method for the mobile robot.
Background technology
The intelligent space technique for mobile robots distributes perception devices and actuators at relevant positions in the robot's space so as to give the robot complete perception of the people and objects in that space, thereby helping it navigate more quickly, accurately, and stably in uncertain environments. The motion control of a mobile robot means that, according to the environmental information and its own motion state that the robot can detect, it moves quickly and accurately to a predetermined target point along a path to be tracked by controlling its own drive mechanism. Motion control of a mobile robot in an intelligent space therefore first uses the global vision system of the intelligent space to obtain the pose information of the mobile robot, and then drives the robot's actuators through a control method, thereby realizing path tracking by the mobile robot.
Motion control is one of the key technologies in the autonomous navigation of mobile robots. A mobile robot based on two-wheel differential drive is a typical nonholonomically constrained system and is comparatively difficult to control. Commonly used motion control methods at present are sliding-mode control and backstepping control. Wu Qingyun et al., based on the kinematic model of a mobile robot with nonholonomic constraints and combining torque control with backstepping design, proposed a fast terminal sliding-mode controller [1] that achieves global fast trajectory tracking of the robot, but sliding-mode control suffers from the unavoidable "chattering" problem. He Naibao et al., assuming unknown uncertain system model parameters, combined recursive backstepping with robust control techniques and, through multi-step recursive design, derived an output feedback controller and parameter-adaptive control laws [2]. The shortcoming of backstepping control is that the design process is complex and it requires the robot to provide sufficiently large accelerations, which is impossible in practice. Chinese Patent No. 201010013646.5 discloses "a motion control method of a two-wheel differential robot". That control method takes the geometric centre of the robot as the origin, establishes a body coordinate system xoy within the world coordinate system XOY, and performs the robot's motion control through turning control and positioning control. Turning motion control is realized with a polynomial in the difference between the robot's current heading angle and the target heading angle, while positioning motion control is based on the distance between the current position and the target position and is realized with a polynomial whose variables are the robot heading and the angle of the line connecting the robot and the target. This control strategy has the following disadvantages:
(1) although the motion of the robot involves position and direction simultaneously, this patent controls turning and positioning separately, which is unfavourable to continuity in the robot's motion;
(2) turning and positioning control are realized with polynomials, but the motion of a mobile robot is a nonholonomically constrained system, and it is difficult to build accurate turning and positioning control models of the robot with finite polynomials;
(3) even if polynomials are used to realize the robot's motion control, how the polynomial degrees and coefficients are chosen is critical, but the patent does not mention this;
(4) how to obtain the robot position and the tracked target position is the key to realizing the control strategy, but the patent does not mention this either.
Chinese Patent No. CN201010240067.4 discloses "a control method for robot target tracking based on a spiking neural network". The method takes the acquired environmental information as the network input, takes the robot's left and right motors as the network output, and uses the powerful nonlinear mapping of a neural network to realize control of the whole robot. The shortcomings of this control strategy are:
(1) treating the whole robot control as a black-box problem and realizing nonlinear modelling with a neural network places high demands on the number of layers and the number of nodes per layer of the neural network model, and the patent does not mention how these are chosen;
(2) using a neural network for nonlinear modelling of robot motion control requires fairly accurate network weights and thresholds, which in turn requires a large number of learning samples; complete learning samples are difficult to obtain for a kinematic system, and the patent does not mention how the sample data are obtained;
(3) although this control strategy uses a vision sensor to obtain environmental information, it adopts an onboard camera rather than studying robot motion control under a global vision system. Compared with a global vision system, an onboard camera has a larger data-processing load and can hardly meet the robot's real-time requirements; moreover, an onboard camera obtains only local environmental information, so the robot cannot realize global path tracking and the motion control accuracy is not high.
In short, motion control of mobile robots is a technical difficulty in the robotics field, and so far there have been few designs of motion control systems and methods for mobile robots based on the sensor devices in an intelligent space. In addition, the motion control of a mobile robot is a multi-objective control problem, and the robot constantly adjusts its own attitude during motion because of obstacle avoidance; it is therefore particularly important to design a control method that can perform multi-objective control and adjust itself according to environmental changes.
[1] Wu Qingyun, Yan Maode, He Yuyao. Fast terminal sliding-mode trajectory tracking control of mobile robots [J]. Systems Engineering and Electronics, 2007, 29(12): 2127-2130.
[2] He Naibao, Jiang Changsheng, Gao Qian. Backstepping-based robust adaptive control for a class of uncertain nonlinear systems [J]. Journal of Harbin Institute of Technology, 2009, 41(5): 169-171.
Summary of the invention
The object of the present invention is to solve the motion control of a mobile robot in an intelligent space by constructing a motion control system for the mobile robot, and thereby to provide a motion control method for the mobile robot.
The motion control system for a mobile robot in an intelligent space of the present invention consists of the intelligent space and the mobile robot, where the intelligent space comprises a monitoring host, a distributed vision system, and a Zigbee-based wireless sensor network system. The distributed vision system is composed of multiple CCD cameras (3), each vertically mounted on the indoor ceiling by a gimbal mount, the multiple CCD cameras (3) being connected by video lines to a multi-channel image capture card inserted in a PCI slot of the monitoring host. The Zigbee-based wireless sensor network system consists of a blind node and a Zigbee gateway; the blind node is mounted on the microcontroller of the mobile robot, and the Zigbee gateway is connected to the monitoring host through an RS232 serial port.
The blind node uses a CC2431 chip with a hardware positioning engine.
The control method of the mobile robot motion control system in the intelligent space first acquires the pose information of the mobile robot; then obtains the control deviation e of the mobile robot; and finally performs the multi-objective self-adjusting PID motion control of the mobile robot based on RBF identification networks.
The acquisition of the pose information of the mobile robot adopts a vision method and comprises locating the position and the heading angle of the mobile robot;
The position location method for the mobile robot adopts the following steps:
(1) use a CCD camera to collect a colour image containing the mobile robot;
(2) based on the Euclidean distance of colour pixel vectors, and in combination with a background image, perform threshold segmentation on the colour image obtained in step (1) to obtain a difference binary image;
(3) use an opening operation to denoise the binary image to obtain an accurate binary image containing the mobile robot;
(4) scan the binary image containing the mobile robot line by line; when a line segment in the current row is adjacent to a line segment in the previous row, merge them into a connected region; otherwise, initialize a new connected region;
(5) from the pixel coordinates of each connected region in step (4), obtain the position coordinates of each mobile robot.
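Steps (4)-(5) amount to connected-component labeling of the binary image followed by centroid extraction. A minimal pure-Python sketch under that reading (the function name and the toy image are illustrative, not from the patent; the real pipeline works on CCD frames after background subtraction and opening):

```python
def label_regions(binary):
    """Two-pass connected-component labeling (4-connectivity) returning centroids."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}          # union-find over provisional region labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    nxt = 1
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = find(up)
                parent[find(left)] = find(up)   # current segment touches the previous row: merge
            elif up or left:
                labels[y][x] = find(up or left)
            else:
                parent[nxt] = nxt               # otherwise initialize a new connected region
                labels[y][x] = nxt
                nxt += 1

    centroids = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                r = find(labels[y][x])
                sx, sy, n = centroids.get(r, (0, 0, 0))
                centroids[r] = (sx + x, sy + y, n + 1)
    return [(sx / n, sy / n) for sx, sy, n in centroids.values()]

# Toy binary image with two robots (two connected regions).
img = [[0, 1, 1, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 0, 0, 0, 1]]
print(sorted(label_regions(img)))  # [(1.5, 0.5), (4.0, 2.0)]
```

Each centroid is taken as one robot's position coordinate in pixel space.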
The location method for the heading angle of the mobile robot adopts the following steps:
1) use a CCD camera to collect a colour image of the mobile robot on which a T-shaped direction-and-identification colour block is pasted;
2) convert the colour image of the mobile robot from the RGB colour space to the HSI colour space;
3) according to preset H and S thresholds, perform image segmentation on the T-shaped colour block of the mobile robot;
4) use opening and closing operations to smooth the segmented image;
5) perform a linear fit on the T-shaped identification image to obtain the slope of the identification colour block, convert it to an angle, and finally determine the final heading angle of the mobile robot according to the direction colour block.
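Step 5) fits a line to the segmented T-block pixels and converts the slope to an angle. A sketch of just that fit, assuming the pixel coordinates have already been extracted (the point list is made up, and resolving the 180° ambiguity with the direction block is omitted):

```python
import math

def heading_from_points(pts):
    """Least-squares line fit to labeled T-block pixels; returns the slope as an angle in degrees."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)        # variance along x
    sxy = sum((x - mx) * (y - my) for x, y in pts)  # covariance of x and y
    return math.degrees(math.atan2(sxy, sxx))       # slope sxy/sxx as an angle

pts = [(0, 0), (1, 1), (2, 2), (3, 3)]  # pixels along a 45-degree stroke
print(heading_from_points(pts))          # ≈ 45.0
```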
The acquisition of the mobile robot control deviation e comprises obtaining the lateral distance e_d and the heading angle deviation e_θ of the mobile robot. The lateral distance e_d is the perpendicular distance d from the centre coordinate P_c of the current mobile robot to the tangent line at the reference robot centre point P_r on the path to be tracked; the heading angle deviation e_θ is the angle difference θ between the current heading angle θ_c of the current mobile robot and the tangent direction θ_r at the reference robot centre point P_r on the path to be tracked.
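Given these definitions, both deviations follow from the current pose (P_c, θ_c) and the reference point (P_r, θ_r). A minimal sketch; the function name and the sign convention chosen for e_d are this sketch's assumptions, not fixed by the patent:

```python
import math

def control_deviation(pc, theta_c, pr, theta_r):
    """e_d: signed perpendicular distance from the robot centre pc to the tangent
    line at reference point pr (tangent direction theta_r);
    e_theta: heading difference theta_c - theta_r, wrapped to (-pi, pi]."""
    dx, dy = pc[0] - pr[0], pc[1] - pr[1]
    # cross product of the tangent unit vector with (pc - pr)
    e_d = -dx * math.sin(theta_r) + dy * math.cos(theta_r)
    e_theta = math.atan2(math.sin(theta_c - theta_r), math.cos(theta_c - theta_r))
    return e_d, e_theta

# Robot sitting on the tangent line, heading along it: both deviations vanish.
e_d, e_th = control_deviation((1.0, 2.0), math.pi / 2, (1.0, 0.0), math.pi / 2)
print(e_d, e_th)
```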
The method for the multi-objective self-adjusting PID motion control of the mobile robot based on RBF identification networks comprises the following steps:
(1) PID control of the speed adjustment control quantity Δv of the mobile robot, comprising the steps:
A. obtain the lateral distance e_d of the mobile robot;
B. obtain the heading angle deviation e_θ of the mobile robot;
C. establish the following PID control of the speed adjustment control quantity Δv(k) at time k:
Δv(k) = Δv(k−1) + K_p_d(k)(e_d(k) − e_d(k−1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) − 2e_d(k−1) + e_d(k−2)) + K_p_θ(k)(e_θ(k) − e_θ(k−1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) − 2e_θ(k−1) + e_θ(k−2))
where K_p_d(k), K_i_d(k), K_d_d(k) are respectively the proportional, integral, and derivative coefficients of the lateral distance PID controller at time k;
K_p_θ(k), K_i_θ(k), K_d_θ(k) are respectively the proportional, integral, and derivative coefficients of the heading angle deviation PID controller at time k;
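The incremental law for Δv(k) maps directly to a few lines of code, with both channels sharing the same structure. A sketch with made-up gains and error histories (in the patent the gains themselves are adjusted on line by the RBF networks):

```python
def delta_v(dv_prev, ed, eth, gains_d, gains_th):
    """One step of the incremental two-objective PID law for the speed adjustment Δv(k).
    ed, eth: error histories (e(k), e(k-1), e(k-2)) for lateral distance and
    heading deviation; gains_*: (Kp, Ki, Kd) at time k."""
    def incr(e, g):
        kp, ki, kd = g
        return kp * (e[0] - e[1]) + ki * e[0] + kd * (e[0] - 2 * e[1] + e[2])
    return dv_prev + incr(ed, gains_d) + incr(eth, gains_th)

# Illustrative numbers only (not from the patent):
dv = delta_v(0.0, (0.2, 0.1, 0.0), (0.0, 0.0, 0.0),
             (1.0, 0.5, 0.1), (1.0, 0.5, 0.1))
print(dv)  # ≈ 0.2
```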
(2) based on the RBF identification network, self-adjust the PID parameters (K_p_d(k), K_i_d(k), K_d_d(k)) of the lateral distance PID controller at time k, comprising the steps:
a. K_p_d(k) = K_p_d(k−1) + λ_p_d ΔK_p_d(k);
b. K_i_d(k) = K_i_d(k−1) + λ_i_d ΔK_i_d(k);
c. K_d_d(k) = K_d_d(k−1) + λ_d_d ΔK_d_d(k);
where λ_p_d, λ_i_d, λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k), K_d_d(k) and are positive constants; ΔK_p_d(k), ΔK_i_d(k), ΔK_d_d(k) are respectively the on-line tuning values of K_p_d(k), K_i_d(k), K_d_d(k);
d. ΔK_p_d(k) = e_d(k)·J_d·(e_d(k) − e_d(k−1));
e. ΔK_i_d(k) = T_s·e_d(k)·J_d·e_d(k);
f. ΔK_d_d(k) = e_d(k)·J_d·(e_d(k) − 2e_d(k−1) + e_d(k−2))/T_s;
g. obtain the Jacobian sensitivity information of the lateral distance RBF identification network:
J_d ≈ Σ_{j=1}^{6} w_j_d·R_j_d·(c_j_d − u_d(k))/σ_j_d²
where T_s is the sampling period, w_j_d are the weights between the hidden layer and the output layer of the lateral distance RBF identification network, c_j_d are the Gaussian kernel function centres of the lateral distance RBF identification network, and σ_j_d are the width parameters of the Gaussian kernel functions of the lateral distance RBF identification network;
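Steps a.-g. can be collected into one update routine. All numbers below (weights, centres, widths, learning rates, errors) are placeholders for illustration; in the patent they come from the identified RBF network and the running controller:

```python
def jacobian(w, R, c, sigma, u):
    # Step g: J ≈ sum_j w_j * R_j * (c_j - u) / sigma_j^2
    return sum(wj * Rj * (cj - u) / sj ** 2
               for wj, Rj, cj, sj in zip(w, R, c, sigma))

def adjust_gains(gains, lam, e, J, Ts):
    """One self-tuning step (a.-f.) for (Kp, Ki, Kd); e = (e(k), e(k-1), e(k-2))."""
    kp, ki, kd = gains
    lp, li, ld = lam
    kp += lp * e[0] * J * (e[0] - e[1])                  # a. with d.
    ki += li * Ts * e[0] * J * e[0]                      # b. with e.
    kd += ld * e[0] * J * (e[0] - 2 * e[1] + e[2]) / Ts  # c. with f.
    return kp, ki, kd

# Placeholder network state: six hidden nodes, all activations R_j = 1.
J = jacobian([0.5] * 6, [1.0] * 6, [0.0] * 6, [1.0] * 6, 1.0)  # -> -3.0
kp, ki, kd = adjust_gains((1.0, 0.5, 0.1), (0.1, 0.1, 0.1),
                          (0.1, 0.2, 0.3), J, Ts=0.05)
print(J, kp, ki, kd)
```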
(3) based on the RBF identification network, self-adjust the PID parameters (K_p_θ(k), K_i_θ(k), K_d_θ(k)) of the heading angle deviation PID controller at time k, comprising the steps:
a.) K_p_θ(k) = K_p_θ(k−1) + λ_p_θ ΔK_p_θ(k);
b.) K_i_θ(k) = K_i_θ(k−1) + λ_i_θ ΔK_i_θ(k);
c.) K_d_θ(k) = K_d_θ(k−1) + λ_d_θ ΔK_d_θ(k);
where λ_p_θ, λ_i_θ, λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k), K_d_θ(k) and are positive constants; ΔK_p_θ(k), ΔK_i_θ(k), ΔK_d_θ(k) are respectively the on-line tuning values of K_p_θ(k), K_i_θ(k), K_d_θ(k);
d.) ΔK_p_θ(k) = e_θ(k)·J_θ·(e_θ(k) − e_θ(k−1));
e.) ΔK_i_θ(k) = T_s·e_θ(k)·J_θ·e_θ(k);
f.) ΔK_d_θ(k) = e_θ(k)·J_θ·(e_θ(k) − 2e_θ(k−1) + e_θ(k−2))/T_s;
g.) obtain the Jacobian sensitivity information of the heading angle deviation RBF identification network 29:
J_θ ≈ Σ_{j=1}^{6} w_j_θ·R_j_θ·(c_j_θ − u_θ(k))/σ_j_θ²
where w_j_θ are the weights between the hidden layer and the output layer of the heading angle deviation RBF identification network, c_j_θ are the Gaussian kernel function centres of the heading angle deviation RBF identification network, and σ_j_θ are the width parameters of the Gaussian kernel functions of the heading angle deviation RBF identification network;
(4) obtain the speed v and angular velocity ω of the mobile robot, comprising the steps:
(a.) given the basic speed v_0 of the left and right wheels, the left wheel speed is: v_l = v_0 + Δv;
(b.) the right wheel speed is: v_r = v_0 − Δv, where Δv is the speed adjustment control quantity;
(c.) the speed of the mobile robot's centre point is: v = (v_l + v_r)/2;
(d.) the angular velocity of the mobile robot's centre point is: ω = (v_l − v_r)/b_w;
where b_w is the spacing between the left and right wheels of the mobile robot.
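These steps are the standard differential-drive relations. A sketch; the sign convention for ω (left minus right over the wheel spacing) and all numbers are illustrative:

```python
def wheel_speeds(v0, dv, bw):
    """Differential-drive kinematics around base speed v0 with speed
    adjustment dv and wheel spacing bw."""
    vl, vr = v0 + dv, v0 - dv  # left/right wheel speeds
    v = (vl + vr) / 2          # centre-point speed (equals v0)
    omega = (vl - vr) / bw     # centre-point angular velocity = 2*dv/bw
    return vl, vr, v, omega

vl, vr, v, om = wheel_speeds(0.4, 0.05, 0.3)
print(vl, vr, v, om)  # ≈ 0.45, 0.35, 0.4, 0.333
```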
Beneficial Effects
Aimed at the motion control of a mobile robot in an intelligent space, the present invention constructs a motion control system from the distributed vision system, the ZigBee-based wireless sensor network system, and the mobile robot body, and then provides a motion control method for the mobile robot. The motion control system in the designed intelligent space mainly realizes mobile robot motion control under global vision; compared with existing mobile robot control systems based on onboard video, it has the advantages of a small computational load, good real-time performance, and more accurate motion control. In addition, the indoor environment of a mobile robot is uncertain, and the robot often changes direction frequently during motion because of obstacle avoidance, which control laws with fixed preset parameters can hardly accommodate. For this reason, the multi-objective PID motion control based on RBF identification networks in the present invention achieves on-line adaptive adjustment of the PID parameters, so that the mobile robot can self-adjust its controllers according to the actual environment, further improving its motion control accuracy.
Brief Description of the Drawings
Fig. 1: composition of the mobile robot motion control system;
Fig. 2: structural composition of the mobile robot;
Fig. 3: flow of the vision location algorithm for the mobile robot position;
Fig. 4: flow of the vision location algorithm for the mobile robot heading angle;
Fig. 5: schematic diagram of the pose error of the mobile robot;
Fig. 6: structure of the self-adjusting PID controller based on RBF identification networks;
Fig. 7: topology of the lateral distance RBF identification network;
Fig. 8: topology of the heading angle deviation RBF identification network.
In the figures: 1. indoor ceiling; 2. gimbal mount; 3. CCD camera; 4. multi-channel image capture card; 5. monitoring host; 6. Zigbee gateway; 7. mobile robot; 8. blind node; 9. path to be tracked; 10. microcontroller; 11. ultrasonic distance sensor; 12. infrared proximity sensor; 13. electronic compass sensor; 14. CC2431 chip; 15. left-wheel stepper motor driver; 16. right-wheel stepper motor driver; 17. left-wheel stepper motor; 18. right-wheel stepper motor; 19. left wheel; 20. right wheel; 21. left-wheel encoder; 22. right-wheel encoder; 23. current robot; 24. reference robot; 25. comparator; 26. lateral distance PID controller; 27. heading angle deviation PID controller; 28. lateral distance RBF identification network; 29. heading angle deviation RBF identification network; 30. adder; 31. lateral distance RBF identification network input layer; 32. lateral distance RBF identification network hidden layer; 33. lateral distance RBF identification network output layer; 34. heading angle deviation RBF identification network input layer; 35. heading angle deviation RBF identification network hidden layer; 36. heading angle deviation RBF identification network output layer.
Detailed Description of the Embodiments
The motion control system and method for a mobile robot in an intelligent space of the present invention are described in detail below with reference to the accompanying drawings.
The mobile robot motion control system in the intelligent space consists of the intelligent space and the mobile robot.
As shown in Fig. 1, the intelligent space comprises the distributed vision system and the wireless sensor network system based on ZigBee technology. Structure of the distributed vision system: distributed CCD cameras 3 are vertically mounted on the indoor ceiling 1 by gimbal mounts 2; each CCD camera 3 is connected to a multi-channel image capture card 4 by a video line, and the image capture card 4 is installed in a PCI slot of the indoor monitoring host 5. The wireless sensor network system based on ZigBee technology comprises the blind node 8 and the Zigbee gateway 6. The blind node 8, built around a CC2431 chip with a hardware positioning engine, is mounted on the microcontroller 10 of the mobile robot 7. The Zigbee gateway 6 is connected to the monitoring host 5 through an RS232 serial port. The motion control objective of the mobile robot 7 in the intelligent space is to let the mobile robot 7 accurately track the path 9 to be tracked.
As shown in Fig. 2, the structure of the mobile robot in the motion control system is as follows: with the microcontroller 10 as the core, an ultrasonic distance sensor 11 is used for detecting long-range obstacles and is connected to the microcontroller 10; an infrared proximity sensor 12 is used for detecting close-range obstacles and is connected to the microcontroller 10; an electronic compass sensor 13 records the heading of the mobile robot 7 during motion and is connected to the microcontroller 10; a CC2431 chip 14 serves as the blind node in the Zigbee wireless sensor network and realizes the wireless communication between the robot 7 and the monitoring host 5. The microcontroller 10 drives and controls the left-wheel stepper motor 17 through the left-wheel stepper motor driver 15, thereby driving the left wheel 19 of the mobile robot 7; likewise, the microcontroller 10 drives and controls the right-wheel stepper motor 18 through the right-wheel stepper motor driver 16, thereby driving the right wheel 20 of the mobile robot 7. The mobile robot 7 is a differential-drive configuration composed of the left wheel 19 and the right wheel 20. The left-wheel encoder 21, coaxially connected with the left wheel 19, measures the left wheel speed and feeds the measurement back to the microcontroller 10; likewise, the right-wheel encoder 22, coaxially connected with the right wheel 20, measures the right wheel speed and feeds the measurement back to the microcontroller 10.
The motion control method for a mobile robot in an intelligent space of the present invention first acquires the pose information of the mobile robot; then obtains the control deviation e of the mobile robot; and finally performs the multi-objective self-adjusting PID motion control of the mobile robot based on RBF identification networks.
The acquisition of the pose information of the mobile robot 7 adopts a vision method and comprises locating the position and the heading angle of the mobile robot 7;
As shown in Fig. 3, the vision location algorithm for the position of the mobile robot 7 adopts the following steps:
(1) use a CCD camera 3 to collect a colour image containing the mobile robot 7;
(2) based on the Euclidean distance of colour pixel vectors, and in combination with a background image, perform threshold segmentation on the colour image obtained in step (1) to obtain a difference binary image;
(3) use an opening operation to denoise the binary image to obtain an accurate binary image containing the mobile robot;
(4) scan the binary image containing the mobile robot 7 line by line; when a line segment in the current row is adjacent to a line segment in the previous row, merge them into a connected region; otherwise, initialize a new connected region;
(5) from the pixel coordinates of each connected region, obtain the position coordinates of each mobile robot 7.
As shown in Fig. 4, the vision location algorithm for the heading angle of the mobile robot 7 adopts the following steps:
(1) use a CCD camera to collect a colour image of the mobile robot 7 on which a T-shaped direction-and-identification colour block is pasted;
(2) convert the colour image of the mobile robot 7 from the RGB colour space to the HSI colour space;
(3) according to preset H and S thresholds, perform image segmentation on the T-shaped colour block of the mobile robot 7;
(4) use opening and closing operations to smooth the segmented image;
(5) perform a linear fit on the T-shaped identification image to obtain the slope of the identification colour block, convert it to an angle, and finally determine the final heading angle of the mobile robot 7 according to the direction colour block.
The control deviation e of the mobile robot 7 comprises the lateral distance e_d and the heading angle deviation e_θ of the mobile robot 7.
As shown in Fig. 5, the lateral distance e_d is the perpendicular distance d from the centre coordinate P_c of the current mobile robot 23 to the tangent line at the reference robot 24 centre point P_r on the path 9 to be tracked. The heading angle deviation e_θ is the angle difference θ between the current heading angle θ_c of the current mobile robot 23 and the tangent direction θ_r at the reference robot 24 centre point P_r on the path 9 to be tracked.
As shown in Fig. 6, in the multi-objective self-adjusting PID controller structure of the mobile robot based on RBF identification networks, the reference pose P_r and the current pose P_c of the mobile robot 7 pass through the comparator 25 to give the control deviation e comprising the lateral distance e_d and the heading angle deviation e_θ. The lateral distance e_d passes through the lateral distance PID controller 26 to give the lateral distance control value u_d, while the heading angle deviation e_θ passes through the heading angle deviation PID controller 27 to give the heading angle deviation control value u_θ. The lateral distance control value u_d and the heading angle deviation control value u_θ pass through the adder to give the speed adjustment control quantity Δv of the mobile robot 7. The mobile robot 7 moves according to the control quantity Δv and obtains the current centre coordinate P_c. In the lateral distance PID control process, the lateral distance RBF identification network 28 adjusts the PID parameters of the lateral distance PID controller 26 on line according to the output u_d of the lateral distance PID controller 26, the current lateral distance e_d of the mobile robot 7, and the lateral distance at the previous time; in the heading angle deviation PID control process, the heading angle deviation RBF identification network 29 adjusts the PID parameters of the heading angle deviation PID controller 27 on line according to the output u_θ of the heading angle deviation PID controller 27, the current heading angle deviation e_θ of the mobile robot 7, and the heading angle deviation at the previous time. Through the on-line PID parameter adjustment of the lateral distance PID controller 26 and the heading angle deviation PID controller 27, the multi-objective self-adjusting PID motion control of the mobile robot 7 based on RBF identification networks is achieved.
As shown in Fig. 7, the lateral distance RBF identification network 28 is a three-layer network, namely composed of the lateral distance RBF identification network input layer 31, hidden layer 32, and output layer 33. The input layer 31 has three input nodes, corresponding respectively to the output u_d of the lateral distance PID controller 26, the current lateral distance e_d of the mobile robot 7, and the lateral distance at the previous time. The hidden layer 32 has six hidden nodes, whose node function adopts the Gaussian kernel function:
R_j_d(x_d) = exp(−‖x_d − c_j_d‖² / (2σ_j_d²)), j = 1, 2, …, 6
where c_j_d are the Gaussian kernel function centres of the lateral distance RBF identification network 28 and σ_j_d are the width parameters of the Gaussian kernel functions of the lateral distance RBF identification network 28.
The lateral distance RBF identification network output layer 33 has only one node, which is a linear function, namely:
n_d = Σ_{j=1}^{6} w_j_d·R_j_d(x_d)
where w_j_d are the weights between the hidden layer 32 and the output layer 33 of the lateral distance RBF identification network 28.
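The Gaussian hidden layer and linear output node described above can be sketched as a forward pass; the three-element input corresponds to (u_d, the current e_d, the previous e_d), and every numeric parameter below is a placeholder rather than an identified value:

```python
import math

def rbf_output(x, centres, sigmas, weights):
    """3-input / 6-hidden / 1-output RBF network: Gaussian kernels
    R_j = exp(-||x - c_j||^2 / (2*sigma_j^2)) in the hidden layer,
    a single linear node n = sum_j w_j * R_j at the output."""
    R = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
         for c, s in zip(centres, sigmas)]
    return sum(w * r for w, r in zip(weights, R))

# Placeholder parameters: one centre at the input, five far away.
centres = [(0.0, 0.0, 0.0)] + [(10.0, 10.0, 10.0)] * 5
out = rbf_output((0.0, 0.0, 0.0), centres, [1.0] * 6, [2.0] * 6)
print(out)  # ≈ 2.0 (only the matching kernel fires)
```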
As shown in Fig. 8, the heading angle deviation RBF identification network 29 is likewise a three-layer network, namely composed of the heading angle deviation RBF identification network input layer 34, hidden layer 35, and output layer 36. The input layer 34 has three input nodes, corresponding respectively to the output u_θ of the heading angle deviation PID controller 27, the current heading angle deviation e_θ of the mobile robot 7, and the heading angle deviation at the previous time. The hidden layer 35 likewise has six hidden nodes, whose node function likewise adopts the Gaussian kernel function:
R_j_θ(x_θ) = exp(−‖x_θ − c_j_θ‖² / (2σ_j_θ²)), j = 1, 2, …, 6
where c_j_θ are the Gaussian kernel function centres of the heading angle deviation RBF identification network 29 and σ_j_θ are the width parameters of the Gaussian kernel functions of the heading angle deviation RBF identification network 29.
The heading angle deviation RBF identification network output layer 36 likewise has only one node, which is also a linear function, namely:
n_θ = Σ_{j=1}^{6} w_j_θ·R_j_θ(x_θ)
where w_j_θ are the weights between the hidden layer 35 and the output layer 36 of the heading angle deviation RBF identification network 29.
With reference to Fig. 5, Fig. 6, Fig. 7, and Fig. 8, the method flow of the multi-objective self-adjusting PID motion control of the mobile robot 7 based on RBF identification networks mainly comprises the following steps:
(1) PID control of the speed adjustment control quantity Δv of the mobile robot 7, comprising the steps:
A. obtain the lateral distance e_d of the mobile robot 7;
B. obtain the heading angle deviation e_θ of the mobile robot 7;
C. establish the following PID control of the speed adjustment control quantity Δv(k) at time k:
Δv(k) = Δv(k−1) + K_p_d(k)(e_d(k) − e_d(k−1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) − 2e_d(k−1) + e_d(k−2)) + K_p_θ(k)(e_θ(k) − e_θ(k−1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) − 2e_θ(k−1) + e_θ(k−2))
where K_p_d(k), K_i_d(k), K_d_d(k) are respectively the proportional, integral, and derivative coefficients of the lateral distance PID controller 26 at time k;
K_p_θ(k), K_i_θ(k), K_d_θ(k) are respectively the proportional, integral, and derivative coefficients of the heading angle deviation PID controller 27 at time k.
(2) based on the RBF identification network, self-adjust the PID parameters (K_p_d(k), K_i_d(k), K_d_d(k)) of the lateral distance PID controller 26 at time k, comprising the steps:
a. K_p_d(k) = K_p_d(k−1) + λ_p_d ΔK_p_d(k)
b. K_i_d(k) = K_i_d(k−1) + λ_i_d ΔK_i_d(k)
c. K_d_d(k) = K_d_d(k−1) + λ_d_d ΔK_d_d(k)
where λ_p_d, λ_i_d, λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k), K_d_d(k) and are positive constants; ΔK_p_d(k), ΔK_i_d(k), ΔK_d_d(k) are respectively the on-line tuning values of K_p_d(k), K_i_d(k), K_d_d(k).
d. ΔK_p_d(k) = e_d(k)·J_d·(e_d(k) − e_d(k−1))
e. ΔK_i_d(k) = T_s·e_d(k)·J_d·e_d(k)
f. ΔK_d_d(k) = e_d(k)·J_d·(e_d(k) − 2e_d(k−1) + e_d(k−2))/T_s
g. obtain the Jacobian sensitivity information of the lateral distance RBF identification network 28:
J_d ≈ Σ_{j=1}^{6} w_j_d·R_j_d·(c_j_d − u_d(k))/σ_j_d²
where T_s is the sampling period.
(3) based on the RBF identification network, self-adjust the PID parameters (K_p_θ(k), K_i_θ(k), K_d_θ(k)) of the heading angle deviation PID controller 27 at time k, comprising the steps:
a. K_p_θ(k) = K_p_θ(k−1) + λ_p_θ ΔK_p_θ(k)
b. K_i_θ(k) = K_i_θ(k−1) + λ_i_θ ΔK_i_θ(k)
c. K_d_θ(k) = K_d_θ(k−1) + λ_d_θ ΔK_d_θ(k)
where λ_p_θ, λ_i_θ, λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k), K_d_θ(k) and are positive constants; ΔK_p_θ(k), ΔK_i_θ(k), ΔK_d_θ(k) are respectively the on-line tuning values of K_p_θ(k), K_i_θ(k), K_d_θ(k).
d. ΔK_p_θ(k) = e_θ(k)·J_θ·(e_θ(k) − e_θ(k−1))
e. ΔK_i_θ(k) = T_s·e_θ(k)·J_θ·e_θ(k)
f. ΔK_d_θ(k) = e_θ(k)·J_θ·(e_θ(k) − 2e_θ(k−1) + e_θ(k−2))/T_s
g. obtain the Jacobian sensitivity information of the heading angle deviation RBF identification network 29:
J_θ ≈ Σ_{j=1}^{6} w_j_θ·R_j_θ·(c_j_θ − u_θ(k))/σ_j_θ²
(4) Obtain the speed v and angular velocity ω of the central point of mobile robot 7, comprising the steps:
A. given the basic wheel speed v_0 of the left and right wheels, the left wheel speed is: v_L = v_0 + Δv;
B. the right wheel speed is: v_R = v_0 - Δv, where Δv is the speed adjustment quantity;
C. the speed of the central point of mobile robot 7 is: v = (v_L + v_R)/2;
D. the angular velocity of the central point of mobile robot 7 is: ω = (v_L - v_R)/b_w;
In the formula, b_w is the spacing between the left wheel 19 and the right wheel 20 of mobile robot 7.
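Step (4) is the standard differential-drive kinematics; a minimal sketch (names illustrative):

```python
# Differential-drive kinematics of step (4): v0 is the basic wheel speed,
# dv the PID speed adjustment Delta-v, bw the left/right wheel spacing.
def body_velocities(v0, dv, bw):
    vl = v0 + dv           # left wheel speed
    vr = v0 - dv           # right wheel speed
    v = (vl + vr) / 2.0    # central-point speed (equals v0)
    w = (vl - vr) / bw     # angular velocity (equals 2*dv/bw)
    return v, w
```

Note that the adjustment Δv cancels out of the central-point speed, so the controller steers the robot without changing its forward speed.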

Claims (1)

1. A control method of the motion control system of a mobile robot in an intelligent space, wherein the control system on which this control method is based is composed of the intelligent space and the mobile robot (7); the intelligent space comprises a monitoring host computer (5), a distributed vision system and a Zigbee-based wireless sensor network system; the distributed vision system is formed by a plurality of CCD cameras (3) vertically mounted on the indoor ceiling (1) through gimbals (2), each CCD camera (3) being connected by video cable to a multiplexing image grab card (4) inserted in a PCI slot of the monitoring host computer (5); the Zigbee-based wireless sensor network system is composed of a blind node (8) and a Zigbee gateway (6), the blind node (8) being mounted on the microcontroller (10) of the mobile robot (7), and the Zigbee gateway (6) being connected to the monitoring host computer (5) through an RS232 serial port; the control method first obtains the pose information of the mobile robot (7), then obtains the control deviation e of the mobile robot (7), and finally carries out the RBF-identification-network-based multi-objective self-adjusting PID motion control of the mobile robot (7);
It is characterized in that: the pose information of the mobile robot (7) is obtained by a vision method, comprising the location of the position and of the course angle of the mobile robot (7);
The position location method of the mobile robot (7) adopts the following steps:
(1) use a CCD camera (3) to collect a color image containing the mobile robot (7);
(2) based on the Euclidean distance of color pixel vectors and in conjunction with a background image, carry out threshold segmentation on the color image obtained in step (1), thereby obtaining a difference binary image;
(3) use the opening operation to denoise the binary image, thereby obtaining an accurate binary image containing the mobile robot (7);
(4) scan the binary image containing the mobile robot (7) line by line; if a line segment in the currently scanned row is adjacent to a line segment in the previous row, merge them into a connected region; otherwise, initialize a new connected region;
(5) from the pixel coordinates of each connected region of step (4), obtain the position coordinates of each mobile robot (7);
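Steps (2), (4) and (5) above can be sketched in pure Python as follows. This is illustrative only (a full implementation would also union regions touched by two earlier runs, and would perform the opening operation of step (3)); all names are assumptions:

```python
def locate_robots(image, background, thresh):
    """image/background: 2D lists of (r, g, b) tuples.
    Returns the centroid (x, y) of each connected foreground region."""
    h, w = len(image), len(image[0])
    # step (2): difference binary image via Euclidean distance in RGB space
    binary = [[1 if sum((image[y][x][i] - background[y][x][i]) ** 2
                        for i in range(3)) ** 0.5 > thresh else 0
               for x in range(w)] for y in range(h)]
    # step (4): line-by-line scan; merge runs touching a previous-row run
    regions = []   # each region: list of (y, x) pixels
    prev = []      # (x0, x1, region_index) runs of the previous row
    for y in range(h):
        cur, x = [], 0
        while x < w:
            if binary[y][x]:
                x0 = x
                while x < w and binary[y][x]:
                    x += 1
                touching = [ri for (p0, p1, ri) in prev
                            if p0 <= x - 1 and p1 >= x0]
                ri = touching[0] if touching else len(regions)
                if not touching:
                    regions.append([])   # initialize a new connected region
                regions[ri].extend((y, xx) for xx in range(x0, x))
                cur.append((x0, x - 1, ri))
            else:
                x += 1
        prev = cur
    # step (5): centroid of each region as the robot position
    return [(sum(p[1] for p in r) / len(r), sum(p[0] for p in r) / len(r))
            for r in regions if r]
```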
The location method of the course angle of the mobile robot (7) adopts the following steps:
1) use a CCD camera (3) to collect a color image of the mobile robot (7), on which a directional T-shaped color block with a mark is pasted;
2) convert the color image of the mobile robot (7) from the RGB color space to the HSI color space;
3) carry out image segmentation of the T-shaped color block of the mobile robot (7) according to preset H and S thresholds;
4) smooth the segmented image using the opening and closing operations;
5) carry out a linear fit on the smoothed segmentation image to obtain the slope of the T-shaped color block, convert it to an angle, and finally determine the final course angle of the mobile robot (7) according to the mark on the T-shaped color block;
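The line fit of step 5) can be sketched as below, assuming the HSI segmentation and morphological smoothing of steps 2)-4) have already produced the block's pixel set. Names are illustrative:

```python
import math

def heading_from_pixels(pixels):
    """Least-squares line fit over the (x, y) pixels of the segmented
    T-shaped color block; returns the slope as an angle in degrees in
    (-90, 90]. A full implementation would still resolve the 180-degree
    ambiguity using the mark on the T block, as step 5) describes."""
    n = len(pixels)
    mx = sum(p[0] for p in pixels) / n
    my = sum(p[1] for p in pixels) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pixels)  # covariance term
    sxx = sum((p[0] - mx) ** 2 for p in pixels)           # x variance term
    return math.degrees(math.atan2(sxy, sxx))
```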
The acquisition of the control deviation e of the mobile robot (7) comprises the acquisition of the lateral distance e_d and the course drift angle e_θ of the mobile robot (7); the lateral distance e_d is the vertical distance d from the center coordinate P_c of the current mobile robot (23) to the tangent line at the center point P_r of the reference robot (24) on the path to be tracked (9); the course drift angle e_θ is the angle difference θ between the current direction angle θ_c of the current mobile robot (23) and the tangent direction θ_r at the center point P_r of the reference robot (24) on the path to be tracked (9);
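The two deviations defined above can be computed as in the following sketch (a minimal illustration; the signed-distance convention and names are assumptions, not from the patent):

```python
import math

def control_deviations(pc, pr, theta_r, theta_c):
    """pc: center (x, y) of the current robot; pr: reference point (x, y)
    on the path; theta_r: tangent direction at pr; theta_c: current heading.
    Returns (e_d, e_theta)."""
    dx, dy = pc[0] - pr[0], pc[1] - pr[1]
    # e_d: signed perpendicular distance to the tangent line through pr
    e_d = -dx * math.sin(theta_r) + dy * math.cos(theta_r)
    # e_theta: heading difference wrapped to (-pi, pi]
    e_t = math.atan2(math.sin(theta_c - theta_r), math.cos(theta_c - theta_r))
    return e_d, e_t
```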
The RBF-identification-network-based multi-objective self-adjusting PID motion control method of the mobile robot (7) comprises the following steps:
(I) PID control of the speed adjustment quantity Δv of the mobile robot (7), comprising the steps:
A. obtain the lateral distance e_d of the mobile robot (7);
B. obtain the course drift angle e_θ of the mobile robot (7);
C. establish the PID control of the speed adjustment quantity Δv(k) at the following moment k:
Δv(k) = Δv(k-1)
+ K_p_d(k)(e_d(k) - e_d(k-1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) - 2e_d(k-1) + e_d(k-2))
+ K_p_θ(k)(e_θ(k) - e_θ(k-1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) - 2e_θ(k-1) + e_θ(k-2))
In the formula, K_p_d(k), K_i_d(k) and K_d_d(k) are respectively the proportional, integral and differential coefficients of the lateral distance PID controller (26) at moment k; K_p_θ(k), K_i_θ(k) and K_d_θ(k) are respectively the proportional, integral and differential coefficients of the course drift angle PID controller (27) at moment k;
(II) Based on the RBF identification network, self-adjust the PID parameters K_p_d(k), K_i_d(k), K_d_d(k) of the lateral distance PID controller (26) at moment k, comprising the steps:
a. K_p_d(k) = K_p_d(k-1) + λ_p_d·ΔK_p_d(k);
b. K_i_d(k) = K_i_d(k-1) + λ_i_d·ΔK_i_d(k);
c. K_d_d(k) = K_d_d(k-1) + λ_d_d·ΔK_d_d(k);
In the formula, λ_p_d, λ_i_d and λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k) and K_d_d(k), and are positive constants; ΔK_p_d(k), ΔK_i_d(k) and ΔK_d_d(k) are respectively the on-line tuning values of K_p_d(k), K_i_d(k) and K_d_d(k);
d. ΔK_p_d(k) = e_d(k)·J_d·(e_d(k) - e_d(k-1));
e. ΔK_i_d(k) = T_s·e_d(k)·J_d·e_d(k);
f. ΔK_d_d(k) = e_d(k)·J_d·(e_d(k) - 2e_d(k-1) + e_d(k-2))/T_s;
g. obtain the Jacobian sensitivity information of the lateral distance RBF identification network (28):
J_d ≈ Σ_{j=1}^{6} w_j_d·R_j_d·(c_j_d - u_d(k))/σ_j_d²
In the formula, T_s is the sampling period; w_j_d is the weight between the middle layer (32) and the output layer (33) of the lateral distance RBF identification network (28), the middle layer (32) having six hidden nodes; R_j_d is the Gaussian kernel function value of a hidden node of the lateral distance RBF identification network (28); c_j_d is the Gaussian kernel function center of the lateral distance RBF identification network (28); u_d(k) is the output of the lateral distance PID controller (26); σ_j_d is the width parameter of the Gaussian kernel function of the lateral distance RBF identification network (28);
(III) Based on the RBF identification network, self-adjust the PID parameters K_p_θ(k), K_i_θ(k), K_d_θ(k) of the course drift angle PID controller (27) at moment k, comprising the steps:
a) K_p_θ(k) = K_p_θ(k-1) + λ_p_θ·ΔK_p_θ(k);
b) K_i_θ(k) = K_i_θ(k-1) + λ_i_θ·ΔK_i_θ(k);
c) K_d_θ(k) = K_d_θ(k-1) + λ_d_θ·ΔK_d_θ(k);
In the formula, λ_p_θ, λ_i_θ and λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k) and K_d_θ(k), and are positive constants; ΔK_p_θ(k), ΔK_i_θ(k) and ΔK_d_θ(k) are respectively the on-line tuning values of K_p_θ(k), K_i_θ(k) and K_d_θ(k);
d) ΔK_p_θ(k) = e_θ(k)·J_θ·(e_θ(k) - e_θ(k-1));
e) ΔK_i_θ(k) = T_s·e_θ(k)·J_θ·e_θ(k);
f) ΔK_d_θ(k) = e_θ(k)·J_θ·(e_θ(k) - 2e_θ(k-1) + e_θ(k-2))/T_s;
g) obtain the Jacobian sensitivity information of the course drift angle RBF identification network (29):
J_θ ≈ Σ_{j=1}^{6} w_j_θ·R_j_θ·(c_j_θ - u_θ(k))/σ_j_θ²
In the formula, w_j_θ is the weight between the middle layer (35) and the output layer (36) of the course drift angle RBF identification network (29), the middle layer (35) having six hidden nodes; R_j_θ is the Gaussian kernel function value of a hidden node of the course drift angle RBF identification network (29); c_j_θ is the Gaussian kernel function center of the course drift angle RBF identification network (29); u_θ(k) is the output of the course drift angle PID controller (27); σ_j_θ is the width parameter of the Gaussian kernel function of the course drift angle RBF identification network (29);
(IV) Obtain the speed v and angular velocity ω of the mobile robot (7), comprising the steps:
a) given the basic wheel speed v_0 of the left and right wheels, the left wheel speed is: v_L = v_0 + Δv;
b) the right wheel speed is: v_R = v_0 - Δv, where Δv is the speed adjustment quantity;
c) the speed of the central point of the mobile robot (7) is: v = (v_L + v_R)/2;
d) the angular velocity of the central point of the mobile robot (7) is: ω = (v_L - v_R)/b_w;
In the formula, b_w is the spacing between the left wheel (19) and the right wheel (20) of the mobile robot (7).
CN201310361519.8A 2013-08-19 2013-08-19 The control method of the kinetic control system of mobile robot in intelligent space Expired - Fee Related CN103454919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310361519.8A CN103454919B (en) 2013-08-19 2013-08-19 The control method of the kinetic control system of mobile robot in intelligent space


Publications (2)

Publication Number Publication Date
CN103454919A CN103454919A (en) 2013-12-18
CN103454919B true CN103454919B (en) 2016-03-30

Family

ID=49737415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310361519.8A Expired - Fee Related CN103454919B (en) 2013-08-19 2013-08-19 The control method of the kinetic control system of mobile robot in intelligent space

Country Status (1)

Country Link
CN (1) CN103454919B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101791800A (en) * 2010-01-21 2010-08-04 西北工业大学 Motion control method of double-wheel differential type robot
CN102346489A (en) * 2010-07-28 2012-02-08 中国科学院自动化研究所 Pulse neural network based method for controlling object tracking of robot
CN102914303A (en) * 2012-10-11 2013-02-06 江苏科技大学 Navigation information acquisition method and intelligent space system with multiple mobile robots

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4918682B2 (en) * 2005-10-27 2012-04-18 国立大学法人山口大学 Ultrasonic motor control method, ultrasonic motor control apparatus, and program for controlling ultrasonic motor
KR101104544B1 (en) * 2009-10-12 2012-01-11 금오공과대학교 산학협력단 Swarm robot path control system based on network and method thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"RBF Neural Network Identification Adaptive Control of a Two-Degree-of-Freedom Parallel Robot"; Chen Zhenghong et al.; Journal of Wuhan University of Technology (Transportation Science & Engineering); Vol. 32, No. 2; April 2008; pp. 210-213 *
"Neural Network Control of Wheeled Mobile Robots"; Liu Weimiao; China Master's Theses Full-text Database; February 15, 2010; pp. 40-48 *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160330

Termination date: 20180819
