CN103454919A - Motion control system and method of mobile robot in intelligent space - Google Patents
- Publication number: CN103454919A
- Authority
- CN
- China
- Prior art keywords
- mobile robot
- course
- theta
- robot
- identification network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Feedback Control In General (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a motion control system and method for a mobile robot in an intelligent space. The motion control system is composed of the intelligent space and the mobile robot; the intelligent space comprises a monitoring host, a distributed vision system, and a Zigbee-based wireless sensor network system. The motion control method first obtains the pose information of the mobile robot, then obtains the control deviation e of the mobile robot, and finally performs multi-objective self-tuning PID motion control of the mobile robot based on an RBF identification network. Compared with existing mobile robot control systems based on on-board video, the system and method have the advantages of a small amount of computation, good real-time performance, and more accurate motion control.
Description
Technical field
The present invention relates to the construction of a motion control system for a mobile robot in an intelligent space, and more particularly to a motion control method for a mobile robot.
Background art
The intelligent space technique for mobile robots distributes sensing devices and actuators at appropriate positions in the robot's space to give the robot comprehensive perception of the people and objects around it, thereby helping it navigate more quickly, accurately, and stably in uncertain environments. Motion control of a mobile robot means that, according to the environmental information and motion state it detects, the robot controls its own driving mechanism so as to move quickly and accurately along a path to be tracked toward a predetermined target point. Motion control of a mobile robot in an intelligent space therefore first uses the global vision system of the intelligent space to obtain the mobile robot's pose information, and then drives the robot's actuators through a control method, thereby realizing path tracking by the mobile robot.
Motion control is one of the key technologies in autonomous navigation of mobile robots, and the two-wheel differentially driven mobile robot, the most common type, is a typical nonholonomically constrained system that is comparatively difficult to control. Commonly used motion control methods at present are sliding-mode control and backstepping control. Wu Qingyun et al., starting from the kinematic model of a nonholonomic mobile robot and combining torque control with inversion design, proposed a fast terminal sliding-mode controller [1] that achieves fast global trajectory tracking of the robot, but sliding-mode control inevitably suffers from the "chattering" problem. He Naibao et al., assuming uncertain and unknown parameters of the system model and combining backstepping recursion with robust control techniques, designed an output feedback controller and a parameter-adaptive control law through multi-step recursion [2]. The weaknesses of backstepping control are that the design process is complex and that it requires the robot to provide sufficiently large accelerations, which is impossible in practice. Chinese patent No. 201010013646.5 discloses "a motion control method of a double-wheel differential robot". That control method takes the geometric center of the robot as the origin, establishes a body coordinate system xoy within the world coordinate system XOY, and performs motion control of the robot through heading control and position control. The heading control is realized from the robot's current heading and a polynomial in the target direction angle difference, while the position control is based on the distance between the current position and the target position and a polynomial whose variable is the angle between the robot's direction and the line from robot to target. This control strategy has the following disadvantages:
(1) Although the motion of the robot comprises both position and direction, which change simultaneously, this patent controls heading and position separately, which is unfavorable for the continuity of the robot's motion;
(2) The heading and position control are realized with polynomials, but the motion of a mobile robot is a nonholonomically constrained system, and it is difficult to build an accurate heading and position control model of the robot with a finite polynomial;
(3) Even if polynomials are used to realize the motion control of the robot, the choice of polynomial degree and coefficients is crucial, but this patent does not address it;
(4) How the robot position and the tracked target position are obtained is the key to realizing the control strategy, but this patent does not address that either.
Chinese patent No. CN201010240067.4 discloses "a control method for robot target tracking based on a spiking neural network". The method takes the acquired environmental information as the network input and the robot's left and right motors as the network output, and uses the powerful nonlinear mapping of the neural network to realize control of the whole robot. The deficiencies of this control strategy are:
(1) Treating the whole control of the robot as a black-box problem and modeling it nonlinearly with a neural network places high demands on the number of layers of the network model and the number of nodes in each layer, and this patent does not say how to choose them;
(2) Nonlinear modeling of robot motion control with a neural network needs accurate network weights and thresholds, which requires a large number of learning samples; for a motion system, however, complete learning samples are hard to obtain, and this patent does not say how to obtain the sample data either;
(3) Although the control strategy uses a vision sensor to obtain environmental information, it adopts an on-board camera rather than studying robot motion control under a global vision system. Compared with a global vision system, an on-board camera has more data to process, making it difficult to meet the robot's real-time requirements; moreover, an on-board camera obtains only local environmental information, so the robot cannot achieve global path tracking and the motion control accuracy is not high.
In short, motion control of mobile robots is a technical difficulty in the robotics field, and few motion control systems and methods for mobile robots have so far been designed around the sensing devices of an intelligent space. Furthermore, motion control of a mobile robot is also a multi-objective control problem, and the robot often needs to adjust its own attitude continuously during motion because of obstacle avoidance. It is therefore particularly important to design a control method that can perform multi-objective control and can adjust itself according to environmental changes.
[1] Wu Qingyun, Yan Maode, He Yuyao. Fast terminal sliding-mode trajectory tracking control of mobile robots [J]. Systems Engineering and Electronics, 2007, 29(12): 2127-2130.
[2] He Naibao, Jiang Changsheng, et al. Adaptive tracking control of a class of uncertain nonlinear systems based on backstepping [J]. Journal of Harbin Institute of Technology, 2009, 41(5): 169-171.
Summary of the invention
The object of the present invention is to build a motion control system for a mobile robot in an intelligent space, in order to solve the motion control of the mobile robot there, and further to provide a motion control method for the mobile robot.
The motion control system for a mobile robot in an intelligent space of the present invention is composed of the intelligent space and the mobile robot. The intelligent space comprises a monitoring host, a distributed vision system, and a Zigbee-based wireless sensor network system. The distributed vision system is formed by a plurality of CCD cameras (3), each vertically mounted on the indoor ceiling through a gimbal, and the CCD cameras (3) are each connected by video cable to a multi-channel image grabber card inserted in a PCI slot of the monitoring host. The Zigbee-based wireless sensor network system is composed of a blind node and a Zigbee gateway; the blind node is mounted on the mobile robot's microcontroller, and the Zigbee gateway is connected to the monitoring host through an RS232 serial port.
The blind node adopts a CC2431 chip, which has a hardware positioning engine.
The control method of the motion control system for a mobile robot in an intelligent space first obtains the pose information of the mobile robot; then obtains the control deviation e of the mobile robot; and finally performs multi-objective self-tuning PID motion control of the mobile robot based on an RBF identification network.
The pose information of the mobile robot is obtained by a vision method and comprises the localization of the mobile robot's position and course angle.
The position localization method of the mobile robot adopts the following steps:
(1) Use a CCD camera to capture a color image containing the mobile robot;
(2) Based on the Euclidean distance between color pixel vectors and in combination with a background image, apply threshold segmentation to the color image obtained in step (1) to obtain a difference binary image;
(3) Apply an opening operation to denoise the binary image, obtaining a binary image that accurately contains the mobile robot;
(4) Scan the binary image containing the mobile robot line by line; when a line segment in the current row is adjacent to a segment in the previous row, merge them into a connected region; otherwise, initialize a new connected region;
(5) From the pixel coordinates of each connected region of step (4), obtain the position coordinates of each mobile robot.
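The position-localization steps above can be sketched in Python. This is a minimal illustration of steps (2), (4), and (5) only — background differencing by the Euclidean distance between color pixel vectors, the line-by-line merging of adjacent row segments, and centroid extraction. Function names and the threshold value are illustrative assumptions, and the opening-operation denoising of step (3) is omitted.

```python
import numpy as np

def difference_binary(frame, background, thresh=40.0):
    """Step (2): threshold the Euclidean distance between color pixel vectors."""
    dist = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return dist > thresh

def connected_region_centers(binary):
    """Steps (4)-(5): line-by-line scan. A run of foreground pixels that
    shares a column with a run in the previous row joins that run's region;
    otherwise a new region is initialized. (U-shaped regions would need a
    union-find merge step; omitted in this sketch.)"""
    regions = []          # each region: list of (row, col) pixels
    prev = {}             # col -> region index for the previous row's runs
    for r, row in enumerate(binary):
        cur = {}
        c, n = 0, len(row)
        while c < n:
            if row[c]:
                start = c
                while c < n and row[c]:
                    c += 1
                run = range(start, c)
                idx = next((prev[cc] for cc in run if cc in prev), None)
                if idx is None:               # no adjacent previous-row run
                    idx = len(regions)
                    regions.append([])
                regions[idx].extend((r, cc) for cc in run)
                for cc in run:
                    cur[cc] = idx
            else:
                c += 1
        prev = cur
    # position coordinate of each robot: centroid of its region
    return [tuple(np.mean(np.array(p), axis=0)) for p in regions]
```

With one global camera image and one background image per camera, each blob centroid is taken as one robot's position coordinate.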
The course-angle localization method of the mobile robot adopts the following steps:
1) Use the CCD camera to capture a color image of the mobile robot, on which a direction block and a T-shaped identification block are pasted;
2) Convert the color image of the mobile robot from the RGB color space to the HIS color space;
3) Segment the T-shaped identification block of the mobile robot according to preset H and S thresholds;
4) Apply opening and closing operations to smooth the segmented image;
5) Apply a linear fit to the T-shaped identification image to obtain the slope of the identification block, convert it to an angle, and finally determine the mobile robot's final heading angle according to the direction block.
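Step 5) — converting a fitted slope to an angle — can be sketched with a least-squares line fit; `marker_angle_deg` is a hypothetical helper, and the ±180° ambiguity of a line fit is exactly what the direction block resolves in the final step.

```python
import numpy as np

def marker_angle_deg(xs, ys):
    """Least-squares line fit through the segmented marker pixels
    (pixel coordinates); returns the line's angle in degrees in
    (-90, 90]. The direction block must then pick between this
    angle and angle + 180 for the final heading."""
    slope, _intercept = np.polyfit(xs, ys, 1)
    return float(np.degrees(np.arctan(slope)))
```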
The control deviation e of the mobile robot comprises the lateral distance e_d and the heading deviation e_θ. The lateral distance e_d is the vertical distance d from the current center coordinate P_c of the mobile robot to the tangent line at the reference robot center point P_r on the path to be tracked; the heading deviation e_θ is the angle difference θ between the current heading θ_c of the mobile robot and the tangent direction θ_r at the reference robot center point P_r on the path to be tracked.
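A minimal sketch of the two deviation components, assuming poses in world coordinates; the point-to-tangent-line distance and the angle wrapping are standard geometry, and the function name is illustrative.

```python
import math

def control_deviation(p_c, theta_c, p_r, theta_r):
    """e_d: distance from the robot center p_c to the tangent line
    through the reference point p_r with direction theta_r.
    e_theta: heading difference theta_c - theta_r, wrapped to [-pi, pi)."""
    dx, dy = p_c[0] - p_r[0], p_c[1] - p_r[1]
    # project the offset onto the tangent line's normal
    e_d = abs(-math.sin(theta_r) * dx + math.cos(theta_r) * dy)
    e_theta = (theta_c - theta_r + math.pi) % (2 * math.pi) - math.pi
    return e_d, e_theta
```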
The method of the multi-objective self-tuning PID motion control of the mobile robot based on the RBF identification network comprises the following steps:
(1) PID control of the speed adjustment quantity Δv of the mobile robot, comprising the steps:
A. Obtain the lateral distance e_d of the mobile robot;
B. Obtain the heading deviation e_θ of the mobile robot;
C. Establish the following PID control of the speed adjustment quantity Δv(k) at time k:
Δv(k) = Δv(k−1) + K_p_d(k)(e_d(k) − e_d(k−1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) − 2e_d(k−1) + e_d(k−2)) + K_p_θ(k)(e_θ(k) − e_θ(k−1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) − 2e_θ(k−1) + e_θ(k−2)),
where K_p_d(k), K_i_d(k), K_d_d(k) are respectively the proportional, integral, and derivative coefficients of the lateral-distance PID controller at time k, e_d is the mobile robot's lateral distance, e_θ is its heading deviation, and K_p_θ(k), K_i_θ(k), K_d_θ(k) are respectively the proportional, integral, and derivative coefficients of the heading-deviation PID controller at time k.
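The incremental control law in step C can be written as a short function. Gains are passed in fixed here, whereas in the method they are re-tuned online by the RBF identification networks; all names are illustrative.

```python
def delta_v_step(dv_prev, ed, eth, kp_d, ki_d, kd_d, kp_t, ki_t, kd_t):
    """One step of the incremental two-channel PID law for Delta-v(k).
    ed, eth: tuples (e(k), e(k-1), e(k-2)) for the lateral-distance
    and heading-deviation error channels respectively."""
    def incr(e, kp, ki, kd):
        return kp * (e[0] - e[1]) + ki * e[0] + kd * (e[0] - 2 * e[1] + e[2])
    return dv_prev + incr(ed, kp_d, ki_d, kd_d) + incr(eth, kp_t, ki_t, kd_t)
```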
(2) Based on the RBF identification network, perform self-tuning of the PID parameters (K_p_d(k), K_i_d(k), K_d_d(k)) of the lateral-distance PID controller at time k, comprising the steps:
a. K_p_d(k) = K_p_d(k−1) + λ_p_d ΔK_p_d(k);
b. K_i_d(k) = K_i_d(k−1) + λ_i_d ΔK_i_d(k);
c. K_d_d(k) = K_d_d(k−1) + λ_d_d ΔK_d_d(k);
where λ_p_d, λ_i_d, λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k), K_d_d(k), all positive constants, and ΔK_p_d(k), ΔK_i_d(k), ΔK_d_d(k) are respectively the online adjustment values of K_p_d(k), K_i_d(k), K_d_d(k);
d. ΔK_p_d(k) = e_d(k) J_d (e_d(k) − e_d(k−1));
e. ΔK_i_d(k) = T_s e_d(k) J_d e_d(k);
f. ΔK_d_d(k) = e_d(k) J_d (e_d(k) − 2e_d(k−1) + e_d(k−2));
g. Obtain the Jacobian sensitivity information of the lateral-distance RBF identification network: J_d ≈ ∂y_d/∂u_d = Σ_j w_j_d h_j_d (c_j1_d − u_d(k)) / σ_j_d²,
where T_s is the sampling period, w_j_d is the weight between the hidden layer and the output layer of the lateral-distance RBF identification network, c_j_d is the Gaussian function center of the lateral-distance RBF identification network, and σ_j_d is the width parameter of the Gaussian kernel function of the lateral-distance RBF identification network.
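The self-tuning steps above can be sketched as follows. The ΔK update formulas follow the d.–f. pattern, while the Jacobian expression is the standard RBF sensitivity estimate built from w_j_d, c_j_d, and σ_j_d; since the patent's own Jacobian formula survives only as an image reference, treat it as an assumption.

```python
import math

def rbf_jacobian(u, x, w, c, sigma):
    """Standard RBF sensitivity estimate J = dy/du for a Gaussian RBF
    network whose first input x[0] is the control value u."""
    J = 0.0
    for j in range(len(w)):
        h_j = math.exp(-sum((x[i] - c[j][i]) ** 2 for i in range(len(x)))
                       / (2.0 * sigma[j] ** 2))
        J += w[j] * h_j * (c[j][0] - u) / sigma[j] ** 2
    return J

def self_tune(gains, rates, e, J, Ts):
    """One gradient-style update of (Kp, Ki, Kd).
    gains, rates: (p, i, d) tuples; e: (e(k), e(k-1), e(k-2))."""
    dKp = e[0] * J * (e[0] - e[1])
    dKi = Ts * e[0] * J * e[0]
    dKd = e[0] * J * (e[0] - 2 * e[1] + e[2])
    return (gains[0] + rates[0] * dKp,
            gains[1] + rates[1] * dKi,
            gains[2] + rates[2] * dKd)
```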
(3) Based on the RBF identification network, perform self-tuning of the PID parameters (K_p_θ(k), K_i_θ(k), K_d_θ(k)) of the heading-deviation PID controller at time k, comprising the steps:
a. K_p_θ(k) = K_p_θ(k−1) + λ_p_θ ΔK_p_θ(k);
b. K_i_θ(k) = K_i_θ(k−1) + λ_i_θ ΔK_i_θ(k);
c. K_d_θ(k) = K_d_θ(k−1) + λ_d_θ ΔK_d_θ(k);
where λ_p_θ, λ_i_θ, λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k), K_d_θ(k), all positive constants, and ΔK_p_θ(k), ΔK_i_θ(k), ΔK_d_θ(k) are respectively the online adjustment values of K_p_θ(k), K_i_θ(k), K_d_θ(k);
d. ΔK_p_θ(k) = e_θ(k) J_θ (e_θ(k) − e_θ(k−1));
e. ΔK_i_θ(k) = T_s e_θ(k) J_θ e_θ(k);
f. ΔK_d_θ(k) = e_θ(k) J_θ (e_θ(k) − 2e_θ(k−1) + e_θ(k−2));
g. Obtain the Jacobian sensitivity information of the heading-deviation RBF identification network 29: J_θ ≈ ∂y_θ/∂u_θ = Σ_j w_j_θ h_j_θ (c_j1_θ − u_θ(k)) / σ_j_θ²,
where w_j_θ is the weight between the hidden layer and the output layer of the heading-deviation RBF identification network, c_j_θ is the Gaussian function center of the heading-deviation RBF identification network, and σ_j_θ is the width parameter of the Gaussian kernel function of the heading-deviation RBF identification network.
(4) Obtain the velocity v and angular velocity ω of the mobile robot, comprising the steps:
a. Given the base speed v_0 of the left and right wheels, the left-wheel speed is v_l = v_0 + Δv;
b. The right-wheel speed is v_r = v_0 − Δv, where Δv is the speed adjustment quantity;
c. v = (v_l + v_r)/2 and ω = (v_l − v_r)/b_w,
where b_w is the spacing between the left and right wheels of the mobile robot.
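A sketch of the wheel-speed mapping, together with the usual differential-drive kinematics for v and ω. The sign convention (left wheel faster → positive ω) is an assumption here, since the patent's extracted text names only the wheel spacing b_w.

```python
def wheel_speeds(v0, dv):
    """Map the speed adjustment dv onto the two wheels: left gets
    v0 + dv, right gets v0 - dv."""
    return v0 + dv, v0 - dv          # (v_l, v_r)

def body_velocity(v_l, v_r, b_w):
    """Standard differential-drive kinematics for the center point."""
    v = (v_l + v_r) / 2.0            # center linear velocity
    omega = (v_l - v_r) / b_w        # angular velocity (assumed sign)
    return v, omega
```

Note that with this mapping the linear velocity stays at v_0 regardless of Δv, so Δv steers the robot without changing its forward speed.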
Beneficial effects
Aimed at motion control of a mobile robot in an intelligent space, the present invention builds a motion control system for the mobile robot using a distributed vision system, a wireless sensor network system based on Zigbee technology, and the mobile robot body, and further provides a motion control method for the mobile robot. The motion control system in the designed intelligent space mainly realizes motion control of the mobile robot under global vision; compared with existing mobile robot control systems based mainly on on-board video, it has the advantages of a small amount of computation, good real-time performance, and more accurate motion control. In addition, the environment of an indoor mobile robot is uncertain, and the robot often needs to change course frequently during motion because of obstacle avoidance, which is hard to accommodate with a control law whose parameters are preset. For this reason, the multi-objective PID motion control based on the RBF identification network of the present invention realizes online adaptive adjustment of the PID parameters, so that the mobile robot can tune its controller according to the actual environment, further improving the motion control accuracy of the mobile robot.
Brief description of the drawings
Fig. 1: composition of the mobile robot motion control system;
Fig. 2: structure of the mobile robot;
Fig. 3: flow of the vision localization algorithm for the mobile robot's position;
Fig. 4: flow of the vision localization algorithm for the mobile robot's course angle;
Fig. 5: schematic diagram of the mobile robot's pose error;
Fig. 6: structure of the self-tuning PID controller based on the RBF identification network;
Fig. 7: topology of the lateral-distance RBF identification network;
Fig. 8: topology of the heading-deviation RBF identification network.
In the figures: 1. indoor ceiling; 2. gimbal; 3. CCD camera; 4. multi-channel image grabber card; 5. monitoring host; 6. Zigbee gateway; 7. mobile robot; 8. blind node; 9. path to be tracked; 10. microcontroller; 11. ultrasonic distance sensor; 12. infrared proximity sensor; 13. electronic compass sensor; 14. CC2431 chip; 15. left-wheel stepper motor driver; 16. right-wheel stepper motor driver; 17. left-wheel stepper motor; 18. right-wheel stepper motor; 19. left wheel; 20. right wheel; 21. left-wheel encoder; 22. right-wheel encoder; 23. current robot; 24. reference robot; 25. comparator; 26. lateral-distance PID controller; 27. heading-deviation PID controller; 28. lateral-distance RBF identification network; 29. heading-deviation RBF identification network; 30. adder; 31. lateral-distance RBF identification network input layer; 32. lateral-distance RBF identification network hidden layer; 33. lateral-distance RBF identification network output layer; 34. heading-deviation RBF identification network input layer; 35. heading-deviation RBF identification network hidden layer; 36. heading-deviation RBF identification network output layer.
Detailed description
The motion control system and method for a mobile robot in an intelligent space of the present invention are described in detail below with reference to the accompanying drawings.
The motion control system for a mobile robot in an intelligent space is composed of the intelligent space and the mobile robot.
As shown in Fig. 1, the intelligent space comprises a distributed vision system and a wireless sensor network system based on Zigbee technology. Structure of the distributed vision system: distributed CCD cameras 3 are vertically mounted on the indoor ceiling 1 through gimbals 2; each CCD camera 3 is connected by video cable to the multi-channel image grabber card 4, which is installed in a PCI slot of the indoor monitoring host 5. The wireless sensor network system based on Zigbee technology comprises the blind node 8 and the Zigbee gateway 6. The blind node 8 takes the CC2431 chip with its hardware positioning engine as its core and is mounted on the microcontroller 10 of the mobile robot 7. The Zigbee gateway 6 is connected to the monitoring host 5 through an RS232 serial port. The purpose of motion control of the mobile robot 7 in the intelligent space is to allow the mobile robot 7 to accurately track the path to be tracked 9.
As shown in Fig. 2, the structure of the mobile robot in the motion control system is as follows. The microcontroller 10 is the core. The ultrasonic distance sensor 11 detects medium- and long-distance obstacles and is connected to the microcontroller 10; the infrared proximity sensor 12 detects short-distance obstacles and is connected to the microcontroller 10; the electronic compass sensor 13 records the direction of the mobile robot 7 during motion and is connected to the microcontroller 10; the CC2431 chip 14 serves as the blind node in the Zigbee wireless sensor network and realizes radio communication between the robot 7 and the monitoring host 5. The microcontroller 10 drives and controls the left-wheel stepper motor 17 through the left-wheel stepper motor driver 15, thereby driving the left wheel 19 of the mobile robot 7; likewise, the microcontroller 10 drives and controls the right-wheel stepper motor 18 through the right-wheel stepper motor driver 16, thereby driving the right wheel 20 of the mobile robot 7. The mobile robot 7 is differentially driven through the left wheel 19 and the right wheel 20. The left-wheel encoder 21 is coaxially connected with the left wheel 19 to measure the left-wheel speed and feeds the measurement back to the microcontroller 10; likewise, the right-wheel encoder 22 is coaxially connected with the right wheel 20 to measure the right-wheel speed and feeds the measurement back to the microcontroller 10.
The motion control method for a mobile robot in an intelligent space of the present invention first obtains the pose information of the mobile robot; then obtains the control deviation e of the mobile robot; and finally performs multi-objective self-tuning PID motion control of the mobile robot based on the RBF identification networks.
The pose information of the mobile robot 7 is obtained by a vision method and comprises the localization of the position and course angle of the mobile robot 7.
As shown in Fig. 3, the vision localization algorithm for the position of the mobile robot 7 adopts the following steps:
(1) Use a CCD camera 3 to capture a color image containing the mobile robot 7;
(2) Based on the Euclidean distance between color pixel vectors and in combination with a background image, apply threshold segmentation to the color image obtained in step (1) to obtain a difference binary image;
(3) Apply an opening operation to denoise the binary image, obtaining a binary image that accurately contains the mobile robot;
(4) Scan the binary image containing the mobile robot 7 line by line; when a line segment in the current row is adjacent to a segment in the previous row, merge them into a connected region; otherwise, initialize a new connected region;
(5) From the pixel coordinates of each connected region, obtain the position coordinates of each mobile robot 7.
As shown in Fig. 4, the vision localization algorithm for the course angle of the mobile robot 7 adopts the following steps:
(1) Use the CCD camera to capture a color image of the mobile robot 7, on which a direction block and a T-shaped identification block are pasted;
(2) Convert the color image of the mobile robot 7 from the RGB color space to the HIS color space;
(3) Segment the T-shaped identification block of the mobile robot 7 according to preset H and S thresholds;
(4) Apply opening and closing operations to smooth the segmented image;
(5) Apply a linear fit to the T-shaped identification image to obtain the slope of the identification block, convert it to an angle, and finally determine the final heading angle of the mobile robot 7 according to the direction block.
The control deviation e of the mobile robot 7 comprises the lateral distance e_d and the heading deviation e_θ of the mobile robot 7.
As shown in Fig. 5, the lateral distance e_d is the vertical distance d from the center coordinate P_c of the current robot 23 to the tangent line at the center point P_r of the reference robot 24 on the path to be tracked 9. The heading deviation e_θ is the angle difference θ between the current heading θ_c of the current robot 23 and the tangent direction θ_r at the center point P_r of the reference robot 24 on the path to be tracked 9.
As shown in Fig. 6, in the multi-objective self-tuning PID controller structure for the mobile robot based on the RBF identification networks, the reference pose P_r and the current pose P_c of the mobile robot 7 pass through the comparator 25 to obtain the control deviation e, comprising the lateral distance e_d and the heading deviation e_θ. The lateral distance e_d passes through the lateral-distance PID controller 26 to obtain the lateral-distance control value u_d; at the same time, the heading deviation e_θ passes through the heading-deviation PID controller 27 to obtain the heading-deviation control value u_θ. The lateral-distance control value u_d and the heading-deviation control value u_θ pass through the adder 30 to obtain the speed adjustment quantity Δv of the mobile robot 7. The mobile robot 7 moves according to Δv, yielding the current center coordinate P_c. In the lateral-distance PID control process, the lateral-distance RBF identification network 28 performs online PID-parameter adjustment of the lateral-distance PID controller 26 according to the output u_d of the lateral-distance PID controller 26, the current lateral distance e_d of the mobile robot 7, and the lateral distance of the previous moment. In the heading-deviation PID control process, the heading-deviation RBF identification network 29 performs online PID-parameter adjustment of the heading-deviation PID controller 27 according to the output u_θ of the heading-deviation PID controller 27, the current heading deviation e_θ of the mobile robot 7, and the heading deviation of the previous moment. Through the online PID-parameter adjustment of the lateral-distance PID controller 26 and the heading-deviation PID controller 27, the multi-objective self-tuning PID motion control of the mobile robot 7 based on the RBF identification networks is realized.
As shown in Fig. 7, the lateral-distance RBF identification network 28 is a three-layer network consisting of the input layer 31, the hidden layer 32, and the output layer 33. The input layer 31 has three input nodes, corresponding respectively to the output u_d of the lateral-distance PID controller 26, the current lateral distance e_d of the mobile robot 7, and the lateral distance of the previous moment. The hidden layer 32 has six hidden nodes, and the node function adopts the Gaussian kernel function
h_j_d = exp(−‖x_d − c_j_d‖² / (2σ_j_d²)),
where x_d is the input vector, c_j_d is the Gaussian kernel-function center of the lateral-distance RBF identification network 28, and σ_j_d is the width parameter of the Gaussian kernel function of the lateral-distance RBF identification network 28.
The output layer 33 has only one node, and the node is a linear function, that is,
y_d = Σ_j w_j_d h_j_d,
where w_j_d is the weight between the hidden layer 32 and the output layer 33 of the lateral-distance RBF identification network 28.
As shown in Fig. 8, the heading-deviation RBF identification network 29 is likewise a three-layer network consisting of the input layer 34, the hidden layer 35, and the output layer 36. The input layer 34 has three input nodes, corresponding respectively to the output u_θ of the heading-deviation PID controller 27, the current heading deviation e_θ of the mobile robot 7, and the heading deviation of the previous moment. The hidden layer 35 likewise has six hidden nodes, and the node function likewise adopts the Gaussian kernel function
h_j_θ = exp(−‖x_θ − c_j_θ‖² / (2σ_j_θ²)),
where x_θ is the input vector, c_j_θ is the Gaussian kernel-function center of the heading-deviation RBF identification network 29, and σ_j_θ is the width parameter of the Gaussian kernel function of the heading-deviation RBF identification network 29.
The output layer 36 likewise has only one node, and the node is also a linear function, that is,
y_θ = Σ_j w_j_θ h_j_θ,
where w_j_θ is the weight between the hidden layer 35 and the output layer 36 of the heading-deviation RBF identification network 29.
With reference to Figs. 5, 6, 7, and 8, the method flow of the multi-objective self-tuning PID motion control of the mobile robot 7 based on the RBF identification networks mainly comprises the following steps:
(1) PID control of the speed adjustment quantity Δv of the mobile robot 7, comprising the steps:
A. Obtain the lateral distance e_d of the mobile robot 7;
B. Obtain the heading deviation e_θ of the mobile robot 7;
C. Establish the following PID control of the speed adjustment quantity Δv(k) at time k:
Δv(k) = Δv(k−1) + K_p_d(k)(e_d(k) − e_d(k−1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) − 2e_d(k−1) + e_d(k−2)) + K_p_θ(k)(e_θ(k) − e_θ(k−1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) − 2e_θ(k−1) + e_θ(k−2)),
where K_p_d(k), K_i_d(k), K_d_d(k) are respectively the proportional, integral, and derivative coefficients of the lateral-distance PID controller 26 at time k, and K_p_θ(k), K_i_θ(k), K_d_θ(k) are respectively the proportional, integral, and derivative coefficients of the heading-deviation PID controller 27 at time k.
(2) Based on the RBF identification network, perform self-tuning of the PID parameters (K_p_d(k), K_i_d(k), K_d_d(k)) of the lateral-distance PID controller 26 at time k, comprising the steps:
a. K_p_d(k) = K_p_d(k−1) + λ_p_d ΔK_p_d(k)
b. K_i_d(k) = K_i_d(k−1) + λ_i_d ΔK_i_d(k)
c. K_d_d(k) = K_d_d(k−1) + λ_d_d ΔK_d_d(k)
where λ_p_d, λ_i_d, λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k), K_d_d(k), all positive constants, and ΔK_p_d(k), ΔK_i_d(k), ΔK_d_d(k) are respectively the online adjustment values of K_p_d(k), K_i_d(k), K_d_d(k).
d. ΔK_p_d(k) = e_d(k) J_d (e_d(k) − e_d(k−1))
e. ΔK_i_d(k) = T_s e_d(k) J_d e_d(k)
f. ΔK_d_d(k) = e_d(k) J_d (e_d(k) − 2e_d(k−1) + e_d(k−2))
g. Obtain the Jacobian sensitivity information of the lateral-distance RBF identification network 28: J_d ≈ ∂y_d/∂u_d = Σ_j w_j_d h_j_d (c_j1_d − u_d(k)) / σ_j_d²,
where T_s is the sampling period.
(3) Based on the RBF identification network, perform self-tuning of the PID parameters (K_p_θ(k), K_i_θ(k), K_d_θ(k)) of the heading-deviation PID controller 27 at time k, comprising the steps:
a. K_p_θ(k) = K_p_θ(k−1) + λ_p_θ ΔK_p_θ(k)
b. K_i_θ(k) = K_i_θ(k−1) + λ_i_θ ΔK_i_θ(k)
c. K_d_θ(k) = K_d_θ(k−1) + λ_d_θ ΔK_d_θ(k)
where λ_p_θ, λ_i_θ, λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k), K_d_θ(k), all positive constants, and ΔK_p_θ(k), ΔK_i_θ(k), ΔK_d_θ(k) are respectively the online adjustment values of K_p_θ(k), K_i_θ(k), K_d_θ(k).
d. ΔK_p_θ(k) = e_θ(k) J_θ (e_θ(k) − e_θ(k−1))
e. ΔK_i_θ(k) = T_s e_θ(k) J_θ e_θ(k)
f. ΔK_d_θ(k) = e_θ(k) J_θ (e_θ(k) − 2e_θ(k−1) + e_θ(k−2))
g. Obtain the Jacobian sensitivity information of the heading-deviation RBF identification network 29: J_θ ≈ ∂y_θ/∂u_θ = Σ_j w_j_θ h_j_θ (c_j1_θ − u_θ(k)) / σ_j_θ².
(4) Obtain the velocity v and angular velocity ω of the center point of the mobile robot 7, comprising the steps:
A. Given the base speed v_0 of the left and right wheels, the left-wheel speed is v_l = v_0 + Δv;
B. The right-wheel speed is v_r = v_0 − Δv, where Δv is the speed adjustment quantity;
C. v = (v_l + v_r)/2 and ω = (v_l − v_r)/b_w,
where b_w is the spacing between the left wheel 19 and the right wheel 20 of the mobile robot 7.
Claims (6)
1. mobile robot's kinetic control system in an intelligent space, it is characterized in that: by intelligent space and mobile robot (7), formed, wherein intelligent space comprises monitoring host computer (5), distributed vision system and the wireless sensor network system based on Zigbee, distributed vision system is vertically mounted on the upper formation of indoor canopy (1) by gimbals (2) respectively by a plurality of ccd video cameras (3), and a plurality of ccd video cameras (3) are connected with the Multiplexing Image Grab Card (4) in the PCI slot that is inserted in monitoring host computer (5) by video line respectively again; Wireless sensor network system based on Zigbee is comprised of blind node (8) and Zigbee gateway (6), blind node (8) is arranged on the microcontroller (10) of mobile robot (7), and Zigbee gateway (6) is connected with monitoring host computer (5) by the RS232 serial ports.
2. The motion control system of a mobile robot in an intelligent space according to claim 1, characterized in that the blind node (8) adopts a CC2431 chip with a hardware positioning engine.
3. A control method of the motion control system of a mobile robot in an intelligent space according to claim 1, characterized by first obtaining the pose information of the mobile robot; then obtaining the control deviation e of the mobile robot; and finally performing multi-objective self-adjusting PID motion control of the mobile robot based on an RBF identification network.
4. The control method of the motion control system of a mobile robot in an intelligent space according to claim 3, characterized in that the pose information of the mobile robot (7) is obtained by a vision method, comprising locating the position and the course angle of the mobile robot (7);
The position locating method of the mobile robot (7) adopts the following steps:
(1) Use the CCD camera (3) to collect a colour image containing the mobile robot (7);
(2) Based on the Euclidean distance of colour pixel vectors and in combination with a background image, perform threshold segmentation on the colour image obtained in step (1), thereby obtaining a difference binary image;
(3) Use an opening operation to denoise the binary image, thereby obtaining a binary image accurately containing the mobile robot (7);
(4) Scan the binary image containing the mobile robot (7) line by line; if a line segment in the currently scanned line is adjacent to a line segment in the previous line, merge them into a connected region; otherwise, initialise a new connected region;
(5) Obtain the position coordinates of each mobile robot (7) from the pixel coordinates of each connected region in step (4).
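The line-by-line labelling of steps (4)-(5) can be sketched as below. This is a simplified illustration on a binary image given as a list of 0/1 rows (an assumed format, not the patent's); each region's centroid stands in for a robot's position.

```python
# Row-scan connected-component labelling: merge row segments that touch the
# previous row into regions (union-find), then return region centroids.

def label_regions(binary):
    parent = {}                       # union-find over region labels
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label = 0
    prev_runs = []                    # (start, end, label) runs of previous row
    pixels = {}                       # label -> list of (row, col)
    for r, row in enumerate(binary):
        runs, c = [], 0
        while c < len(row):
            if row[c]:
                s = c
                while c < len(row) and row[c]:
                    c += 1
                # runs in the previous row whose columns overlap this run
                hits = [lab for (ps, pe, lab) in prev_runs if s <= pe and ps <= c - 1]
                if hits:
                    lab = hits[0]
                    for other in hits[1:]:
                        parent[find(other)] = find(lab)   # same region met twice
                else:
                    lab = next_label                      # initialise a new region
                    parent[lab] = lab
                    next_label += 1
                runs.append((s, c - 1, lab))
                pixels.setdefault(lab, []).extend((r, cc) for cc in range(s, c))
            else:
                c += 1
        prev_runs = runs

    regions = {}
    for lab, pts in pixels.items():
        regions.setdefault(find(lab), []).extend(pts)
    # centroid of each connected region = estimated position
    return [(sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
            for pts in regions.values()]
```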
The localization method of the course angle of the mobile robot (7) adopts the following steps:
1) Use the CCD camera (3) to collect a colour image of the mobile robot (7) on which direction and identification T-shaped colour blocks are pasted;
2) Convert the colour image of the mobile robot (7) from the RGB colour space to the HSI colour space;
3) Segment the T-shaped colour block of the mobile robot (7) according to preset H and S thresholds;
4) Smooth the segmented image using opening and closing operations;
5) Perform linear fitting on the T-shaped identification image to obtain the slope of the identification colour block, convert it to an angle, and finally determine the course angle of the mobile robot (7) according to the direction colour block.
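Step 5) can be sketched with a least-squares orientation fit. A minimal illustration: it fits the principal axis of the segmented block's pixels rather than a raw y-on-x slope (which fails for vertical blocks), and it omits the 180-degree disambiguation by the direction colour block that the claim describes.

```python
# Orientation of a segmented colour block from its pixel coordinates,
# via the principal axis of the second-order central moments.
import math

def block_heading_deg(points):
    """points: list of (x, y) pixel coordinates of the identification block."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # principal-axis angle; well defined even when the block is vertical
    return math.degrees(0.5 * math.atan2(2.0 * sxy, sxx - syy))
```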
5. The control method of the motion control system of a mobile robot in an intelligent space according to claim 3, characterized in that obtaining the control deviation e of the mobile robot comprises obtaining the lateral distance e_d and the course deflection angle e_θ of the mobile robot; the lateral distance e_d is the vertical distance d from the centre coordinate P_c of the current mobile robot (23) to the tangent at the centre point P_r of the reference robot (24) on the path to be tracked (9); the course deflection angle e_θ is the angle difference θ between the current direction angle θ_c of the current mobile robot (23) and the tangent direction θ_r at the centre point P_r of the reference robot (24) on the path to be tracked (9).
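The two deviations of claim 5 can be computed as below. This is an illustrative sketch: the lateral distance is returned signed (positive to the left of the tangent, a convention chosen for the example), and the heading difference is wrapped into (-π, π].

```python
# Control deviations for path tracking: signed lateral distance e_d from the
# robot centre P_c to the path tangent at P_r, and course deflection e_theta.
import math

def control_deviation(P_c, theta_c, P_r, theta_r):
    dx, dy = P_c[0] - P_r[0], P_c[1] - P_r[1]
    # perpendicular distance from P_c to the tangent line through P_r:
    # cross product of the unit tangent (cos, sin) with the offset vector
    e_d = -dx * math.sin(theta_r) + dy * math.cos(theta_r)
    # heading difference wrapped into (-pi, pi]
    e_theta = math.atan2(math.sin(theta_c - theta_r), math.cos(theta_c - theta_r))
    return e_d, e_theta
```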
6. The control method of the motion control system of a mobile robot in an intelligent space according to claim 3, characterized in that the method of multi-objective self-adjusting PID motion control of the mobile robot (7) based on the RBF identification network comprises the steps:
(1) PID control of the speed adjustment control quantity Δv of the mobile robot (7), comprising the steps:
A. Obtain the lateral distance e_d of the mobile robot (7);
B. Obtain the course deflection angle e_θ of the mobile robot (7);
C. Establish the following PID control of the speed adjustment control quantity Δv(k) at time k:
Δv(k) = Δv(k-1) + K_p_d(k)(e_d(k) - e_d(k-1)) + K_i_d(k)e_d(k) + K_d_d(k)(e_d(k) - 2e_d(k-1) + e_d(k-2)) + K_p_θ(k)(e_θ(k) - e_θ(k-1)) + K_i_θ(k)e_θ(k) + K_d_θ(k)(e_θ(k) - 2e_θ(k-1) + e_θ(k-2))
In the formula, K_p_d(k), K_i_d(k) and K_d_d(k) are respectively the proportional, integral and differential coefficients of the lateral distance PID controller (26) at time k; K_p_θ(k), K_i_θ(k) and K_d_θ(k) are respectively the proportional, integral and differential coefficients of the course deflection angle PID controller (27) at time k;
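The incremental control law of step C can be sketched as follows; gains and error histories are illustrative inputs, and the two error channels (lateral distance and course angle) share one helper.

```python
# Incremental dual-objective PID: the change in Delta v at time k combines
# incremental PID terms on the lateral distance and course angle errors.

def delta_v(dv_prev, gains_d, gains_t, ed, et):
    """ed / et: (e(k), e(k-1), e(k-2)) for lateral distance / course angle."""
    def pid_increment(K, e):
        Kp, Ki, Kd = K
        return (Kp * (e[0] - e[1])                # proportional, on the error change
                + Ki * e[0]                       # integral, on the current error
                + Kd * (e[0] - 2 * e[1] + e[2]))  # derivative, second difference
    return dv_prev + pid_increment(gains_d, ed) + pid_increment(gains_t, et)
```

The incremental form only needs the previous Δv and three error samples per channel, so no running integral state has to be stored or anti-windup-limited.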
(2) Based on the RBF identification network, self-adjust at time k the PID parameters (K_p_d(k), K_i_d(k), K_d_d(k)) of the lateral distance PID controller (26), comprising the steps:
a. K_p_d(k) = K_p_d(k-1) + λ_p_d·ΔK_p_d(k);
b. K_i_d(k) = K_i_d(k-1) + λ_i_d·ΔK_i_d(k);
c. K_d_d(k) = K_d_d(k-1) + λ_d_d·ΔK_d_d(k);
In the formulas, λ_p_d, λ_i_d and λ_d_d are respectively the learning rates of K_p_d(k), K_i_d(k) and K_d_d(k), and are positive constants; ΔK_p_d(k), ΔK_i_d(k) and ΔK_d_d(k) are respectively the online adjustment values of K_p_d(k), K_i_d(k) and K_d_d(k);
d. ΔK_p_d(k) = e_d(k)·J_d·(e_d(k) - e_d(k-1));
e. ΔK_i_d(k) = T_s·e_d(k)·J_d·e_d(k);
f. ΔK_d_d(k) = e_d(k)·J_d·(e_d(k) - 2e_d(k-1) + e_d(k-2));
g. Obtain the Jacobian sensitivity information J_d of the lateral distance RBF identification network (28): J_d ≈ Σ_j w_j_d·h_j_d·(c_j_d - Δv(k))/σ_j_d²
In the formulas, T_s is the sampling period; w_j_d is the weight between the hidden layer (32) and the output layer (33) of the lateral distance RBF identification network (28); c_j_d is the Gaussian function centre of the lateral distance RBF identification network (28); σ_j_d is the width parameter of the Gaussian kernel function of the lateral distance RBF identification network (28);
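The Jacobian of a Gaussian-RBF identification network can be sketched as below. This assumes a scalar network input (the control quantity Δv) and the standard Gaussian-unit derivative; the patent's exact network input vector is not reproduced here.

```python
# Gaussian RBF network forward pass and its Jacobian (sensitivity of the
# identified output with respect to the scalar input x).
import math

def rbf_forward(x, centers, sigmas, weights):
    # hidden-layer activations h_j and weighted output y
    h = [math.exp(-((x - c) ** 2) / (2.0 * s * s)) for c, s in zip(centers, sigmas)]
    y = sum(w * hj for w, hj in zip(weights, h))
    return y, h

def rbf_jacobian(x, centers, sigmas, weights):
    # dy/dx = sum_j w_j * h_j * (c_j - x) / sigma_j^2 for Gaussian units
    _, h = rbf_forward(x, centers, sigmas, weights)
    return sum(w * hj * (c - x) / (s * s)
               for w, hj, c, s in zip(weights, h, centers, sigmas))
```

A quick finite-difference check of the analytic derivative is a cheap way to validate any hand-derived Jacobian like this one.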
(3) Based on the RBF identification network, self-adjust at time k the PID parameters (K_p_θ(k), K_i_θ(k), K_d_θ(k)) of the course deflection angle PID controller (27), comprising the steps:
a) K_p_θ(k) = K_p_θ(k-1) + λ_p_θ·ΔK_p_θ(k);
b) K_i_θ(k) = K_i_θ(k-1) + λ_i_θ·ΔK_i_θ(k);
c) K_d_θ(k) = K_d_θ(k-1) + λ_d_θ·ΔK_d_θ(k);
In the formulas, λ_p_θ, λ_i_θ and λ_d_θ are respectively the learning rates of K_p_θ(k), K_i_θ(k) and K_d_θ(k), and are positive constants; ΔK_p_θ(k), ΔK_i_θ(k) and ΔK_d_θ(k) are respectively the online adjustment values of K_p_θ(k), K_i_θ(k) and K_d_θ(k);
d) ΔK_p_θ(k) = e_θ(k)·J_θ·(e_θ(k) - e_θ(k-1));
e) ΔK_i_θ(k) = T_s·e_θ(k)·J_θ·e_θ(k);
f) ΔK_d_θ(k) = e_θ(k)·J_θ·(e_θ(k) - 2e_θ(k-1) + e_θ(k-2));
g) Obtain the Jacobian sensitivity information J_θ of the course deflection angle RBF identification network (29): J_θ ≈ Σ_j w_j_θ·h_j_θ·(c_j_θ - Δv(k))/σ_j_θ²
In the formulas, w_j_θ is the weight between the hidden layer (35) and the output layer (36) of the course deflection angle RBF identification network (29); c_j_θ is the Gaussian function centre of the course deflection angle RBF identification network (29); σ_j_θ is the width parameter of the Gaussian kernel function of the course deflection angle RBF identification network (29);
(4) Obtaining the speed v and the angular velocity ω of the mobile robot (7), comprising the steps:
a) Given the basic speed v_0 of the left and right wheels, the left wheel speed is: v_l = v_0 + Δv;
b) The right wheel speed is: v_r = v_0 - Δv, where Δv is the speed adjustment control quantity;
c) The speed of the central point is: v = (v_l + v_r)/2 = v_0;
d) The angular velocity is: ω = (v_l - v_r)/b_w = 2Δv/b_w;
In the formula, b_w is the spacing between the left wheel (19) and the right wheel (20) of the mobile robot (7).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310361519.8A CN103454919B (en) | 2013-08-19 | 2013-08-19 | The control method of the kinetic control system of mobile robot in intelligent space |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103454919A true CN103454919A (en) | 2013-12-18 |
CN103454919B CN103454919B (en) | 2016-03-30 |
Family
ID=49737415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310361519.8A Expired - Fee Related CN103454919B (en) | 2013-08-19 | 2013-08-19 | The control method of the kinetic control system of mobile robot in intelligent space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103454919B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007049412A1 (en) * | 2005-10-27 | 2007-05-03 | Yamaguchi University | Ultrasonic motor control method, ultrasonic motor control device, and program for controlling ultrasonic motor |
CN101791800A (en) * | 2010-01-21 | 2010-08-04 | 西北工业大学 | Motion control method of double-wheel differential type robot |
KR20110039659A (en) * | 2009-10-12 | 2011-04-20 | 금오공과대학교 산학협력단 | Swarm robot path control system based on network and method thereof |
CN102346489A (en) * | 2010-07-28 | 2012-02-08 | 中国科学院自动化研究所 | Pulse neural network based method for controlling object tracking of robot |
CN102914303A (en) * | 2012-10-11 | 2013-02-06 | 江苏科技大学 | Navigation information acquisition method and intelligent space system with multiple mobile robots |
Non-Patent Citations (2)
Title |
---|
刘渭苗: "轮式移动机器人神经网络控制", 《中国优秀硕士学位论文全文数据库》 * |
陈正洪等: "两自由度并联机器人的RBF神经网络辨识自适应控制", 《武汉理工大学学报(交通科学与工程版)》 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105706011A (en) * | 2013-11-07 | 2016-06-22 | 富士机械制造株式会社 | Automatic driving system and automatic travel machine |
CN105486309B (en) * | 2015-12-02 | 2018-08-17 | 广州市吉特科技有限公司 | It is a kind of based on color mode and assist in identifying Indoor Robot navigation and localization method |
CN105486309A (en) * | 2015-12-02 | 2016-04-13 | 赵铠彬 | Color mode and auxiliary identification-based indoor robot navigating and positioning method |
WO2018068446A1 (en) * | 2016-10-12 | 2018-04-19 | 纳恩博(北京)科技有限公司 | Tracking method, tracking device, and computer storage medium |
CN106531161A (en) * | 2016-10-17 | 2017-03-22 | 南京理工大学 | Image-recognition-based apparatus and method of automatically sorting and carrying articles by mobile trolley |
CN107063257A (en) * | 2017-02-05 | 2017-08-18 | 安凯 | A kind of separate type sweeping robot and its paths planning method |
CN107063257B (en) * | 2017-02-05 | 2020-08-04 | 安凯 | Separated floor sweeping robot and path planning method thereof |
CN106843224A (en) * | 2017-03-15 | 2017-06-13 | 广东工业大学 | A kind of method and device of multi-vision visual positioning collaboration guiding transport vehicle |
CN106843224B (en) * | 2017-03-15 | 2020-03-10 | 广东工业大学 | Method and device for cooperatively guiding transport vehicle through multi-view visual positioning |
CN107273850A (en) * | 2017-06-15 | 2017-10-20 | 上海工程技术大学 | A kind of autonomous follower method based on mobile robot |
CN108088447A (en) * | 2017-12-15 | 2018-05-29 | 陕西理工大学 | A kind of path post-processing approach of mobile robot |
CN108426580B (en) * | 2018-01-22 | 2021-04-30 | 中国地质大学(武汉) | Unmanned aerial vehicle and intelligent vehicle collaborative navigation method based on image recognition |
CN108426580A (en) * | 2018-01-22 | 2018-08-21 | 中国地质大学(武汉) | Unmanned plane based on image recognition and intelligent vehicle collaborative navigation method |
CN108958249A (en) * | 2018-07-06 | 2018-12-07 | 东南大学 | A kind of ground robot control system and control method considering Unknown control direction |
CN109947117A (en) * | 2019-04-19 | 2019-06-28 | 辽宁工业大学 | A kind of servo synchronization control system and control method suitable for monocular vision logistics distribution trolley |
CN110450157A (en) * | 2019-08-07 | 2019-11-15 | 安徽延达智能科技有限公司 | A kind of robot automatic obstacle-avoiding system |
CN112256038A (en) * | 2020-11-03 | 2021-01-22 | 盈合(深圳)机器人与自动化科技有限公司 | Intelligent space service method and system |
CN112651456A (en) * | 2020-12-31 | 2021-04-13 | 遵义师范学院 | Unmanned vehicle control method based on RBF neural network |
CN112651456B (en) * | 2020-12-31 | 2023-08-08 | 遵义师范学院 | Unmanned vehicle control method based on RBF neural network |
Also Published As
Publication number | Publication date |
---|---|
CN103454919B (en) | 2016-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103454919B (en) | The control method of the kinetic control system of mobile robot in intelligent space | |
CN109945858B (en) | Multi-sensing fusion positioning method for low-speed parking driving scene | |
CN108536149B (en) | Unmanned vehicle obstacle avoidance control method based on Dubins path | |
CN110007675B (en) | Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle | |
CN107085938B (en) | The fault-tolerant planing method of intelligent driving local path followed based on lane line and GPS | |
Kim et al. | End-to-end deep learning for autonomous navigation of mobile robot | |
Siagian et al. | Mobile robot navigation system in outdoor pedestrian environment using vision-based road recognition | |
CN105629970A (en) | Robot positioning obstacle-avoiding method based on supersonic wave | |
Oh et al. | Indoor UAV control using multi-camera visual feedback | |
CN110262555B (en) | Real-time obstacle avoidance control method for unmanned aerial vehicle in continuous obstacle environment | |
CN114998276B (en) | Robot dynamic obstacle real-time detection method based on three-dimensional point cloud | |
Chen et al. | Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation | |
Lin et al. | Fast, robust and accurate posture detection algorithm based on Kalman filter and SSD for AGV | |
Jun et al. | Autonomous driving system design for formula student driverless racecar | |
CN111708010B (en) | Mobile equipment positioning method, device and system and mobile equipment | |
CN113589685B (en) | Vehicle moving robot control system and method based on deep neural network | |
Souza et al. | Vision-based waypoint following using templates and artificial neural networks | |
Choi et al. | Radar-based lane estimation with deep neural network for lane-keeping system of autonomous highway driving | |
Al-Jarrah et al. | Blimp based on embedded computer vision and fuzzy control for following ground vehicles | |
CN208061025U (en) | A kind of automatic driving vehicle avoidance obstacle device based on the paths Dubins | |
Boucheloukh et al. | UAV navigation based on adaptive fuzzy backstepping controller using visual odometry | |
Muller et al. | A model-based object following system | |
Zhan et al. | Application of image process and distance computation to WMR obstacle avoidance and parking control | |
Ju et al. | Learning scene adaptive covariance error model of LiDAR scan matching for fusion based localization | |
Abbas et al. | Experimental analysis of trajectory control using computer vision and artificial intelligence for autonomous vehicles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160330 Termination date: 20180819 |