Specific embodiments
The invention is further described below in conjunction with the following application scenarios.
Referring to Fig. 1, an intelligent robot for racking and unracking physical equipment in a data center comprises a robot body 1 and a mechanical handling device 2.
The robot body 1 includes a laser radar 11, a central processing unit 12, a motion control device 13, a locomotion device 14 and an odometer 15.
The laser radar 11 is used to scan and acquire environmental information around the robot body 1 in the working environment.
The central processing unit 12 is used to construct a grid map from the environmental information obtained from the laser radar 11, to acquire the robot's own pose information, to perform path planning according to the positions of equipment and cabinets in the data center, and to issue corresponding movement instructions to the motion control device 13.
The motion control device 13 is used to control the locomotion device 14 according to the movement instructions issued by the central processing unit 12.
The locomotion device 14 is equipped with wheels and is used to drive the movement of the robot body 1.
The odometer 15 is used to measure the travel information of the robot body 1.
The mechanical handling device 2 is arranged on the robot body 1 and is used to carry equipment and perform the racking and unracking operations.
Preferably, the central processing unit 12 includes a memory module 120, a map construction and localization module 122, a path planning module 124 and an instruction sending module 126.
The map construction and localization module 122 is used to construct the grid map from the environmental information acquired by the laser radar 11 and to obtain the robot's own pose information.
The memory module 120 is used to store the grid map information.
The path planning module 124 is used to perform path planning according to the grid map information, the robot's own pose information and the destination information, so as to obtain path planning information.
The instruction sending module 126 is used to generate corresponding movement instructions according to the grid map information, the robot's own pose information and the path planning information, and to send them to the motion control device 13.
The destination information indicates the location of the target equipment or cabinet and can be set or entered manually.
The map construction and localization module uses SLAM (simultaneous localization and mapping) technology to update the grid map and to obtain the robot's current location information.
In the above embodiment of the invention, the laser radar acquires the environmental information around the intelligent robot; the map construction and localization module processes the collected environmental information to construct a grid map of the robot's working environment and, from further laser radar measurements, obtains an accurate estimate of the robot's own position; and the path planning module performs accurate path planning according to the positions of equipment and cabinets in the data center. The motion control device then drives the robot along the planned path to the specified location, where the racking or unracking operation is carried out. The robot is highly intelligent, adapts well to its environment, and can effectively reduce labour costs.
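As an illustration of the grid-map maintenance performed by the map construction and localization module 122, the following Python sketch accumulates occupancy evidence from a single laser scan given a known robot pose. The grid resolution, sensor model and function names are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def update_occupancy_grid(grid, pose, ranges, angles,
                          resolution=0.05, max_range=10.0):
    """Mark lidar beam endpoints as occupied in a 2-D occupancy grid.

    grid       : 2-D numpy array of occupancy counts (H x W)
    pose       : (x, y, theta) of the robot in world coordinates (m, rad)
    ranges     : measured distance of each beam (m)
    angles     : beam angles relative to the robot heading (rad)
    resolution : grid cell size in metres (illustrative value)
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        if r >= max_range:              # discard out-of-range readings
            continue
        # Endpoint of the beam in world coordinates.
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        # World coordinates -> grid indices.
        gx, gy = int(ex / resolution), int(ey / resolution)
        if 0 <= gy < grid.shape[0] and 0 <= gx < grid.shape[1]:
            grid[gy, gx] += 1           # accumulate occupancy evidence

# Example: one 181-beam scan taken from pose (1 m, 1 m), facing +x.
grid = np.zeros((200, 200))
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 3.0)
update_occupancy_grid(grid, (1.0, 1.0, 0.0), ranges, angles)
```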
Preferably, the mechanical handling device 2 includes a radio frequency identification module 21, a binocular camera 22, an image processing module 23, a machine control module 24 and a manipulator structure 25.
The radio frequency identification module 21 is used to scan and identify the label information on equipment or cabinets and to confirm the target object, the target object being the target device or target cabinet.
The binocular camera 22 is used to collect images of the target object in the working region of the mechanical handling device 2; the binocular camera 22 is divided into a left camera and a right camera, which simultaneously acquire image information of the working region.
The image processing module 23 is used to process the acquired target object images and to obtain the location information of the target object.
The machine control module 24 is used to issue corresponding control instructions to the manipulator structure 25 according to the acquired location information of the target object.
The manipulator structure 25 is used to grasp the target device according to the received control instructions and to complete the racking or unracking of the target device.
In the above embodiment of the invention, the binocular camera obtains image information of the working region of the mechanical handling device, and the radio frequency identification module identifies the label information on equipment or cabinets, so that the robot can accurately judge whether the equipment or cabinet in the working region is the object on which the racking or unracking operation is to be performed. The image processing module then obtains the location information of the target object, and the machine control module controls the manipulator structure to complete the operation accordingly. The mechanical handling device as a whole is adaptable and highly accurate, ensuring that the intelligent robot can complete racking and unracking operations reliably.
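The target-confirmation step of the radio frequency identification module 21 can be pictured with the following minimal sketch; the tag record format and function name are assumptions made for illustration only.

```python
def confirm_target(scanned_tags, target_id):
    """Return the tag record whose ID matches the requested target.

    scanned_tags : list of tag records read by the RFID module,
                   e.g. {"id": "RACK-A-03", "kind": "cabinet"}
    target_id    : identifier of the device or cabinet to be handled
    """
    for tag in scanned_tags:
        if tag["id"] == target_id:
            return tag       # target object confirmed
    return None              # target not present in the working region

tags = [{"id": "SRV-0042", "kind": "device"},
        {"id": "RACK-A-03", "kind": "cabinet"}]
assert confirm_target(tags, "SRV-0042")["kind"] == "device"
```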
Preferably, the image processing module 23 includes a camera calibration unit 230, an image preprocessing unit 232, an image segmentation unit 234 and a target positioning unit 236.
The camera calibration unit 230 is used to calibrate the binocular camera 22, obtaining the internal and external parameters of the cameras and determining the mapping between the two-dimensional image coordinate system and the three-dimensional world coordinate system.
The image preprocessing unit 232 is used to apply contrast enhancement, smoothing and image enhancement preprocessing to the acquired target object images, obtaining preprocessed target object images.
The image segmentation unit 234 is used to segment the target object out of the preprocessed target images.
The target positioning unit 236 is used to transform the two-dimensional coordinate position of the target object in the image into true three-dimensional space coordinates, combining the camera calibration result obtained from the camera calibration unit 230 with the binocular parallax principle, thereby obtaining the three-dimensional location information of the target object.
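To illustrate how the target positioning unit 236 can turn a two-dimensional image coordinate into three-dimensional space coordinates from the calibration result and the binocular parallax principle, the following sketch handles the standard rectified-stereo case; the pinhole model and the parameter names (focal length f, principal point (cx, cy), baseline) are conventional assumptions rather than the specific calibration of this embodiment.

```python
def stereo_to_3d(u_left, v_left, disparity, f, cx, cy, baseline):
    """Triangulate a rectified stereo correspondence into camera coordinates.

    u_left, v_left : pixel coordinates of the point in the left image
    disparity      : u_left - u_right (pixels), must be positive
    f              : focal length in pixels (from calibration)
    cx, cy         : principal point in pixels (from calibration)
    baseline       : distance between the two cameras (metres)
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = f * baseline / disparity          # depth from the parallax principle
    x = (u_left - cx) * z / f             # back-project through the pinhole
    y = (v_left - cy) * z / f
    return x, y, z

# Example with made-up calibration numbers.
print(stereo_to_3d(700, 420, 35.0, f=800.0, cx=640.0, cy=360.0,
                   baseline=0.12))   # -> (~0.206, ~0.206, ~2.743)
```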
Preferably, the image segmentation unit 234 segments the target object out of the preprocessed target images as follows:
(1) The two target object images acquired by the binocular camera 22 are labelled R_1 and R_2 respectively, and the preprocessed target object images obtained after processing by the image preprocessing unit 232 are labelled R'_1 and R'_2 respectively.
(2) An image segmentation algorithm is applied to the preprocessed target object image R'_1 to segment the target object, each pixel of R'_1 being assigned a label K(p_i), where i = 1, 2, ..., I, I is the total number of pixels, K(p_i) = 1 indicates that the pixel belongs to the foreground (the target object), and K(p_i) = 0 indicates that the pixel belongs to the background.
(3) A contour extraction algorithm extracts the contour set C = {C_1, C_2, ..., C_Z} of the target object, where each contour C_z represents a closed curve, z = 1, 2, ..., Z, and Z is the total number of contours in the set. Each contour records the pixel positions on the target object boundary, C_z = {p_1, p_2, ..., p_Lz}, where L_z is the total number of pixels in contour C_z.
(4) Each contour C_z of the contour set C in the preprocessed image R'_1 is mapped into the preprocessed image R'_2 to obtain the boundary contour of the target object in R'_2, specifically:
(41) A state transition matrix M of size L_z × D is constructed, where D is the number of disparity values in the search range, the disparity d_j ∈ [d_min, d_max], and d_min and d_max are the minimum and maximum disparity values. The value of each element M(i, j) is E_st(i, j), the state energy of pixel p_i in R'_1 paired with the pixel in R'_2 at disparity d_j.
(42) The state energy E_st(i, j) of each element M(i, j) is obtained from a custom state energy formula. E_st(i, j) is accumulated along the contour from the objective energy function E(p_i, q_j) of pixel p_i and its candidate correspondence q_j, where q_j = p_i − d_j denotes the pixel in R'_2 corresponding to pixel p_i in R'_1 at disparity d_j. The objective energy E(p_i, q_j) combines: the visual matching cost C_s(p_i, q_j), computed over a local window Φ(p_i) of size w × w centred on pixel p_i from the R/G/B chromatic values c_h(p_x) and c_h(q_y) of the foreground window pixels (those with K(p_x) = 1, where q_y = p_x − d_j); the object boundary matching cost C_O(p_i, q_j), weighted by the object boundary weight ω_0 and based on the posterior probability Pr(O|q_y) that pixel q_y belongs to the foreground; and the smoothness cost N(p_i − p_{i−1}) between consecutive contour pixels p_i and p_{i−1}, weighted by the object smoothing weight ω_1 and governed by a set disparity discontinuity threshold β_d.
(43) A backtracking algorithm is applied to the state transition matrix M to obtain the optimal energy path; each pixel p_i corresponding to an element M(i, j) on the optimal path is mapped into R'_2 as the unique corresponding contour point p_i − d_j, yielding the best-matching contour C'_z in the preprocessed image R'_2 for contour C_z of the preprocessed image R'_1.
(44) Repeating this mapping for every contour gives the best-matching contours of all boundary contours of the target object in the preprocessed image R'_2, and the target object is segmented out of R'_2 according to the mapped best-matching contours.
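The dynamic-programming structure of steps (41)-(43) can be sketched as follows. The state energy used here is a deliberately simple stand-in (an absolute intensity difference plus a disparity-smoothness penalty capped at β_d) rather than the full E_st formulation with C_s, C_O and N; the sketch only shows how the L_z × D state matrix is filled and traced back so that each contour pixel p_i is mapped to p_i − d_j.

```python
import numpy as np

def match_contour(left_vals, right_vals, contour, d_min, d_max, beta_d=2):
    """Map contour pixels in image 1 to best-matching pixels in image 2.

    left_vals, right_vals : 1-D intensity profiles along the same
                            (rectified) scanline in the two images
    contour               : x-coordinates p_i of the contour pixels in image 1
    d_min, d_max          : disparity search range [d_min, d_max]
    beta_d                : disparity discontinuity threshold (smoothness cap)
    """
    disparities = np.arange(d_min, d_max + 1)
    Lz, D = len(contour), len(disparities)
    M = np.full((Lz, D), np.inf)          # state transition matrix
    back = np.zeros((Lz, D), dtype=int)   # backtracking pointers

    def data_cost(i, j):
        q = contour[i] - disparities[j]   # q_j = p_i - d_j
        if q < 0 or q >= len(right_vals):
            return np.inf
        return abs(float(left_vals[contour[i]]) - float(right_vals[q]))

    M[0] = [data_cost(0, j) for j in range(D)]
    for i in range(1, Lz):
        for j in range(D):
            # Smoothness: penalise disparity jumps, capped at beta_d.
            trans = M[i - 1] + np.minimum(
                np.abs(disparities - disparities[j]), beta_d)
            back[i, j] = int(np.argmin(trans))
            M[i, j] = trans[back[i, j]] + data_cost(i, j)

    # Backtrack the optimal energy path through M.
    path = [int(np.argmin(M[-1]))]
    for i in range(Lz - 1, 0, -1):
        path.append(back[i, path[-1]])
    path.reverse()
    # Each contour pixel p_i maps to p_i - d_j in image 2.
    return [int(contour[i] - disparities[j]) for i, j in enumerate(path)]

# Example: a two-pixel contour matched between two short profiles.
left = np.array([10, 10, 50, 50, 10, 10, 10])
right = np.array([10, 50, 50, 10, 10, 10, 10])
print(match_contour(left, right, contour=[2, 3], d_min=0, d_max=2))  # [1, 2]
```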
In this preferred embodiment, the above method is used to segment the target object from the images obtained by the binocular camera. It overcomes the parallax between the two images acquired by the two cameras of the binocular camera, and accurately and efficiently extracts the contour of the target object in both images, so that the objects segmented from the two images correspond closely. This lays a solid foundation for the subsequent modules to obtain an accurate three-dimensional position of the target object.
Preferably, the image segmentation algorithm applied to the preprocessed target object image R'_1 to segment the target object specifically includes:
(1) Threshold segmentation is applied to the image, and the bounding rectangle of the target object is taken as the initial image contour. A control point v is placed every 4 pixels along the bounding rectangle, giving the initial contour control points S = (v_1, v_2, ..., v_n), and the centre of the bounding rectangle is taken as the active contour centre τ = (X_τ, Y_τ).
(2) A custom energy equation gives the energy of each contour control point at each of its neighbourhood positions:
E(i, j) = α(i)·E_h1(i, j) + β(i)·E_h2(i, j) + γ(i)·E_edge(i, j) + ε(i)·E_r(i, j)
where E(i, j) is the energy of contour control point v_i at its j-th neighbourhood position, i indexes the i-th contour control point, and j = 1, 2, 3, 4 indexes the 4 neighbourhood pixels v_{i,j} adjacent to control point v_i. E_h1(i, j) is the first-order continuity constraint force, computed from the average distance between the contour control points and the distance |v_{i,j} − v_{i−1}| between neighbourhood pixel v_{i,j} and control point v_{i−1}; α(i) is the set first-order discrete coefficient. E_h2(i, j) is the second-order continuity constraint force, E_h2(i, j) = (|v_{i−1} − v_{i,j}| + |v_{i,j} − v_{i+1}|)², where |v_{i,j} − v_{i+1}| is the distance between neighbourhood pixel v_{i,j} and control point v_{i+1}; β(i) is the set second-order discrete coefficient. E_edge(i, j) is the edge energy, computed along the contour line L(v_{i,j}, v_{i−1}) between neighbourhood pixel v_{i,j} and control point v_{i−1} from the gradient values I(x, y) of its pixels, the average gradient value of all pixels on that contour line, the total number n of pixels on that contour line, and the set edge energy factor μ; γ(i) is the edge energy coefficient. E_r(i, j) is an additional control force, E_r(i, j) = |H(v_{i,j}) − H(v_i)|², where H(v_{i,j}) and H(v_i) are the grey values of neighbourhood pixel v_{i,j} and control point v_i respectively; ε(i) is the additional control force coefficient, determined from the average grey value of the neighbourhood of control point v_i, the average grey value and variance of the whole image, and the set grey decision factors.
(3) If there is a contour control point v_i whose energy E(i, j) at some neighbourhood position is less than the set energy threshold E_Y(i), control point v_i is moved to the position of the corresponding neighbourhood pixel v_{i,j}, and the energy threshold is updated to E_Y(i) = E(i, j).
(4) The total number P of contour control point moves is counted.
(5) If P is less than the set threshold, or the set maximum number of iterations is reached, the current contour control points are connected in sequence to form the target object contour, which is used for the segmentation; otherwise, steps (2)-(5) are repeated.
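The iteration of steps (2)-(5) follows the greedy active-contour pattern, sketched below with a simplified energy (a first-order continuity term plus an image-gradient edge term with fixed coefficients) in place of the full adaptive energy equation; it is meant only to show how each control point moves to its lowest-energy neighbour until the contour stabilizes.

```python
import numpy as np

def greedy_snake(grad_mag, points, alpha=1.0, gamma=1.0,
                 max_iter=100, min_moves=2):
    """Greedy active contour: each control point moves to the
    4-neighbour with the lowest energy; iteration stops when fewer
    than min_moves points move or max_iter is reached.

    grad_mag : 2-D array of image gradient magnitude (edge strength)
    points   : list of (x, y) control points on the initial contour
    """
    pts = [list(p) for p in points]
    h, w = grad_mag.shape
    for _ in range(max_iter):
        # Average spacing between control points (continuity term).
        d_bar = np.mean([np.hypot(pts[i][0] - pts[i - 1][0],
                                  pts[i][1] - pts[i - 1][1])
                         for i in range(len(pts))])
        moves = 0
        for i in range(len(pts)):
            prev = pts[i - 1]              # closed contour: -1 wraps around
            best_xy, best_e = tuple(pts[i]), None
            for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
                x, y = pts[i][0] + dx, pts[i][1] + dy
                if not (0 <= x < w and 0 <= y < h):
                    continue
                # Continuity: keep spacing close to the average d_bar.
                e_cont = (d_bar - np.hypot(x - prev[0], y - prev[1])) ** 2
                # Edge term: strong gradients lower the energy.
                e_edge = -grad_mag[y, x]
                e = alpha * e_cont + gamma * e_edge
                if best_e is None or e < best_e:
                    best_xy, best_e = (x, y), e
            if best_xy != tuple(pts[i]):
                pts[i] = list(best_xy)
                moves += 1
        if moves < min_moves:
            break
    return [tuple(p) for p in pts]
```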
In this preferred embodiment, the above method first segments the target object in one of the two images obtained from the binocular camera: starting from the bounding rectangle produced by threshold segmentation, the contour is contracted step by step through iteration until the final contour of the target object is obtained. The method adapts well to the contour shapes of different target objects and is highly accurate, laying a solid foundation for the subsequent modules to extract the contour from the other image of the binocular camera and to locate the target object accurately in three dimensions.
Preferably, the contrast enhancement processing applied by the image preprocessing unit 232 to the acquired target object image specifically includes:
(1) The RGB grey values of each pixel (i, j) in the target object image are converted to the HSV colour space.
(2) For each pixel (i, j) in the target object image, an 8 × 8 neighbourhood image block centred on pixel (i, j) is selected and a wavelet transform is applied to it; the local noise level L_n(i, j) of pixel (i, j) is then obtained as a median statistic, Median{ }, over the set of absolute values of the first-level HH sub-band coefficients of the wavelet transform of the neighbourhood block.
(3) The background value B(i, j) and gradient value G(i, j) of pixel (i, j) are obtained, where B(i, j) is computed from the brightness values V(i+a, j+b) of the neighbouring pixels in the HSV colour space, and G(i, j) is computed from the horizontal direction gradient G_x(i, j) and the vertical direction gradient G_y(i, j) of pixel (i, j).
If the enhancement condition on the local noise level L_n(i, j) is satisfied, the enhanced background value B'(i, j) and enhanced gradient value G'(i, j) are obtained by an empirical function, where μ is the set enhancement threshold, η is the set enhancement effect adjustment factor, B'(i, j) is the enhanced background value of pixel (i, j), G'(i, j) is the enhanced gradient value of pixel (i, j), and C_ab denotes the set empirical scalar factors, each C_ab being a 2 × 1 coefficient vector, so that C_ab contains 20 empirical scalar factors in total.
Otherwise, B'(i, j) = B(i, j) and G'(i, j) = G(i, j) are set.
Preferably, μ = 1 and η = 3.
(4) The contrast enhancement model parameters σ(i, j) and ζ(i, j) are obtained from a weighted fit over the local neighbourhood, where ω(i, j, i', j') is a weight coefficient, Ω(i, j) is the local neighbourhood set of pixel (i, j), here chosen as the 3 × 3 matrix centred on (i, j); G(i', j') and G'(i', j') are the gradient values of pixel (i', j') before and after enhancement, B(i', j') and B'(i', j') are the background values of pixel (i', j') before and after enhancement, C(i, j) is a normalization coefficient, and the blur-control factors of the spatial domain and the value domain control the weights.
(5) The contrast of the target object image is enhanced with the following contrast enhancement model:
V'(i, j) = σ(i, j)·V(i, j) + ζ(i, j)
where V'(i, j) is the brightness value of pixel (i, j) in the HSV colour space after contrast enhancement, V(i, j) is the brightness value of pixel (i, j) in the HSV colour space before contrast enhancement, and σ(i, j) and ζ(i, j) are the contrast enhancement model parameters.
(6) Each enhanced pixel is converted from the HSV colour space back to the RGB colour space, giving the enhanced target object image.
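The per-pixel contrast model of step (5), V'(i, j) = σ(i, j)·V(i, j) + ζ(i, j), can be sketched as follows on the HSV value channel; the gain and offset here come from simple global statistics rather than the empirical functions of this embodiment, and images are assumed to be floats in [0, 1].

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def enhance_contrast(rgb, strength=1.5):
    """Contrast enhancement on the HSV value channel:
    V' = sigma * V + zeta, with a simple statistics-based gain.

    rgb : float array of shape (H, W, 3) with values in [0, 1]
    """
    hsv = rgb_to_hsv(rgb)
    v = hsv[..., 2]
    mean, std = v.mean(), v.std() + 1e-6
    # Gain sigma stretches deviations from the mean brightness;
    # offset zeta recentres so V' = mean + sigma * (V - mean).
    sigma = 1.0 + (strength - 1.0) * np.clip(np.abs(v - mean) / std, 0, 1)
    zeta = mean * (1.0 - sigma)
    hsv[..., 2] = np.clip(sigma * v + zeta, 0.0, 1.0)
    return hsv_to_rgb(hsv)

# Example on a synthetic low-contrast image.
img = 0.4 + 0.2 * np.random.rand(64, 64, 3)
out = enhance_contrast(img)
```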
In this preferred embodiment, the above method applies contrast enhancement to the target object image: according to the brightness of each pixel in the image, the contrast enhancement model adaptively enhances the contrast, strengthening the non-noise detail in the image and highlighting the detail of the target object. The enhancement is effective and adaptable, laying a solid foundation for the system's subsequent processing of the target object image.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit its scope of protection. Although the invention has been explained in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent replacements may be made to the technical solutions of the invention without departing from the substance and scope of those technical solutions.