CN108217045B - Intelligent robot for racking and unracking data center physical equipment - Google Patents

Intelligent robot for racking and unracking data center physical equipment

Info

Publication number
CN108217045B
CN108217045B
Authority
CN
China
Prior art keywords
target object
image
pixel
indicate
information
Prior art date
Legal status
Active
Application number
CN201810005571.2A
Other languages
Chinese (zh)
Other versions
CN108217045A (en)
Inventor
高明
王柏勇
黎炼
张志亮
李硕
林克全
关文坚
梅永坚
Current Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Ke Teng Information Technology Co Ltd
Guangzhou Power Supply Bureau Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Ke Teng Information Technology Co Ltd and Guangzhou Power Supply Bureau Co Ltd
Priority to CN201810005571.2A
Publication of CN108217045A
Application granted
Publication of CN108217045B
Legal status: Active
Anticipated expiration


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G: TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 1/00: Storing articles, individually or in orderly arrangement, in warehouses or magazines
    • B65G 1/02: Storage devices
    • B65G 1/04: Storage devices, mechanical
    • B65G 1/137: Storage devices, mechanical, with arrangements or automatic control means for selecting which articles are to be removed
    • B65G 1/1373: Storage devices, mechanical, with arrangements or automatic control means for selecting which articles are to be removed, for fulfilling orders in warehouses

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an intelligent robot for racking and unracking data center physical equipment, comprising a robot body and a mechanical handling device. The robot body includes: a laser radar (lidar), for scanning and acquiring environmental information about the robot body's working environment; a central processing unit, for constructing a grid map from the environmental information obtained from the lidar, obtaining the robot's own pose information, performing path planning, and issuing corresponding movement instructions to the motion control device; a motion control device, for controlling the movement device according to the movement instructions issued by the central processing unit; a movement device, equipped with wheels, for moving the robot body; and an odometer, for measuring the travel information of the robot body. The mechanical handling device is arranged on the robot body and performs equipment carrying and racking and unracking operations. The present invention replaces manual labor in racking and unracking server equipment and reduces labor costs.

Description

Intelligent robot for racking and unracking data center physical equipment
Technical field
The present invention relates to the field of intelligent equipment installation, and in particular to an intelligent robot for racking and unracking data center physical equipment.
Background technique
The scale of enterprise IT systems keeps growing, and the number of devices in data centers grows with it. Facing ever more IT physical equipment and the change requests that business development brings, daily maintenance involves a large demand for racking and unracking servers. Completing these operations intelligently improves how quickly services come online and better supports business development.
In the prior art, data center servers are generally racked and unracked manually: an operator must first look up the server's cabinet position and U position, carry the server to that location, install it at the determined position, and record the server's location by hand after installation.
The above method has the following shortcomings:
(1) Low efficiency: the operator must look up the server's location and walk to the equipment cabinet before each operation.
(2) High labor cost: the method is entirely manual, and the weight of a typical server generally requires three adults to complete the installation together.
Summary of the invention
In view of the above problems, the present invention aims to provide an intelligent robot for racking and unracking data center physical equipment.
The purpose of the present invention is achieved by the following technical scheme:
An intelligent robot for racking and unracking data center physical equipment comprises a robot body and a mechanical handling device.
The robot body includes a central processing unit, a lidar, an odometer, a motion control device and a movement device.
The lidar scans and acquires environmental information about the robot body's working environment.
The central processing unit constructs a grid map from the environmental information obtained from the lidar, obtains the robot's own pose information, performs path planning, and issues corresponding movement instructions to the motion control device.
The motion control device controls the movement device according to the movement instructions issued by the central processing unit.
The movement device is equipped with wheels and moves the robot body.
The odometer measures the travel information of the robot body.
The mechanical handling device is arranged on the robot body and performs equipment carrying and racking and unracking operations.
Preferably, the central processing unit includes a storage module, a map construction and localization module, a path planning module and an instruction sending module.
The map construction and localization module constructs a grid map from the environmental information acquired from the lidar and obtains the robot's own pose information.
The storage module stores the grid map information.
The path planning module performs path planning according to the grid map information, the robot's own pose information and the destination information, obtaining path planning information.
The instruction sending module generates corresponding movement instructions from the grid map information, the robot's own pose information and the path planning information, and sends them to the motion control device.
Preferably, the mechanical handling device includes a binocular camera, a radio frequency identification module, an image processing module, a mechanical control module and a manipulator structure.
The radio frequency identification module scans and identifies the label information on equipment or cabinets to confirm the target object, the target object being a target device or target cabinet.
The binocular camera collects target object images in the working region of the mechanical handling device.
The image processing module processes the acquired target object images and obtains the location information of the target object.
The mechanical control module issues corresponding control instructions to the manipulator structure according to the acquired location information of the target object.
The manipulator structure grasps the target device according to the received control instructions and completes the racking and unracking operation of the target device.
The invention has the following benefits: the intelligent robot for racking and unracking data center physical equipment provided by the invention replaces manual racking and unracking of server equipment. It localizes itself according to the position of the server to be racked or unracked, accurately finds the server's location, and completes the racking or unracking automatically through the handling device, reducing labor costs.
Brief description of the drawings
The present invention will be further described with reference to the accompanying drawings. The embodiments in the drawings do not constitute any limitation of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a block diagram of the invention;
Fig. 2 is a block diagram of the mechanical handling device;
Fig. 3 is a block diagram of the central processing unit;
Fig. 4 is a block diagram of the image processing module.
Reference numerals:
robot body 1, mechanical handling device 2, lidar 11, central processing unit 12, motion control device 13, movement device 14, odometer 15, radio frequency identification module 21, binocular camera 22, image processing module 23, mechanical control module 24, manipulator structure 25, storage module 120, map construction and localization module 122, path planning module 124, instruction sending module 126, camera calibration unit 230, image pre-processing unit 232, image segmentation unit 234, target localization unit 236
Specific embodiments
The invention will be further described in conjunction with the following application scenarios.
Referring to Fig. 1, an intelligent robot for racking and unracking data center physical equipment comprises a robot body 1 and a mechanical handling device 2.
The robot body 1 includes a lidar 11, a central processing unit 12, a motion control device 13, a movement device 14 and an odometer 15.
The lidar 11 scans and acquires environmental information about the working environment of the robot body 1.
The central processing unit 12 constructs a grid map from the environmental information obtained from the lidar 11, obtains the robot's own pose information, performs path planning according to the positions of equipment and cabinets in the data center, and issues corresponding movement instructions to the motion control device 13.
The motion control device 13 controls the movement device 14 according to the movement instructions issued by the central processing unit 12.
The movement device 14 is equipped with wheels and moves the robot body 1.
The odometer 15 measures the travel information of the robot body 1.
The mechanical handling device 2 is arranged on the robot body 1 and performs equipment carrying and racking and unracking operations.
Preferably, the central processing unit 12 includes a storage module 120, a map construction and localization module 122, a path planning module 124 and an instruction sending module 126.
The map construction and localization module 122 constructs a grid map from the environmental information acquired from the lidar 11 and obtains the robot's own pose information.
The storage module 120 stores the grid map information.
The path planning module 124 performs path planning according to the grid map information, the robot's own pose information and the destination information, obtaining path planning information.
The instruction sending module 126 generates corresponding movement instructions from the grid map information, the robot's own pose information and the path planning information, and sends them to the motion control device 13.
The destination information indicates the location of the equipment or cabinet and can be set or entered manually.
The map construction and localization module uses SLAM (simultaneous localization and mapping) technology to update the grid map and obtain the robot's current location.
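The grid-map update at the heart of the SLAM step can be sketched with a minimal occupancy grid. The patent gives no code, so this is an assumed, simplified implementation: each lidar ray marks the cells it crosses as free and its endpoint cell as occupied via log-odds accumulation. The function names (`update_grid`, `bresenham`), the cell size and the log-odds increments are all illustrative choices, not the patent's values.

```python
import math

L_FREE, L_OCC = -0.4, 0.85  # illustrative log-odds increments

def bresenham(x0, y0, x1, y1):
    """Integer grid cells crossed by a ray from (x0, y0) up to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def update_grid(grid, pose, ranges, angles, cell=0.1):
    """Fold one lidar scan into a log-odds grid at pose (x, y, theta)."""
    x, y, th = pose
    gx0, gy0 = int(x / cell), int(y / cell)
    for r, a in zip(ranges, angles):
        ex = x + r * math.cos(th + a)
        ey = y + r * math.sin(th + a)
        gx1, gy1 = int(ex / cell), int(ey / cell)
        for c in bresenham(gx0, gy0, gx1, gy1):
            grid[c] = grid.get(c, 0.0) + L_FREE      # cells along the ray: free
        grid[(gx1, gy1)] = grid.get((gx1, gy1), 0.0) + L_OCC  # endpoint: occupied
    return grid
```

A full SLAM system would additionally correct the pose against the map (scan matching) before each update; here the pose is taken as given to keep the sketch short.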
In the above embodiment, the lidar obtains the environmental information around the intelligent robot; the map construction and localization module processes the collected environmental information, constructs a grid map of the robot's working environment, and accurately localizes the robot from the further information obtained from the lidar; the path planning module plans an accurate path according to the positions of equipment and cabinets in the data center; and the motion control device drives the robot along the planned path to the specified position to perform racking and unracking. The robot is highly intelligent, adapts well to its environment, and can effectively reduce labor costs.
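The path planning module's behaviour on a grid map can be illustrated with A*. The patent does not name a planning algorithm, so A* with a Manhattan heuristic is an assumption; the grid encoding (0 = free cell, 1 = obstacle) and the function name `astar` are likewise illustrative.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    openq = [(0, start)]
    came, g = {start: None}, {start: 0}
    while openq:
        _, cur = heapq.heappop(openq)
        if cur == goal:                 # reconstruct path by walking backpointers
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # admissible heuristic
                    heapq.heappush(openq, (ng + h, (nr, nc)))
    return None
```

In practice the robot would replan whenever the SLAM map changes; the admissible Manhattan heuristic guarantees the returned path is shortest on the 4-connected grid.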
Preferably, the mechanical handling device 2 includes a radio frequency identification module 21, a binocular camera 22, an image processing module 23, a mechanical control module 24 and a manipulator structure 25.
The radio frequency identification module 21 scans and identifies the label information on equipment or cabinets to confirm the target object, the target object being a target device or target cabinet.
The binocular camera 22 collects target object images in the working region of the mechanical handling device 2; its left and right cameras simultaneously acquire image information of the working region.
The image processing module 23 processes the acquired target object images and obtains the location information of the target object.
The mechanical control module 24 issues corresponding control instructions to the manipulator structure 25 according to the acquired location information of the target object.
The manipulator structure 25 grasps the target device according to the received control instructions and completes the racking and unracking operation of the target device.
In the above embodiment, the binocular camera obtains image information of the working region of the mechanical handling device, and the radio frequency identification module identifies the label information on equipment or cabinets, so the robot can accurately judge whether the equipment or cabinet in the working region is the object to be racked or unracked. The image processing module then obtains the location information of the target object, and the mechanical control module controls the manipulator structure to complete the corresponding racking or unracking operation. The mechanical handling device is adaptable and accurate, and guarantees that the intelligent robot completes the racking and unracking of equipment.
Preferably, the image processing module 23 includes a camera calibration unit 230, an image pre-processing unit 232, an image segmentation unit 234 and a target localization unit 236.
The camera calibration unit 230 calibrates the binocular camera 22, obtains the internal and external parameters of the cameras, and determines the mapping between the two-dimensional image coordinate system and the three-dimensional world coordinate system.
The image pre-processing unit 232 applies contrast enhancement, smoothing and image enhancement pre-processing to the acquired target object images, obtaining pre-processed target object images.
The image segmentation unit 234 segments the target object out of the pre-processed target images.
The target localization unit 236 takes the coordinate position of the target object in the image and, combining the camera calibration results from the camera calibration unit 230 with the binocular parallax principle, transforms the two-dimensional coordinates into true three-dimensional space coordinates, obtaining the three-dimensional location of the target object.
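For an ideal rectified stereo pair, the two-dimensional to three-dimensional mapping via the binocular parallax principle reduces to the standard pinhole relations Z = f·B/d, X = (x_L − c_x)·Z/f, Y = (y − c_y)·Z/f, where d = x_L − x_R is the disparity, f the focal length in pixels, B the baseline, and (c_x, c_y) the principal point. The sketch below assumes this rectified model; the patent's calibration step would in general also handle lens distortion and unrectified geometry, which are omitted here.

```python
def triangulate(xl, xr, y, f, B, cx, cy):
    """Back-project a matched pixel pair (xl, y) / (xr, y) to 3D camera coordinates.

    Assumes a rectified binocular rig: f in pixels, baseline B in meters,
    principal point (cx, cy). All names are illustrative.
    """
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f * B / d                    # depth from the parallax principle
    X = (xl - cx) * Z / f            # lateral offset
    Y = (y - cy) * Z / f             # vertical offset
    return X, Y, Z
```

A 42-pixel disparity with f = 700 px and B = 0.12 m, for example, places the point 2 m in front of the cameras.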
Preferably, the image segmentation unit 234 segments the target object from the pre-processed target images as follows:
(1) Label the two target object images acquired by the binocular camera 22 as R_1 and R_2, and label the pre-processed images obtained after processing by the image pre-processing unit 232 as R'_1 and R'_2 respectively.
(2) Segment the target object in the pre-processed image R'_1 with an image segmentation algorithm, where each pixel p_i in R'_1 carries a label K(p_i), i = 1, 2, ..., I, I being the total number of pixels; K(p_i) = 1 indicates that the pixel is foreground (target object) and K(p_i) = 0 indicates background.
(3) Extract the contour set C = {C_1, C_2, ..., C_Z} of the target object with a contour extraction algorithm, where each contour C_z is a closed curve, z = 1, 2, ..., Z, Z is the number of contours in the set, and C_z = {p_1, p_2, ..., p_Lz} records the pixel positions on the target object boundary, Lz being the number of pixels in contour C_z.
(4) Map each contour C_z of the contour set C of R'_1 into the pre-processed image R'_2, obtaining the boundary contour of the target object in R'_2, specifically:
(41) Construct a state transition matrix M of size Lz × D, where D is the disparity range, d ∈ [d_min, d_max], with d_min and d_max the minimum and maximum disparity values; the value of each element M(i, j) is the state energy E_st(i, j) of pixel p_i in R'_1 matched against the pixel in R'_2 at disparity d_j.
(42) Obtain the state energy E_st(i, j) of each element M(i, j) from a custom state energy formula built from the following terms: q_j = p_i − d_j is the pixel in R'_2 corresponding to p_i at disparity d_j; E(p_i, q_j) is the objective energy function of p_i and q_j; C_s(p_i, q_j) is the visual matching cost of p_i and q_j, computed over a local window Φ(p_i) of size w × w centered on p_i, where K(p_x) = 1 indicates that pixel p_x belongs to the foreground, q_y = p_x − d_j, and c_h(p_x) and c_h(q_y) are the R/G/B chromatic values of p_x and q_y; ω_0 is the object boundary weight and ω_1 the object smoothing weight; C_O(p_i, q_j) is the object boundary matching cost, where Pr(O | q_y) is the posterior probability that pixel q_y belongs to the foreground; and N(p_i − p_{i−1}) is the smoothness cost of pixels p_i and p_{i−1}, with β_d the set disparity discontinuity threshold.
(43) Apply a backtracking algorithm to the state transition matrix M to obtain the optimal energy path, and map the pixel p_i corresponding to each M(i, j) on that path into R'_2 as the unique corresponding contour point p_i − d_j, obtaining the best-matching contour C'_z in R'_2 for each contour C_z of R'_1.
(44) Obtain in R'_2 the best-matching contours of all the boundary contours of the target object, and segment the target object according to the mapped best-matching contours.
In this preferred embodiment, the above method segments the target object in the images obtained from the binocular camera. It overcomes the parallax between the two images acquired by the two cameras, extracts the contour of the target object accurately and efficiently in both images, and thereby obtains well-matched segmentations of the target object in the two images, laying the foundation for the subsequent modules to obtain an accurate three-dimensional position of the target object.
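The optimal energy path of step (4) can be illustrated by a generic dynamic-programming pass over the Lz × D state matrix followed by backtracking. The cost terms below are placeholders (a plain per-state matching cost plus an |Δd| smoothness penalty between consecutive contour points), not the patent's E_st formula; `optimal_disparities` and the `smooth` weight are assumed names.

```python
def optimal_disparities(M, smooth=1.0):
    """Pick one disparity index per contour point by DP with backtracking.

    M[i][j] is the matching cost of contour point i at disparity index j;
    consecutive points pay smooth * |j - k| for changing disparity.
    """
    n, D = len(M), len(M[0])
    cost = [M[0][:]]                  # accumulated cost per state
    back = []                         # backpointers, one row per transition
    for i in range(1, n):
        row, bp = [], []
        for j in range(D):
            best_k = min(range(D), key=lambda k: cost[-1][k] + smooth * abs(j - k))
            row.append(M[i][j] + cost[-1][best_k] + smooth * abs(j - best_k))
            bp.append(best_k)
        cost.append(row)
        back.append(bp)
    j = min(range(D), key=lambda k: cost[-1][k])   # cheapest final state
    path = [j]
    for bp in reversed(back):                      # backtrack to the first point
        j = bp[j]
        path.append(j)
    return path[::-1]
```

The O(Lz·D²) inner minimum can be reduced to O(Lz·D) with a distance-transform trick, but the naive form keeps the backtracking structure visible.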
Preferably, segmenting the target object in the pre-processed image R'_1 with an image segmentation algorithm specifically includes:
(1) Apply threshold segmentation to the image and take the bounding rectangle of the target object as the initial image contour; place one control point v every 4 pixels along the rectangle as the initial contour control points S = (v_1, v_2, ..., v_n), and set the center of the rectangle as the active contour center τ = (X_τ, Y_τ).
(2) Obtain the energy value of each contour control point at its neighbourhood positions with a custom energy equation:
E_{i,j} = α(i)·E_ih1(i, j) + β(i)·E_ih2(i, j) + γ(i)·E_edge(i, j) + ε(i)·E_r(i, j)
where E_{i,j} is the energy of contour control point v_i at its neighbourhood position j; i indexes the i-th control point; j = 1, 2, 3, 4 indexes the four neighbourhood pixels v_{i,j} around v_i; E_ih1(i, j) is the first-order continuity constraint force, built from the average distance between the control points and the distance |v_{i,j} − v_{i−1}| between neighbourhood pixel v_{i,j} and control point v_{i−1}, with α(i) the set first-order discrete coefficient; E_ih2(i, j) = (|v_{i−1} − v_{i,j}| + |v_{i,j} − v_{i+1}|)² is the second-order continuity constraint force, with β(i) the set second-order discrete coefficient; E_edge(i, j) is the edge energy, where L(v_{i,j}, v_{i−1}) is the contour line between neighbourhood pixel v_{i,j} and control point v_{i−1}, I(x, y) is the gradient value of pixel (x, y), the average gradient value is taken over all n pixels on that contour line, μ is the set edge energy factor, and γ(i) is the edge energy coefficient; E_r(i, j) = |H(v_{i,j}) − H(v_i)|² is the additional control force, where H(v_{i,j}) and H(v_i) are the gray values of neighbourhood pixel v_{i,j} and control point v_i, and ε(i) is the additional control force coefficient, built from the average gray of the neighbourhood of v_i, the mean gray and variance of the whole image, and the set gray decision factors.
(3) If there exists a neighbourhood position whose energy E_{i,j} is less than the set energy threshold E_iY, move the control point v_i to the corresponding neighbourhood pixel v_{i,j} and set E_iY = E_{i,j}.
(4) Count the total number P of control-point moves.
(5) If P is less than the set threshold, or the maximum number of iterations has been reached, connect the current control points in sequence as the target object contour and perform segmentation; otherwise, repeat steps (2)-(5).
In this preferred embodiment, the above method segments the target object in one of the two images obtained from the binocular camera: the bounding rectangle obtained from threshold segmentation is contracted step by step by iteration until the final contour of the target object is obtained. The method adapts well to the contour shapes of different target objects, is robust and accurate, and lays the foundation for the subsequent modules to extract the contour in the other image and localize the target object accurately in three dimensions.
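The iterative contraction described above can be sketched as a greedy active-contour (snake) step: each control point moves to whichever neighbour lowers its energy, and the loop stops when no point moves or the iteration limit is hit. The toy energy here (distance to the contour centroid as the shrink force, plus a caller-supplied image cost) is a stand-in for the patent's four-term energy equation; `shrink_contour` and its parameters are assumed names.

```python
def shrink_contour(points, image_cost, iters=50):
    """Greedily contract contour control points toward low-energy positions.

    image_cost(p) plays the role of the edge/image terms; the centroid
    distance plays the role of the continuity (shrink) terms.
    """
    pts = list(points)
    for _ in range(iters):
        moved = 0
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)

        def energy(p):
            return abs(p[0] - cx) + abs(p[1] - cy) + image_cost(p)

        for i, (x, y) in enumerate(pts):
            cand = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            best = min(cand, key=energy)
            if best != (x, y):
                pts[i] = best
                moved += 1
        if moved == 0:          # converged: no control point moved this pass
            break
    return pts
```

A real image cost would reward high gradient magnitude so the contour locks onto object edges instead of collapsing; with a zero image cost the points simply contract to the centroid, which makes the shrink force easy to verify.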
Preferably, the image pre-processing unit 232 applies contrast enhancement to the acquired target object images as follows:
(1) Convert the RGB gray values of each pixel (i, j) in the target object image to the HSV color space.
(2) For each pixel (i, j), select an 8 × 8 neighbourhood image block centered on (i, j) and apply a wavelet transform to it, obtaining the local noise level L_n(i, j) of pixel (i, j) as the median, Median{·}, of the set of absolute values of the first-level HH sub-band coefficients of the wavelet transform of the neighbourhood block.
(3) Obtain the background value B(i, j) and gradient value G(i, j) of pixel (i, j), where B(i, j) is computed from the brightness values V(i+a, j+b) of the neighbouring pixels in HSV space, and G(i, j) is computed from the horizontal gradient G_x(i, j) and the vertical gradient G_y(i, j) of pixel (i, j).
If the enhancement condition on pixel (i, j) is met, obtain the enhanced background value B'(i, j) and gradient value G'(i, j) with an empirical function, where μ is the set enhancement threshold, η is the set enhancement adjustment factor, L_n(i, j) is the local noise level of pixel (i, j), and C_ab are the set empirical scalar factors, each C_ab being a 2 × 1 coefficient vector, so that C_ab contains 20 empirical scalar factors in total;
otherwise, set B'(i, j) = B(i, j) and G'(i, j) = G(i, j).
Preferably, μ = 1 and η = 3.
(4) Obtain the contrast enhancement model parameters σ(i, j) and ζ(i, j), where ω(i, j, i′, j′) is a weight coefficient; Ω(i, j) is the local neighbourhood set of pixel (i, j), taken as the 3 × 3 matrix centered on (i, j); G(i′, j′) and G′(i′, j′) are the gradient values of pixel (i′, j′) before and after enhancement; B(i′, j′) and B′(i′, j′) are the background values of pixel (i′, j′) before and after enhancement; C(i, j) is a normalization coefficient; and the blur control factors of the spatial domain and the value domain are set parameters.
(5) Enhance the contrast of the target object image with the contrast enhancement model:
V′(i, j) = σ(i, j)·V(i, j) + ζ(i, j)
where V′(i, j) is the brightness value of pixel (i, j) in HSV space after contrast enhancement, V(i, j) is the brightness value of pixel (i, j) in HSV space before enhancement, and σ(i, j) and ζ(i, j) are the contrast enhancement model parameters.
(6) Transform each enhanced pixel from HSV space back to RGB space, obtaining the enhanced target object image.
In this preferred embodiment, the above method applies contrast enhancement to the target object image: according to the brightness of each pixel, the contrast enhancement model adaptively enhances the image contrast, strengthening the non-noise details and highlighting the detail of the target object. The enhancement is effective and adaptable, and lays the foundation for the system's further processing of the target object images.
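The gain/offset model V′ = σ·V + ζ of step (5) can be demonstrated with a toy choice of parameters. The patent derives σ(i, j) and ζ(i, j) per pixel from enhanced gradients and backgrounds; this sketch instead uses a single global gain that stretches the HSV value channel around its mean and clamps to [0, 1], purely to show the model's shape. `enhance` and `gain` are assumed names.

```python
def enhance(V, gain=1.5):
    """Stretch a 2D value-channel image around its mean: V' = mean + gain*(V - mean).

    Equivalent to V' = sigma*V + zeta with sigma = gain and
    zeta = (1 - gain)*mean, i.e. a constant-parameter instance of the model.
    Output is clamped to the valid HSV value range [0, 1].
    """
    n = len(V) * len(V[0])
    mean = sum(sum(row) for row in V) / n
    out = []
    for row in V:
        out.append([max(0.0, min(1.0, mean + gain * (v - mean))) for v in row])
    return out
```

With gain > 1, pixels brighter than the mean get brighter and darker pixels get darker, which is the contrast-stretching behaviour the adaptive σ, ζ generalize per pixel.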
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit its scope of protection. Although the invention has been explained in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solutions of the invention can be modified or equivalently replaced without departing from the essence and scope of the technical solutions of the invention.

Claims (2)

1. a kind of intelligent robot of the undercarriage on data center's physical equipment, which is characterized in that including robot body and Mechanical handing device;
The robot body includes laser radar, central processing unit, motion control device, telecontrol equipment and odometer;
The laser radar is for scanning and acquiring the environmental information of robot body in the work environment;
The central processing unit is used to construct grating map according to the environmental information obtained from the laser radar, obtains robot Itself posture information carries out path planning and issues corresponding move to the motion control device;
The move that the motion control device is used to be issued according to the central processing unit controls the telecontrol equipment;
The telecontrol equipment is equipped with wheel, mobile for controlling the robot body;
The odometer is used to measure the travel information of the robot body;
The mechanical handing device is arranged on the robot body, for carrying out the carrying and the operation of upper undercarriage of equipment;
Wherein, the mechanical handing device includes radio frequency identification module, binocular camera, image processing module, Mechanical course mould Block and robot manipulator structure;
The radio frequency identification module is used to scan and identify the label information on equipment or cabinet, confirmation target object, the mesh Marking object is target device or target cabinet;
The binocular camera is for the target object image in collection machinery handling device working region;
Described image processing module obtains the position letter of the target object for handling the target object image of acquisition Breath;
The machine control modules are used to be issued according to the location information of the target object of acquisition to the robot manipulator structure corresponding Control instruction;
The robot manipulator structure is used for according to the control instruction crawl target device received and the upper undercarriage for completing target device Operation;
Described image processing module further comprises camera calibration unit, image pre-processing unit, image segmentation unit and mesh Demarcate bit location;
The camera calibration unit obtains inner parameter and the outside of camera for demarcating to the binocular camera Parameter determines the mapping relations of two-dimensional image coordinate system and world three dimensional coordinate system;
The image preprocessing unit is used to perform contrast enhancement, smoothing, and image enhancement preprocessing on the collected target object images, obtaining preprocessed target object images;
The image segmentation unit is used to segment the target object from the preprocessed target images;
The target positioning unit is used to convert the two-dimensional coordinate position of the target object in the image into real three-dimensional space coordinates, in combination with the camera calibration results obtained from the camera calibration unit and the binocular parallax principle, thereby obtaining the three-dimensional position information of the target object;
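The conversion performed by the target positioning unit rests on standard rectified-stereo triangulation: Z = f·B/d, with X and Y recovered from the pinhole model. The sketch below is a minimal illustration of that principle only, not the patent's implementation; the focal length, baseline, and principal point values are invented for the example.

```python
def disparity_to_3d(u, v, d, f, baseline, cu, cv):
    """Convert a pixel (u, v) with stereo disparity d into 3D camera
    coordinates using rectified-stereo triangulation:
        Z = f * B / d,  X = (u - cu) * Z / f,  Y = (v - cv) * Z / f.
    f: focal length in pixels; baseline: camera separation in metres;
    (cu, cv): principal point, both obtained from camera calibration."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    z = f * baseline / d
    x = (u - cu) * z / f
    y = (v - cv) * z / f
    return x, y, z

# Example with assumed calibration values: f = 700 px, baseline = 0.12 m,
# principal point at (320, 240); a 35 px disparity puts the point 2.4 m away.
x, y, z = disparity_to_3d(400, 240, 35, 700.0, 0.12, 320.0, 240.0)
print(round(x, 3), round(y, 3), round(z, 3))  # → 0.274 0.0 2.4
```

A larger disparity means a nearer object, which is why the segmentation steps below search over a disparity range [dmin, dmax].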
The image segmentation unit is used to segment the target object from the preprocessed target images, which specifically includes:
(1) labeling the two target object images acquired by the binocular camera as target object image R1 and target object image R2 respectively, and labeling the preprocessed target object images obtained after R1 and R2 are processed by the image preprocessing unit as R'1 and R'2 respectively;
(2) segmenting the target object from the preprocessed target object image R'1 using an image segmentation algorithm, where each pixel of R'1 can be expressed as K(pi), i = 1, 2, …, I, with I denoting the total number of pixels; K(pi) = 1 indicates that the pixel is foreground (belongs to the target object), and K(pi) = 0 indicates that the pixel is background;
(3) extracting the contour set C = {C1, C2, …, CZ} of the target object using a contour extraction algorithm, where each contour Cz denotes a closed curve, z = 1, 2, …, Z, and Z denotes the total number of contours in the set; each contour records the positions of the pixels on the boundary of the target object, Cz = {p1, p2, …, pLz}, where Lz denotes the total number of pixels in contour Cz;
(4) mapping each contour Cz of contour set C in the preprocessed target object image R'1 into the preprocessed target object image R'2 to obtain the boundary contour of the target object in R'2, specifically:
(41) constructing a state transition matrix M of size Lz × D, where D denotes the range of candidate disparity values, with the disparity d ∈ [dmin, dmax] and dmin and dmax denoting the minimum and maximum disparity values respectively; the value of each element M(i, j) of the state transition matrix is Est(i, j), where Est(i, j) denotes the state energy of pixel pi in R'1 and its corresponding pixel in R'2 at disparity dj;
(42) obtaining the state energy Est(i, j) of each element M(i, j) in the state transition matrix M using a custom state-energy formula, in which: Est(i, j) denotes the state energy of pixel pi in R'1 and its corresponding pixel in R'2 at disparity dj; qj denotes the pixel in R'2 corresponding to pixel pi in R'1 at disparity dj, with qj = pi − dj; E(pi, qj) denotes the objective energy function of pixels pi and qj; Cs(pi, qj) denotes the visual matching cost of pixels pi and qj, where Φ(pi) denotes a local window of size w × w centered on pixel pi, K(px) = 1 indicates that pixel px belongs to the foreground, qy = px − dj, and ch(px) and ch(qy) denote the R/G/B chromatic values of pixels px and qy respectively; ω0 denotes the object boundary weight and ω1 denotes the object smoothing weight; CO(pi, qj) denotes the object boundary matching cost, where Pr(O|qy) denotes the posterior probability that pixel qy belongs to the foreground; and N(pi − pi−1) denotes the smoothness cost of pixels pi and pi−1, where βd denotes the set disparity discontinuity threshold;
(43) obtaining the optimal energy path through the state transition matrix M using a backtracking algorithm, and mapping the pixel pi corresponding to each element M(i, j) on the optimal energy path to its unique corresponding contour point pi − dj in R'2, thereby obtaining the best-match contour C'z in the preprocessed target object image R'2 for contour Cz of the preprocessed target object image R'1;
(44) obtaining in the preprocessed target object image R'2 the best-match contours of all boundary contours of the target object, and segmenting out the target object according to the mapped best-match contours.
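Steps (41) through (43) amount to a dynamic program over the Lz × D state matrix followed by backtracking: each contour pixel chooses a disparity, paying a matching cost plus a penalty for disparity jumps between consecutive pixels. The sketch below illustrates only that generic mechanics on a toy cost matrix; the patent's actual energy terms (Cs, CO, N, the weights ω0 and ω1, and the threshold βd) are not reproduced, and the simple absolute-difference penalty is an assumption for illustration.

```python
def best_disparity_path(energy, smooth_penalty=1.0):
    """Given energy[i][j] = matching cost of contour pixel i at disparity
    index j, find one disparity index per pixel minimising the total cost
    plus a penalty proportional to disparity changes between neighbours.
    Returns the list of disparity indices (one per contour pixel)."""
    n, d = len(energy), len(energy[0])
    cost = [row[:] for row in energy]       # accumulated state energy
    back = [[0] * d for _ in range(n)]      # backtracking pointers
    for i in range(1, n):
        for j in range(d):
            # best transition from any disparity j2 at the previous pixel
            best_j2 = min(range(d),
                          key=lambda j2: cost[i - 1][j2]
                          + smooth_penalty * abs(j - j2))
            back[i][j] = best_j2
            cost[i][j] = (energy[i][j] + cost[i - 1][best_j2]
                          + smooth_penalty * abs(j - best_j2))
    # backtrack from the cheapest final state to recover the optimal path
    j = min(range(d), key=lambda jj: cost[n - 1][jj])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]

# Toy 3-pixel contour with 3 candidate disparities each: the smooth
# low-cost column wins even though pixel 3's cheapest match differs.
energy = [[5, 1, 9],
          [4, 1, 8],
          [9, 1, 2]]
print(best_disparity_path(energy))  # → [1, 1, 1]
```

With the disparity index chosen per contour pixel, each pi maps to pi − dj in the second image, which is exactly the contour transfer of step (43).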
2. The intelligent robot for racking and unracking data center physical equipment according to claim 1, characterized in that the central processing unit includes a storage module, a map construction and localization module, a path planning module, and an instruction sending module;
The map construction and localization module is used to construct a grid map from the environmental information collected by the laser radar and to obtain the robot's own pose information;
The storage module is used to store the grid map information;
The path planning module is used to perform path planning according to the grid map information, the robot's own pose information, and the destination information, to obtain path planning information;
The instruction sending module is used to generate corresponding movement instructions according to the grid map information, the robot's own pose information, and the path planning information, and to send them to the motion control device.
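Claim 2's pipeline runs: grid map → pose → planned path → movement instructions. As an illustration of the planning step, the following sketch finds a shortest path on a toy occupancy grid with breadth-first search; the grid, start, and goal are invented for the example, and the patent does not specify a particular planning algorithm.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = occupied).
    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. BFS yields a shortest path in number of moves."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                      # rebuild path by walking parents
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# Toy 4x4 grid map with obstacle walls, planned from corner to corner.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = plan_path(grid, (0, 0), (3, 0))
print(len(path) - 1)  # number of moves → 9
```

Each step in the returned path would then be translated into a movement instruction for the motion control device.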
CN201810005571.2A 2018-01-03 2018-01-03 A kind of intelligent robot of the undercarriage on data center's physical equipment Active CN108217045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810005571.2A CN108217045B (en) 2018-01-03 2018-01-03 A kind of intelligent robot of the undercarriage on data center's physical equipment


Publications (2)

Publication Number Publication Date
CN108217045A CN108217045A (en) 2018-06-29
CN108217045B true CN108217045B (en) 2018-12-18

Family

ID=62642696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810005571.2A Active CN108217045B (en) 2018-01-03 2018-01-03 A kind of intelligent robot of the undercarriage on data center's physical equipment

Country Status (1)

Country Link
CN (1) CN108217045B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144056B (en) * 2018-08-02 2021-07-06 上海思岚科技有限公司 Global self-positioning method and device for mobile robot
CN110182718A (en) * 2019-04-25 2019-08-30 上海快仓智能科技有限公司 The control method and cargo movement system of transfer robot
CN113213054B (en) * 2021-05-12 2023-05-30 深圳市海柔创新科技有限公司 Method, device, equipment, robot and warehousing system for adjusting pick-and-place device
CN218255156U (en) * 2021-08-25 2023-01-10 深圳市海柔创新科技有限公司 Transfer robot
CN113666304A (en) * 2021-08-30 2021-11-19 上海快仓智能科技有限公司 Method, device, equipment and storage medium for controlling transfer robot
CN116214531B (en) * 2023-05-10 2023-06-30 佛山隆深机器人有限公司 Path planning method and device for industrial robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI622540B (en) * 2011-09-09 2018-05-01 辛波提克有限責任公司 Automated storage and retrieval system
US9694977B2 (en) * 2014-10-14 2017-07-04 Nextshift Robotics, Inc. Storage material handling system
CN104842362B (en) * 2015-06-18 2017-04-05 厦门理工学院 A kind of method of robot crawl material bag and robotic gripping device
CN105562361B (en) * 2015-12-23 2017-12-22 西安工程大学 A kind of autonomous method for sorting of fabric sorting machine people
CN106347919A (en) * 2016-11-10 2017-01-25 杭州南江机器人股份有限公司 Automatic warehousing system


Similar Documents

Publication Publication Date Title
CN108217045B (en) A kind of intelligent robot of the undercarriage on data center's physical equipment
CN110531759B (en) Robot exploration path generation method and device, computer equipment and storage medium
CN110097553B (en) Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation
Borrmann et al. A mobile robot based system for fully automated thermal 3D mapping
CN102313547B (en) Vision navigation method of mobile robot based on hand-drawn outline semantic map
CN111486855A (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN110706248A (en) Visual perception mapping algorithm based on SLAM and mobile robot
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN109887082A (en) A kind of interior architecture three-dimensional modeling method and device based on point cloud data
CN106960442A (en) Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN110223351B (en) Depth camera positioning method based on convolutional neural network
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
CN106647738A (en) Method and system for determining docking path of automated guided vehicle, and automated guided vehicle
CN108961330A (en) The long measuring method of pig body and system based on image
CN115388902B (en) Indoor positioning method and system, AR indoor positioning navigation method and system
CN112465832B (en) Single-side tree point cloud skeleton line extraction method and system based on binocular vision
CN112818925A (en) Urban building and crown identification method
Quintana et al. Door detection in 3D colored laser scans for autonomous indoor navigation
CN107978017A (en) Doors structure fast modeling method based on wire extraction
CN115655262B (en) Deep learning perception-based multi-level semantic map construction method and device
CN111998862A (en) Dense binocular SLAM method based on BNN
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190626

Address after: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou Power Supply Bureau

Address before: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Co-patentee before: GUANGZHOU KE TENG INFORMATION TECHNOLOGY CO., LTD.

Patentee before: Guangzhou Power Supply Bureau

TR01 Transfer of patent right

Effective date of registration: 20210225

Address after: 510620, No. two, No. 2, Tianhe South Road, Guangzhou, Guangdong, Tianhe District

Patentee after: Guangzhou Power Supply Bureau of Guangdong Power Grid Co.,Ltd.

Address before: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU POWER SUPPLY Co.,Ltd.