CN108217045A - Intelligent robot for racking and unracking data center physical equipment - Google Patents

Intelligent robot for racking and unracking data center physical equipment Download PDF

Info

Publication number
CN108217045A
Authority
CN
China
Prior art keywords
target object
image
information
contour
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810005571.2A
Other languages
Chinese (zh)
Other versions
CN108217045B (en)
Inventor
高明
王柏勇
黎炼
张志亮
李硕
林克全
关文坚
梅永坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Ke Teng Information Technology Co Ltd
Guangzhou Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ke Teng Information Technology Co Ltd, Guangzhou Power Supply Bureau Co Ltd filed Critical Guangzhou Ke Teng Information Technology Co Ltd
Priority to CN201810005571.2A priority Critical patent/CN108217045B/en
Publication of CN108217045A publication Critical patent/CN108217045A/en
Application granted granted Critical
Publication of CN108217045B publication Critical patent/CN108217045B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G: TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G1/00Storing articles, individually or in orderly arrangement, in warehouses or magazines
    • B65G1/02Storage devices
    • B65G1/04Storage devices mechanical
    • B65G1/137Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed
    • B65G1/1373Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed for fulfilling orders in warehouses

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an intelligent robot for racking and unracking data center physical equipment, comprising a robot body and a mechanical handling device. The robot body includes: a laser radar for scanning and acquiring environmental information of the robot body in the working environment; a central processing unit for building a grid map from the environmental information obtained from the laser radar, obtaining the robot's own pose information, planning a path, and sending corresponding movement instructions to the motion control device; a motion control device for controlling the motion device according to the movement instructions issued by the central processing unit; a motion device, equipped with wheels, for moving the robot body; and an odometer for measuring the travel information of the robot body. The mechanical handling device is arranged on the robot body and carries out equipment handling and racking and unracking operations. The present invention replaces manual labor in racking and unracking server equipment and reduces labor costs.

Description

Intelligent robot for racking and unracking data center physical equipment
Technical Field
The invention relates to the field of intelligent equipment installation, and in particular to an intelligent robot for racking and unracking physical equipment in a data center.
Background
At present, enterprise IT systems keep growing in scale, and data centers house ever more equipment. Faced with the increasing number of physical IT devices and the changing requirements brought by service development, racking and unracking servers accounts for a large share of daily operation and maintenance work; completing these operations intelligently can improve service go-live efficiency and better support service development.
In the prior art, data center servers are generally racked and unracked manually: an operator first queries the cabinet position and U position of the server, carries the server to the specified location, lifts it to the installation position, and manually records the server's position information after installation.
The above method has the following disadvantages:
(1) Operation efficiency is low: the server's position information must be queried manually before operation, and the operator must walk to the equipment cabinet.
(2) Labor cost is high: the method relies entirely on manual work, and the weight of a typical server generally requires three adults to complete the installation together.
Disclosure of the Invention
In view of the above problems, the present invention aims to provide an intelligent robot for racking and unracking data center physical equipment.
The purpose of the invention is realized by adopting the following technical scheme:
an intelligent robot for racking and unracking data center physical equipment comprises a robot main body and a mechanical carrying device;
the robot main body comprises a central processing unit, a laser radar, an odometer, a motion control device and a motion device;
the laser radar is used for scanning and collecting environmental information of the robot main body in a working environment;
the central processing unit is used for constructing a grid map according to the environment information acquired from the laser radar, acquiring the self pose information of the robot, planning a path and sending a corresponding movement instruction to the motion control device;
the motion control device is used for controlling the motion device according to a moving instruction issued by the central processing unit;
the motion device is provided with wheels and is used for controlling the robot main body to move;
the odometer is used for measuring the travel information of the robot main body;
the mechanical carrying device is arranged on the robot main body and used for carrying equipment and performing racking and unracking operations.
Preferably, the central processing unit comprises a storage module, a map construction and positioning module, a path planning module and an instruction sending module;
the map building and positioning module is used for building a grid map according to the environment information collected from the laser radar and acquiring the self pose information of the robot;
the storage module is used for storing the grid map information;
the path planning module is used for planning paths according to the raster map information, the self pose information of the robot and the destination information to acquire path planning information;
the instruction sending module is used for generating a corresponding movement instruction according to the grid map information, the self pose information of the robot and the path planning information and sending the movement instruction to the motion control device.
Preferably, the mechanical carrying device comprises a binocular camera, a radio frequency identification module, an image processing module, a mechanical control module and a manipulator structure;
the radio frequency identification module is used for scanning and identifying label information on equipment or a cabinet and confirming a target object, wherein the target object is target equipment or a target cabinet;
the binocular camera is used for collecting a target object image in a working area of the mechanical carrying device;
the image processing module is used for processing the acquired target object image to acquire the position information of the target object;
the mechanical control module is used for sending a corresponding control instruction to the manipulator structure according to the acquired position information of the target object;
the manipulator structure is used for grabbing the target equipment according to the received control instruction and finishing the operation of putting the target equipment on or off the shelf.
The invention has the following beneficial effects: the intelligent robot replaces manual work in racking and unracking server equipment. It accurately locates the position of a server that needs to be racked or unracked, automatically completes the operation through the carrying device, and reduces labor costs.
Drawings
The invention is further illustrated by the accompanying drawings. The embodiments in the drawings do not limit the invention in any way; a person skilled in the art can derive other drawings from the following drawings without inventive effort.
FIG. 1 is a block diagram of the frame of the present invention;
FIG. 2 is a structural view of a frame of the mechanical transfer apparatus of the present invention;
FIG. 3 is a frame structure diagram of the central processing unit of the present invention;
fig. 4 is a frame structure diagram of the image processing module of the present invention.
Reference numerals:
the robot comprises a robot main body 1, a mechanical carrying device 2, a laser radar 11, a central processing unit 12, a motion control device 13, a motion device 14, a odometer 15, a radio frequency identification module 21, a binocular camera 22, an image processing module 23, a mechanical control module 24, a manipulator structure 25, a storage module 120, a map construction and positioning module 122, a path planning module 124, an instruction sending module 126, a camera calibration unit 230, an image preprocessing unit 232, an image segmentation unit 234 and a target positioning unit 236
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to fig. 1, an intelligent robot for racking data center physical equipment comprises a robot main body 1 and a mechanical handling device 2;
the robot main body 1 comprises a laser radar 11, a central processing unit 12, a motion control device 13, a motion device 14 and an odometer 15;
the laser radar 11 is used for scanning and collecting environmental information of the robot main body 1 in a working environment;
the central processing unit 12 is configured to construct a grid map according to the environment information acquired from the laser radar 11, acquire pose information of the robot, plan a path according to positions of equipment and a cabinet in the data center, and send a corresponding movement instruction to the motion control device 13;
the motion control device 13 is configured to control the motion device 14 according to a movement instruction issued by the central processing unit 12;
the motion device 14 is provided with wheels and is used for moving the robot main body 1;
the odometer 15 is used for measuring the travel information of the robot body 1;
the mechanical carrying device 2 is provided on the robot body 1 and is used for carrying equipment and performing racking and unracking operations.
Preferably, the central processor 12 includes a storage module 120, a mapping and positioning module 122, a path planning module 124 and an instruction sending module 126;
the map building and positioning module 122 is configured to build a grid map according to the environment information collected from the laser radar 11 and acquire pose information of the robot;
the storage module 120 is configured to store the grid map information;
the path planning module 124 is configured to perform path planning according to the raster map information, pose information of the robot, and destination information, and acquire path planning information;
the instruction sending module 126 is configured to generate a corresponding movement instruction according to the grid map information, the pose information of the robot and the path planning information, and send the movement instruction to the motion control device 13;
the destination information represents the position information of the equipment or the cabinet and can be set or input manually;
the map building and positioning module adopts SLAM (simultaneous localization and mapping) technology to update the grid map and acquire the robot's current position within it (a planning sketch over such a grid map is given below).
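As a concrete illustration of the planning step, the following is a minimal Python sketch of path planning over a 2D occupancy grid of the kind the map building and positioning module maintains. It is a sketch under assumptions, not the patented method: the patent does not name a planning algorithm, so A* with a Manhattan heuristic stands in, and all identifiers (plan_path, grid, start, goal) are hypothetical.

import heapq
import itertools

def plan_path(grid, start, goal):
    """A* search over a 2D occupancy grid.

    grid  -- 2D list of ints; 0 = free cell, 1 = occupied cell
    start -- (row, col) cell of the robot pose from the positioning module
    goal  -- (row, col) cell of the destination (equipment or cabinet position)
    Returns a list of (row, col) cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()                  # tie-breaker so heap tuples never compare cells
    frontier = [(h(start), next(tie), 0, start, None)]
    parent = {}                              # cell -> predecessor on the best path found
    best_g = {start: 0}
    while frontier:
        _, _, g, cell, prev = heapq.heappop(frontier)
        if cell in parent:
            continue                         # already expanded with an equal or better cost
        parent[cell] = prev
        if cell == goal:                     # walk predecessors back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cell))
    return None

On a 4-connected grid this returns a shortest path through free cells; the instruction sending module could then translate consecutive cells into movement instructions for the motion control device.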
According to the embodiment of the invention, the laser radar acquires the environment information around the intelligent robot, and the map construction and positioning module processes this information to construct a grid map of the working environment. The robot is then accurately positioned from further laser radar readings, the path planning module plans a precise path according to the positions of equipment and cabinets in the data center, and the motion control device drives the robot along the planned path to the designated position to carry out the racking or unracking operation. The scheme is highly intelligent, adapts well to its environment, and effectively reduces labor costs.
Preferably, the mechanical carrying device 2 comprises a radio frequency identification module 21, a binocular camera 22, an image processing module 23, a mechanical control module 24 and a manipulator structure 25;
the radio frequency identification module 21 is configured to scan and identify tag information on a device or a cabinet, and determine a target object, where the target object is a target device or a target cabinet;
the binocular camera 22 is used for collecting images of a target object in the working area of the mechanical carrying device 2; the binocular camera 22 comprises a left camera and a right camera, which simultaneously acquire image information of the working area;
the image processing module 23 is configured to process the acquired target object image to acquire position information of the target object;
the mechanical control module 24 is configured to send a corresponding control instruction to the manipulator structure 25 according to the acquired position information of the target object;
the manipulator structure 25 is used for grabbing the target equipment according to the received control instruction and completing the racking operation of the target equipment.
According to the embodiment of the invention, the binocular camera obtains image information of the working area of the mechanical carrying device, and the radio frequency identification module identifies the label information on equipment or cabinets, so the robot can accurately judge whether an equipment item or cabinet in the working area is the object of a racking or unracking operation. The image processing module then acquires the position information of the target object, and the mechanical control module controls the manipulator structure to complete the corresponding operation. The whole mechanical carrying device has high adaptability and accuracy and underpins the intelligent robot's racking and unracking of equipment.
Preferably, the image processing module 23 includes a camera calibration unit 230, an image preprocessing unit 232, an image segmentation unit 234 and a target positioning unit 236;
the camera calibration unit 230 is configured to calibrate the binocular camera 22, obtain internal parameters and external parameters of the camera, and determine a mapping relationship between an image two-dimensional coordinate system and a world three-dimensional coordinate system;
the image preprocessing unit 232 is configured to perform preprocessing of contrast enhancement, smoothing and image enhancement on the obtained target object image, and obtain a preprocessed target object image;
the image segmentation unit 234 is configured to segment the target object from the preprocessed target image;
the target positioning unit 236 is configured to convert the two-dimensional coordinate position of the target object in the image into real three-dimensional space coordinates, combining the camera calibration result obtained from the camera calibration unit 230 with the binocular vision parallax principle, to obtain the three-dimensional position information of the target object (a minimal triangulation sketch follows).
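As an illustration of the parallax principle used by the target positioning unit, the following minimal Python sketch recovers a 3D point from a matched pixel pair under a standard rectified pinhole-camera model. The parameter names (fx, fy, cx, cy, baseline) are assumptions from that model; the patent only states that calibration yields the camera's internal and external parameters.

def pixel_to_3d(u_left, v_left, u_right, fx, fy, cx, cy, baseline):
    """Triangulate one point from rectified left/right pixel coordinates.

    fx, fy   -- focal lengths in pixels (from the calibration unit's intrinsics)
    cx, cy   -- principal point of the left camera, in pixels
    baseline -- distance between the two camera centers, in meters
    Returns (X, Y, Z) in the left camera's coordinate frame, in meters.
    """
    disparity = u_left - u_right        # horizontal parallax d between the matched pixels
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = fx * baseline / disparity       # depth from the binocular parallax principle
    X = (u_left - cx) * Z / fx          # back-project through the left camera
    Y = (v_left - cy) * Z / fy
    return X, Y, Z

The depth resolution of this scheme degrades quadratically with distance, which is why the contour matching described below must pair pixels across the two images as precisely as possible.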
Preferably, the image segmentation unit 234 is configured to segment the target object from the preprocessed target image, and specifically includes:
(1) The two target object images collected from the binocular camera 22 are denoted as target object image R_1 and target object image R_2 respectively, and the preprocessed target object images obtained after the image preprocessing unit 232 processes R_1 and R_2 are denoted as R′_1 and R′_2 respectively;
(2) An image segmentation algorithm is used to segment the target object from the preprocessed target object image R′_1, where each pixel point in R′_1 can be expressed by a label K_{p_i}, i = 1, 2, …, I, in which I represents the total number of pixel points, K_{p_i} = 1 indicates that the pixel point belongs to the foreground representing the target object, and K_{p_i} = 0 indicates that the pixel point belongs to the background;
(3) A contour extraction algorithm is used to extract the contour set of the target object, C = {C_1, C_2, …, C_Z}, in which each contour C_z represents a closed curve, z = 1, 2, …, Z, Z represents the total number of contours in the contour set, and each contour is a vector recording the pixel positions on the boundary of the target object, C_z = {p_1, p_2, …, p_{L_z}}, in which L_z denotes the total number of pixel points in contour C_z;
(4) Each contour C_z in the contour set C of the preprocessed target object image R′_1 is mapped into the preprocessed target object image R′_2 to obtain the boundary contours of the target object in image R′_2, specifically:
(41) A state transition matrix M of size L_z × D is constructed, where D represents the number of values in the parallax range, d ∈ [d_min, d_max], with d_min and d_max representing the minimum and maximum parallax values respectively; the value of each element M(i, j) in the state transition matrix M is E_st(i, j), the state energy between pixel point p_i in R′_1 and the pixel point in R′_2 whose parallax is d_j;
(42) The state energy E_st(i, j) of each element M(i, j) in the state transition matrix M is obtained using a custom state energy formula (the formula itself is not reproduced in this text), in which: q_j denotes the pixel point in R′_2 corresponding to pixel point p_i in R′_1 at parallax d_j, q_j = p_i − d_j; E(p_i, q_j) denotes the target energy function of pixel points p_i and q_j; C_s(p_i, q_j) denotes the visual matching cost of pixel points p_i and q_j, in which Φ(p_i) denotes a local window of size w × w centered on pixel p_i, K(p_x) = 1 denotes that pixel p_x belongs to the foreground, q_y = p_x − d_j, and c_h(p_x) and c_h(q_y) denote the R/G/B chroma values of pixel points p_x and q_y respectively; ω_0 denotes the object boundary weight and ω_1 the object smoothing weight; C_O(p_i, q_j) denotes the object boundary matching cost, in which Pr(O|q_y) denotes the posterior probability that pixel point q_y belongs to the foreground; and N(p_i − p_{i−1}) denotes the smoothness cost between pixel points p_i and p_{i−1}, in which β_d denotes a set parallax discontinuity threshold;
(43) An optimal energy path is obtained by applying a backtracking algorithm to the state transition matrix M, and each pixel point p_i corresponding to an element M(i, j) on the optimal energy path is mapped into R′_2 to obtain the unique corresponding contour point p_i − d_j, yielding the best-match contour C′_z in the preprocessed target object image R′_2 for contour C_z of the preprocessed target object image R′_1;
(44) After the best-match contours of all boundary contours of the target object are obtained in the preprocessed target object image R′_2, the target object is segmented according to the mapped best-match contours (a sketch of this matching procedure is given after this list).
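As a concrete illustration of steps (41)-(44), the following Python sketch performs dynamic programming over the L_z × D state matrix with backtracking to map one contour from R′_1 into R′_2. It is a sketch under assumptions: the patent's full state energy E_st (combining visual matching, boundary and smoothness terms) is not reproduced in the source text, so a simple absolute-intensity difference plus a β_d-weighted parallax-smoothness penalty stands in for it, and all identifiers are hypothetical.

import numpy as np

def match_contour(contour, img1, img2, d_min, d_max, beta_d=2.0):
    """contour: list of (row, col) pixels in img1 (grayscale 2D arrays).
    Returns the matched contour points in img2."""
    L = len(contour)
    disparities = np.arange(d_min, d_max + 1)
    D = len(disparities)

    # M[i, j]: minimal accumulated energy ending at contour point i with disparity j
    M = np.full((L, D), np.inf)
    back = np.zeros((L, D), dtype=int)       # backtracking pointers into the previous row

    def unary(i, j):
        # stand-in matching term: absolute intensity difference at q_j = p_i - d_j
        r, c = contour[i]
        c2 = c - disparities[j]
        if not (0 <= c2 < img2.shape[1]):
            return np.inf
        return abs(float(img1[r, c]) - float(img2[r, c2]))

    M[0, :] = [unary(0, j) for j in range(D)]
    for i in range(1, L):
        for j in range(D):
            # smoothness term: penalize disparity jumps between neighboring contour points
            trans = M[i - 1, :] + beta_d * np.abs(disparities - disparities[j])
            back[i, j] = int(np.argmin(trans))
            M[i, j] = trans[back[i, j]] + unary(i, j)

    # backtrack the optimal energy path from the best final state
    j = int(np.argmin(M[-1, :]))
    matched = []
    for i in range(L - 1, -1, -1):
        r, c = contour[i]
        matched.append((r, c - int(disparities[j])))
        j = back[i, j]
    return matched[::-1]

The backtracking pass recovers one disparity per contour point, so every boundary pixel of C_z receives exactly one counterpart in R′_2, which is the property step (43) relies on.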
In the preferred embodiment, segmenting the target object this way handles the parallax between the two images captured by the different cameras of the binocular pair. Contour matching of the target object across the two images is performed accurately and efficiently, the contours of the target object in both images are obtained and the object segmented, and the segmented objects in the two images correspond closely, laying a foundation for the subsequent module to acquire the accurate three-dimensional position of the target object.
Preferably, segmenting the target object from the preprocessed target object image R′_1 with an image segmentation algorithm specifically comprises:
(1) Threshold segmentation is performed on the image, the circumscribed rectangle of the target object is acquired as the initial image contour, a control point v is set on the circumscribed rectangle every 4 pixel points, giving the initial contour control points S = (v_1, v_2, …, v_n), and the center of the circumscribed rectangle is set as the dynamic contour center τ = (X_τ, Y_τ);
(2) The energy value from each contour control point to its neighborhood positions is obtained using a custom energy equation:
E_{i,j} = α(i)·E_ih1(i, j) + β(i)·E_ih2(i, j) + γ(i)·E_edge(i, j) + ε(i)·E_r(i, j)
where E_{i,j} represents the energy value from contour control point v_i to a neighborhood position, i denotes the i-th contour control point, and j = 1, 2, 3, 4 indexes the 4-neighborhood pixel points v_{i,j} adjacent to contour control point v_i; E_ih1(i, j) represents the first-order continuity force, in which the average distance between contour control points and the distance |v_{i,j} − v_{i−1}| between neighborhood pixel point v_{i,j} and contour control point v_{i−1} appear, and α(i) represents a set discrete first-order coefficient; E_ih2(i, j) represents the second-order continuity force, E_ih2(i, j) = (|v_{i−1} − v_{i,j}| + |v_{i,j} − v_{i+1}|)², in which |v_{i,j} − v_{i+1}| represents the distance between neighborhood pixel point v_{i,j} and contour control point v_{i+1}, and β(i) represents a set discrete second-order coefficient; E_edge(i, j) represents the edge energy, in which L(v_{i,j}, v_{i−1}) represents the contour line between neighborhood pixel point v_{i,j} and contour control point v_{i−1}, I(x, y) represents the gradient value of pixel point (x, y), the average gradient is taken over all pixel points on that contour line, n represents the total number of pixel points on that contour line, μ represents a set edge energy factor, and γ(i) represents the edge energy coefficient; E_r(i, j) represents an applied control force, E_r(i, j) = |H(v_{i,j}) − H(v_i)|², in which H(v_{i,j}) and H(v_i) represent the gray values of neighborhood pixel point v_{i,j} and contour control point v_i respectively, and ε(i) represents the applied control force coefficient, determined from the average neighborhood gray value of contour control point v_i, the gray mean and variance of the entire image, and set grayscale decision factors (the exact formulas for E_ih1, E_edge and ε(i) are not reproduced in this text);
(3) If there is a contour control point v_i whose energy value E_{i,j} to one of its neighborhood positions is less than a set energy threshold E_iY, contour control point v_i is moved to the corresponding neighborhood pixel point v_{i,j} and the energy threshold is updated as E_iY = E_{i,j};
(4) The number P of contour control points that have moved is counted;
(5) If P is smaller than a set threshold or a set maximum number of iterations is reached, all current contour control points are connected in sequence as the contour of the target object and segmentation is performed; otherwise, steps (2)-(5) are repeated (a sketch of this greedy iteration is given after this list).
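As a concrete illustration of the greedy iteration in steps (2)-(5), the following Python sketch moves each control point to whichever 4-neighborhood position lowers a local energy, stopping when few points move or a maximum iteration count is reached. It is a sketch under assumptions: only simplified first-order continuity and edge terms are kept, standing in for the patent's full energy equation with its per-point coefficients, and all identifiers are hypothetical.

import numpy as np

def point_energy(p, prev, mean_dist, grad_mag, alpha=1.0, gamma=1.0):
    # first-order continuity analogue: keep spacing near the current average distance
    cont = (np.hypot(p[0] - prev[0], p[1] - prev[1]) - mean_dist) ** 2
    # edge analogue: prefer positions with high gradient magnitude (lower energy)
    return alpha * cont - gamma * grad_mag[p[0], p[1]]

def refine_contour(grad_mag, points, max_iter=100, move_threshold=2):
    """grad_mag: 2D gradient-magnitude array of the image;
    points: (row, col) control points sampled on the circumscribed rectangle, step (1)."""
    pts = [tuple(p) for p in points]
    n, (rows, cols) = len(pts), grad_mag.shape
    for _ in range(max_iter):
        moved = 0
        mean_dist = np.mean([np.hypot(pts[i][0] - pts[i - 1][0],
                                      pts[i][1] - pts[i - 1][1]) for i in range(n)])
        for i in range(n):
            r, c = pts[i]
            best = min(
                [(r, c), (r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)],
                key=lambda q: point_energy(q, pts[i - 1], mean_dist, grad_mag)
                if 0 <= q[0] < rows and 0 <= q[1] < cols else float("inf"))
            if best != (r, c):          # step (3): move to the lower-energy neighbor
                pts[i] = best
                moved += 1
        if moved < move_threshold:      # step (5): few points moved, contour has converged
            break
    return pts

Because the initial rectangle encloses the object, the continuity term shrinks the contour inward while the edge term anchors points to the object boundary, mirroring the step-by-step contraction the text describes.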
In the preferred embodiment, the target object is first segmented in one of the images obtained from the binocular camera. The circumscribed rectangle obtained from threshold segmentation is contracted step by step by this iterative method until the contour of the target object is obtained, so the method adapts well to target objects of different contour shapes, with strong adaptability and high accuracy, and lays a foundation for the contour matching and accurate three-dimensional positioning of the target object in the other image.
Preferably, the image preprocessing unit 232 performing contrast enhancement processing on the acquired target object image specifically comprises:
(1) The RGB values of each pixel point (i, j) in the target object image are converted into the HSV color space;
(2) For each pixel point (i, j) in the target object image, an 8 × 8 neighborhood image block centered on pixel point (i, j) is selected and wavelet-transformed to obtain the local noise level L_n(i, j), defined via the median function Median{·} applied to the set of absolute values of the first-level HH subband coefficients obtained by wavelet transformation of the neighborhood image block;
(3) The background value B(i, j) and gradient value G(i, j) of pixel point (i, j) are obtained (the formulas are not reproduced in this text), where B(i, j) represents the background value of pixel (i, j), V(i + a, j + b) represents the brightness value of pixel (i + a, j + b) in the HSV color space, G(i, j) represents the gradient value of pixel (i, j), and G_x(i, j) and G_y(i, j) represent the horizontal and vertical gradients of pixel point (i, j) respectively;
if the enhancement condition holds (the condition is not reproduced in this text), the enhanced background value B′(i, j) and gradient value G′(i, j) are obtained using an empirical function, where μ represents a set enhancement threshold, η represents a set enhancement effect adjustment factor, L_n(i, j) represents the local noise level of pixel (i, j), B′(i, j) and G′(i, j) represent the enhanced background and gradient values of pixel (i, j), and C_ab represents a set of empirical coefficients, each a 2 × 1 coefficient vector, giving 20 empirical scalar coefficients in total;
otherwise, B′(i, j) is set to B(i, j) and G′(i, j) is set to G(i, j);
preferably, μ = 1 and η = 3;
(4) The contrast enhancement model parameters σ(i, j) and ζ(i, j) are obtained (the formulas are not reproduced in this text), where ω(i, j, i′, j′) represents a weight coefficient, Ω(i, j) represents the local neighborhood set of pixel (i, j), for which a 3 × 3 matrix centered on (i, j) is selected, G(i′, j′) and G′(i′, j′) represent the gradient values of pixel (i′, j′) before and after enhancement respectively, B(i′, j′) and B′(i′, j′) represent the background values of pixel (i′, j′) before and after enhancement respectively, C(i, j) represents a normalization coefficient, and two fuzziness control factors govern the spatial domain and the value domain respectively;
(5) Contrast enhancement is performed on the target object image using the following contrast enhancement model:
V′(i, j) = σ(i, j)·V(i, j) + ζ(i, j)
where V′(i, j) represents the brightness value of pixel point (i, j) in the HSV color space after contrast enhancement, V(i, j) represents the brightness value of pixel point (i, j) in the HSV color space before contrast enhancement, and σ(i, j) and ζ(i, j) represent the contrast enhancement model parameters;
(6) Each enhanced pixel point is transformed from the HSV color space back to the RGB color space to obtain the enhanced target object image (a sketch of this pipeline is given after this list).
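As a concrete illustration of steps (1)-(6), the following Python sketch converts the image to HSV, estimates the local noise level from the first-level HH wavelet subband, and applies the linear model V′(i, j) = σ(i, j)·V(i, j) + ζ(i, j) to the V channel. It is a sketch under assumptions: the formulas that derive σ and ζ from the enhanced background and gradient values are not reproduced in the source text, so constant maps stand in for them, and the 0.6745 MAD scaling in the noise estimate is a standard convention, not taken from the patent.

import cv2
import numpy as np
import pywt

def local_noise_level(block):
    """Median-based noise estimate from the HH subband of an 8x8 neighborhood block,
    following the L_n(i, j) definition in step (2)."""
    _, (_, _, hh) = pywt.dwt2(block.astype(np.float32), "haar")
    return np.median(np.abs(hh)) / 0.6745   # MAD scaling: an assumption, not from the patent

def enhance_contrast(bgr, sigma_map, zeta_map):
    """Apply V'(i, j) = sigma(i, j) * V(i, j) + zeta(i, j) in HSV, then convert back."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2]
    v_new = sigma_map * v + zeta_map        # the model of step (5) on the brightness channel
    hsv[:, :, 2] = np.clip(v_new, 0, 255)   # keep V in the valid 8-bit range
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# Hypothetical usage with placeholder parameter maps (a mild global contrast stretch);
# in the patented scheme sigma_map and zeta_map would vary per pixel:
# img = cv2.imread("server_rack.png")
# out = enhance_contrast(img,
#                        sigma_map=np.full(img.shape[:2], 1.2, np.float32),
#                        zeta_map=np.full(img.shape[:2], -10.0, np.float32))

Working on the V channel only leaves hue and saturation untouched, which is why the round trip through HSV preserves the colors of the target object while stretching its brightness contrast.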
In the preferred embodiment, contrast enhancement is applied to the target object image with a model that adapts to the brightness of each pixel point. Non-noise details in the image are enhanced and the detail of the target object is highlighted; the enhancement effect is good and the adaptability strong, laying a foundation for the system's subsequent processing of the target object image.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the present invention is described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (5)

1. An intelligent robot for racking and unracking data center physical equipment, characterized by comprising a robot main body and a mechanical carrying device;
the robot main body comprises a laser radar, a central processing unit, a motion control device, a motion device and an odometer;
the laser radar is used for scanning and collecting environmental information of the robot main body in a working environment;
the central processing unit is used for constructing a grid map according to the environment information acquired from the laser radar, acquiring the self pose information of the robot, planning a path and sending a corresponding movement instruction to the motion control device;
the motion control device is used for controlling the motion device according to a moving instruction issued by the central processing unit;
the motion device is provided with wheels and is used for controlling the robot main body to move;
the odometer is used for measuring the travel information of the robot main body;
the mechanical carrying device is arranged on the robot main body and used for carrying equipment and performing racking and unracking operations.
2. The intelligent robot for racking and unracking data center physical equipment according to claim 1, wherein the central processor comprises a storage module, a mapping and positioning module, a path planning module and an instruction sending module;
the map building and positioning module is used for building a grid map according to the environment information collected from the laser radar and acquiring the self pose information of the robot;
the storage module is used for storing the grid map information;
the path planning module is used for planning paths according to the raster map information, the self pose information of the robot and the destination information to acquire path planning information;
the instruction sending module is used for generating a corresponding movement instruction according to the grid map information, the self pose information of the robot and the path planning information and sending the movement instruction to the motion control device.
3. The intelligent robot for racking and unracking data center physical equipment according to claim 1, wherein the mechanical handling device comprises a radio frequency identification module, a binocular camera, an image processing module, a mechanical control module and a manipulator structure;
the radio frequency identification module is used for scanning and identifying label information on equipment or a cabinet and confirming a target object, wherein the target object is target equipment or a target cabinet;
the binocular camera is used for collecting a target object image in a working area of the mechanical carrying device;
the image processing module is used for processing the acquired target object image to acquire the position information of the target object;
the mechanical control module is used for sending a corresponding control instruction to the manipulator structure according to the acquired position information of the target object;
the manipulator structure is used for grabbing the target equipment according to the received control instruction and finishing the operation of putting the target equipment on or off the shelf.
4. The intelligent robot for racking and unracking data center physical equipment according to claim 3, wherein the image processing module comprises a camera calibration unit, an image preprocessing unit, an image segmentation unit and a target positioning unit;
the camera calibration unit is used for calibrating the binocular camera, acquiring the internal and external parameters of the camera, and determining the mapping relation between the image two-dimensional coordinate system and the world three-dimensional coordinate system;
the image preprocessing unit is used for preprocessing the acquired target object image by contrast enhancement, smoothing and image enhancement to acquire a preprocessed target object image;
the image segmentation unit is used for segmenting the target object from the preprocessed target image;
the target positioning unit is used for converting the two-dimensional coordinate position of the target object in the image into real three-dimensional space coordinates, combining the camera calibration result obtained from the camera calibration unit with the binocular vision parallax principle, to obtain the three-dimensional position information of the target object.
5. The intelligent robot for racking and unracking data center physical equipment according to claim 4, wherein the image segmentation unit is configured to segment the target object from the preprocessed target image, specifically comprising:
(1) The two target object images collected from the binocular camera are denoted as target object image R_1 and target object image R_2 respectively, and the preprocessed target object images obtained after the image preprocessing unit processes R_1 and R_2 are denoted as R′_1 and R′_2 respectively;
(2) An image segmentation algorithm is used to segment the target object from the preprocessed target object image R′_1, where each pixel point in R′_1 can be represented by a label K_{p_i}, i = 1, 2, …, I, in which I denotes the total number of pixel points, K_{p_i} = 1 denotes that the pixel point is foreground belonging to the target object, and K_{p_i} = 0 denotes that the pixel point is background;
(3) A contour extraction algorithm is used to extract the contour set of the target object, C = {C_1, C_2, …, C_Z}, in which each contour C_z represents a closed curve, z = 1, 2, …, Z, Z represents the total number of contours in the contour set, and each contour is a vector recording the pixel positions on the boundary of the target object, C_z = {p_1, p_2, …, p_{L_z}}, in which L_z denotes the total number of pixel points in contour C_z;
(4) Each contour C_z in the contour set C of the preprocessed target object image R′_1 is mapped into the preprocessed target object image R′_2 to obtain the boundary contours of the target object in image R′_2, specifically:
(41) A state transition matrix M of size L_z × D is constructed, where D represents the number of values in the parallax range, d ∈ [d_min, d_max], with d_min and d_max representing the minimum and maximum parallax values respectively; the value of each element M(i, j) in the state transition matrix M is E_st(i, j), the state energy between pixel point p_i in R′_1 and the pixel point in R′_2 whose parallax is d_j;
(42) The state energy E_st(i, j) of each element M(i, j) in the state transition matrix M is obtained using a custom state energy formula (the formula itself is not reproduced in this text), in which: q_j denotes the pixel point in R′_2 corresponding to pixel point p_i in R′_1 at parallax d_j, q_j = p_i − d_j; E(p_i, q_j) denotes the target energy function of pixel points p_i and q_j; C_s(p_i, q_j) denotes the visual matching cost of pixel points p_i and q_j, in which Φ(p_i) denotes a local window of size w × w centered on pixel p_i, K(p_x) = 1 denotes that pixel p_x belongs to the foreground, q_y = p_x − d_j, and c_h(p_x) and c_h(q_y) denote the R/G/B chroma values of pixel points p_x and q_y respectively; ω_0 denotes the object boundary weight and ω_1 the object smoothing weight; C_O(p_i, q_j) denotes the object boundary matching cost, in which Pr(O|q_y) denotes the posterior probability that pixel point q_y belongs to the foreground; and N(p_i − p_{i−1}) denotes the smoothness cost between pixel points p_i and p_{i−1}, in which β_d denotes a set parallax discontinuity threshold;
(43) An optimal energy path is obtained by applying a backtracking algorithm to the state transition matrix M, and each pixel point p_i corresponding to an element M(i, j) on the optimal energy path is mapped into R′_2 to obtain the unique corresponding contour point p_i − d_j, yielding the best-match contour C′_z in the preprocessed target object image R′_2 for contour C_z of the preprocessed target object image R′_1;
(44) After the best-match contours of all boundary contours of the target object are obtained in the preprocessed target object image R′_2, the target object is segmented according to the mapped best-match contours.
CN201810005571.2A 2018-01-03 2018-01-03 Intelligent robot for racking and unracking data center physical equipment Active CN108217045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810005571.2A CN108217045B (en) 2018-01-03 2018-01-03 Intelligent robot for racking and unracking data center physical equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810005571.2A CN108217045B (en) 2018-01-03 2018-01-03 Intelligent robot for racking and unracking data center physical equipment

Publications (2)

Publication Number Publication Date
CN108217045A (en) 2018-06-29
CN108217045B (en) 2018-12-18

Family

ID=62642696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810005571.2A Active CN108217045B (en) 2018-01-03 2018-01-03 Intelligent robot for racking and unracking data center physical equipment

Country Status (1)

Country Link
CN (1) CN108217045B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144056A (en) * 2018-08-02 2019-01-04 上海思岚科技有限公司 The global method for self-locating and equipment of mobile robot
CN110182718A (en) * 2019-04-25 2019-08-30 上海快仓智能科技有限公司 The control method and cargo movement system of transfer robot
CN113666304A (en) * 2021-08-30 2021-11-19 上海快仓智能科技有限公司 Method, device, equipment and storage medium for controlling transfer robot
WO2022237221A1 (en) * 2021-05-12 2022-11-17 深圳市海柔创新科技有限公司 Adjustment method, apparatus, and device for goods retrieval and placement apparatus, robot, and warehouse system
WO2023024772A1 (en) * 2021-08-25 2023-03-02 深圳市海柔创新科技有限公司 Transfer robot
CN116214531A (en) * 2023-05-10 2023-06-06 佛山隆深机器人有限公司 Path planning method and device for industrial robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN105562361A (en) * 2015-12-23 2016-05-11 西安工程大学 Independent sorting method of fabric sorting robot
CN106347919A (en) * 2016-11-10 2017-01-25 杭州南江机器人股份有限公司 Automatic warehousing system
EP3200032A1 (en) * 2011-09-09 2017-08-02 Symbotic LLC Storage and retrieval system case unit detection
US20170297820A1 (en) * 2014-10-14 2017-10-19 Nextshift Robotics, Inc. Storage material handling system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3200032A1 (en) * 2011-09-09 2017-08-02 Symbotic LLC Storage and retrieval system case unit detection
US20170297820A1 (en) * 2014-10-14 2017-10-19 Nextshift Robotics, Inc. Storage material handling system
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN105562361A (en) * 2015-12-23 2016-05-11 西安工程大学 Independent sorting method of fabric sorting robot
CN106347919A (en) * 2016-11-10 2017-01-25 杭州南江机器人股份有限公司 Automatic warehousing system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144056A (en) * 2018-08-02 2019-01-04 上海思岚科技有限公司 The global method for self-locating and equipment of mobile robot
CN109144056B (en) * 2018-08-02 2021-07-06 上海思岚科技有限公司 Global self-positioning method and device for mobile robot
CN110182718A (en) * 2019-04-25 2019-08-30 上海快仓智能科技有限公司 The control method and cargo movement system of transfer robot
WO2022237221A1 (en) * 2021-05-12 2022-11-17 深圳市海柔创新科技有限公司 Adjustment method, apparatus, and device for goods retrieval and placement apparatus, robot, and warehouse system
WO2023024772A1 (en) * 2021-08-25 2023-03-02 深圳市海柔创新科技有限公司 Transfer robot
CN113666304A (en) * 2021-08-30 2021-11-19 上海快仓智能科技有限公司 Method, device, equipment and storage medium for controlling transfer robot
CN116214531A (en) * 2023-05-10 2023-06-06 佛山隆深机器人有限公司 Path planning method and device for industrial robot
CN116214531B (en) * 2023-05-10 2023-06-30 佛山隆深机器人有限公司 Path planning method and device for industrial robot

Also Published As

Publication number Publication date
CN108217045B (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN108217045B (en) Intelligent robot for racking and unracking data center physical equipment
CN111679291B (en) Inspection robot target positioning configuration method based on three-dimensional laser radar
EP3100236B1 (en) Method and system for constructing personalized avatars using a parameterized deformable mesh
Rizzini et al. Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN110281231B (en) Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing
CN108628306B (en) Robot walking obstacle detection method and device, computer equipment and storage medium
Bascle et al. Stereo matching, reconstruction and refinement of 3D curves using deformable contours
CN108205324B (en) Intelligent road cleaning device
CN110930374A (en) Acupoint positioning method based on double-depth camera
CN109829476B (en) End-to-end three-dimensional object detection method based on YOLO
US20220414291A1 (en) Device for Defining a Sequence of Movements in a Generic Model
Borsu et al. Automated surface deformations detection and marking on automotive body panels
CN112330813A (en) Wearing three-dimensional human body model reconstruction method based on monocular depth camera
CN110648362B (en) Binocular stereo vision badminton positioning identification and posture calculation method
CN113723389B (en) Pillar insulator positioning method and device
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN118247809A (en) Live pig individual identification and positioning method based on RGB-D camera
Tiozzo Fasiolo et al. Combining LiDAR SLAM and deep learning-based people detection for autonomous indoor mapping in a crowded environment
CN110689553B (en) Automatic segmentation method of RGB-D image
Zang et al. A flexible visual inspection system combining pose estimation and visual servo approaches
CN107562207A (en) A kind of intelligent medical system based on gesture identification control
CN114089364A (en) Integrated sensing system device and implementation method
CN117315092B (en) Automatic labeling method and data processing equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190626

Address after: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou Power Supply Bureau

Address before: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Co-patentee before: GUANGZHOU KE TENG INFORMATION TECHNOLOGY CO., LTD.

Patentee before: Guangzhou Power Supply Bureau

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210225

Address after: 510620, No. two, No. 2, Tianhe South Road, Guangzhou, Guangdong, Tianhe District

Patentee after: Guangzhou Power Supply Bureau of Guangdong Power Grid Co.,Ltd.

Address before: 510000 No. 2 Tianhe Second Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU POWER SUPPLY Co.,Ltd.