CN114211504A - Indoor multifunctional operation robot of transformer substation - Google Patents

Indoor multifunctional operation robot of transformer substation

Info

Publication number
CN114211504A
Authority
CN
China
Prior art keywords
point
points
base
cabinet
sensor
Prior art date
Legal status
Pending
Application number
CN202210105971.7A
Other languages
Chinese (zh)
Inventor
孙财平
Current Assignee
Shanghai Qinfang Mechanical And Electrical Technology Development Co ltd
Original Assignee
Shanghai Qinfang Mechanical And Electrical Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Qinfang Mechanical And Electrical Technology Development Co ltd filed Critical Shanghai Qinfang Mechanical And Electrical Technology Development Co ltd
Priority application: CN202210105971.7A
Publication: CN114211504A
Related later application: CN202211027753.2A (published as CN115256398A)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 5/00: Manipulators mounted on wheels or on carriages
    • B25J 5/007: Manipulators mounted on wheels or on carriages, mounted on wheels
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/02: Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J 9/04: characterised by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: characterised by motion, path, trajectory planning
    • B25J 9/1666: Avoiding collision or forbidden zones
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of special-purpose robots, in particular to an indoor multifunctional operation robot for a transformer substation. A base carries a laser radar, a gyroscope and a core controller, and is further fitted with a pit-avoidance sensor, an obstacle-avoidance sensor and an antenna, as well as a core control board, a discharge detection sensor, a safety touch-edge sensor and a support. A binocular pan-tilt is mounted at the top of the support. A high-precision six-axis mechanical arm is installed on an arm seat, on which an operating tool and a depth camera are arranged; using a depth-map correction method, the depth camera accurately analyzes and calculates, from two-dimensional-code information, the positional relation between the end tool and the operation target. The support also carries an operation seat, which is connected through a rotating shaft to a five-axis-linkage ground-knife (grounding switch) operating mechanism. The tool posture can thus be recognized geometrically, meeting the requirements of good robustness and high accuracy in an indoor scene.

Description

Indoor multifunctional operation robot of transformer substation
Technical Field
The invention relates to the technical field of special robots, in particular to an indoor multifunctional operation robot for a transformer substation.
Background
With the growing number of domestic transformer substations, there is more and more switch-room equipment and a rising demand for operation and maintenance personnel. Substations at and below 220 kV are now unattended and run under a regional, centralized operation-and-maintenance team model; as the number of substations increases, the staff-to-station ratio falls and the per-person workload keeps rising.
Existing domestic substation switch rooms are operated and maintained mainly by manual inspection and manual operation, or by robot inspection combined with manual operation; the workload on operation and maintenance personnel is heavy, efficiency is low, and safety accidents are easy to cause. Moreover, when a robot does operate, a fixed numerical value is usually used to hold the tool in a designated state. Some tools use image-based methods instead, but lighting conditions, marker quality and similar factors prevent adaptive leveling across different scenes, or the methods lack robustness and are easily disturbed.
Therefore, an indoor multifunctional operation robot for a transformer substation needs to be designed that can autonomously carry out various switching operations on the switch cabinet through human-machine interaction, fully and autonomously complete the routine inspection tasks of the switch room, remotely assist operation and maintenance personnel in completing a series of switch-cabinet switching operations, and at the same time recognize the tool posture geometrically from a depth map.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an indoor multifunctional operation robot for a transformer substation that autonomously performs various switching operations on the switch cabinet through human-machine interaction, fully and autonomously completes the routine inspection task of the switch room, remotely assists operation and maintenance personnel in completing a series of switch-cabinet switching operations, and recognizes the tool posture geometrically from a depth map.
In order to achieve the above purpose, the invention provides an indoor multifunctional operation robot for a transformer substation, comprising a base, traveling wheels, a support, a rotating shaft, a laser radar, a gyroscope, motor encoders, an obstacle-avoidance sensor, a high-precision pit-avoidance sensor, an antenna, a binocular pan-tilt, a battery management system, a core controller, a core control board, a high-precision six-axis mechanical arm, operating tools, depth cameras, a tri-color lamp, a discharge detection sensor, safety touch-edge sensors and a ground-knife (grounding switch) operating mechanism. Four traveling wheels are installed at the four corners under the base. The laser radar, the gyroscope and the core controller are arranged at the front end of the upper surface of the base, and the high-precision pit-avoidance sensor at the front end of the lower surface. The obstacle-avoidance sensor and the antenna are arranged at the rear end of the upper surface, the core control board in the middle of the upper surface, and the discharge detection sensor on one side of the core control board. Safety touch-edge sensors are provided at the front and rear ends of the base. A support is arranged at the rear end of the base; the tri-color lamp is mounted on its outside, an emergency stop button on the support below the tri-color lamp, and the binocular pan-tilt at the top of the support. An operation seat is provided on the middle platform of the support, and the high-precision six-axis mechanical arm is installed on it, carrying an operating tool and a depth camera. The operation seat is connected with the ground-knife operating mechanism through the rotating shaft, and another depth camera is arranged at the bottom of the rotating shaft. The core controller exchanges information with the core control board, and with the background system through the antenna.
The obstacle avoidance sensor is a large-angle ultrasonic obstacle avoidance sensor.
The battery management system is arranged in the battery box below the base.
The discharge detection sensor is an ultrahigh frequency partial discharge detection sensor.
The binocular head is provided with a high-definition visible light camera and a high-precision thermal infrared imager.
There are four sets of operating tools.
The ground-knife operating mechanism is a five-axis-linkage ground-knife (grounding switch) operating mechanism.
The motor encoder is arranged inside the travelling wheel.
A tool-posture depth-map correction method for the indoor multifunctional operation robot of the transformer substation: the depth camera accurately analyzes and calculates, from two-dimensional-code information, the positional relation between the end tool and the operation target. The specific steps are as follows:
s1: collecting a depth map, and converting the depth map into a point cloud;
s2: determining a plane area of the cabinet surface and an area of an object on the cabinet surface by using a cloth simulation filtering algorithm;
s3: calculating, in a traversing-translation manner, the average distance of the points around each of three points, thereby determining three point positions on the cabinet face; the three points are the points A, B and C obtained after the depth camera scans the two-dimensional code;
s4: calculating the current postures of the camera and the tool end according to the geometric relation between the point position and the distance;
the specific method of S1 is as follows: define the depth direction of the cabinet face as the positive z-axis, the right side when facing the cabinet as the positive x-axis, and the upward direction as the positive y-axis; in this coordinate system, taking the point on the actual cabinet face corresponding to the lower-left corner of the depth map as the coordinate origin, a point cloud of the cabinet face is created from the value of each pixel of the depth map;
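A minimal sketch of the S1 conversion, assuming a fronto-parallel depth map and a known physical pixel pitch (the function name and the `pixel_size` value are illustrative, not from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, pixel_size):
    """Convert a depth map (an H x W array of z distances to the cabinet
    face) into an (H*W) x 3 point cloud.  Per the convention in S1:
    +x right, +y up, +z into the cabinet face, origin at the point on the
    cabinet face corresponding to the lower-left pixel."""
    h, w = depth.shape
    # Image row 0 is the top of the depth map, so flip rows to make y grow upward.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h)[::-1])
    points = np.stack([xs * pixel_size, ys * pixel_size, depth], axis=-1)
    return points.reshape(-1, 3)

# A flat 4 x 5 patch of cabinet face 0.8 m away, 1 mm per pixel (made-up numbers).
cloud = depth_to_point_cloud(np.full((4, 5), 0.8), pixel_size=0.001)
```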
the specific method of S2 is as follows: assuming the cabinet face is laid horizontally on the ground, a piece of cloth is mathematically simulated falling onto it from above; the parts that are holes and the parts that are protruding objects can thereby be identified, and the main plane of the cabinet face found;
the specific method of S3 is as follows: using traversal, calculate the average distance of the points around A, B and C, thereby determining the three point positions on the cabinet face. The calculation in S2 already tells which pixels of the depth map are points of the cabinet-face plane and which are points of switches and holes on the cabinet. Three points A, B, C belonging to the cabinet face are then determined for calculating the camera and tool postures. Point A is found by traversing from the upper-left corner toward the lower-right corner: for a candidate point A(i, j), if B(i+m, j) and C(i, j+m) both belong to cabinet-face points, the three points are taken; if not, A moves to the next point and the calculation continues; if no such triple is found, the value of m is changed and the calculation repeated;
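The S3 traversal can be sketched as follows; the (row, column) index convention and the boolean plane-mask input are assumptions, since the patent's subscripts are garbled in translation:

```python
import numpy as np

def find_abc(on_plane, m):
    """Traverse the plane mask (True = pixel classified by S2 as lying on the
    cabinet-face plane) from the upper-left toward the lower-right.  For a
    candidate point A = (i, j), test B = (i + m, j) and C = (i, j + m);
    accept the triple only when all three pixels lie on the plane, otherwise
    move A onward.  Returns None when no triple exists for this m (the
    caller then changes m and retries)."""
    h, w = on_plane.shape
    for i in range(h - m):
        for j in range(w - m):
            if on_plane[i, j] and on_plane[i + m, j] and on_plane[i, j + m]:
                return (i, j), (i + m, j), (i, j + m)
    return None

mask = np.ones((6, 6), dtype=bool)
mask[0, 0] = False          # e.g. a switch or hole at the first candidate
triple = find_abc(mask, m=3)
```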
the specific method of S4 is as follows: a set of points A, B and C is obtained in S3. Since A and B share the same ordinate but differ in abscissa, they are used to calculate the tilt angle of the camera, computed as follows:
θ = arctan((d_A − d_B) / (m · s))
the distance measurement method comprises the steps that a distance value of a point A is obtained, a distance value of a point B is obtained, m is a value m set in S3, S is an actual distance of each pixel point on a cabinet, the point B stays at a fixed position during robot navigation, the value is unchanged after measurement, the pitch angle is calculated, and the yaw angle value can be measured through the point A and the point C.
S2 includes the steps of:
s21: establishing the governing equation:

m · ∂²X(t)/∂t² = F_ext(X, t) + F_int(X, t)    (1)
where X represents the position of a cloth particle at time t, F_ext(X, t) is the external driving factor (gravity and collision), and F_int(X, t) is the internal driving factor (the connections between particles); the position of a cloth particle is governed by these two factors;
regarding external factors: if only they are considered, the position of a cloth particle at time t + Δt is derived as:

X(t + Δt) = 2X(t) − X(t − Δt) + (G/m) · Δt²
where m is the mass of the particle, set to 1, and G is the gravitational constant; choosing a time step Δt gives the particle's position at the next moment under external factors alone;
regarding internal factors: to simulate the relations between particles, two adjacent particles are examined. If both are movable, they are moved in opposite directions (toward each other) by the same distance; if only one is movable, only it is moved; if both are at the same height, neither is moved. Following this criterion, the displacement is defined as:
d = (1/2) · b · ((p_i − p_0) · n) n    (2)
where b equals 1 when the particle is movable and 0 when it is not; p_i is a neighboring particle of p_0; and n is the unit vector (0, 0, 1)^T in the vertical direction;
s22: the point cloud from S1 is inverted. Because the point cloud generated from the depth map contains a fixed number of points, a "cloth" of the same length and width as the point cloud is created and dropped from a certain distance above it. Considering only the external factors of formula (1), the position of each cloth particle at the current iteration is calculated and each particle is tested for whether it has reached the surface; particles that can still move are then displaced by the distance given by formula (2). These two calculations are repeated until a specified number of iterations is reached or no particle can move. If, after the S2 iterations, the distance between a point of the S1 point cloud and its corresponding cloth particle is smaller than a set threshold (for example, 0.005 m), the point is regarded as a cabinet-face point.
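A simplified, grid-based sketch of the S21-S22 cloth simulation (not the patent's exact implementation: the internal force of formula (2) is approximated by a 4-neighbour stiffness step, and the constants g·Δt², iteration count and threshold are illustrative):

```python
import numpy as np

def csf_classify(z_grid, g_dt2=-0.0245, iters=300, tol=0.005):
    """Cloth-simulation filtering over a gridded depth map.  The point cloud
    is inverted; a cloth starts above it and falls under the external force
    (a Verlet step, formula (1) with F_int dropped), while a stiffness step
    (playing the role of formula (2)) pulls each movable particle toward the
    mean height of its 4-neighbours.  Particles reaching the surface are
    pinned.  Points whose final cloth height is within `tol` of the surface
    are classified as cabinet-face plane points."""
    surface = -z_grid                                   # invert the cloud
    cloth = np.full_like(surface, surface.max() + 0.1)  # start above it
    prev = cloth.copy()
    for _ in range(iters):
        pinned = cloth <= surface + 1e-9
        nxt = np.maximum(2 * cloth - prev + g_dt2, surface)   # external step
        pad = np.pad(nxt, 1, mode="edge")                     # 4-neighbour mean
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        nxt = np.where(pinned, nxt, np.maximum((nxt + nbr) / 2, surface))
        prev, cloth = cloth, nxt
    return np.abs(cloth - surface) < tol

z = np.full((5, 5), 1.0)     # flat cabinet face 1.0 m away
z[2, 2] = 1.3                # one pixel is a hole, 0.3 m deeper
plane_mask = csf_classify(z)
```

The stiff cloth bridges the hole instead of draping into it, so the hole pixel ends up far from the cloth and is excluded from the plane.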
Compared with the prior art, the invention, through its structure and component design, can enter a transformer substation and carry out various switching operations on the switch cabinet. Combined with the depth-map correction method for the robot tool posture using the depth camera, the posture can be recognized geometrically, meeting the requirements of good robustness and high accuracy in an indoor scene, improving the personal safety and the operation-and-inspection efficiency of operation and maintenance personnel, and reducing labor cost.
Drawings
FIG. 1 is a schematic side view of the apparatus of the present invention.
Fig. 2 is a first perspective view of the apparatus of the present invention.
Fig. 3 is a second perspective view of the apparatus of the present invention.
Fig. 4 is a third perspective view of the apparatus of the present invention.
Fig. 5 is a schematic diagram of point location after the depth camera recognizes the two-dimensional code.
Description of reference numerals:
1: laser radar; 2: gyroscope; 3: motor encoder; 4: obstacle-avoidance sensor; 5: pit-avoidance sensor; 6: antenna; 7: binocular pan-tilt; 8: battery management system; 9: core controller; 10: core control board; 11: high-precision six-axis mechanical arm; 12: operating tool; 13: depth camera; 14: tri-color lamp; 15: discharge detection sensor; 16: safety touch-edge sensor; 17: ground-knife operating mechanism; 18: emergency stop button.
Detailed Description
The invention will now be further described with reference to the accompanying drawings.
Referring to figs. 1-5, the invention provides an indoor multifunctional operation robot for a transformer substation, comprising a base, traveling wheels, a support, a rotating shaft, a laser radar 1, a gyroscope 2, motor encoders 3, an obstacle-avoidance sensor 4, a high-precision pit-avoidance sensor 5, an antenna 6, a binocular pan-tilt 7, a battery management system 8, a core controller 9, a core control board 10, a high-precision six-axis mechanical arm 11, operating tools 12, depth cameras 13, a tri-color lamp 14, a discharge detection sensor 15, safety touch-edge sensors 16 and a ground-knife operating mechanism 17. Four traveling wheels are installed at the four corners under the base. The laser radar 1, the gyroscope 2 and the core controller 9 are arranged at the front end of the upper surface of the base, and the high-precision pit-avoidance sensor 5 at the front end of the lower surface. The obstacle-avoidance sensor 4 and the antenna 6 are arranged at the rear end of the upper surface, and the core control board 10 in the middle of the upper surface, with the discharge detection sensor 15 on one side of the core control board 10. Safety touch-edge sensors 16 are provided at the front and rear ends of the base. A support is arranged at the rear end of the base; the tri-color lamp 14 is mounted on its outside, an emergency stop button 18 on the support below the tri-color lamp 14, and the binocular pan-tilt 7 at the top of the support. An operation seat is provided on the middle platform of the support, and the high-precision six-axis mechanical arm 11 is installed on it, carrying an operating tool 12 and a depth camera 13. The operation seat is connected with the ground-knife operating mechanism 17 through the rotating shaft, and another depth camera 13 is arranged at the bottom of the rotating shaft. The core controller 9 exchanges information with the core control board 10, and with the background system through the antenna 6.
The obstacle avoidance sensor 4 is a large-angle ultrasonic obstacle avoidance sensor.
The battery management system 8 is disposed in the battery compartment below the base.
The discharge detection sensor 15 is an ultrahigh frequency partial discharge detection sensor.
The binocular head 7 is provided with a high-definition visible light camera and a high-precision thermal infrared imager.
There are four sets of operating tools 12.
The ground-knife operating mechanism 17 is a five-axis-linkage ground-knife (grounding switch) operating mechanism.
The motor encoder 3 is provided inside the traveling wheel.
A tool-posture depth-map correction method for the indoor multifunctional operation robot of the transformer substation: the depth camera 13 accurately analyzes and calculates, from two-dimensional-code information, the positional relation between the end tool and the operation target. The specific steps are as follows:
s1: collecting a depth map, and converting the depth map into a point cloud;
s2: determining a plane area of the cabinet surface and an area of an object on the cabinet surface by using a cloth simulation filtering algorithm;
s3: calculating, in a traversing-translation manner, the average distance of the points around each of three points, thereby determining three point positions on the cabinet face; the three points are the points A, B and C obtained after the depth camera 13 scans the two-dimensional code;
s4: calculating the current postures of the camera and the tool end according to the geometric relation between the point position and the distance;
the specific method of S1 is as follows: define the depth direction of the cabinet face as the positive z-axis, the right side when facing the cabinet as the positive x-axis, and the upward direction as the positive y-axis; in this coordinate system, taking the point on the actual cabinet face corresponding to the lower-left corner of the depth map as the coordinate origin, a point cloud of the cabinet face is created from the value of each pixel of the depth map;
the specific method of S2 is as follows: assuming the cabinet face is laid horizontally on the ground, a piece of cloth is mathematically simulated falling onto it from above; the parts that are holes and the parts that are protruding objects can thereby be identified, and the main plane of the cabinet face found;
the specific method of S3 is as follows: using traversal, calculate the average distance of the points around A, B and C, thereby determining the three point positions on the cabinet face. The calculation in S2 already tells which pixels of the depth map are points of the cabinet-face plane and which are points of switches and holes on the cabinet. Three points A, B, C belonging to the cabinet face are then determined for calculating the camera and tool postures. Point A is found by traversing from the upper-left corner toward the lower-right corner: for a candidate point A(i, j), if B(i+m, j) and C(i, j+m) both belong to cabinet-face points, the three points are taken; if not, A moves to the next point and the calculation continues; if no such triple is found, the value of m is changed and the calculation repeated;
the specific method of S4 is as follows: a set of points A, B and C is obtained in S3. Since A and B share the same ordinate but differ in abscissa, they are used to calculate the tilt angle of the camera, computed as follows:
θ = arctan((d_A − d_B) / (m · s))
the distance measurement method comprises the steps that a distance value of a point A is obtained, a distance value of a point B is obtained, m is a value m set in S3, S is an actual distance of each pixel point on a cabinet, the point B stays at a fixed position during robot navigation, the value is unchanged after measurement, the pitch angle is calculated, and the yaw angle value can be measured through the point A and the point C.
S2 includes the steps of:
s21: establishing the governing equation:

m · ∂²X(t)/∂t² = F_ext(X, t) + F_int(X, t)    (1)
where X represents the position of a cloth particle at time t, F_ext(X, t) is the external driving factor (gravity and collision), and F_int(X, t) is the internal driving factor (the connections between particles); the position of a cloth particle is governed by these two factors;
regarding external factors: if only they are considered, the position of a cloth particle at time t + Δt is derived as:

X(t + Δt) = 2X(t) − X(t − Δt) + (G/m) · Δt²
where m is the mass of the particle, set to 1, and G is the gravitational constant; choosing a time step Δt gives the particle's position at the next moment under external factors alone;
regarding internal factors: to simulate the relations between particles, two adjacent particles are examined. If both are movable, they are moved in opposite directions (toward each other) by the same distance; if only one is movable, only it is moved; if both are at the same height, neither is moved. Following this criterion, the displacement is defined as:
d = (1/2) · b · ((p_i − p_0) · n) n    (2)
where b equals 1 when the particle is movable and 0 when it is not; p_i is a neighboring particle of p_0; and n is the unit vector (0, 0, 1)^T in the vertical direction;
s22: the point cloud from S1 is inverted. Because the point cloud generated from the depth map contains a fixed number of points, a "cloth" of the same length and width as the point cloud is created and dropped from a certain distance above it. Considering only the external factors of formula (1), the position of each cloth particle at the current iteration is calculated and each particle is tested for whether it has reached the surface; particles that can still move are then displaced by the distance given by formula (2). These two calculations are repeated until a specified number of iterations is reached or no particle can move. If, after the S2 iterations, the distance between a point of the S1 point cloud and its corresponding cloth particle is smaller than a set threshold (for example, 0.005 m), the point is regarded as a cabinet-face point.
Example 1:
Procedure 1:
Step 1: the robot acquires data from the laser radar 1, the motor encoders 3 and the IMU.
Step 2: laser frames are matched against each other and against the map to predict the robot's pose transformation between adjacent frames.
Step 3: the back end receives the robot poses estimated by the front-end odometer at different moments, together with loop-closure detections, and optimizes them to obtain a globally consistent trajectory and map; the odometer is the X, Y value computed from the motor-encoder and gyroscope data.
Step 4: finally, a SLAM map is output, realizing autonomous positioning.
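The encoder-plus-gyroscope odometry mentioned in step 3 can be sketched as follows (the tick scale and the update model are illustrative assumptions):

```python
import math

def dead_reckon(pose, ticks, ticks_per_meter, heading):
    """One odometry update: the motor encoder gives the distance travelled
    since the last update, the gyroscope gives the current heading; the X, Y
    position is integrated from the two, as consumed by the SLAM back end."""
    x, y, _ = pose
    d = ticks / ticks_per_meter
    return (x + d * math.cos(heading), y + d * math.sin(heading), heading)

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, ticks=1000, ticks_per_meter=2000, heading=0.0)           # 0.5 m forward
pose = dead_reckon(pose, ticks=1000, ticks_per_meter=2000, heading=math.pi / 2)   # 0.5 m left
```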
Procedure 2:
Step 1: the core control board 10 obtains the distance value of the wide-angle ultrasonic obstacle-avoidance sensor 4, the distance value of the pit-avoidance sensor 5, and the state of the safety touch-edge sensor 16.
Step 2: preset values are set for the ultrasonic sensor 4 and the pit-avoidance sensor 5.
Step 3: when an acquired distance value exceeds its preset value, the core control board 10 stops the drive-motor output.
Step 4: when the core control board 10 receives the signal that the safety touch-edge sensor 16 has been triggered, it stops the drive-motor output.
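The interlock logic of steps 1-4 can be sketched as follows (the threshold values and the pit-sensor convention, where a larger floor distance means a drop-off ahead, are assumptions, not from the patent):

```python
def drive_enabled(ultrasonic_cm, pit_cm, edge_triggered,
                  obstacle_limit_cm=40.0, pit_limit_cm=8.0):
    """Safety interlock over the three inputs: stop the drive motors when the
    ultrasonic sensor sees an obstacle closer than its preset, when the
    pit-avoidance sensor reads a floor distance beyond its preset, or when
    the safety touch-edge sensor is triggered."""
    if edge_triggered:
        return False            # bumper contact: stop unconditionally
    if ultrasonic_cm < obstacle_limit_cm:
        return False            # obstacle too close ahead
    if pit_cm > pit_limit_cm:
        return False            # floor dropping away: pit ahead
    return True
```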
Procedure 3:
Step 1: the battery management system 8 monitors the cell charge in real time.
Step 2: the background system sets a low-charge preset value for the battery system.
Step 3: when the battery charge falls below the preset value, the system issues a return-to-charge command.
Step 4: the core controller 9 receives the return-to-charge command, and the robot automatically returns along a preset path to the charging point to charge.
Procedure 4:
Step 1: parameters of the visible-light camera and of the thermal infrared imager are set.
Step 2: the visible-light camera and the thermal infrared imager take pictures.
Step 3: the pictures are matched with the script information.
Step 4: the picture information is identified.
Step 5: the result is output.
Procedure 5:
Step 1: the robot navigates autonomously to the target position.
Step 2: the six-axis mechanical arm 11 moves to the target position along a path planned by the system.
Step 3: the end operating tool 12 is selected according to the task set by the system.
Step 4: the depth camera 13 recognizes the two-dimensional code on the switch-cabinet face.
Step 5: the postures of the mechanical arm 11 and the end tool are leveled.
Step 6: using the depth-map correction method, the depth camera 13 accurately analyzes and calculates, from the two-dimensional-code information, the positional relation between the end tool and the operation target.
Step 7: the depth camera 13 transmits the analyzed data values to the core controller 9.
Step 8: based on the data transmitted by the depth camera 13 and the system task, the core controller 9 directs the end tool to operate on the target.
Procedure 6:
Step 1: the robot navigates autonomously to the target position.
Step 2: the rotating shaft of the ground-knife mechanism 17 rotates 90 degrees.
Step 3: the depth camera 13 at the bottom of the rotating shaft on the ground-knife mechanism 17 recognizes the two-dimensional code.
Step 4: the five-axis linkage of the ground-knife mechanism 17 levels the mechanism's posture according to the depth-camera 13 information.
Step 5: using the depth-map correction method, the depth camera 13 accurately calculates, from the two-dimensional-code information, the distance between the ground-knife mechanism 17 and the switch cabinet's grounding-switch device.
Step 6: the depth camera 13 transmits the distance value to the core controller 9.
Step 7: according to the distance value transmitted by the depth camera 13, the core controller 9 guides the extension shaft of the ground-knife mechanism to extend, aligned with the switch cabinet's opening/closing device.
Step 8: once extended into place, the rotating shaft rotates to open or close the switch cabinet's grounding switch.
Procedure 7:
Step 1: the probe of the UHF partial-discharge sensor 15 is positioned 50 cm from the switch cabinet.
Step 2: the UHF partial-discharge sensor 15 detects high-frequency signals released inside the switch cabinet.
Step 3: the UHF partial-discharge controller acquires the high-frequency signal and analyzes the data.
Step 4: the analysis result is transmitted wirelessly through the antenna 6 to the background server and displayed there.
Eighthly:
Step 1: The core controller 9 communicates with the core control board 10 over a serial connection.
Step 2: Each subsystem communicates with the core controller 9, and its information is passed to the core controller 9.
Step 3: Each module is connected to the core control board 10, and its information is transmitted to the core control board 10.
Step 4: On startup and during operation, the robot monitors the information of each module and subsystem in real time.
Step 5: The robot self-check information is displayed through color changes of the three-color lamp 14 driven by the core control board 10.
Ninthly:
Step 1: The core controller 9 is connected with the vehicle-mounted AP through a network cable, and the two exchange information.
Step 2: The vehicle-mounted AP communicates wirelessly with the base station through the antenna.
Step 3: The base station communicates with the background system through a network cable or optical fiber to exchange information.
Step 4: The core controller 9 thus exchanges information with the background system via the vehicle-mounted AP, antenna, and base station.
Tenthly:
Step 1: The core controller 9 communicates with the core control board 10 over a serial connection.
Step 2: The core controller 9 and the core control board 10 exchange information.
Eleventhly:
Step 1: The system database configures the inspection contents.
Step 2: Different contents are selected for different time periods.
Step 3: The background system reads the contents of the system database and displays them in task management.
Step 4: The inspection mode is selected as required in the background system task management.
The invention as a whole addresses the technical problems that existing transformer substations consume substantial labor and carry significant safety hazards, and that even where robot operation exists, it is realized through simple, generic image-based methods that cannot adaptively level to different scenes, lighting conditions, and markers, or that lack robustness. By combining a unique robot structural design with the depth map correction method, the robot can effectively perform autonomous mapping, autonomous navigation, autonomous charging, autonomous image recognition, autonomous thermal infrared imager temperature measurement, autonomous analysis, result derivation, and report output, and can autonomously carry out various switching operation tasks on switch cabinets through human-machine interaction. This improves the personal safety of operation and maintenance personnel and the efficiency of operation and inspection, while reducing labor costs and production safety hazards.

Claims (10)

1. A multifunctional operation robot for a transformer substation room, characterized by comprising a base, traveling wheels, a support, a rotating shaft, a laser radar (1), a gyroscope (2), a motor encoder (3), an obstacle avoidance sensor (4), a high-precision pit avoidance sensor (5), an antenna (6), a binocular holder (7), a battery management system (8), a core controller (9), a core control panel (10), a high-precision six-axis mechanical arm (11), an operation tool (12), a depth camera (13), a three-color lamp (14), a discharge detection sensor (15), a safe contact edge sensor (16) and a ground knife operation mechanism (17), wherein four traveling wheels are installed at the four lower corners of the base; the laser radar (1), the gyroscope (2) and the core controller (9) are respectively arranged at the front end of the upper surface of the base; the high-precision pit avoidance sensor (5) is arranged at the front end of the lower surface of the base; the obstacle avoidance sensor (4) and the antenna (6) are respectively arranged at the front end and the rear end of the upper surface of the base; the core control panel (10) is arranged in the middle of the upper surface of the base; the discharge detection sensor (15) is arranged on one side of the core control panel (10); safe contact edge sensors (16) are arranged on supports at the front and rear ends of the base; the support is arranged at the rear end of the base; the three-color lamp (14) is arranged on the outer side of the support; an emergency stop button (18) is arranged on the support below the three-color lamp (14); the binocular holder (7) is arranged at the top of the support; an operation seat is arranged on a platform in the middle of the support; the high-precision six-axis mechanical arm (11) is installed on the operation seat; the operation tool (12) and a depth camera (13) are arranged on the high-precision six-axis mechanical arm (11); the operation seat is connected with the ground knife operation mechanism (17) through the rotating shaft; another depth camera (13) is provided at the bottom of the rotating shaft; the core controller (9) exchanges information with the core control panel (10); and the core controller (9) exchanges information with the background through the antenna (6).
2. The multifunctional operating robot for the substation room as claimed in claim 1, wherein the obstacle avoidance sensor (4) is a large-angle ultrasonic obstacle avoidance sensor.
3. A multifunctional operating robot in a substation room according to claim 1, characterized in that the battery management system (8) is arranged in the battery box under the base.
4. A multifunctional operating robot in a substation room according to claim 1, characterized in that the discharge detection sensor (15) is a very high frequency partial discharge detection sensor.
5. A substation indoor multifunctional operating robot according to claim 1, characterized in that the binocular head (7) is configured with high definition visible light camera and high precision thermal infrared imager.
6. A substation indoor multifunctional operating robot according to claim 1, characterized in that the operating tools (12) are four sets.
7. A substation indoor multifunctional operation robot according to claim 1, characterized in that the ground knife operation mechanism (17) is a five-axis linkage ground knife operation mechanism.
8. A substation indoor multifunctional operating robot according to claim 1, characterized in that the motor encoder (3) is arranged inside a travelling wheel.
9. A tool posture depth map correction method for the multifunctional operation robot in the substation room according to claim 1, characterized in that the depth camera (13) is used to precisely analyze and calculate the position relation between the end tool and the operation target through two-dimensional code information, with the following specific steps:
s1: collecting a depth map, and converting the depth map into a point cloud;
s2: determining a plane area of the cabinet surface and an area of an object on the cabinet surface by using a cloth simulation filtering algorithm;
s3: determining three point positions on the cabinet surface by traversing in translation and calculating the average value of the distances of the points around the three points; the three points are the points A, B and C acquired after the depth camera (13) scans the two-dimensional code information;
s4: calculating the current postures of the camera and the tool end according to the geometric relation between the point position and the distance;
the specific method of S1 is as follows: defining the depth of the cabinet surface as the positive direction of a z-axis of a coordinate axis, the right side facing the cabinet surface as the positive direction of an x-axis, and the upper side facing the cabinet surface as the positive direction of a y-axis, and under the coordinate system, taking a point on the actual cabinet surface corresponding to the lower left corner of the depth map as a coordinate origin, and creating a point cloud of the cabinet surface according to the value of each pixel on the depth map;
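As an illustration of S1, here is a minimal sketch of turning a depth map into a cabinet-surface point cloud under the stated coordinate convention. This is not the patent's implementation; the function name and the per-pixel scale parameter `s` are our assumptions:

```python
import numpy as np

def depth_to_point_cloud(depth, s):
    """Convert an H x W depth map (depth along +z, into the cabinet face)
    into an N x 3 point cloud under the convention of S1: +x to the right,
    +y upward, origin at the cabinet point behind the lower-left pixel.
    `s` is the assumed real-world size of one pixel on the cabinet surface."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = xs * s
    y = (h - 1 - ys) * s  # image row 0 is the top, so flip to a bottom-left origin
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

With a constant-depth map every point gets the same z value, while x and y span the pixel grid scaled by `s`, which is the point cloud S2 then filters.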
the specific method of S2 is as follows: assuming that the cabinet surface is horizontally placed on the ground, a mathematical method is used to simulate a piece of cloth falling onto the cabinet surface from above; in this way the parts that are holes and the parts that are protruding objects can be distinguished, and the main plane of the cabinet surface can be found;
the specific method of S3 is as follows: the average value of the distances of the points around A, B and C is calculated to determine three points on the cabinet surface; from the calculation in S2 it is known which pixel points in the depth map are points of the plane of the cabinet surface and which are points of switches and holes on the cabinet surface; three points A, B and C belonging to the cabinet surface are determined for calculating the postures of the camera and the tool, where point A is found by traversing from the upper left corner to the lower right corner: for a point A(i, j), if the points B(i+m, j) and C(i, j+m) both belong to cabinet-surface points, the three points are taken; if not, A moves to the next point and the calculation continues; if no such triple is found, the value of m is changed and the calculation continues;
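The traversal in S3 can be sketched as follows. The helper below is hypothetical: the boolean mask is assumed to come from the S2 classification, and the row/column orientation of the offsets is our reading of the A(i, j), B(i+m, j), C(i, j+m) notation:

```python
import numpy as np

def find_abc(is_surface, m):
    """Scan from the upper left toward the lower right for a point A(i, j)
    such that B(i+m, j) and C(i, j+m) are also cabinet-surface pixels.
    `is_surface` is an H x W boolean mask from the S2 cloth filtering.
    Returns the three index pairs, or None so the caller can retry with
    a different m, as described in S3."""
    h, w = is_surface.shape
    for i in range(h - m):
        for j in range(w - m):
            if is_surface[i, j] and is_surface[i + m, j] and is_surface[i, j + m]:
                return (i, j), (i + m, j), (i, j + m)
    return None
```

On a mask where every pixel is surface, the scan returns the first candidate triple anchored at the upper-left corner.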
the specific method of S4 is as follows: a set of points A, B and C is obtained in said S3; since A and B have the same ordinate but different abscissas, they are used to calculate the tilt angle of the camera, with the following formula (reconstructed from the variable descriptions below, the original equation image being unavailable):

θ = arctan( (d_B − d_A) / (m · s) )

where d_B is the distance value of point B, d_A is the distance value of point A, m is the value m set in said S3, and s is the actual distance represented by each pixel point on the cabinet surface; since the robot stays at a fixed position during navigation, s is unchanged once measured. This yields the pitch angle; the yaw angle value can be measured in the same way using points A and C.
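Read this way, the S4 angle computation reduces to a one-line arctangent. The variable names d_A and d_B are ours, and the formula is our reconstruction from the variable descriptions above, not the patent's verbatim equation:

```python
import math

def camera_angle(d_a, d_b, m, s):
    """Angle between the camera axis and the cabinet face, from two
    cabinet-surface points separated by m pixels of pitch s:
    theta = arctan((d_B - d_A) / (m * s))."""
    return math.atan2(d_b - d_a, m * s)
```

Equal depths at A and B give an angle of zero, i.e. the camera faces the cabinet squarely; a depth difference equal to the points' physical separation gives 45 degrees.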
10. The substation indoor multifunctional operation robot according to claim 9, wherein the S2 comprises the steps of:
s21: establishing the equation (reconstructed from the variable descriptions below, the original equation image being unavailable):

m · ∂²X(t)/∂t² = F_ext(X, t) + F_int(X, t)        (1)

where X represents the position of a particle in the cloth at time t, F_ext(X, t) represents the external driving factors (gravity and collision), and F_int(X, t) represents the internal driving factors (the connections between particles); the position of a cloth particle is influenced by both F_ext(X, t) and F_int(X, t);
if only the external factors are considered, the position of a cloth particle at time t + Δt can be derived as:

X(t + Δt) = 2X(t) − X(t − Δt) + (G / m) · Δt²

where m is the mass of the particle, set to 1, and G is the gravitational constant; by setting the time step Δt, the position of the particle at the next moment under the external factors alone can be obtained;
regarding the internal factors, in order to simulate the relationship between the particles, two adjacent particles are considered: if both particles are movable, they are moved in opposite directions by the same distance; if one is immovable, the other is moved; if both have the same height, neither is moved; according to this simulation criterion, the displacement is defined by the formula (reconstructed from the variable descriptions below, the original equation image being unavailable):

d = (1/2) · b · ( (p_i − p_0) · n ) n        (2)

where b equals 1 when the particle can move and equals 0 when it cannot; p_i is a neighboring particle of p_0; and n = (0, 0, 1)^T is the unit vector in the vertical direction;
s22: the point cloud from S1 is inverted, and a "cloth" with the same length and width as the point cloud generated by the depth map is placed a certain distance above it; first, considering only the external factors in formula (1), the position of each cloth point at the current iteration is calculated and it is judged whether each point has fallen onto the plane; then, after determining whether each point can move, the displacement is calculated according to formula (2); these two calculations are repeated until the specified number of iterations is reached or no particle can move; a point of the S1 point cloud is considered a cabinet-surface point if, after the iteration, its distance to the corresponding cloth particle is smaller than a set threshold parameter.
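The S21/S22 procedure can be illustrated with a much-simplified one-dimensional sketch. Real cloth simulation filtering works on a two-dimensional particle grid with configurable rigidness; here the profile, parameter values, and the halfway-to-neighbours internal step are all our simplifying assumptions. After the inversion of S22, objects protruding from the cabinet face become dips below the main plane, so the cloth settles on the plane and hovers over the dips:

```python
import numpy as np

def cloth_surface_mask(height, n_iter=200, dt=0.2, threshold=0.05):
    """1-D sketch of S21/S22. `height` is the inverted surface profile.
    Gravity (external factor of formula (1)) pulls particles down via a
    Verlet step; the internal factor of formula (2) pulls each movable
    particle toward its neighbours; particles touching the surface freeze.
    Points within `threshold` of the settled cloth are kept as plane points."""
    cloth = np.full_like(height, height.max() + 1.0)  # cloth starts above all
    prev = cloth.copy()
    movable = np.ones_like(height, dtype=bool)
    for _ in range(n_iter):
        # External factor: Verlet step under unit gravity.
        trial = np.where(movable, 2 * cloth - prev - dt * dt, cloth)
        # Internal factor: movable particles move half-way toward the mean
        # of their two neighbours (wrap-around at the edges, for brevity).
        nb = (np.roll(trial, 1) + np.roll(trial, -1)) / 2.0
        trial = np.where(movable, trial + 0.5 * (nb - trial), trial)
        # Collision: particles reaching the surface stop there for good.
        hit = movable & (trial <= height)
        trial = np.where(hit, height, trial)
        movable &= ~hit
        prev, cloth = cloth, trial
    return np.abs(cloth - height) < threshold
```

On a flat profile with one dip, the cloth rests on the plane (classified as surface) while the particle over the dip is held near plane level by its frozen neighbours and never reaches the bottom, so the dip point is excluded.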

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210105971.7A CN114211504A (en) 2022-01-28 2022-01-28 Indoor multifunctional operation robot of transformer substation
CN202211027753.2A CN115256398A (en) 2022-01-28 2022-08-25 Indoor multifunctional operation robot of transformer substation


Publications (1)

Publication Number Publication Date
CN114211504A (en) 2022-03-22

Family

ID=80709024


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117086904B (en) * 2023-10-20 2024-01-12 国网安徽省电力有限公司合肥供电公司 Inspection robot for switch cabinet deflector rod type emergency brake separating mechanism

Also Published As

Publication number Publication date
CN115256398A (en) 2022-11-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220322