WO2022021739A1 - Humanoid inspection operation method and system for semantic intelligent substation robot - Google Patents
- Publication number: WO2022021739A1 (application PCT/CN2020/135608)
- Authority: WO — WIPO (PCT)
- Prior art keywords: inspection, robot, equipment, semantic, inspected
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0214—… with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221—… with means for defining a desired trajectory involving a learning process
- G05D1/0223—… with means for defining a desired trajectory involving speed control of the vehicle
- G05D1/0236—… using optical markers or beacons in combination with a laser
- G05D1/024—… using obstacle or wall sensors in combination with a laser
- G05D1/0242—… using non-visible light signals, e.g. IR or UV signals
- G05D1/0251—… using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0257—… using a radar
- G05D1/0278—… using signals provided by a source external to the vehicle, using satellite positioning signals, e.g. GPS
Definitions
- the invention belongs to the field of robots, and in particular relates to a humanoid patrol operation method and system of a semantic intelligent substation robot.
- Existing inspection robots generally use a "stop-and-preset" operation mode, whose deployment and implementation are divided into two stages: configuration and operation.
- In the configuration stage, a new substation with unknown environmental information requires a large amount of manual work.
- The inspection points of the inspection robot are usually set manually by on-site personnel according to the inspection task.
- To set a point, the on-site personnel first drive the robot along the inspection route and stop it near the electrical equipment to be inspected. They then remotely adjust the gimbal posture so that the gimbal drives the visible-light camera, infrared thermal imager and other non-contact detection sensors to align with each device to be inspected around the robot in turn, recording the corresponding gimbal preset; this completes the setting of one detection point. The process is repeated until all inspection points for the equipment covered by the inspection task have been set.
- In the operation stage, the inspection robot runs along the inspection route, stops at the inspection points one by one, and calls the gimbal presets to complete the inspection of the equipment.
- The present invention provides a human-like inspection operation method and system for a semantic intelligent substation robot, which breaks the "stop-and-preset" operation mode of the traditional substation inspection robot and realizes fully autonomous robot inspection.
- a first aspect of the present invention provides a humanoid patrol operation method for a semantic intelligent substation robot.
- a humanoid patrol operation method for a semantic intelligent substation robot comprising:
- Based on the 3D semantic map, combined with the inspection/operation task and its current position, the robot autonomously plans its walking path;
- Based on prior knowledge of the substation, the location information of in-station equipment is obtained automatically, so that the robot can independently construct a three-dimensional semantic map of the substation configuration-free, without manual information injection.
- The spatial distribution of objects in the current environment is obtained, and the inspection image data is analyzed in real time to identify the device identification code in the image and locate the device's target area, obtaining device identity and location simultaneously in the spatial information.
- Automatic identification of the unknown areas around the robot is realized, the local path planning method is used for robot motion planning in the unknown areas, and map construction of the unknown environment proceeds until the semantic map of the whole station environment has been constructed.
- the process of performing the map construction of the unknown environment includes:
- the semantic information of the road, equipment and obstacle objects in the current environment is obtained, and the spatial position coordinate transformation is used to project the spatial information of the road, equipment and obstacles to the 3D point cloud data to establish a semantic map.
- the robot arm is driven to move, so that the end of the robot arm faces the position of the device and moves to the local range of the target device;
- target recognition is automatically performed at the front end of the robot, automatic analysis of image data at the front end is realized, and status information of the equipment is obtained in real time.
- The robot arm is controlled to adjust its pose and stay aimed at the device to be inspected, so that the robot always maintains the best relative pose relationship with the device during data collection;
- The deep learning algorithm is used to identify the position of the device in the image, and the relative pose relationship between the robot and the device to be inspected is used to realize spatial pose control of the acquisition device mounted at the end of the robotic arm;
- the quality of the collected data is evaluated and optimized, so as to realize the optimal collection of the inspection data of the equipment to be inspected.
- A model of how the optimal image collection points change over time, built from historical data, is used to realize autonomous optimal selection of inspection points across different seasons and time periods.
- The confidence of the inspection data at different locations and under different lighting conditions is evaluated, and the data with the highest confidence is selected as the inspection status data.
- a panoramic three-dimensional model of the substation is constructed based on the digital twin method, and the immersive inspection operation of the substation based on virtual reality technology is realized through the real-time reproduction of image, sound and tactile information.
- a second aspect of the present invention provides a robot.
- a robot which adopts the above-mentioned semantic intelligent substation robot humanoid patrol operation method for patrol inspection.
- a third aspect of the present invention provides a humanoid patrol operation system for a semantic intelligent substation robot.
- the present invention provides a humanoid patrol operation system for a semantic intelligent substation robot, which includes at least one robot as described above.
- Another semantic intelligent substation robot humanoid patrol operation system includes:
- the robot is deployed in various areas in the substation;
- Each robot includes a robot body, the robot body is provided with a robotic arm, and the end of the robotic arm is equipped with an inspection/work tool;
- a computer program is stored in the control center, and when the program is executed by the processor, the steps in the above-mentioned semantic intelligent substation robot humanoid patrol operation method are realized.
- a fourth aspect of the present invention provides a computer-readable storage medium.
- The present invention creatively proposes a human-like inspection operation method for semantic intelligent substation robots, together with an autonomous construction method for a three-dimensional substation semantic map, which realizes active autonomous perception of road and equipment information in an unknown substation environment and completely removes the robot's reliance on manual stops and preset configuration. It breaks the traditional "stop-and-preset" inspection operation mode and solves the problems of the low intelligence level of traditional robot inspection and its high dependence on manual configuration.
- The present invention proposes an automatic construction method for the robot semantic map and adopts a visual-laser fusion navigation method, realizing three-dimensional autonomous navigation of the robot and solving the problems of the single navigation method and insufficient intelligent perception capability of traditional robots.
- The present invention proposes an AI front-end recognition method for substation inspection video, which uses deep learning model quantization and pruning to reduce the computational complexity of the algorithm and improve the real-time performance of the system, and develops a low-power, high-performance hardware system for real-time analysis of substation inspection video, reducing network transmission pressure and improving the real-time performance of data processing.
- The present invention proposes an immersive operation method for robots, which combines image, video, sound and other multimodal information and reconstructs panoramic information of the robot's operating environment through deep fusion of multi-source, multimodal information, so that staff in the control room can truly understand the substation environment and equipment conditions, realizing immersive inspection operation of the substation robot.
- Fig. 1 is the flow chart of the humanoid patrol operation method of the semantic intelligent substation robot according to the embodiment of the present invention
- FIG. 2 is a schematic diagram of an optimal data collection process for substation inspection equipment according to an embodiment of the present invention
- Fig. 3 is the structure diagram of the humanoid patrol operation system of the substation robot according to the embodiment of the present invention.
- FIG. 4 is a flowchart of autonomous construction of a positioning and navigation map of an inspection robot according to an embodiment of the present invention
- FIG. 5 is a flowchart of a three-dimensional electronic map semantic analysis according to an embodiment of the present invention.
- FIG. 6 is a framework diagram of real-time identification of substation inspection video according to an embodiment of the present invention
- FIG. 7 is a flowchart of real-time identification of substation inspection video according to an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of a robot according to an embodiment of the present invention.
- Figure 9(a) is a schematic structural diagram of the master arm according to an embodiment of the present invention.
- FIG. 9(b) is a schematic diagram of the slave arm structure according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram of a quick replacement structure according to an embodiment of the present invention.
- Orientation or positional relationships are based on those shown in the accompanying drawings and are relational terms used only for convenience in describing the structural relationships of the components or elements of the present invention; they do not refer to any specific component or element and should not be construed as limiting the invention.
- The humanoid inspection operation method of the semantic intelligent substation robot of the present embodiment includes:
- S102 Based on the three-dimensional semantic map, combined with the inspection/job task and the current position of the robot, autonomously plan the walking path of the robot;
- S103 Control the robot to move according to the planned walking path, and carry out inspection/operation tasks during the traveling process;
- S104 In the process of carrying out inspection/operation tasks, adjust the pose of the robotic arm equipped with inspection/operation tools in real time, so as to automatically collect and identify images of the equipment to be inspected at the best angle, or automatically perform operation tasks at the best angle, completing fully autonomous inspection/operation of the substation environment.
- In steps S101 and S102, based on prior knowledge of the substation, the location information of in-station equipment is obtained automatically, and the robot independently constructs a three-dimensional semantic map of the substation configuration-free, without manual information injection.
- the specific process of constructing the three-dimensional semantic map of the unknown substation environment is as follows:
- the spatial distribution of the current environmental objects is obtained.
- the identification code of the equipment in the image is identified, and the target area of the equipment is located to realize the simultaneous identification and location of the equipment in the spatial information.
- Automatic identification of the unknown areas around the robot is realized, the local path planning method is used for robot motion planning in the unknown areas, and map construction of the unknown environment proceeds until the semantic map of the whole station environment has been constructed.
- the process of performing the map construction of the unknown environment includes:
- the semantic information of the road, equipment and obstacle objects in the current environment is obtained, and the spatial position coordinate transformation is used to project the spatial information of the road, equipment and obstacles to the 3D point cloud data to establish a semantic map.
- the three-dimensional semantic map is a pre-stored semantic map, wherein the method for formulating an inspection/operation path includes:
- Receive inspection/operation tasks, which include designated inspection/operation areas or designated inspection/operation equipment;
- the three-dimensional space projection coordinates of all equipment to be inspected/operated in the semantic map are used as points on the robot's walking route, and the inspection/operation route is planned based on the current location of the robot.
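As a minimal sketch of this planning step (hypothetical names; the patent does not specify the waypoint-ordering strategy), the projected device coordinates can be ordered greedily from the robot's current position:

```python
import math

def plan_route(robot_xy, device_points):
    """Greedy nearest-neighbour ordering of inspection waypoints.

    robot_xy      -- (x, y) current robot position
    device_points -- {device_id: (x, y)} ground-plane projections of the
                     devices' 3D semantic-map coordinates
    Returns the visiting order as a list of device ids.
    """
    remaining = dict(device_points)
    route, current = [], robot_xy
    while remaining:
        # visit the closest not-yet-visited device next
        nxt = min(remaining, key=lambda d: math.dist(current, remaining[d]))
        route.append(nxt)
        current = remaining.pop(nxt)
    return route

order = plan_route((0.0, 0.0), {"CT1": (2, 0), "PT3": (10, 1), "GIS2": (4, 3)})
print(order)  # ['CT1', 'GIS2', 'PT3']
```

A production planner would instead run a shortest-path search over the semantic map's road network, but the greedy ordering shows how the projected device coordinates become route points.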
- the semantic map includes a three-dimensional map of a substation and semantic information of equipment on the three-dimensional map.
- the construction method includes:
- the embedded AI analysis module pre-stores deep learning models for identifying roads, equipment, and various obstacles.
- Through target detection, the semantic information of roads, equipment and obstacles in the current environment is obtained;
- From the binocular image and 3D point cloud data, the spatial position distribution of roads, equipment and obstacles in the current environment is obtained;
- From the image and 3D point cloud data, the distance of peripheral equipment or obstacles from the robot body can be obtained (the binocular image is used to identify close-range obstacles, and the 3D point cloud data is used to identify long-range obstacles); combined with the robot's running direction in the inspection task, the spatial distribution of obstacles centered on the robot body is obtained.
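The near/far split described above can be sketched as follows (the 5 m hand-off threshold and all names are illustrative assumptions, not from the source):

```python
import math

NEAR_LIMIT = 5.0  # metres: trust stereo below this range, lidar above (assumed threshold)

def obstacle_distribution(stereo_ranges, lidar_ranges, n_sectors=8):
    """Fuse near-field stereo and far-field lidar range readings into a
    robot-centred polar map: minimum obstacle distance per bearing sector.

    stereo_ranges, lidar_ranges -- lists of (bearing_rad, distance_m)
    """
    sectors = [math.inf] * n_sectors
    width = 2 * math.pi / n_sectors
    for bearing, dist in stereo_ranges:
        if dist <= NEAR_LIMIT:                              # stereo: close-range obstacles
            i = int((bearing % (2 * math.pi)) / width) % n_sectors
            sectors[i] = min(sectors[i], dist)
    for bearing, dist in lidar_ranges:
        if dist > NEAR_LIMIT:                               # lidar: long-range obstacles
            i = int((bearing % (2 * math.pi)) / width) % n_sectors
            sectors[i] = min(sectors[i], dist)
    return sectors
```

The resulting per-sector distances, combined with the robot's running direction, give the obstacle distribution around the body that the local planner consumes.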
- Automatic identification of passable unknown areas around the robot is realized. If a passable unknown area exists, the local path planning method is used to plan the robot's motion in the unknown area, and motion commands are sent to the robot's industrial controller to move the robot into the passable unknown area; then go to step (4). If no passable unknown area exists, all unknown areas have been explored and map construction ends.
- (4) Build a 3D SLAM map from the binocular image and the 3D point cloud data, and return to step (1).
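The explore-until-done loop of steps (1)-(4) hinges on finding a passable unknown area; one common way to do this (an assumption here, the patent does not name the algorithm) is frontier detection on an occupancy grid:

```python
from collections import deque

# Cell values: 0 = explored free space, 1 = obstacle, -1 = unknown
def nearest_frontier(grid, start):
    """BFS from the robot cell through explored free space to the nearest
    frontier: a free cell adjacent to unknown space. Returns its (row, col),
    or None when no passable unknown area remains and map construction ends."""
    rows, cols = len(grid), len(grid[0])
    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc
    def is_frontier(r, c):
        return grid[r][c] == 0 and any(grid[nr][nc] == -1 for nr, nc in neighbours(r, c))
    seen, q = {start}, deque([start])
    while q:
        r, c = q.popleft()
        if is_frontier(r, c):
            return (r, c)
        for nr, nc in neighbours(r, c):
            if grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return None  # all unknown areas explored
```

Each returned frontier cell becomes the goal of the local path planner; when the function returns None, the whole-station semantic map is complete.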
- The three-dimensional SLAM map is constructed from the binocular image and the three-dimensional point cloud data; specifically, the semantic information of roads, equipment and obstacles obtained in step (2) is used to build the semantic map.
- In this way, the 3D positions of the devices to be inspected in the 3D navigation map and accurate clustering and semantic labeling of the point cloud are achieved, and a roaming semantic map is obtained.
- The roaming semantic map includes the three-dimensional spatial positions of the equipment in the substation and their semantics.
- Semantic information such as passable roads, towers and meters identified in the 2D image can be assigned to the 3D point cloud, so that the 3D point cloud can be clustered more accurately, making the constructed map closer to reality.
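A minimal sketch of assigning 2D semantic labels to lidar points by camera projection (the intrinsics K and the lidar-to-camera extrinsic are assumed known from calibration; all names are illustrative, not from the source):

```python
import numpy as np

def label_point_cloud(points, labels_2d, K, T_cam_from_lidar):
    """Assign per-pixel semantic labels (road, tower, meter, ...) from the
    2D image to 3D lidar points by projecting each point into the camera.

    points           -- (N, 3) lidar points
    labels_2d        -- (H, W) integer label image from the 2D recognizer
    K                -- (3, 3) camera intrinsic matrix
    T_cam_from_lidar -- (4, 4) extrinsic transform, lidar frame -> camera frame
    Returns an (N,) label array; -1 for points behind the camera or
    projecting outside the image.
    """
    H, W = labels_2d.shape
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_from_lidar @ pts_h.T)[:3]           # points in the camera frame
    uvw = K @ cam
    w = np.where(uvw[2] != 0, uvw[2], 1.0)           # guard division; such points are masked out
    u = np.round(uvw[0] / w).astype(int)
    v = np.round(uvw[1] / w).astype(int)
    ok = (cam[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(len(points), -1, dtype=int)
    labels[ok] = labels_2d[v[ok], u[ok]]
    return labels
```

The labelled points can then be clustered per semantic class, which is what lets the constructed map track reality more closely than geometry-only clustering.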
- After the robot has established the 3D navigation semantic map, it can use the map together with the ROS navigation module to realize motion navigation in the substation.
- The robot adopts a combination of static and dynamic maps: the static-map method uses the roaming semantic map to project the three-dimensional space coordinates of the equipment onto the walking route, taking the vertical fan-shaped area at the projected position of the equipment to be inspected as the task navigation point; in the dynamic-map method, the robot dynamically identifies the equipment concerned by the task during movement, obtains the device's current three-dimensional coordinates, and updates the map information in real time.
- This embodiment proposes an autonomous construction method for the robot's inspection positioning and navigation map, realizing roaming construction of a three-dimensional visual semantic map, and proposes a task-oriented inspection navigation control method integrating binocular vision and three-dimensional laser.
- Laser-vision fusion navigation planning of the robot is realized, solving the navigation failures caused by the sparse laser point clouds of traditional robots.
- step S104 according to the positional relationship between the robot and the equipment to be inspected, the robot arm is driven to move, so that the end of the robot arm faces the position of the device and moves to the local range of the target device;
- target recognition is automatically performed at the front end of the robot, automatic analysis of image data at the front end is realized, and status information of the equipment is obtained in real time.
- the robot arm is controlled to adjust the pose and always align with the device to be inspected, so that the robot always maintains the best relative pose relationship with the device to be inspected during data collection;
- The deep learning algorithm is used to identify the position of the device in the image, and the relative pose relationship between the robot and the device to be inspected is used to realize spatial pose control of the acquisition device mounted at the end of the robotic arm;
- the quality of the collected data is evaluated and optimized, so as to realize the optimal collection of the inspection data of the equipment to be inspected.
- A model of how the optimal image collection points change over time, built from historical data, is used to realize autonomous optimal selection of inspection points across different seasons and time periods.
- the confidence level of the inspection data at different locations and under different lighting conditions is evaluated.
- The inspection data with the highest confidence is selected as the inspection status data of the equipment to be inspected, improving the effectiveness of the inspection data.
- R = 0.5 × R_position + 0.5 × R_l
- R_position = cos(C_dx)
- R_l = 1 − (L − L_x)/L_x, for L > L_x
- where R is the confidence of the robot's current inspection data;
- R_position is the position confidence;
- C_dx is the angle between the current robot end position and the surface normal vector of the device to be detected;
- cos is the cosine function;
- R_l is the illumination confidence;
- L is the current light intensity;
- L_x is the standard light intensity, i.e. the light intensity under normal lighting conditions, generally 100,000 lux.
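The confidence formula can be implemented directly; note the source leaves R_l undefined for L ≤ L_x, so the sketch below assumes 1.0 in that case (an interpretation, not part of the source):

```python
import math

L_X = 100_000.0  # standard light intensity in lux (normal lighting conditions)

def inspection_confidence(angle_rad, lux):
    """Confidence R of the current inspection data:
    R = 0.5*R_position + 0.5*R_l, with R_position = cos(C_dx) and,
    for L > L_x, R_l = 1 - (L - L_x)/L_x.
    For L <= L_x, R_l is assumed to be 1.0 (not specified in the source).
    """
    r_position = math.cos(angle_rad)                 # position confidence
    r_l = 1.0 - (lux - L_X) / L_X if lux > L_X else 1.0  # illumination confidence
    return 0.5 * r_position + 0.5 * r_l

print(inspection_confidence(0.0, 100_000.0))  # 1.0 -- head-on view at standard light
```

Evaluating this score across candidate collection points and picking the maximum implements the "highest confidence" selection described above.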
- The real-time positions of the equipment to be inspected and of the robot in the inspection task are obtained, the robot is controlled to move to the work point along the inspection or operation path, and the end of the robot arm is driven to face the position of the equipment to be inspected.
- The relative motion relationship between the robot and the equipment to be inspected is calculated, and the robot arm is controlled to adjust its pose so as to stay aimed at the equipment, so that the sensor module mounted on the end of the robot arm collects the inspection data of the equipment to be inspected.
- the detection according to the optimal inspection pose includes: determining the current actual pose of the robot based on the 3D semantic map and binocular vision and 3D laser sensor data; calculating the relative pose according to the actual pose and the optimal inspection pose Deviation; control the robot to adjust the pose according to the relative pose deviation, and perform detection.
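The relative pose deviation step can be sketched for the planar case (an illustrative simplification; the patent works with full 3D poses from binocular vision and 3D laser data):

```python
import math

def pose_deviation(actual, optimal):
    """Relative pose deviation (dx, dy, dtheta) between the robot's actual
    pose and the optimal inspection pose, expressed in the actual pose's
    own frame so it can feed the pose-adjustment controller directly.
    Poses are (x, y, theta) on the ground plane."""
    dx_w = optimal[0] - actual[0]
    dy_w = optimal[1] - actual[1]
    c, s = math.cos(actual[2]), math.sin(actual[2])
    dx = c * dx_w + s * dy_w           # rotate the world-frame delta into the robot frame
    dy = -s * dx_w + c * dy_w
    dth = (optimal[2] - actual[2] + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return dx, dy, dth
```

The controller then drives each component of the deviation toward zero before performing detection.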
- binocular vision and 3D laser sensor data are obtained in real time to determine whether the layout of the equipment on the walking route is inconsistent with the 3D semantic map, and if so, the 3D semantic map is updated.
- the equipment images are also collected in a refined manner, and the process is as follows:
- the substation environment is complex, and the collected images may contain multiple types of equipment at the same time.
- the deep learning device recognition algorithm library is built here, including mainstream target recognition algorithms such as faster-rcnn, ssd, and yolo.
- the algorithm library is based on a fully convolutional deep neural network; combined with the equipment information contained in the inspection task, it extracts target detection features and semantic features, then classifies and detects the fused features to realize accurate identification of the equipment in the inspection images.
- This embodiment designs a target detection algorithm (not limited to faster-rcnn, SSD, yolo, etc.) that incorporates the spatial position relationships of power equipment, constructs an automatic scheduling method for high-performance computing resources, and proposes an equipment target detection and tracking method that realizes real-time, efficient identification of inspection videos and improves the accuracy of substation equipment identification.
- the optimal relative positional relationship between the inspection camera at the end of the robot arm and the equipment to be inspected during data collection is calculated; according to the robot's current position, the inspection route, and the set inspection speed, the robot arm pose at the next moment in the non-stop state is calculated, so that the inspection camera at the end of the robot arm and the device to be inspected maintain the best relative position relationship, that is, remain aligned with the device to be inspected.
- n_x, n_y, n_z are the normal vectors of the inspection surface of the device to be inspected (such as the dial surface of a meter), x, y, z are the spatial coordinates of the device to be inspected, and x_r, y_r, z_r and n_xr, n_yr, n_zr are the robot spatial pose vectors.
- when the robot's running pose maximizes the above formula, the best relative pose between the robot and the device to be detected is obtained.
- the spatial pose of the end of the robotic arm is:
- n_x, n_y, n_z are the normal vectors of the detection surface of the equipment to be inspected (such as the dial surface of a meter), and n_xa, n_ya, n_za are the spatial attitude vectors of the robotic arm; the manipulator is controlled so that the above formula achieves its maximum value, giving the best data collection attitude for the device to be detected.
- the distance information between the device to be detected and the end of the robot arm is used to automatically calculate the configuration focal length of the inspection camera to ensure that the information of the device to be detected is clearly visible in the image.
- the image data collected by the binocular vision camera is acquired in real time, the equipment to be detected in the image is identified based on the deep learning method, and the posture of the robotic arm is fine-tuned to ensure that the area of the equipment to be detected is always in the central area of the image.
- a deep learning algorithm is used to perform device identification on each frame of images in the inspection video, and when a target device is identified, a binocular stereo algorithm is used to obtain the three-dimensional space position coordinates of the target device.
- a motion compensation algorithm for robot image acquisition is proposed.
- Robot motion compensation is used to improve the stability of inspection image acquisition during motion and ensure the validity of inspection images. Since the robot needs to keep the device to be detected in the central area of the image while traveling in order to achieve accurate acquisition, the robot's motion must be compensated for.
- This embodiment proposes a motion compensation algorithm for robot image capture. The formulas are as follows:
- Control_x = Kpx*delta_x + Vx*Kbx*D
- Control_y = Kpy*delta_y + Vy*Kby*D
- Control_x and Control_y are the control adjustments of the robot end posture in the X and Y directions
- delta_x and delta_y are the coordinate deviations in the X and Y directions between the center of the device area and the center of the image in the image captured by the robot at a certain moment
- Kpx and Kpy are the proportional coefficients of the control adjustment of the robot end posture in the X and Y directions
- Vx and Vy are the movement speeds of the robot end in the X and Y directions respectively
- Kbx and Kby are the control amount compensation coefficients of the robot end posture in the X and Y directions
- D is the distance between the end of the robot and the device to be detected.
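The two compensation formulas above translate directly into code. The gain values below are illustrative placeholders, not values from the source.

```python
def motion_compensation(delta_x, delta_y, vx, vy, dist,
                        kpx=0.1, kpy=0.1, kbx=0.01, kby=0.01):
    """Control_x = Kpx*delta_x + Vx*Kbx*D ; Control_y = Kpy*delta_y + Vy*Kby*D.

    delta_x/delta_y: pixel deviation of the device-area center from the image center
    vx/vy: robot end movement speeds; dist: distance to the device to be detected.
    """
    control_x = kpx * delta_x + vx * kbx * dist
    control_y = kpy * delta_y + vy * kby * dist
    return control_x, control_y
```

The first term corrects the observed image offset; the second term feeds forward the robot's own motion scaled by distance, so the end posture leads the target rather than lagging it.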
- the non-stop inspection method of this embodiment can be used on a substation inspection robot, and can also be used in operation tasks.
- the equipment area is calibrated in a small number of images of the object to be identified collected by the inspection;
- the picture is transformed to simulate the situation of shooting the device from different angles and different distances;
- the background picture is updated to obtain images of the equipment to be inspected in different backgrounds, thereby generating a large number of calibrated pictures.
- Preprocess a small number of images of objects to be identified collected by power inspection to enhance the quality of the images.
- a small number of images are preprocessed, including steps such as deblurring and de-jittering.
- the small number of collected images are calibrated to mark the device area in each image; only this step requires manual calibration, and the number of images to calibrate is small.
- the calibrated image is processed to remove the background to obtain a real picture with a transparent background.
- background-removing processing is performed to obtain a physical image with a transparent background, so that the background can be replaced later to realize physical images under different backgrounds.
- Transform the real picture after background removal: randomly scale, rotate, and affine-transform the transparent-background image to simulate shooting the device from different angles and distances.
- Import the image into the Blender software and add different lighting rendering to simulate different lighting conditions, obtaining image data under different lighting conditions.
- Update the texture background or background environment and obtain the images of the object to be recognized in different texture backgrounds or background environments, thereby generating a large number of calibrated pictures, realizing the expansion of various sample image data and sample annotation files, and enriching the sample image data.
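Two core steps of the pipeline above, random geometric transformation and background replacement, can be sketched in plain NumPy. Rotation/affine warps and the Blender lighting step are omitted for brevity; a full pipeline would use an image library such as OpenCV plus Blender as the text describes. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_scale(rgba, lo=0.5, hi=1.5):
    """Nearest-neighbour rescale of an RGBA device crop by a random factor,
    simulating shooting the device from different distances."""
    s = rng.uniform(lo, hi)
    h, w = rgba.shape[:2]
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return rgba[rows][:, cols]

def composite(background_rgb, device_rgba, top, left):
    """Paste a transparent-background device crop onto a background image,
    producing a new training sample with a known device-area calibration."""
    out = background_rgb.copy()
    h, w = device_rgba.shape[:2]
    alpha = device_rgba[..., 3:4] / 255.0
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (
        alpha * device_rgba[..., :3] + (1 - alpha) * region
    ).astype(out.dtype)
    return out
```

Because the paste position and scale are chosen by the generator, the bounding-box annotation for each synthesized image is known without any manual calibration, which is what lets a small calibrated set expand into a large one.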
- the SMOTE (Synthetic Minority Over-sampling Technique) method is used when enhancing the multi-sample data, specifically:
- Class imbalance is common: the numbers of samples in the classes of a dataset are not approximately equal. If the class sizes differ greatly, the classification performance of the classifier suffers. Suppose the small-sample class is very small, say only 1% of the population; then even if all small samples are misclassified as the large class, the classifier's accuracy under the empirical risk minimization strategy can still reach 99%, but because the features of the small samples were never learned, the actual classification performance will be very poor.
- the SMOTE method is an interpolation-based method, which can synthesize new samples for small sample classes.
- the main process is as follows:
- the first step is to define the feature space, map each sample to a point in the feature space, and determine a sampling ratio N according to the sample imbalance ratio;
- the second step is, for each small-class sample, to find its k nearest neighbors in the feature space, randomly select one of them, and synthesize a new sample by interpolating between the sample and the selected neighbor;
- the third step is to repeat the above steps until the numbers of large and small samples are balanced.
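The SMOTE steps described above can be sketched in pure Python; this is a minimal illustration of the interpolation idea, not a production implementation (libraries such as imbalanced-learn provide a full version).

```python
import math
import random

def smote(minority, n_new, k=3, seed=0):
    """Synthesize n_new samples by interpolating each picked minority point
    toward one of its k nearest minority neighbours (SMOTE's core step)."""
    rnd = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rnd.choice(minority)
        # k nearest neighbours of x within the minority class
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        nb = rnd.choice(neighbours)
        t = rnd.random()  # interpolation factor in [0, 1)
        out.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return out
```

Each synthetic point lies on the segment between a minority sample and one of its neighbours, so the new samples stay inside the minority region of the feature space rather than being arbitrary noise.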
- the background image is either a background captured in reality or a background from an open-source texture library, and the two are mixed in a certain ratio so that the training images cover both virtual and real data.
- the combination ratio of the two backgrounds (e.g., 50%/50%) realizes effective fusion of virtual and real background images, makes the training images cover both virtual and real data, and better improves the recognition accuracy of the trained model.
- This embodiment proposes an autonomous analysis method for robot inspection image data and designs an automatic identification algorithm for substation equipment based on few-sample images, realizing automatic analysis and screening of inspection equipment status information and improving the analysis quality of inspection image data.
- the method for enhancing image data with few samples for power inspection in this embodiment can be applied to common inspection robots, UAV inspections, etc., and processes the collected images to obtain a large number of calibrated images.
- the coordinates of the robotic arm are adjusted so that the equipment to be inspected is located in the center of the image, so as to realize real-time adjustment of the state of the equipment to be inspected.
- the location of the equipment to be inspected is also tracked, and the real-time location information of the equipment to be inspected is sent to the robotic arm control module.
- This embodiment also performs real-time identification of substation inspection videos based on AI front-end, as shown in Figure 7, and the process includes:
- A) Sample and model construction step: collect image data of the equipment and its various states in the station and annotate it to form a substation equipment image sample library; use a deep learning target detection algorithm to train on the sample images, forming a substation equipment model and a substation equipment status recognition model. The substation equipment model is used for identification and positioning of equipment in the inspection video; the substation equipment status recognition model is used for identifying equipment status in the inspection video;
- the AI analysis module loads the substation equipment model and the substation equipment status recognition model; as shown in Figure 6, the components involved in the real-time identification of substation inspection video include at least one fixed-point camera, at least one robot camera, and the AI analysis module;
- the robot camera is installed on the substation inspection robot to collect equipment and environmental video information along the robot's inspection route; fixed-point cameras are distributed in the substation equipment area to collect equipment and environmental video information in the areas the robot inspects; the AI analysis module processes the substation inspection video collected by the fixed-point cameras and robot cameras in real time, identifies and outputs equipment location information, and analyzes the equipment image information in the collected video to track equipment status in real time.
- the AI analysis module starts the device identification service, performs device target detection on the fixed-point surveillance and robot inspection videos, realizes real-time identification and positioning of the device to be detected in the video, and outputs the detection frame of the target device in the inspection image, including the center position of the target device and the length and width of the device area;
- D) Device target tracking step: after the AI analysis module identifies the target device, the target device is tracked with the KCF method to ensure real-time and accurate target acquisition. Because the target tracking algorithm is prone to losing the tracked target under severe foreground changes, the target detection and recognition algorithm is run at every time interval d_t and its output is used as the input coordinates of the KCF algorithm, where (X_t, Y_t, W_t, H_t) are the coordinates output by the KCF tracker at time t, R(t) is the coordinate of the target device output by the target detection algorithm at time t, and Floor is the rounding function.
- the target detection algorithm is used to regularly update the input coordinates of the KCF algorithm to eliminate the problem of false tracking, improve the accuracy of target tracking, and improve the real-time performance of the algorithm.
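The keyframe-detection / non-keyframe-tracking schedule described above can be sketched as follows. The detector and tracker are stand-in callables (hypothetical placeholders for, e.g., a YOLOv3 detector and a KCF update); only the re-seeding schedule is shown.

```python
class RefreshedTracker:
    """Every d_t frames, run the (expensive) detector and use its output as the
    tracker's input coordinates; on other frames, run only the cheap tracker."""

    def __init__(self, tracker_step, detector, d_t=10):
        self.tracker_step = tracker_step  # e.g. KCF update: (frame, box) -> box
        self.detector = detector          # e.g. YOLOv3 on the current frame
        self.d_t = d_t
        self.box = None
        self.frame_idx = 0

    def update(self, frame):
        if self.box is None or self.frame_idx % self.d_t == 0:
            # keyframe: detector output re-seeds the tracker, eliminating drift
            self.box = self.detector(frame)
        else:
            # non-keyframe: tracker update only, keeping the loop real-time
            self.box = self.tracker_step(frame, self.box)
        self.frame_idx += 1
        return self.box
```

Periodically overwriting the tracker state with detector output is what bounds accumulated tracking error, at the cost of running the detector once every d_t frames.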
- Fine image acquisition step: during target tracking, the real-time position information of the device is sent to the robotic arm control module, the coordinates of the end of the robotic arm are adjusted so that the device is located in the center of the image, and the camera focal length is adjusted to capture detailed image information of the device.
- Equipment status identification step: the AI analysis module starts the substation equipment status identification service, intelligently analyzes the detailed image of the equipment, completes real-time acquisition of the identified status, and sends the results back to the substation inspection video backend.
- the target recognition algorithm uses the YOLOV3 algorithm, and the target tracking algorithm uses the KCF target tracking algorithm.
- the target tracking algorithm builds a device target detection framework in which key-frame target detection interacts with non-key-frame target tracking, and uses deep learning model quantization and pruning technology to reduce the computational complexity of the algorithm and improve the real-time performance of the system.
- the AI analysis module adopts the automatic scheduling method of high-performance computing resources to realize the analysis function of multi-channel video of substation robots and fixed point inspection.
- a panoramic three-dimensional model of the substation is constructed based on the digital twin method, and the immersive inspection operation of the substation based on the virtual reality technology is realized through the real-time reproduction of image, sound, and tactile information.
- the virtual reality module on the robot body can be used to construct the virtual environment of the substation operation site.
- the VR virtual reality module includes a VR camera, which can collect the on-site environment and build a virtual environment on the job site. Through this module, the operation and maintenance personnel can remotely perceive the on-site working environment virtually, so as to realize the precise operation of the on-site equipment.
- the robot can conduct autonomous inspection; when the robot finds equipment defects or problems, it will send the information to the operation and maintenance personnel in time, and give the corresponding problem category and corresponding solution for the operation and maintenance personnel to refer to.
- This embodiment proposes a panoramic immersive inspection and operation method for a robot, which combines multi-modal information such as images, videos, and sounds to construct panoramic 3D information of substations based on digital twin technology (which can be 3D images, 3D lasers, virtual models, etc.), proposes an immersive robot operation method (which can be heterogeneous or isomorphic, master-slave or automated), and reconstructs robot operations through deep fusion of multi-source, multi-modal information.
- the panoramic environment information enables the staff to truly understand the substation environment and equipment conditions in the control room, and realizes the immersive inspection operation of the substation robot.
- This embodiment provides a robot, which adopts the humanoid patrol operation method of a semantic intelligent substation robot as described in Embodiment 1 to perform patrol inspection.
- the robot includes a robot body 1 with a multi-degree-of-freedom mechanical arm 2 , and the end of the multi-degree-of-freedom mechanical arm is equipped with an inspection device 3 .
- the inspection equipment carried at the end of the multi-degree-of-freedom robotic arm includes: a visible light camera, an infrared camera, a gripper, a suction cup, a partial discharge detector, etc.
- the multi-degree-of-freedom manipulator arm on the robot body is used as the slave arm 4, and a control master arm 5 is additionally set.
- the master arm is a portable operating system suitable for human operation. After wearing the master arm 5, centralized-control operation and maintenance personnel can remotely control the slave arm 4 from the centralized control room through 5G communication to perform inspection operations on the equipment in the substation.
- a VR virtual reality module 3 is also set on the robot body, which is used to construct a virtual environment of the substation operation site.
- the VR virtual reality module 3 includes a VR camera, which can collect the on-site environment and build a virtual environment on the job site. Through this module, the operation and maintenance personnel can remotely perceive the on-site working environment virtually, so as to realize the precise operation of the on-site equipment.
- the robot can conduct autonomous inspection; when the robot finds equipment defects or problems, it sends the information to the operation and maintenance personnel in time and gives the corresponding problem category and solution for their reference.
- the operation and maintenance personnel can issue online operation orders, and the robot will automatically switch to the remote operation mode after receiving the order.
- the robot will come to the equipment that needs to be repaired by itself, turn on the robot VR virtual reality module, and remotely construct the on-site virtual environment in the centralized control center through the 5G communication module;
- the operation and maintenance personnel use the 5G communication module to remotely control the slave arm on the substation on-site robot; through the VR virtual reality, the operation and maintenance personnel can perceive the environment of the equipment to be repaired in the substation in real time, and use the slave arm to carry out refined operations.
- the remote maintenance of the substation equipment by the operation and maintenance personnel is realized, the timeliness of the maintenance of the substation is improved, and the personal safety of the operation and maintenance personnel is ensured.
- the front end of the robot body is provided with an AI front-end data processing module, and the AI front-end data processing module is configured to realize front-end recognition of images of substation inspection equipment.
- the process of target recognition based on images is carried out at the front end of the robot, which avoids the problem of untimely video analysis due to the time delay of data transmission in the process of returning massive data to the background; at the same time, it reduces the bandwidth requirements.
- current substation inspection robots mainly focus on visible light and infrared temperature measurement; most fix the visible light camera and the infrared camera on both sides of a gimbal, and then fix the gimbal directly on the robot body with screws or bolts.
- due to factors such as appearance and IP protection, it is difficult to replace a gimbal fixed on the robot body, so the detection equipment cannot be quickly replaced.
- a quick-change joint is provided at the end of the multi-degree-of-freedom manipulator, enabling a single robot to complete various detection operations in the substation and solving the problem that the detection function of a traditional inspection robot is single and its equipment cannot be changed at will.
- a connection sleeve 7 is used to realize quick connection and replacement; threaded connectors are provided at the end 8 of the multi-degree-of-freedom manipulator arm and the end of the detection device 6.
- the end 8 of the manipulator and the detection device 6 are aligned and then rotated.
- the prefabricated connecting sleeve 7 at the end of the manipulator engages the threads shared by the end of the manipulator and the detection equipment, gradually fixing the manipulator and the terminal equipment together.
- the interchangeable equipment includes modules such as: visible light camera, infrared temperature measurement, partial discharge detection, manipulator gripper, and electric suction head.
- This embodiment provides a humanoid patrol operation system of a semantic intelligent substation robot, which includes at least one robot according to the second embodiment.
- the humanoid inspection operation system of the semantic intelligent substation robot in this embodiment includes: an embedded AI analysis module and, connected to it, a multi-degree-of-freedom robotic arm, an inspection camera, a binocular vision camera, a three-dimensional laser radar, an inertial navigation sensor, a robot industrial personal computer (IPC), and the robot arm; the binocular vision camera is located at the front end of the robot, and the inspection camera is mounted at the end of the robotic arm.
- the robot IPC is connected to the robot motion platform and can access and synchronously acquire data from multiple sensors such as vision, laser, GPS, and inertial navigation, realizing panoramic perception of the robot itself and the surrounding environment, as shown in Figure 3.
- the binocular vision camera is used to construct a semantic map
- the inspection data acquisition camera is used to collect fine images of the equipment to perform detection.
- the embedded AI analysis module is the key node of system data analysis and processing.
- as the operation node of ROS-Core, it is responsible for collecting information from each sensor of the robot, implementing the ROS interface for the robot chassis driver, analyzing and fusing three-dimensional laser/vision information, robot navigation control, and control of the robotic arm.
- the system adopts the standard ROS interface; the laser, vision, and driver all use standard ROS interfaces.
- the system design mainly includes 11 node function packages, whose functions are classified into a roaming semantic map building module, an inspection and navigation control module, an equipment image refinement acquisition module, and a device status identification module.
- the walkthrough semantic map building block is configured to:
- the roaming semantic map includes a three-dimensional map of a substation and semantic information of devices on the three-dimensional map, and the construction method includes:
- Binocular images and 3D point cloud data provide the distance from the robot body to peripheral equipment or obstacles (binocular images are used to identify short-range obstacles, and 3D point cloud data to identify long-range obstacles); combined with the inspection task and the robot's running direction, the spatial distribution of obstacles centered on the robot body is obtained.
- automatic identification of passable unknown areas around the robot is realized. If there is a passable unknown area, the local path planning method is used to plan the robot's motion in the unknown area, and a motion command is sent to the robot industrial computer to move the robot into the passable unknown area; go to step (4). If there is no passable unknown area, all unknown areas have been explored and map construction ends;
- step (4): build a 3D SLAM map according to the binocular image and the 3D point cloud data, and return to step (1).
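The robot-centered obstacle distribution used to find passable areas can be sketched as a polar histogram over 2-D obstacle points. The sector count and clearance threshold are illustrative assumptions; the source describes the distribution but not its discretization.

```python
import math

def obstacle_sectors(points, n_sectors=36):
    """Distance to the nearest obstacle per angular sector around the robot
    (robot at the origin); inf means the sector is obstacle-free."""
    sectors = [math.inf] * n_sectors
    for x, y in points:
        idx = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi) * n_sectors)
        sectors[idx] = min(sectors[idx], math.hypot(x, y))
    return sectors

def passable_headings(sectors, clearance=2.0):
    """Sector indices whose nearest obstacle is farther than the clearance,
    i.e. candidate directions toward passable unknown areas."""
    return [i for i, d in enumerate(sectors) if d > clearance]
```

A local planner would then pick one of the passable headings (e.g. the one closest to the inspection task direction) and send the corresponding motion command to the robot IPC.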
- the three-dimensional SLAM map is constructed according to the binocular image and the three-dimensional point cloud data, which specifically includes:
- semantic information of roads, equipment, and obstacles in the current environment is used to build the semantic map.
- by mapping the equipment identified by the binocular camera onto the 3D point cloud map and then combining the point cloud density distribution of the map, accurate clustering and semantic labeling of the 3D positions and point clouds of the devices to be inspected in the 3D navigation map is achieved, yielding the roaming semantic map.
- the roaming semantic map includes the three-dimensional space position of the equipment in the substation and its semantics.
- semantic information such as passable roads, towers, and meters identified in the 2D image can be assigned to the 3D point cloud, so the 3D point clouds can be clustered more accurately, making the constructed map closer to reality.
- after the robot has established the 3D navigation semantic map, it can use the 3D navigation map and the ROS navigation module to realize motion navigation in the substation.
- the robot combines a static map and a dynamic map: the static-map method uses the roaming semantic map to project the three-dimensional space coordinates of the equipment onto the walking route, taking the vertical fan-shaped area of the projected position of the equipment to be inspected as the task navigation point; in the dynamic-map method, the robot dynamically identifies the devices relevant to the task while moving, obtains their current three-dimensional coordinates, and updates the map information in real time.
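The static-map projection step can be sketched as projecting a device's ground position onto the walking-route polyline to obtain a task navigation point. The polyline geometry is an illustrative simplification; the source's "vertical fan-shaped area" around this point is not modeled here.

```python
import math

def project_to_route(device_xyz, route):
    """Nearest point on the route polyline to the device's (x, y) ground
    projection; used as the task navigation point for that device."""
    dx, dy = device_xyz[0], device_xyz[1]
    best, best_d = None, math.inf
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        vx, vy = x2 - x1, y2 - y1
        seg_len2 = vx * vx + vy * vy
        # clamp the projection parameter to stay on the segment
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((dx - x1) * vx + (dy - y1) * vy) / seg_len2))
        px, py = x1 + t * vx, y1 + t * vy
        d = math.hypot(dx - px, dy - py)
        if d < best_d:
            best, best_d = (px, py), d
    return best
```

For a device at (3, 4) beside a straight route from (0, 0) to (10, 0), the navigation point is (3, 0): the robot stops abreast of the device rather than at its 3-D position.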
- the inspection and navigation control module is configured as:
- Step 1 Receive an inspection task, where the inspection task includes a designated inspection area or designated inspection equipment;
- Step 2 Determine the detectable area information of the equipment to be inspected corresponding to the inspection task according to the semantic map;
- Step 3 Integrate the detectable area information of all the devices to be detected in the robot's current inspection task, combine the robot's current location, and plan the inspection route based on the inspection road information in the semantic map;
- the three-dimensional space projection coordinates of the equipment to be inspected are used as points on the robot's walking route, and the inspection route is planned based on the current location of the robot;
- the optimal inspection pose of the robot for each device to be inspected is determined, and when reaching each device to be inspected according to the inspection route, the detection is performed according to the optimal inspection pose;
- Step 4 Perform the inspection according to the inspection route, and if the optimal inspection pose is obtained, the inspection is performed according to the optimal inspection pose.
- the binocular vision and 3D laser sensor data are obtained in real time, and it is judged whether the layout of the equipment on the walking route is inconsistent with the roaming semantic map. If there is, the roaming semantic map is updated.
- the device image refinement acquisition module is configured as:
- Step 1 During the inspection process, image data is acquired in real time, and the equipment to be detected in the image is identified.
- the substation environment is complex, and the collected images may contain multiple types of equipment at the same time.
- a deep learning device recognition algorithm library is built here, including mainstream target recognition algorithms such as faster-rcnn, ssd, and yolo.
- the algorithm library is based on a fully convolutional deep neural network; combined with the equipment information contained in the inspection task, it extracts target detection features and semantic features, then classifies and detects the fused features to realize accurate identification of the equipment in the inspection images.
- Step 2: Calculate in advance the optimal relative pose relationship between the robotic arm and the device to be inspected according to the device's position in the semantic map; during the inspection, according to this relative position relationship together with the robot's current position, inspection route, and set inspection speed, control the robot arm to adjust its pose so that the inspection camera is always aimed at the device to be inspected, collecting its image from the best angle, performing inspection, and improving the accuracy of device inspection.
- the optimal relative positional relationship between the inspection camera at the end of the robot arm and the equipment to be inspected during data collection is calculated.
- the inspection route and the set inspection speed are used to calculate the robot arm pose control parameters at the next moment in the non-stop state, so that the inspection camera at the end of the robot arm and the device to be inspected maintain the best relative position relationship, that is, remain aligned with the device to be inspected.
- the optimal relative pose relationship between the robotic arm and the device to be inspected is:
- n_x, n_y, n_z are the normal vectors of the inspection surface of the device to be inspected (such as the dial surface of a meter), x, y, z are the spatial coordinates of the device to be inspected, and x_r, y_r, z_r and n_xr, n_yr, n_zr are the robot spatial pose vectors.
- when the robot's running pose maximizes the above formula, the best relative pose between the robot and the device to be detected is obtained.
- the spatial pose of the end of the robotic arm is:
- n_x, n_y, n_z are the normal vectors of the detection surface of the equipment to be inspected (such as the dial surface of a meter), and n_xa, n_ya, n_za are the spatial attitude vectors of the robotic arm; the manipulator is controlled so that the above formula achieves its maximum value, giving the best data collection attitude for the device to be detected.
- the distance information between the device to be detected and the end of the robot arm is used to automatically calculate the configuration focal length of the inspection camera to ensure that the information of the device to be detected is clearly visible in the image.
- the image data collected by the binocular vision camera is acquired in real time, the equipment to be detected in the image is identified based on the deep learning method, and the posture of the robotic arm is fine-tuned to ensure that the area of the equipment to be detected is always in the central area of the image.
- the embedded AI analysis module further includes a device state identification module configured to:
- after the robot completes the refined snapshot of the equipment to be tested, it uses deep learning algorithms at the front end, relying on the computing power provided by the embedded AI analysis module, to realize front-end real-time analysis of equipment status, discover operation defects of the equipment in time, and improve equipment operation safety.
- the robotic industrial computer is further configured to perform the following steps:
- a panoramic three-dimensional model of the substation is constructed to realize the immersive inspection of the substation based on the virtual reality technology.
- the robotic industrial computer is further configured to perform the following steps:
- the binocular stereo algorithm is used to obtain the three-dimensional space position coordinates of the equipment to be inspected;
- the equipment area is calibrated in a small number of images of the object to be identified collected during inspection; the device is then photographed from different angles and distances, and the background image is updated to obtain images of the device against different backgrounds, thereby generating a large number of calibrated images;
- the robotic industrial computer is further configured to perform the following steps:
- Control the robot arm to adjust the pose and always aim at the device to be inspected, so that the robot always maintains the best relative pose relationship with the device to be inspected during data collection;
- the deep learning algorithm is used to identify and obtain the position of the equipment in the image, and this is combined with the relative pose relationship between the robot and the equipment to be inspected to realize the spatial pose control of the acquisition device carried at the end of the robotic arm;
- the quality of the collected data is evaluated and optimized, so as to realize the optimal collection of the inspection data of the equipment to be inspected.
- a relationship model, built from historical data, of how the optimal image collection points for inspection vary over time is used to realize the autonomous optimal selection of inspection points in different seasons and different time periods.
- confidence evaluation is performed on inspection data at different locations and under different lighting conditions, and during the robot inspection process, the detection data with the highest confidence is selected as the inspection status data of the device to be inspected.
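The selection rule above reduces to an arg-max over confidence; a minimal sketch (the field names are hypothetical):

```python
def select_best_reading(samples):
    # samples: dicts with a recognized 'value' and a 'confidence' score,
    # each collected at a different position or lighting condition.
    return max(samples, key=lambda s: s["confidence"])
```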
- This embodiment provides a humanoid patrol operation system for a semantic intelligent substation robot, including:
- the robot is deployed in various areas in the substation;
- Each robot includes a robot body, the robot body is provided with a robotic arm, and the end of the robotic arm is equipped with an inspection/work tool;
- a computer program is stored in the control center, and when the program is executed by the processor, the steps in the humanoid patrol operation method of the semantic intelligent substation robot described in the first embodiment are implemented.
- This embodiment provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the steps in the humanoid patrol operation method for a semantic intelligent substation robot described in the first embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
Abstract
A humanoid inspection operation method and system for a semantic intelligent substation robot. The humanoid inspection operation method for a semantic intelligent substation robot comprises: autonomously constructing a three-dimensional semantic map of an unknown substation environment (S101); autonomously planning a walking path of a robot on the basis of the three-dimensional semantic map and in combination with an inspection/operation task and the current position of the robot (S102); controlling the robot to move according to the planned walking path, and carrying out the inspection/operation task during an advancing process (S103); and during the process of carrying out the inspection/operation task, adjusting, in real time, the posture of a mechanical arm (2) that carries an inspection/operation tool (3), so as to automatically collect and recognize, at the optimal angle, an image of a device to be inspected or automatically execute the operation task at the optimal angle, and to complete a fully autonomous inspection/operation task of the substation environment (S104).
Description
The invention belongs to the field of robotics, and in particular relates to a humanoid patrol operation method and system for a semantic intelligent substation robot.
The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.
Existing inspection robots generally use a "docking point - preset position" operation mode, whose deployment and implementation process is divided into two stages: configuration and operation. In the configuration stage, for a new substation whose environmental information is unknown, a large amount of manual work is required. The detection points of the inspection robot are usually set manually by on-site personnel according to the inspection task: the personnel first remotely drive the robot along the inspection route and stop it when it reaches the vicinity of the power equipment to be inspected; they then remotely adjust the attitude of the robot's pan-tilt unit so that it aims the visible light camera, infrared thermal imager and other non-contact detection sensors in turn at each device to be inspected around the robot, and record the corresponding pan-tilt preset position, thereby completing the setting of one detection point. This process is repeated to set the detection points for all devices to be inspected in the inspection task. In the operation stage, after all detection points have been set, the inspection robot runs along the inspection route, stops at the detection points one by one, and calls the pan-tilt preset positions to complete the equipment inspection operation.
The inventors found that existing substation robots have the following problems in the process of equipment inspection operations:
(1) During on-site deployment of the robot, the process of setting inspection detection points is cumbersome and requires a large amount of manual work; on-site configuration is labor-intensive and inefficient. The setting of detection points is strongly affected by the subjective judgment of the on-site configuration personnel, and the setting standards are inconsistent, so the quality of the detection points cannot be guaranteed. In addition, the robot adopts a docking operation mode in which it must stop at every detection point, so the inspection efficiency is low, and frequent starts and stops pose hidden dangers to the stable operation of the robot.
(2) During robot inspection, traditional robots use a single laser navigation method, which suffers from navigation failure when the laser point cloud is sparse, so the navigation accuracy cannot be guaranteed.
(3) In terms of inspection data analysis, existing substation inspection robots have weak front-end video and image processing capabilities. At present, most image and video data are sent over the network to a back-end server for analysis and processing, which is limited by the transmission network bandwidth. The resulting analysis delay cannot meet the needs of application scenarios with high real-time requirements such as robot navigation, visual servoing, and timely defect detection.
(4) Staff in the control room cannot truly understand the substation environment and equipment conditions.
SUMMARY OF THE INVENTION
In order to solve the above problems, the present invention provides a humanoid patrol operation method and system for a semantic intelligent substation robot, which breaks the "docking point - preset position" operation mode of traditional substation inspection robots and realizes fully autonomous robot inspection.
In order to achieve the above object, the present invention adopts the following technical solutions:
A first aspect of the present invention provides a humanoid patrol operation method for a semantic intelligent substation robot.
A humanoid patrol operation method for a semantic intelligent substation robot, comprising:
autonomously constructing a three-dimensional semantic map of an unknown substation environment;
based on the three-dimensional semantic map, combined with the inspection/operation task and the robot's current position, the robot autonomously plans its walking path;
controlling the robot to move according to the planned walking path, and carrying out inspection/operation tasks while travelling;
in the process of carrying out inspection/operation tasks, adjusting in real time the pose of the robotic arm carrying the inspection/operation tool, so as to automatically collect and recognize images of the device to be inspected at the optimal angle, or automatically execute the operation task at the optimal angle, thereby completing the fully autonomous inspection/operation task of the substation environment.
Further, based on prior knowledge of the substation, the location information of the equipment in the station is automatically obtained, so that the robot can autonomously construct the three-dimensional semantic map of the substation without any configuration information being injected.
Further, the specific process of constructing the three-dimensional semantic map of the unknown substation environment is as follows:
acquiring binocular image data, inspection image data and three-dimensional point cloud data of the current environment in real time;
obtaining the spatial distribution of objects in the current environment based on the binocular image data and three-dimensional point cloud data, and analyzing the inspection image data in real time to recognize device identification codes in the images and locate the device target regions, so that device identity and position are obtained simultaneously in the spatial information;
according to the spatial distribution of objects in the current environment, automatically recognizing passable unknown regions around the robot, using a local path planning method to realize the robot's motion planning in the unknown regions, and performing map construction of the unknown environment until the semantic map of the entire station environment is completed.
Further, the process of performing map construction of the unknown environment includes:
obtaining the spatial distribution of objects in the current environment based on the binocular image data and three-dimensional laser data;
obtaining semantic information of roads, equipment and obstacle objects in the current environment based on the binocular image data and inspection image data, and using a spatial coordinate transformation to project the spatial information of the roads, equipment and obstacles onto the three-dimensional point cloud data to build the semantic map.
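The projection step can be sketched as follows (a simplified pinhole model with assumed intrinsics K and camera pose R, t; not the patent's algorithm): each 3-D point is projected into the image and inherits the semantic label of the detection box it falls in.

```python
import numpy as np

def label_points(points, K, R, t, detections):
    # points: (N, 3) array in the world/LiDAR frame.
    # detections: list of (x0, y0, x1, y1, label) boxes in pixel coordinates.
    cam = (R @ points.T + t.reshape(3, 1)).T        # world -> camera frame
    labels = [None] * len(points)
    for i, (X, Y, Z) in enumerate(cam):
        if Z <= 0:                                   # behind the camera
            continue
        u, v, _ = K @ np.array([X, Y, Z]) / Z        # pinhole projection
        for x0, y0, x1, y1, name in detections:
            if x0 <= u <= x1 and y0 <= v <= y1:
                labels[i] = name
                break
    return labels
```

The labelled points can then be clustered per label to obtain the 3-D extent of each device in the semantic map.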
Further, according to the positional relationship between the robot and the device to be inspected, the robotic arm is driven so that its end faces the position of the device and moves into the local range of the target device;
the inspection camera image data is acquired in real time, the position of the device to be inspected is automatically recognized, tracked and located, and the position of the robotic arm is precisely adjusted so that the image acquisition device at the end of the arm has the best shooting angle; the image acquisition device is driven to adjust its focal length to compensate for the influence of the robot's motion on the image, and the image of the target inspection device is captured accurately;
based on the acquired fine image of the device, target recognition is performed automatically at the robot front end, realizing automatic on-board analysis of the image data and obtaining the status information of the device in real time.
Further, while the end of the robotic arm faces the device and moves into the local range of the target device, the arm is controlled to adjust its pose so that it is always aimed at the device to be inspected, and the robot always maintains the best relative pose relationship with the device during data collection;
when the robot reaches the best observation pose for the device and the device enters the range of the inspection data acquisition device, a deep learning algorithm is used to identify and obtain the position of the device in the image, and this is combined with the relative pose relationship between the robot and the device to realize the spatial pose control of the acquisition device carried at the end of the robotic arm;
the quality of the collected data is evaluated and optimized, so as to realize the optimal collection of inspection data for the device to be inspected.
Further, in the process of evaluating and optimizing the quality of the collected data, a relationship model, built from historical data, of how the optimal image collection points vary over time is used to realize the autonomous optimal selection of inspection points in different seasons and different time periods.
Further, in the process of evaluating and optimizing the quality of the collected data, confidence evaluation is performed on inspection data collected at different positions and under different lighting conditions, and during the robot inspection process the detection data with the highest confidence is selected as the inspection status data of the device to be inspected.
Further, a panoramic three-dimensional model of the substation is constructed based on the digital twin method, and immersive substation inspection based on virtual reality technology is realized through real-time reproduction of image, sound and tactile information.
A second aspect of the present invention provides a robot.
A robot that performs inspection using the humanoid patrol operation method for a semantic intelligent substation robot described above.
A third aspect of the present invention provides a humanoid patrol operation system for a semantic intelligent substation robot.
One humanoid patrol operation system for a semantic intelligent substation robot provided by the present invention includes at least one robot as described above.
Another humanoid patrol operation system for a semantic intelligent substation robot provided by the present invention includes:
a control center;
at least one robot, the robots being deployed in various areas of the substation;
each robot including a robot body provided with a robotic arm, the end of the robotic arm carrying an inspection/operation tool;
the control center storing a computer program which, when executed by a processor, implements the steps of the humanoid patrol operation method for a semantic intelligent substation robot described above.
A fourth aspect of the present invention provides a computer-readable storage medium.
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the humanoid patrol operation method for a semantic intelligent substation robot described above.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention creates a humanoid patrol operation method for semantic intelligent substation robots and proposes a method for autonomously constructing a three-dimensional semantic map of a substation, realizing active autonomous perception of road and equipment information in an unknown substation environment. It completely removes the traditional robot's dependence on manually set docking points and preset positions, breaks the "docking point - preset position" inspection mode of traditional substation inspection robots, and solves the main problems of insufficient intelligence, heavy dependence on manual configuration, and low efficiency of docking operations in traditional robot inspection.
(2) The present invention proposes an automatic construction method for a robot semantic map, realizing automatic construction of the semantic map; it adopts a fused visual-laser navigation method to realize three-dimensional autonomous navigation of the robot, solving the problems of failure of the traditional single navigation method and insufficient intelligent perception capability.
(3) The present invention proposes a front-end AI recognition method for substation inspection video, which uses deep learning model quantization and pruning to reduce the computational complexity of the algorithm and improve the real-time performance of the system, and develops a low-power, high-performance hardware system for real-time analysis of substation inspection video, reducing network transmission pressure and improving the real-time performance of data processing.
(4) The present invention proposes an immersive robot operation mode that combines multimodal information such as images, video and sound; through deep fusion of multi-source multimodal information, the panoramic information of the robot's operating environment is reconstructed, so that staff in the control room can truly understand the substation environment and equipment conditions, realizing immersive inspection operation of the substation robot.
The accompanying drawings forming a part of the present invention are used to provide further understanding of the present invention; the exemplary embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it.
Fig. 1 is a flowchart of the humanoid patrol operation method of the semantic intelligent substation robot according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the optimal data collection process for substation inspection equipment according to an embodiment of the present invention;
Fig. 3 is an architecture diagram of the humanoid patrol operation system of the substation robot according to an embodiment of the present invention;
Fig. 4 is a flowchart of the autonomous construction of the inspection robot's positioning and navigation map according to an embodiment of the present invention;
Fig. 5 is a flowchart of the semantic analysis of the three-dimensional electronic map according to an embodiment of the present invention;
Fig. 6 is a framework diagram of real-time recognition of substation inspection video according to an embodiment of the present invention;
Fig. 7 is a flowchart of real-time recognition of substation inspection video according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the robot according to an embodiment of the present invention;
Fig. 9(a) is a schematic structural diagram of the master arm according to an embodiment of the present invention;
Fig. 9(b) is a schematic structural diagram of the slave arm according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the quick-change structure according to an embodiment of the present invention.
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and intended to provide further explanation of the invention. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It should be noted that the terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments of the present invention. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
In the present invention, terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side" and "bottom" indicate orientations or positional relationships based on those shown in the accompanying drawings; they are relational terms determined only for the convenience of describing the structural relationships of the components or elements of the present invention, do not specifically refer to any component or element of the present invention, and should not be construed as limiting the present invention.
In the present invention, terms such as "fixedly connected", "connected" and "coupled" should be understood in a broad sense, indicating that the connection may be fixed, integral or detachable, and may be direct or indirect through an intermediate medium. For relevant researchers or technicians in the field, the specific meanings of the above terms in the present invention can be determined according to the specific situation, and should not be construed as limiting the present invention.
Embodiment 1
Referring to Fig. 1, the humanoid patrol operation method of the semantic intelligent substation robot of this embodiment includes:
S101: autonomously constructing a three-dimensional semantic map of the unknown substation environment;
S102: based on the three-dimensional semantic map, combined with the inspection/operation task and the robot's current position, autonomously planning the robot's walking path;
S103: controlling the robot to move according to the planned walking path, and carrying out inspection/operation tasks while travelling;
S104: in the process of carrying out inspection/operation tasks, adjusting in real time the pose of the robotic arm carrying the inspection/operation tool, so as to automatically collect and recognize images of the device to be inspected at the optimal angle, or automatically execute the operation task at the optimal angle, thereby completing the fully autonomous inspection/operation task of the substation environment.
In the specific implementation of steps S101 and S102, based on prior knowledge of the substation, the location information of the equipment in the station is automatically obtained, so that the robot autonomously constructs the three-dimensional semantic map of the substation without any configuration information being injected.
In a specific implementation, the specific process of constructing the three-dimensional semantic map of the unknown substation environment is as follows:
acquiring binocular image data, inspection image data and three-dimensional point cloud data of the current environment in real time;
obtaining the spatial distribution of objects in the current environment based on the binocular image data and three-dimensional point cloud data, and analyzing the inspection image data in real time to recognize device identification codes in the images and locate the device target regions, so that device identity and position are obtained simultaneously in the spatial information;
according to the spatial distribution of objects in the current environment, automatically recognizing passable unknown regions around the robot, using a local path planning method to realize the robot's motion planning in the unknown regions, and performing map construction of the unknown environment until the semantic map of the entire station environment is completed.
Here, the process of performing map construction of the unknown environment includes:
obtaining the spatial distribution of objects in the current environment based on the binocular image data and three-dimensional laser data;
obtaining semantic information of roads, equipment and obstacle objects in the current environment based on the binocular image data and inspection image data, and using a spatial coordinate transformation to project the spatial information of the roads, equipment and obstacles onto the three-dimensional point cloud data to build the semantic map.
The three-dimensional semantic map is a pre-stored semantic map, and the method for formulating the inspection/operation path includes:
receiving an inspection/operation task, the inspection/operation task including a designated inspection/operation area or designated inspection/operation equipment;
determining the equipment to be inspected/operated on according to the inspection/operation task;
taking the three-dimensional projected coordinates of all equipment to be inspected/operated on in the semantic map as points on the robot's walking route, and planning the inspection/operation route in combination with the robot's current position.
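A minimal, hypothetical sketch of turning the projected device coordinates into a walking route (greedy nearest-neighbour ordering stands in for the real route planner):

```python
import math

def plan_route(start, device_points):
    # Order the devices' map coordinates by repeatedly visiting the
    # nearest unvisited one, starting from the robot's current position.
    route, pos, remaining = [], start, list(device_points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route
```

Greedy ordering is not optimal in general; a production planner would also account for the road network encoded in the semantic map.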
所述语义地图包括变电站三维地图,以及三维地图上设备的语义信息,其构建方法,参照图4,包括:The semantic map includes a three-dimensional map of a substation and semantic information of equipment on the three-dimensional map. The construction method, referring to FIG. 4 , includes:
获取变电站的图纸、电气设计图等先验知识数据,利用知识图谱、知识理解技术,基于所述先验知识数据,形成粗精度的语义地图,并自动构建机器人构建语义地图的任务路径;根据所述任务路径控制机器人运动,运动过程中,通过执行以下步骤实现漫游式语义地图的构建,如图5所示:Obtain prior knowledge data such as substation drawings, electrical design drawings, etc., use knowledge graph and knowledge understanding technology, and form a coarse-precision semantic map based on the prior knowledge data, and automatically construct a task path for the robot to construct the semantic map; The robot motion is controlled by the task path described above. During the motion process, the construction of a roaming semantic map is realized by executing the following steps, as shown in Figure 5:
(1)自双目视觉相机、巡检相机和三维激光传感器获取当前环境的双目图像、巡检图像和三维点云数据;(1) Obtain binocular images, inspection images and 3D point cloud data of the current environment from binocular vision cameras, inspection cameras and 3D laser sensors;
(2)根据巡检图像对当前环境中的道路、设备以及障碍物等对象进行识别;嵌入式AI分析模块中预存用于识别道路、设备和各类障碍物的深度学习模型,基于这些模型进行目标检测;即得到了当前环境中的道路、设备以及障碍物语义信息;根据双目图像及三维点云数据获取当前环境中的道路、设备,以及障碍物的空间位置分布;具体地,双目图像和三维点云数据可以获取机器人周边设备或障碍物距机器人本体的距离信息(双目图像用于识别近距离障碍,三维点云数据用于识别远距离障碍),再结合巡检任务中机器人运行方向信息即可得到障碍物以机器人本体为中心的空间分布。(2) Identify objects such as roads, equipment, and obstacles in the current environment according to the inspection images; the embedded AI analysis module pre-stores deep learning models for identifying roads, equipment, and various obstacles. Target detection; that is, the semantic information of roads, equipment and obstacles in the current environment is obtained; according to the binocular image and 3D point cloud data, the spatial position distribution of roads, equipment, and obstacles in the current environment is obtained; The image and 3D point cloud data can obtain the distance information of the robot's peripheral equipment or obstacles from the robot body (the binocular image is used to identify close-range obstacles, and the 3D point cloud data is used to identify long-distance obstacles), and then combined with the robot in the inspection task. From the running direction information, the spatial distribution of obstacles centered on the robot body can be obtained.
(3) From the spatial distribution of objects in the current environment, automatically identify passable unknown areas around the robot. If a passable unknown area exists, use a local path planning method to plan the robot's motion into the unknown area and send motion commands to the robot's industrial computer so that the robot moves into the passable unknown area, then go to step (4). If no passable unknown area exists, all unknown areas have been explored and map construction is complete;
(4) Build a 3D SLAM map from the binocular images and the 3D point cloud data, then return to step (1).
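The exploration loop in steps (1)-(4) can be sketched as follows; sensor acquisition, local path planning and the SLAM update are stubbed out, and the grid representation is an illustrative assumption, not the patent's data structure.

```python
def find_passable_unknown_region(occupancy):
    """Return an unexplored cell adjacent to free space, or None.

    `occupancy` maps (x, y) -> 'free' | 'obstacle' | 'unknown'.
    """
    for (x, y), state in occupancy.items():
        if state != 'free':
            continue
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if occupancy.get(n) == 'unknown':
                return n
    return None

def build_roaming_map(occupancy, max_steps=100):
    """Loop of steps (1)-(4): explore frontiers until none remain."""
    for _ in range(max_steps):
        target = find_passable_unknown_region(occupancy)  # step (3)
        if target is None:
            break                          # exploration finished
        # stand-in for moving to the target and running the SLAM update (step 4)
        occupancy[target] = 'free'
    return occupancy
```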
Building the 3D SLAM map from the binocular images and the 3D point cloud data in step (4) specifically includes:

Step (4.1): read the binocular images from the binocular camera, the inspection images from the inspection camera, and the 3D laser sensor data;

Step (4.2): obtain the spatial position distribution of roads, equipment and obstacles from the binocular image data and the 3D laser data, and build a 3D point cloud map from the 3D laser sensor data;

Step (4.3): obtain the semantic information of roads, equipment and obstacles in the current environment from the binocular image data and the inspection image data;

Step (4.4): using spatial coordinate transformation, project the spatial positions of the devices, obtained from the binocular images, onto the 3D point cloud map, realizing the mapping from 2D to the 3D point cloud map; combined with the semantic information of roads, equipment and obstacles from step (2), build the semantic map. By projecting the devices recognized by the binocular camera onto the 3D point cloud map and combining this with the point cloud density distribution, the 3D positions of the devices to be inspected in the 3D navigation map and accurate clustering and semantization of the point cloud are achieved, yielding the roaming semantic map. The roaming semantic map contains the 3D spatial positions of the substation's equipment and their semantics.
Through the 2D-to-3D point cloud mapping, semantic information identified from the 2D images, such as passable roads, towers and meters, can be assigned to the 3D point cloud; combined with 2D-image-based positioning, the 3D point cloud can be clustered more accurately, making the constructed map closer to reality.
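As a concrete illustration of the 2D-to-3D mapping, the sketch below projects camera-frame 3D points through an assumed pinhole model and labels those falling inside a 2D detection box; the intrinsic matrix K, the point cloud layout and the label are hypothetical inputs, not details given in the text.

```python
def label_points_from_bbox(points, K, bbox, label):
    """Assign `label` to camera-frame 3D points whose pinhole projection
    falls inside the 2D detection box (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = bbox
    labels = [None] * len(points)
    for i, (x, y, z) in enumerate(points):
        if z <= 0:                       # behind the camera plane
            continue
        u = K[0][0] * x / z + K[0][2]    # fx * x/z + cx
        v = K[1][1] * y / z + K[1][2]    # fy * y/z + cy
        if u_min <= u <= u_max and v_min <= v <= v_max:
            labels[i] = label
    return labels
```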
After the 3D navigation semantic map is built, the robot can use it together with the ROS navigation module to navigate within the substation. For non-stop detection of the equipment specified by an inspection task, the robot combines a static map and a dynamic map. In the static map approach, the roaming semantic map is used to project the 3D coordinates of each device onto the travel route, and the vertical fan-shaped region at the position of the device to be inspected serves as a task navigation point. In the dynamic map approach, when the robot dynamically recognizes a device of interest during motion, it obtains the device's current 3D coordinates, realizing dynamic identification of the device, and updates the map information in real time.
This embodiment proposes an autonomous construction method for the robot's inspection, positioning and navigation map that realizes roaming construction of a 3D visual semantic map, and a task-oriented inspection navigation control method fusing binocular vision with 3D laser data. It achieves laser-vision fusion navigation planning for the robot and solves the navigation failures caused by the sparse laser point clouds of traditional robots.
In the specific implementation of step S104, the robotic arm is driven according to the positional relationship between the robot and the device to be inspected, so that the end of the arm faces the device's position and moves into the local range of the target device;

inspection camera image data are acquired in real time to automatically identify, track and locate the device to be inspected; the arm position is precisely adjusted so that the image acquisition device at the end of the arm has the best shooting angle, and the focal length of the image acquisition device is adjusted to compensate for the effect of robot motion on the image, so that an accurate image of the target inspection device is captured;

based on the acquired fine image of the device, target recognition is performed automatically at the robot front end, realizing on-board analysis of the image data and real-time acquisition of the device's status information.
Referring to Fig. 2, the arm's pose is controlled so that it stays aligned with the device to be inspected, keeping the robot in the optimal relative pose with the device during data collection;

when the robot reaches the optimal observation pose and the device enters the range of the inspection data acquisition device, a deep learning algorithm identifies the device and obtains its position in the image, which, combined with the relative pose between robot and device, is used to control the spatial pose of the acquisition device carried at the end of the arm;

the quality of the collected data is evaluated and optimized to achieve optimal collection of the inspection data of the device under inspection.
During data quality evaluation and optimization, a model of how the optimal image collection points vary over time, built from historical data, is used to autonomously select the optimal inspection points across different seasons and periods of the day.

Also during quality evaluation and optimization, a confidence score is computed for inspection data collected at different positions and under different lighting conditions; during the inspection, the data with the highest confidence are selected as the inspection status data of the device to be inspected, improving the effectiveness of the inspection data.
R = 0.5*R_position + 0.5*R_l

R_position = cos(C_dx)

R_l = 1 - (L - L_x)/L_x, when L > L_x

R_l = 1, when L ≤ L_x

where R is the confidence of the robot's current inspection data; R_position is the position confidence; C_dx is the angle between the current arm-end pointing direction and the surface normal vector of the device to be inspected, and cos is the cosine function; R_l is the illumination confidence. A light intensity sensor is mounted coaxially with the inspection camera at the end of the arm to compute the current lighting direction and intensity; L is the current illuminance, and L_x is the standard illuminance, taken as the illuminance under normal lighting conditions, typically 100000 lux.
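The confidence score above maps directly to code; the 0.5/0.5 weights and the 100000 lux standard illuminance follow the text, while the assumption that the angle is available in radians is ours.

```python
import math

L_X = 100000.0  # standard illuminance under normal lighting, in lux

def inspection_confidence(angle_rad, illuminance):
    """R = 0.5*R_position + 0.5*R_l from the formula above."""
    r_position = math.cos(angle_rad)              # camera axis vs. surface normal
    if illuminance > L_X:
        r_l = 1.0 - (illuminance - L_X) / L_X     # penalty for over-illumination
    else:
        r_l = 1.0
    return 0.5 * r_position + 0.5 * r_l
```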
In the specific implementation, based on the 3D semantic map, the real-time positions of the robot and of the devices to be inspected in the task are obtained; the robot is moved to the work point along the inspection or operation path, and the end of the arm is driven to face the position of the device to be inspected.

From the robot's current position, the inspection or operation path and the set inspection speed, the relative motion between the robot and the device is computed, and the arm's pose is adjusted so that it stays aligned with the device, allowing the sensor module mounted at the end of the arm to collect the device's inspection data.

From the 3D semantic map, the optimal inspection pose of the robot for each device to be inspected is determined; when the robot reaches each device along the inspection route, detection is performed at that optimal pose. Detection at the optimal pose includes: determining the robot's current actual pose from the 3D semantic map together with binocular vision and 3D laser sensor data; computing the relative pose deviation between the actual pose and the optimal pose; and controlling the robot to adjust its pose according to that deviation before executing detection.
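The relative pose deviation step can be sketched for the planar case; restricting poses to (x, y, heading) is a simplification of the full 3D pose handling described here.

```python
import math

def pose_deviation(actual, optimal):
    """Planar pose deviation (dx, dy, dtheta) expressed in the robot's
    current frame. Poses are (x, y, theta) tuples."""
    ax, ay, ath = actual
    ox, oy, oth = optimal
    dx_w, dy_w = ox - ax, oy - ay
    # rotate the world-frame offset into the robot's current frame
    c, s = math.cos(-ath), math.sin(-ath)
    dx = c * dx_w - s * dy_w
    dy = s * dx_w + c * dy_w
    dth = (oth - ath + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return dx, dy, dth
```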
During the inspection, binocular vision and 3D laser sensor data are acquired in real time to check whether the layout of the equipment along the travel route is inconsistent with the 3D semantic map; if so, the 3D semantic map is updated.
Specifically, during the inspection, device images are also collected in a refined manner, as follows:

1): During the inspection, image data are acquired in real time and the devices to be detected in the images are identified.

The substation environment is complex, and a captured image may contain several types of equipment at once. A deep learning device recognition algorithm library is therefore built, containing mainstream target recognition algorithms such as Faster R-CNN, SSD and YOLO. On the basis of a fully convolutional deep neural network, and combined with the device information contained in the inspection task, the library extracts target detection features and semantic features, then classifies and detects the fused features, achieving accurate recognition of the devices in the inspection images.

2): The optimal relative pose between the robotic arm and each device to be inspected is computed in advance from the device's position in the semantic map. During the inspection, from this relative relationship together with the robot's current position, inspection route and set inspection speed, the arm's pose is adjusted so that the inspection camera stays aimed at the device, capturing the device's image from the best angle, executing detection, and improving detection accuracy.
This embodiment designs a target detection algorithm that incorporates the spatial-position relationships of power equipment (not limited to Faster R-CNN, SSD, YOLO, etc.), constructs an automatic scheduling method for high-performance computing resources, and proposes a device target detection and tracking method, achieving efficient real-time recognition of inspection video and improving the recognition accuracy of substation equipment.

Specifically, from the substation's 3D semantic electronic map and the computed robot pose, the optimal relative position between the inspection camera at the end of the arm and the device to be inspected is computed for data collection; from the robot's current position, inspection route and set inspection speed, the arm pose control parameters for the next moment in the non-stop state are computed, so that the inspection camera at the end of the arm maintains the optimal relative position with, i.e. stays aligned with, the device to be inspected.
The optimal relative pose between the arm and the device to be inspected is:

max[ |n_x(x - x_r) + n_y(y - y_r) + n_z(z - z_r)| + |n_x*n_xr + n_y*n_yr + n_z*n_zr| ]

where n_x, n_y, n_z is the normal vector of the device's inspection surface (e.g. the dial face bearing the reading); x, y, z are the spatial coordinates of the device; and x_r, y_r, z_r and n_xr, n_yr, n_zr are the robot's spatial position and pose vectors. The robot pose that maximizes this expression is the optimal relative pose between the robot and the device to be inspected.
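Evaluating the objective above over a set of candidate robot poses can be sketched as follows; the candidate list is a hypothetical discretization of reachable poses, not something specified in the text.

```python
def pose_score(n, p, p_r, n_r):
    """|n.(p - p_r)| + |n.n_r| for surface normal n, device position p,
    robot position p_r and robot orientation vector n_r."""
    dot_pos = sum(ni * (pi - pri) for ni, pi, pri in zip(n, p, p_r))
    dot_ori = sum(ni * nri for ni, nri in zip(n, n_r))
    return abs(dot_pos) + abs(dot_ori)

def best_pose(n, p, candidates):
    """Pick the (p_r, n_r) candidate that maximizes the score."""
    return max(candidates, key=lambda c: pose_score(n, p, c[0], c[1]))
```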
The spatial pose of the end of the arm is:

max[ |n_x*n_xa + n_y*n_ya + n_z*n_za| ]

where n_x, n_y, n_z is the normal vector of the device's inspection surface (e.g. the dial face bearing the reading) and n_xa, n_ya, n_za is the spatial attitude vector of the arm. To obtain the optimal data collection attitude of the arm relative to the device, the arm is controlled so that this expression is maximized.
During arm attitude adjustment, the distance between the device to be detected and the end of the arm is used to automatically compute the focal length setting of the inspection camera, ensuring that the device's information is clearly visible in the image.
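The focal length computation from distance can be illustrated with a simple pinhole estimate; the pixel pitch and target pixel span below are illustrative parameters, not values from the text.

```python
def required_focal_length(distance_m, object_size_m, target_px, pixel_pitch_um=3.0):
    """Focal length (mm) needed for an object of `object_size_m` metres at
    `distance_m` metres to span `target_px` pixels on a sensor with the
    given pixel pitch: a pinhole-model estimate, f = image_size * d / size."""
    sensor_span_mm = target_px * pixel_pitch_um / 1000.0
    return sensor_span_mm * distance_m / object_size_m
```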
At the same time, image data from the binocular vision camera are acquired in real time, the device to be detected in the image is recognized with a deep learning method, and the arm attitude is fine-tuned so that the device's region stays in the central area of the image.

In the specific implementation, a deep learning algorithm performs device recognition on every frame of the inspection video; when the target device is recognized, a binocular stereo algorithm obtains its 3D spatial position coordinates. A local self-adjustment method for the inspection camera's attitude is proposed, and the DeblurGAN motion video deblurring algorithm is adopted.
A motion compensation algorithm for robot-captured images is proposed; robot motion compensation improves the stability of inspection image capture during motion and ensures the validity of the inspection images. Since the robot must keep the device to be detected in the central area of the image while traveling, in order to capture it accurately the robot's motion must be compensated. This embodiment's motion compensation formula is:

Control_x = Kpx*delta_x + Vx*Kbx*D

Control_y = Kpy*delta_y + Vy*Kby*D

where Control_x and Control_y are the control adjustments of the robot's end-effector attitude in the X and Y directions; delta_x and delta_y are the coordinate deviations in X and Y between the center of the device region and the center of the image captured at a given moment; Kpx and Kpy are the proportional coefficients of the control adjustments in X and Y; Vx and Vy are the velocities of the end-effector in X and Y; Kbx and Kby are the compensation coefficients of the control amounts in X and Y; and D is the distance between the end-effector and the device to be detected. The non-stop inspection robot of this embodiment can be applied to substation inspection robots, both for inspection and for operations.
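The compensation formula above maps directly to code; the gain values kp and kb below are illustrative placeholders, since the patent does not give numeric coefficients.

```python
def motion_compensation(delta, v, d, kp=(0.8, 0.8), kb=(0.01, 0.01)):
    """Return (Control_x, Control_y) for pixel offsets `delta` = (dx, dy),
    end-effector velocities `v` = (vx, vy) and target distance `d`,
    per Control = Kp*delta + V*Kb*D."""
    control_x = kp[0] * delta[0] + v[0] * kb[0] * d
    control_y = kp[1] * delta[1] + v[1] * kb[1] * d
    return control_x, control_y
```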
Once the arm attitude and the inspection camera's focal length are adjusted into place, the refined capture of the device image is completed.

In the specific implementation, during device recognition, the device regions in the small number of images of the physical object collected during inspection are annotated; the annotated images are background-removed, and the background-free images of the device to be inspected are transformed to simulate shooting the device from different angles and distances; the background image is then replaced to obtain images of the device against different backgrounds, generating a large number of annotated images.

In this way, the sample image data and sample annotation files are expanded, enriching the sample images and facilitating subsequent image training with artificial intelligence deep learning algorithms, so that device status in images is recognized more accurately.
To better illustrate the above technical solution, the generation of a large sample set from few samples is described below, taking a meter in a substation as an example:

First, several images each of the front, side and back of the meter are collected on site by the inspection robot.

The small number of collected images of the object to be recognized is preprocessed to enhance image quality, including deblurring and de-shaking.

The collected images are annotated to mark the device regions in them; only a small number of annotations is needed at this step.

The annotated images are background-removed to obtain object images with transparent backgrounds, so that backgrounds can later be swapped to produce object images in different settings.

The background-free object images are then transformed, specifically by random scaling, rotation and affine transformation, simulating shooting the device from different angles and distances.

Since some views of the object are inconvenient or impossible to capture, images at the corresponding angles cannot be obtained directly; the transformations above yield a relatively comprehensive set of object images that better convey the object's structural state.
After transforming the background-free object images, the process further includes:

importing the images into the Blender software and rendering them under different lighting, simulating different lighting conditions and obtaining image data under those conditions.

Since the few collected images were taken at one time under one kind of lighting, they cannot cover images under varied lighting; the above operation obtains image data under different lighting conditions so that the variety of images meets the training requirements.

The texture background or background environment is then updated to obtain images of the object against different textures and environments, generating a large number of annotated images, expanding the sample image data and annotation files, and enriching the samples.
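The background-replacement step can be sketched as follows: a small transparent-background "object" image is pasted at a random position onto each background, producing one synthetic sample whose annotation box comes for free. The nested-list image layout is an assumption for illustration only.

```python
import random

def composite(background, obj, seed=0):
    """Paste `obj` (pixel values with None = transparent) onto a copy of
    `background` at a random offset; return the image and its bounding box
    (left, top, width, height), which serves as the auto-generated annotation."""
    rng = random.Random(seed)
    bh, bw = len(background), len(background[0])
    oh, ow = len(obj), len(obj[0])
    top = rng.randrange(bh - oh + 1)
    left = rng.randrange(bw - ow + 1)
    out = [row[:] for row in background]
    for r in range(oh):
        for c in range(ow):
            if obj[r][c] is not None:        # skip transparent pixels
                out[top + r][left + c] = obj[r][c]
    return out, (left, top, ow, oh)
```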
After the large annotated image set is generated, the sample classes in it are imbalanced, with large and small class sizes; therefore SMOTE (Synthetic Minority Over-sampling Technique) is used to solve the imbalance problem, further improving classifier performance.

In a specific embodiment, the multi-sample data augmentation proceeds as follows:

define a feature space, map each sample to a point in it, and determine the sampling ratio from the class imbalance;

for each minority-class sample, find its nearest neighbors by Euclidean distance, randomly select one of them, and randomly pick a point on the line segment between the sample and the selected neighbor in the feature space as a new sample, repeating until the majority and minority classes are balanced.

Class imbalance is common: the class counts in a dataset are not approximately equal. Large differences between classes degrade the classifier. If the minority class is tiny, say only 1% of the data, then even if every minority sample is misclassified as majority, a classifier trained under the empirical risk minimization strategy still reaches 99% accuracy; but having learned nothing of the minority class's features, its practical classification performance is poor.
SMOTE is an interpolation-based method that can synthesize new samples for the minority class. Its main steps are:

First, define the feature space and map each sample to a point in it; determine a sampling ratio N from the class imbalance.

Second, for each minority-class sample (x, y), find its K nearest neighbors by Euclidean distance and randomly select one of them, say (x_n, y_n). Randomly pick a point on the line segment between the sample point and the selected neighbor in the feature space as the new sample, satisfying:

(x_new, y_new) = (x, y) + rand(0, 1) * ((x_n - x), (y_n - y))

Third, repeat the above steps until the majority and minority classes are balanced.
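The three SMOTE steps above can be sketched directly; the dataset and the parameters K and target count below are illustrative, and this minimal version treats each sample as a feature-space point.

```python
import random

def euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def smote(minority, majority_count, k=3, seed=0):
    """Interpolate minority samples toward random nearest neighbors
    until the minority class reaches `majority_count` samples."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < majority_count:
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: euclidean(x, p))[:k]
        xn = rng.choice(neighbours)
        t = rng.random()  # rand(0, 1) in the formula above
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, xn)))
    return synthetic
```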
In a specific embodiment, the background images are either photographs taken in reality or backgrounds from open-source texture libraries, mixed in a fixed proportion so that the training images cover both virtual and real data.

Background photographs taken in reality are collected once and reused.

Open-source texture libraries available online are likewise collected and reused.

The mixing proportion of the two texture sources is chosen as 50%/50%, achieving an effective fusion of virtual and real background images so that the training images cover both, better improving the recognition accuracy of the trained model.

This embodiment proposes an autonomous analysis method for robot inspection image data and designs an automatic recognition algorithm for substation equipment based on few-shot images, realizing automatic analysis and screening of inspection device status information and improving the quality of inspection image data analysis. The few-shot image data augmentation method for power inspection can also be applied to ordinary inspection robots, UAV inspection and other scenarios, processing the collected images to obtain large annotated image sets.
Specifically, based on the position data of the device to be inspected, the arm coordinates are adjusted so that the device is centered in the image, realizing real-time adjustment to the state of the device to be inspected.

After the position of the device to be inspected is recognized, the position is also tracked, and the device's real-time position information is sent to the arm control module.
This embodiment also performs real-time recognition of substation inspection video with front-end AI, as shown in Figure 7. The process includes:

A) Sample and model construction: image data of the station's equipment in its various states are collected and annotated to form an image sample library of substation equipment; a deep learning target detection algorithm trains on these samples to produce a substation equipment model, used to recognize and locate devices in the inspection video, and a substation equipment state recognition model, used to recognize device states in the video;

B) Recognition model initialization: the AI analysis module loads the substation equipment model and the state recognition model. As shown in Figure 6, the components involved in real-time recognition of substation inspection video include at least one fixed-point camera, at least one robot camera and the AI analysis module;

the robot camera is mounted on the substation inspection robot and collects video of equipment and environment along the area covered by the robot's patrol route; the fixed-point cameras are distributed in the substation equipment area and collect video of equipment and environment in areas the robot's patrol cannot reach; the AI analysis module processes the inspection video from the fixed-point and robot cameras in real time, recognizes and outputs device position information, analyzes the device image information in the collected video, and tracks device states in real time at the front end;

C) Device recognition: the AI analysis module starts the device recognition service and detects device targets in the fixed-point surveillance and robot inspection videos, recognizing and locating the devices to be detected in real time and outputting each target device's detection box in the inspection image, including the device's center position and the length and width of the device region;
D)设备目标跟踪步骤:AI分析模块实现目标设备的识别后,为保证目标采集的实时性及准确性,对目标设备进行跟踪,使用KCF方法对目标设备进行跟踪,由于目标跟踪算法存在前景剧烈变化情况下,跟踪目标丢失的问题,D) Device target tracking step: After the AI analysis module realizes the identification of the target device, in order to ensure the real-time and accuracy of target acquisition, the target device is tracked, and the KCF method is used to track the target device. Because the target tracking algorithm has serious prospects The problem of tracking target loss under changing circumstances,
(X_t, Y_t, W_t, H_t) = KCF(R(t - Floor((t - d_t)/d_t) * d_t))
where (X_t, Y_t, W_t, H_t) is the coordinate output of the KCF tracker at time t, R(t) is the coordinate of the target equipment output by the object detection algorithm at time t, and Floor is the floor function. The detection algorithm is evaluated once every interval d_t and its result is used as the input coordinates of the KCF algorithm; regularly refreshing the KCF input coordinates from the detector eliminates erroneous tracking, improves tracking accuracy, and at the same time keeps the algorithm real-time.
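The periodic re-initialization scheme described above can be sketched as follows; the detect and make_tracker callables are hypothetical stand-ins for the YOLOv3 detector and a KCF tracker (e.g. OpenCV's TrackerKCF), and the interval d_t is expressed in frames:

```python
# Sketch of periodic tracker re-initialization: every d_t frames the (slow,
# accurate) detector refreshes the (fast) tracker's box; all names here are
# illustrative stand-ins, not the disclosure's actual implementation.

def track_with_reinit(frames, detect, make_tracker, d_t):
    """Run the detector every d_t frames and feed its box to the tracker.

    detect(frame)            -> (x, y, w, h) detection box
    make_tracker(frame, box) -> object with .update(frame) -> (x, y, w, h)
    """
    tracker = None
    boxes = []
    for t, frame in enumerate(frames):
        if t % d_t == 0:                 # key frame: refresh from the detector
            box = detect(frame)
            tracker = make_tracker(frame, box)
        else:                            # non-key frame: cheap tracking update
            box = tracker.update(frame)
        boxes.append(box)
    return boxes
```

Between detector runs the tracker output is used directly, which is what keeps the per-frame cost low.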
E) Fine image acquisition step: during target tracking, the real-time position of the equipment is sent to the manipulator control module, which adjusts the coordinates of the manipulator end so that the equipment sits at the center of the image and adjusts the camera focal length to capture detailed images of the equipment.
F) Equipment state recognition step: the AI analysis module starts the substation equipment state recognition service, intelligently analyzes the detailed equipment images, obtains the recognized state in real time, and returns it to the substation inspection video back end.
The object recognition algorithm is YOLOv3; the target tracking algorithm is KCF.
The tracking pipeline builds an equipment detection framework in which key-frame detection and non-key-frame tracking interact, and uses deep-learning model quantization and pruning to reduce the computational complexity of the algorithm and improve the real-time performance of the system.
The AI analysis module adopts an automatic scheduling method for high-performance computing resources, enabling analysis of multi-channel video from the substation robots and the fixed-point inspection cameras.
When multiple video channels are analyzed simultaneously, guaranteeing real-time analysis requires an automatic scheduling method for high-performance computing resources, described as follows:
(1) Dynamically monitor the current number of videos awaiting recognition;
(2) Check the current usage of GPU resources;
(3) When an idle GPU is found, assign the recognition task to it;
(4) When no GPU is idle, start a round-robin analysis mode in which frames from the multiple video channels use GPU resources alternately, processing the channels in turn and improving the timeliness and effectiveness of video analysis.
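A minimal sketch of this scheduling policy, with GPU availability represented as a simple list of flags (a real system would query the GPU driver, e.g. via NVML); channels left over when the idle GPUs run out simply wait for the next scheduling pass:

```python
# Illustrative scheduler for steps (1)-(4): idle GPUs are used first; when
# none is idle, channels share GPUs round-robin. Data shapes are assumptions.

def schedule(channels, gpu_busy):
    """Assign video channels to GPUs.

    channels : list of channel ids awaiting recognition (step 1)
    gpu_busy : list of bools, one per GPU (step 2)
    Returns {channel: gpu_index}. Unassigned channels wait for the next pass.
    """
    idle = [i for i, busy in enumerate(gpu_busy) if not busy]
    assignment = {}
    if idle:                                   # step 3: use idle GPUs first
        for ch, gpu in zip(channels, idle):
            assignment[ch] = gpu
    else:                                      # step 4: round-robin sharing
        for k, ch in enumerate(channels):
            assignment[ch] = k % len(gpu_busy)
    return assignment
```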
In other embodiments, a panoramic three-dimensional model of the substation is built with a digital-twin method, and images, sound, and tactile information are reproduced in real time, realizing immersive substation inspection based on virtual reality technology.
For example, a virtual reality module on the robot body can build a virtual environment of the substation work site. The VR module includes a VR camera that captures the on-site environment and constructs the virtual work site; through this module, operation and maintenance personnel can remotely perceive the on-site work environment and thus operate precisely on the on-site equipment. Under normal conditions the robot inspects autonomously; when the robot finds an equipment defect or problem, it promptly sends the information to the operation and maintenance personnel together with the problem category and a corresponding solution for their reference.
This embodiment proposes a panoramic, immersive robot inspection and operation method. It combines multi-modal information such as images, video, and sound to build panoramic three-dimensional substation information based on digital-twin technology (image-based 3D, laser-based 3D, a virtual model, and so on) and proposes an immersive robot operation mode (heterogeneous or homogeneous, master-slave or automated). Through deep fusion of multi-source, multi-modal information, panoramic information about the robot's work environment is reconstructed, so that staff in the control room gain a faithful picture of the substation environment and equipment condition, realizing immersive inspection by the substation robot.
Embodiment 2
This embodiment provides a robot that performs inspection using the humanoid inspection operation method of the semantic intelligent substation robot described in Embodiment 1.
As shown in Figure 8, the robot includes a robot body 1 on which a multi-degree-of-freedom manipulator 2 is mounted, and the end of the multi-degree-of-freedom manipulator carries inspection equipment 3.
Specifically, the inspection equipment carried at the end of the multi-degree-of-freedom manipulator includes a visible-light camera, an infrared camera, a gripper, a suction cup, a partial discharge detector, and the like.
Referring to Figures 9(a) and 9(b), the multi-degree-of-freedom manipulator on the robot body serves as the slave arm 4, and a master control arm 5 is additionally provided. The master arm is a lightweight operating system suited to human operation; after the operation and maintenance personnel put on the master arm 5, the centralized-control personnel can remotely control the slave arm 4 from the centralized control room over 5G communication, carrying out inspection operations on the equipment in the substation.
In addition, a VR virtual reality module 3 is provided on the robot body to build a virtual environment of the substation work site. The VR module 3 includes a VR camera that captures the on-site environment and constructs the virtual work site; through this module, operation and maintenance personnel can remotely perceive the on-site work environment and thus operate precisely on the on-site equipment.
With the structure of this embodiment, under normal conditions the robot inspects autonomously; when the robot finds an equipment defect or problem, it promptly sends the information to the operation and maintenance personnel together with the problem category and a corresponding solution for their reference.
If a discovered equipment problem can be solved remotely, the operation and maintenance personnel issue an online operation command; after receiving the command, the robot automatically switches to remote operation mode.
For operation and maintenance work, the robot moves on its own to the equipment that needs servicing, turns on its VR virtual reality module, and, through the 5G communication module, remotely builds the on-site virtual environment in the centralized control center.
Using the master arm in the centralized control center and the 5G communication module, the operation and maintenance personnel remotely control the slave arm on the on-site substation robot. Through VR they perceive the environment of the equipment awaiting repair in real time and use the slave arm for fine-grained work. This realizes remote maintenance of substation equipment by the operation and maintenance personnel, improves the timeliness of substation maintenance, and safeguards the personnel's personal safety.
As an optional implementation, the front end of the robot body is provided with an AI front-end data processing module configured to recognize substation inspection equipment images at the front end. Performing image-based target recognition at the robot front end avoids untimely video analysis caused by the transmission delay of returning massive data to the back end, and also reduces bandwidth requirements.
In addition, current substation inspection robots mainly perform visible-light inspection and infrared temperature measurement. Most fix a visible-light camera and an infrared camera on the two sides of a pan-tilt unit and then attach the pan-tilt directly to the robot body with screws or bolts. For reasons of appearance and IP protection, a pan-tilt fixed to the robot body is difficult to remove, so the detection equipment cannot be replaced quickly.
In this embodiment, referring to Figure 10, a quick-change coupling provided at the end of the multi-degree-of-freedom manipulator allows a single robot to complete multiple kinds of detection work in the substation, solving the problem that a traditional inspection robot has a single detection function and cannot change its detection equipment at will.
Specifically, a connecting sleeve 7 realizes the quick connection and replacement. Threaded connectors are provided on both the manipulator end 8 and the end of the detection device 6: the manipulator end 8 and the detection device 6 are first aligned, then the prefabricated connecting sleeve 7 at the manipulator end is rotated; using the thread shared by the manipulator end and the detection device, the sleeve gradually locks the manipulator and the end device together.
In this embodiment, the interchangeable devices include visible-light camera, infrared temperature measurement, partial discharge detection, gripper, and electric suction-head modules.
Embodiment 3
This embodiment provides a humanoid inspection operation system for a semantic intelligent substation robot, which includes at least one robot as described in Embodiment 2.
The humanoid inspection operation system of this embodiment includes an embedded AI analysis module and, connected to it, a multi-degree-of-freedom manipulator, an inspection camera, a binocular vision camera, a three-dimensional lidar, an inertial navigation sensor, and a robot industrial computer. The binocular vision camera is mounted at the front of the robot; the inspection camera is mounted at the end of the manipulator. The robot industrial computer connects to the robot motion platform and supports access to, and synchronized acquisition of, data from multiple sensors (vision, laser, GPS, inertial navigation, etc.), thereby achieving panoramic perception of the robot itself and its surroundings, as shown in Figure 3. The binocular vision camera is used to build the semantic map; the inspection data collection camera captures fine images of the equipment for detection.
All devices are connected to a network switch, forming the robot's ROS control network. The embedded AI analysis module is the key node for system data analysis and processing: as the node running ROS-Core, it is responsible for collecting data from each of the robot's sensors, implementing the ROS interface that drives the robot chassis, analyzing and fusing three-dimensional laser/vision information, and controlling the robot's navigation and the manipulator. The system uses pure ROS interfaces, with laser, vision, and drive all on standard ROS interfaces. The design comprises 11 ROS node packages, divided by function into a roaming semantic map construction module, an inspection navigation control module, an equipment image refinement acquisition module, and an equipment state recognition module.
The roaming semantic map construction module is configured as follows.
The roaming semantic map includes a three-dimensional map of the substation and semantic information about the equipment on the map. Its construction method includes:
Obtain prior-knowledge data such as the substation's drawings and electrical design diagrams; using knowledge graph and knowledge-understanding technology, form a coarse-precision semantic map from the prior-knowledge data and automatically construct the task path along which the robot builds the semantic map. The robot's movement is controlled according to the task path, and during the movement the roaming semantic map is built by executing the following steps:
(1) Acquire binocular images, inspection images, and three-dimensional point cloud data of the current environment from the binocular vision camera, the inspection camera, and the three-dimensional laser sensor;
(2) Recognize roads, equipment, obstacles, and other objects in the current environment from the inspection images. The embedded AI analysis module pre-stores deep-learning models for recognizing roads, equipment, and various obstacles and performs target detection with them, yielding the semantic information of the roads, equipment, and obstacles in the current environment. The spatial position distribution of the roads, equipment, and obstacles is obtained from the binocular images and the three-dimensional point cloud data. Specifically, the binocular images and the point cloud give the distance of peripheral equipment or obstacles from the robot body (the binocular images are used to recognize near obstacles, the point cloud data to recognize distant ones); combined with the robot's direction of travel in the inspection task, this yields the spatial distribution of obstacles centered on the robot body;
(3) From the spatial distribution of objects in the current environment, automatically identify passable unknown areas around the robot. If a passable unknown area exists, use a local path-planning method to plan the robot's motion in the unknown area and send motion commands to the robot industrial computer so that the robot moves to the passable unknown area, then go to step (4). If no passable unknown area exists, all unknown areas have been explored and map construction ends;
(4) Build the three-dimensional SLAM map from the binocular images and the three-dimensional point cloud data, and return to step (1).
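The "passable unknown area" test in step (3) can be illustrated on a two-dimensional occupancy grid, where a frontier cell is an unknown cell adjacent to free space and exploration ends when no frontier remains. The grid encoding (0 = free, 1 = obstacle, -1 = unknown) is an assumption for illustration, not the disclosure's representation:

```python
# Toy frontier detection for the exploration loop in step (3).
# 0 = free, 1 = obstacle, -1 = unknown (illustrative encoding).

def frontiers(grid):
    """Return the unknown cells adjacent to at least one free cell."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != -1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    out.append((r, c))   # reachable unknown cell: keep exploring
                    break
    return out
```

An empty result corresponds to the termination condition of the loop: every unknown area has been explored and map construction ends.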
In step (4), building the three-dimensional SLAM map from the binocular images and the three-dimensional point cloud data specifically includes:
Step (4.1): read the binocular images acquired by the binocular camera, the inspection images acquired by the inspection camera, and the three-dimensional laser sensor data;
Step (4.2): obtain the spatial position distribution of roads, equipment, and obstacles from the binocular image data and the three-dimensional laser data, and build a three-dimensional point cloud map from the three-dimensional laser sensor data;
Step (4.3): obtain the semantic information of roads, equipment, obstacles, and other objects in the current environment from the binocular image data and the inspection image data;
Step (4.4): using a spatial coordinate transformation, project the spatial positions of the equipment obtained from the binocular images onto the three-dimensional point cloud map, realizing the mapping from two dimensions to the three-dimensional point cloud; combined with the semantic information of the roads, equipment, and obstacles in the current environment from step (2), build the semantic map. By projecting the equipment recognized by the binocular camera into the three-dimensional point cloud map and combining this with the point cloud density distribution, the three-dimensional positions of the equipment to be inspected in the navigation map and the point cloud can be accurately clustered and given semantics, yielding the roaming semantic map. The roaming semantic map contains the three-dimensional spatial positions of the equipment in the substation and their semantics.
Through the mapping from two dimensions to the three-dimensional point cloud, semantic information recognized in the two-dimensional images, such as passable roads, towers, and meters, can be assigned to the three-dimensional point cloud; combined with positioning based on the two-dimensional images, the point cloud can be clustered more accurately, making the constructed map closer to reality.
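The two-dimensional to three-dimensional mapping in step (4.4) can be sketched with a standard pinhole back-projection; the intrinsics fx, fy, cx, cy and the depth Z are assumed given (in a binocular setup Z is typically obtained from disparity as Z = fx * baseline / disparity):

```python
# Pinhole back-projection of a 2-D detection center into the 3-D camera frame;
# a further extrinsic transform would place the point in the map frame.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```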
After establishing the three-dimensional navigation semantic map, the robot can use it with the ROS navigation module to navigate within the substation. For non-stop detection of the inspection equipment specified in a task, the robot combines a static map with a dynamic map. In the static-map mode, the roaming semantic map is used to project the three-dimensional spatial coordinates of the equipment onto the walking route, and the vertical sector region at the position of the equipment to be inspected serves as the task navigation point. In the dynamic-map mode, when the robot dynamically recognizes equipment of interest to the task during movement, it obtains the equipment's current three-dimensional coordinates, realizing dynamic recognition of the equipment, and updates the map information in real time.
The inspection navigation control module is configured as follows.
Step 1: receive an inspection task, which designates an inspection area or designated inspection equipment;
Step 2: determine, from the semantic map, the detectable-region information of the equipment to be inspected in the task;
Step 3: fuse the detectable-region information of all equipment to be detected in the robot's current inspection task and, combined with the robot's current position, plan the inspection route on the basis of the road information in the semantic map. Specifically, the projected three-dimensional coordinates of all equipment to be inspected in the roaming semantic map are taken as points on the robot's walking route, and the inspection route is planned from the robot's current position;
Further, the robot's optimal inspection pose for each piece of equipment to be inspected is also determined from the roaming semantic map; when the robot reaches each piece of equipment along the inspection route, detection is performed in the optimal inspection pose;
Step 4: perform the inspection along the inspection route; if an optimal inspection pose has been obtained, perform the detection in that pose.
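As an illustration of the route planning in step 3, the projected device waypoints can be ordered by a simple nearest-neighbour heuristic starting from the robot's current position; the actual planner also uses the road information in the semantic map, which this sketch omits:

```python
# Nearest-neighbour ordering of inspection waypoints (illustrative stand-in
# for the semantic-map route planner); points are (x, y) tuples.
import math

def plan_route(start, waypoints):
    """Visit each waypoint, always moving to the nearest unvisited one."""
    route, pos, left = [], start, list(waypoints)
    while left:
        nxt = min(left, key=lambda p: math.dist(pos, p))
        route.append(nxt)
        left.remove(nxt)
        pos = nxt
    return route
```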
During the inspection, binocular vision and three-dimensional laser sensor data are acquired in real time to judge whether the equipment layout along the walking route is inconsistent with the roaming semantic map; if it is, the roaming semantic map is updated.
The equipment image refinement acquisition module is configured as follows.
Step 1: during the inspection, acquire image data in real time and recognize the equipment to be detected in the images.
The substation environment is complex, and a collected image may contain several types of equipment at once. A deep-learning equipment recognition algorithm library is therefore built, containing mainstream object recognition algorithms such as Faster R-CNN, SSD, and YOLO. On the basis of a fully convolutional deep neural network, the library combines the equipment information contained in the inspection task to extract object detection features and semantic features, then classifies and detects the fused features, achieving accurate recognition of the equipment in the inspection images.
Step 2: compute in advance, from the equipment positions in the semantic map, the optimal relative pose between the manipulator and the equipment to be inspected. During the inspection, using the corresponding relative pose together with the robot's current position, the inspection route, and the set inspection speed, control the manipulator to adjust its pose so that the inspection camera stays aimed at the equipment to be detected; the equipment image is thus captured from the best angle and detection is performed, improving detection accuracy.
Specifically, step 2 also computes, from the substation's three-dimensional semantic electronic map and the robot pose, the optimal relative position between the inspection camera at the manipulator end and the equipment to be inspected during data collection; from the robot's current position, the inspection route, and the set inspection speed, the manipulator pose control parameters for the next moment in the non-stop state are computed, so that the inspection camera at the manipulator end keeps the optimal relative position to the equipment to be inspected, that is, stays aimed at it.
Specifically, the optimal relative pose between the manipulator and the equipment to be inspected is given by:
max[ |n_x(x - x_r) + n_y(y - y_r) + n_z(z - z_r)| + |n_x*n_xr + n_y*n_yr + n_z*n_zr| ]
where n_x, n_y, n_z is the normal vector of the detection surface of the equipment to be inspected (such as the dial face bearing the reading); x, y, z are the spatial coordinates of the equipment to be inspected; and x_r, y_r, z_r and n_xr, n_yr, n_zr are the robot's spatial pose vectors. The running pose at which this expression attains its maximum gives the optimal relative pose between the robot and the equipment to be detected.
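Numerically, the criterion above can be evaluated over a set of candidate robot poses and the maximiser kept; the function names and the candidate list are illustrative, with n the surface normal, p the equipment position, and each candidate a (position, orientation) pair matching (x_r, y_r, z_r) and (n_xr, n_yr, n_zr):

```python
# Evaluate the pose-selection objective and pick the maximising candidate.
# Vectors are plain 3-tuples; candidate poses are assumed given.

def pose_objective(n, p, p_r, n_r):
    """|n . (p - p_r)| + |n . n_r| for one candidate pose (p_r, n_r)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diff = tuple(a - b for a, b in zip(p, p_r))
    return abs(dot(n, diff)) + abs(dot(n, n_r))

def best_pose(n, p, candidates):
    """Return the candidate (p_r, n_r) that maximises the objective."""
    return max(candidates, key=lambda c: pose_objective(n, p, c[0], c[1]))
```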
The spatial pose of the manipulator end is given by:
max[ |n_x*n_xa + n_y*n_ya + n_z*n_za| ]
where n_x, n_y, n_z is the normal vector of the detection surface of the equipment to be inspected (such as the dial face bearing the reading) and n_xa, n_ya, n_za is the manipulator's spatial attitude vector. To obtain the best data collection attitude between the manipulator and the equipment to be detected, the manipulator is controlled so that this expression attains its maximum.
During the manipulator's attitude adjustment, the distance between the equipment to be detected and the manipulator end is used to automatically compute the focal length to configure for the inspection camera, ensuring that the information on the equipment to be detected is clearly visible in the image.
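The automatic focal-length computation can be sketched with the pinhole relation image_size = f * object_size / distance; the sensor height and target fill fraction below are illustrative parameters, not values from this disclosure:

```python
# Rough pinhole-model sketch: choose the focal length so that a device of
# physical height H at distance D fills a target fraction of the sensor.

def required_focal_length(distance_m, device_height_m,
                          sensor_height_mm=4.8, fill_fraction=0.6):
    """Focal length (mm) so the device occupies fill_fraction of the image."""
    image_height_mm = fill_fraction * sensor_height_mm
    # f = image_size * distance / object_size (all lengths in mm)
    return image_height_mm * (distance_m * 1000.0) / (device_height_m * 1000.0)
```

For example, a 0.3 m dial at 5 m would call for roughly a 48 mm focal length under these assumed parameters.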
At the same time, the image data collected by the binocular vision camera are acquired in real time, the equipment to be detected in the images is recognized with a deep-learning method, and the manipulator attitude is fine-tuned so that the region of the equipment to be detected stays in the central area of the image.
Refined acquisition of the equipment image is completed once the manipulator attitude and the inspection camera's focal length have been adjusted into place.
The embedded AI analysis module also includes an equipment state recognition module configured as follows:
After the robot completes the refined capture of the equipment to be detected, front-end deployment of the deep-learning algorithm, relying on the front-end computing power provided by the embedded AI analysis module, performs real-time analysis of the equipment state at the front end, discovering operational defects of the equipment in time and improving the equipment's operational safety.
In a specific implementation, the robot industrial computer is further configured to perform the following step:
Build a panoramic three-dimensional model of the substation based on the digital-twin method, realizing immersive substation inspection based on virtual reality technology.
In other embodiments, the robot industrial computer is further configured to perform the following steps:
Use a deep-learning algorithm to recognize equipment in every frame of the inspection video; when equipment to be inspected is recognized, use a binocular stereo algorithm to obtain its three-dimensional spatial position coordinates.
In the equipment recognition process, the equipment regions in the small number of collected images of the objects to be recognized are labeled; the labeled images are background-removed, and the background-free pictures of the equipment to be inspected are transformed to simulate shooting the equipment from different angles and at different distances; the background picture is then varied to obtain images of the equipment against different backgrounds, thereby generating a large volume of already-labeled pictures.
Before labeling, the small number of images of the objects to be recognized collected during the power inspection are preprocessed to enhance image quality.
对去背景后的实物图片进行变换后,还包括:After transforming the real picture after the background is removed, it also includes:
对图像增加不同光照渲染,模拟不同光照条件下的情况,并获取不同光照条件下的图像数据。Add different lighting rendering to the image, simulate the situation under different lighting conditions, and obtain image data under different lighting conditions.
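The augmentation steps above (background removal, geometric variation, background replacement, relighting) can be sketched with plain NumPy. The array layouts (an RGBA device crop, an RGB background) and the gain-based lighting model are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def composite(device_rgba, background, x, y):
    """Alpha-blend a background-removed device crop onto a background
    at pixel offset (x, y), producing one new annotated sample."""
    out = background.astype(np.float32).copy()
    h, w = device_rgba.shape[:2]
    alpha = device_rgba[..., 3:4] / 255.0           # (h, w, 1) mask
    patch = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * device_rgba[..., :3] + (1 - alpha) * patch
    return out.astype(np.uint8)

def relight(image, gain):
    """Crude global-illumination change: scale intensities and clip."""
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```

Looping `composite` over many backgrounds and offsets, then `relight` over a range of gains, turns a handful of annotated crops into a large labelled set.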
In other embodiments, the robot's industrial control computer is further configured to perform the following steps:
controlling the robotic arm to adjust its pose so that it always aims at the equipment to be inspected, keeping the robot in the optimal relative pose with respect to that equipment throughout data collection;
when the robot reaches the optimal observation pose and the equipment enters the range of the inspection data-collection device, using a deep-learning algorithm to identify and locate the equipment in the image and, combined with the relative pose between the robot and the equipment, controlling the spatial pose of the collection device carried at the end of the robotic arm;
evaluating and optimizing the quality of the collected data, so as to achieve optimal collection of the equipment's inspection data.
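A minimal sketch of the aiming step, assuming the device position is already expressed in the arm-base frame: compute the pan (yaw) and tilt (pitch) that point the end-effector camera at the target. The frame convention (x forward, y left, z up) is an assumption, not taken from the patent.

```python
import math

def aim_angles(target_xyz):
    """Pan/tilt (radians) pointing the end-effector camera at a target
    given in the arm-base frame (x forward, y left, z up, metres)."""
    x, y, z = target_xyz
    yaw = math.atan2(y, x)                    # rotate about the vertical axis
    pitch = math.atan2(z, math.hypot(x, y))   # elevate toward the target
    return yaw, pitch
```

Re-evaluating these angles every control cycle, as the robot moves along its path, keeps the collection device trained on the equipment.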
During quality evaluation and optimization of the collected data, a model of how the optimal image-collection points vary with time, built from historical data, is used to select inspection points autonomously and optimally across different seasons and times of day.
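One simple way to realize such a time-varying relation is a lookup keyed by season and period of day, fitted offline from historical inspections; the keys and poses below are invented for illustration only.

```python
# Hypothetical relation "optimal collection point vs. time", built from
# historical data. Entries are (season, period) -> a camera pose.
BEST_VIEWPOINT = {
    ("summer", "morning"): {"x": 3.2, "y": 1.0, "pan": 15.0},
    ("summer", "noon"):    {"x": 2.8, "y": 1.4, "pan": -5.0},
    ("winter", "morning"): {"x": 3.6, "y": 0.8, "pan": 25.0},
}

def pick_viewpoint(season, period, default=("summer", "morning")):
    """Select the learned best viewpoint, falling back to a default key."""
    return BEST_VIEWPOINT.get((season, period), BEST_VIEWPOINT[default])
```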
In other embodiments, confidence evaluation is performed on inspection data captured at different positions and under different lighting conditions; during robot inspection, the detection data with the highest confidence are selected as the inspection-state data of the equipment to be inspected.
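The selection rule itself is straightforward once each capture carries a confidence score, which is assumed here to be supplied by the detector:

```python
def select_reading(readings):
    """Keep the capture with the highest detector confidence.
    `readings`: list of dicts with 'value' and 'confidence' in [0, 1]."""
    best = max(readings, key=lambda r: r["confidence"])
    return best["value"], best["confidence"]
```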
Embodiment 4
This embodiment provides a humanoid inspection operation system for a semantic intelligent substation robot, comprising:
a control center;
at least one robot, deployed across the areas of the substation;
each robot comprising a robot body fitted with a robotic arm whose end carries an inspection/operation tool;
wherein the control center stores a computer program which, when executed by a processor, implements the steps of the humanoid inspection operation method for a semantic intelligent substation robot described in Embodiment 1.
Embodiment 5
This embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the humanoid inspection operation method for a semantic intelligent substation robot described in Embodiment 1.
The above are merely preferred embodiments of the present invention and are not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (23)
- A humanoid inspection operation method for a semantic intelligent substation robot, characterized by comprising: constructing a three-dimensional semantic map of an unknown substation environment; planning the robot's travel path based on the three-dimensional semantic map, combined with the inspection/operation task and the robot's current position; controlling the robot to move along the planned path and carry out the inspection/operation task while travelling; and, during the task, adjusting in real time the pose of the robotic arm carrying the inspection/operation tool, so that images of the equipment to be inspected are automatically collected and recognized at the optimal angle, or the operation task is automatically executed at the optimal angle, completing fully autonomous inspection/operation of the substation environment.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that constructing the three-dimensional semantic map of the unknown substation environment comprises: constructing a panoramic three-dimensional model of the substation based on the digital-twin method and, by reproducing image, sound, and tactile information in real time, enabling immersive substation inspection based on virtual-reality technology.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that constructing the three-dimensional semantic map of the unknown substation environment comprises: automatically acquiring the position information of in-station equipment based on prior knowledge of the substation, so that the map of the unknown substation environment is built without injecting configuration information into the robot.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 3, characterized in that the specific process of constructing the three-dimensional semantic map of the unknown substation environment is: acquiring in real time binocular image data, inspection image data, and three-dimensional point-cloud data of the current environment, the robot being provided with a binocular vision camera, an inspection camera, and a three-dimensional laser sensor, the binocular vision camera capturing the binocular image data, the inspection camera capturing the inspection image data, and the three-dimensional laser sensor producing the three-dimensional point-cloud data and three-dimensional laser data; obtaining the spatial distribution of objects in the current environment from the binocular image data and point-cloud data and, by analyzing the inspection image data in real time, recognizing equipment identification codes in the images and locating equipment target regions, so that equipment identity and position are acquired simultaneously within the spatial information; and, according to the spatial distribution of objects in the current environment, automatically identifying passable unknown regions around the robot, using a local path-planning method to plan the robot's motion through those regions, and mapping the unknown environment until the semantic map of the whole station is complete.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 4, characterized in that mapping the unknown environment comprises: obtaining the spatial distribution of objects in the current environment from the binocular image data and three-dimensional laser data; and obtaining semantic information of roads, equipment, and obstacles from the binocular image data and inspection image data, then projecting their spatial information onto the three-dimensional point-cloud data via spatial coordinate transformation to build the semantic map.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 4, characterized in that the inspection/operation tool comprises an image-acquisition device, and adjusting in real time the pose of the robotic arm carrying the inspection/operation tool comprises: driving the arm, according to the positional relationship between the robot and the equipment to be inspected, so that its end faces the equipment and moves into the local range of the target; acquiring the inspection camera's images in real time, automatically recognizing, tracking, and locating the equipment, finely adjusting the arm so that the end-mounted image-acquisition device is at the optimal shooting angle, then driving the device to adjust focus and capture the target image accurately; and, based on the captured target image, performing target recognition on the robot front end, so that image data are analyzed automatically on the front end and the equipment state is obtained in real time.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 4, characterized in that adjusting in real time the pose of the robotic arm carrying the inspection/operation tool comprises: calculating the relative motion between the robot and the equipment from the robot's current position, the inspection/operation path, and the set inspection speed, and controlling the arm so that it always aims at the equipment, allowing the sensor at its end to collect the inspection data.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 4, characterized in that planning the robot's travel path based on the three-dimensional semantic map, combined with the inspection/operation task and the robot's current position, comprises: determining, from the three-dimensional semantic map, the robot's optimal inspection pose for each piece of equipment to be inspected and, on reaching each piece of equipment along the inspection route, performing detection from that optimal inspection pose.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 8, characterized in that performing detection from the optimal inspection pose comprises: determining the robot's actual current pose from the three-dimensional semantic map and from the binocular vision and three-dimensional laser sensor data; calculating the relative pose deviation from the actual pose and the optimal inspection pose; and controlling the robot to adjust its pose according to that deviation and perform the detection.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 4, characterized in that, during inspection, the binocular vision and three-dimensional laser data are acquired in real time to judge whether the layout of any equipment along the travel path is inconsistent with the three-dimensional semantic map and, if so, the map is updated.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that the robot is provided with an inspection camera that captures an inspection video whose frames form the inspection image data; a deep-learning algorithm performs equipment recognition on each frame of the inspection video and, when equipment to be inspected is recognized, a binocular stereo algorithm obtains its three-dimensional spatial position coordinates.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 11, characterized in that, during equipment recognition, the equipment region is annotated in the small number of images of the physical object collected during inspection; the annotated images have their background removed, the background-free pictures of the equipment are transformed to simulate shooting it from different angles and distances, and the background image is replaced to obtain images of the equipment against different backgrounds, thereby generating a large volume of annotated pictures.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 12, characterized in that, before annotation, the small number of collected images of the object to be recognized are preprocessed to enhance image quality.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 12, characterized by further comprising, after transforming the background-free pictures: rendering the images under different illumination, simulating varied lighting conditions, and obtaining image data for each of them.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 12, characterized in that, after the large volume of annotated pictures is generated, multi-sample data augmentation is performed by: defining a feature space, mapping each sample to a point in that space, and determining the oversampling ratio from the class-imbalance ratio, each sample being one of the annotated pictures; and, for each minority-class sample, finding its nearest neighbours by Euclidean distance, randomly selecting one of them, and randomly selecting a point on the line segment in feature space between the sample and that neighbour as a new sample, repeating until the majority and minority classes are balanced.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that adjusting in real time the pose of the robotic arm carrying the inspection/operation tool comprises: adjusting the pose of the arm's end, based on the position data of the equipment to be inspected, so that the equipment sits at the centre of the image, enabling real-time adjustment to the state of the equipment.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that, after the position of the equipment to be inspected is recognized, that position is also tracked and the equipment's real-time position information is sent to the robotic-arm control module.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 1, characterized in that adjusting in real time the pose of the robotic arm carrying the inspection/operation tool comprises: controlling the arm to adjust its pose so that it always aims at the equipment to be inspected, keeping the robot in the optimal relative pose during data collection; when the robot reaches the optimal observation pose and the equipment enters the range of the inspection data-collection device, using a deep-learning algorithm to identify and locate the equipment in the image and, combined with the relative pose between the robot and the equipment, controlling the spatial pose of the collection device carried at the arm's end; and evaluating and optimizing the quality of the collected data, so as to achieve optimal collection of the equipment's inspection data.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 18, characterized in that, during quality evaluation and optimization of the collected data, a model of how the optimal image-collection points vary with time, built from historical data, is used to select inspection points autonomously and optimally across different seasons and times of day.
- The humanoid inspection operation method for a semantic intelligent substation robot of claim 18, characterized in that, during quality evaluation and optimization of the collected data, confidence evaluation is performed on inspection data captured at different positions and under different lighting conditions and, during robot inspection, the detection data with the highest confidence are selected as the inspection-state data of the equipment to be inspected.
- A robot, characterized in that it performs inspection using the humanoid inspection operation method for a semantic intelligent substation robot of any one of claims 1-20.
- A humanoid inspection operation system for a semantic intelligent substation robot, characterized by comprising: a control center; and at least one robot deployed across the areas of the substation, each robot comprising a robot body fitted with a robotic arm whose end carries an inspection/operation tool; wherein the control center stores a computer program which, when executed by a processor, implements the steps of the humanoid inspection operation method for a semantic intelligent substation robot of any one of claims 1-20.
- A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the program implements the steps of the humanoid inspection operation method for a semantic intelligent substation robot of any one of claims 1-20.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010752208.4 | 2020-07-30 | ||
CN202010752208.4A CN111897332B (en) | 2020-07-30 | 2020-07-30 | Semantic intelligent substation robot humanoid inspection operation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022021739A1 (en) | 2022-02-03 |
Family
ID=73182661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/135608 WO2022021739A1 (en) | 2020-07-30 | 2020-12-11 | Humanoid inspection operation method and system for semantic intelligent substation robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111897332B (en) |
WO (1) | WO2022021739A1 (en) |
CN117428774A (en) * | 2023-11-23 | 2024-01-23 | 中国船舶集团有限公司第七一六研究所 | Industrial robot control method and system for ship inspection |
CN117607636A (en) * | 2023-11-30 | 2024-02-27 | 华北电力大学 | Multispectral fusion sensing and storing calculation integrated high-voltage discharge detection method |
CN117637136A (en) * | 2023-12-22 | 2024-03-01 | 南京天溯自动化控制系统有限公司 | Method and device for automatically inspecting medical equipment by robot |
CN117782088A (en) * | 2023-12-13 | 2024-03-29 | 深圳大学 | Collaborative target map building positioning navigation method |
CN117944058A (en) * | 2024-03-27 | 2024-04-30 | 韦氏(苏州)医疗科技有限公司 | Scheduling method and system of self-propelled functional mechanical arm and mechanical arm |
CN117970932A (en) * | 2024-04-01 | 2024-05-03 | 中数智科(杭州)科技有限公司 | Task allocation method for collaborative inspection of multiple robots of rail train |
CN117984333A (en) * | 2024-04-03 | 2024-05-07 | 广东电网有限责任公司东莞供电局 | Inspection method, device and equipment for oil immersed transformer and storage medium |
CN118093706A (en) * | 2024-04-25 | 2024-05-28 | 国网瑞嘉(天津)智能机器人有限公司 | Distribution network live working robot, system and working method |
CN118089794A (en) * | 2024-04-26 | 2024-05-28 | 北京航宇测通电子科技有限公司 | Simulation method for self-adaptive multi-information integrated navigation based on multi-source information |
CN118092654A (en) * | 2024-03-07 | 2024-05-28 | 瑞丰宝丽(北京)科技有限公司 | Virtual reality application method, system, terminal and storage medium for operation and maintenance industry |
CN118115882A (en) * | 2024-04-26 | 2024-05-31 | 山东省农业机械科学研究院 | Agricultural robot inspection identification method based on multi-source perception fusion |
CN118181302A (en) * | 2024-05-14 | 2024-06-14 | 长春中医药大学 | Traditional Chinese medicine grabbing control management system based on artificial intelligence |
CN118226861A (en) * | 2024-05-24 | 2024-06-21 | 广州市城市排水有限公司 | Underwater intelligent robot cruise control method and system based on intelligent algorithm |
CN118275786A (en) * | 2024-03-27 | 2024-07-02 | 云南电投绿能科技有限公司 | Operation monitoring method, device and equipment of power equipment and storage medium |
CN118274845A (en) * | 2024-05-29 | 2024-07-02 | 天津地铁智慧科技有限公司 | Subway station robot inspection system and inspection method |
CN118351157A (en) * | 2024-06-18 | 2024-07-16 | 山东广瑞电力科技有限公司 | Blind spot-free inspection method and system based on multi-perception equipment combination |
CN118372258A (en) * | 2024-06-21 | 2024-07-23 | 西湖大学 | Distributed vision cluster robot system |
CN118396125A (en) * | 2024-06-27 | 2024-07-26 | 杭州海康威视数字技术股份有限公司 | Intelligent store patrol method and device, storage medium and electronic equipment |
CN118521433A (en) * | 2024-07-24 | 2024-08-20 | 东方电子股份有限公司 | Knowledge-graph-based digital twin early warning decision method and system for transformer substation |
CN118649976A (en) * | 2024-08-20 | 2024-09-17 | 通用技术集团工程设计有限公司 | Unmanned intelligent cleaning method and system for photovoltaic panel based on improved YOLOv model |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111897332B (en) * | 2020-07-30 | 2022-10-11 | 国网智能科技股份有限公司 | Semantic intelligent substation robot humanoid inspection operation method and system |
CN112381963B (en) * | 2020-11-12 | 2022-02-22 | 广东电网有限责任公司 | Intelligent power Internet of things inspection method and system based on digital twin technology |
CN112091982B (en) * | 2020-11-16 | 2021-01-29 | 杭州景业智能科技股份有限公司 | Master-slave linkage control method and system based on digital twin mapping |
CN112668687B (en) * | 2020-12-01 | 2022-08-26 | 达闼机器人股份有限公司 | Cloud robot system, cloud server, robot control module and robot |
CN112549034B (en) * | 2020-12-21 | 2021-09-03 | 南方电网电力科技股份有限公司 | Robot task deployment method, system, equipment and storage medium |
CN112667717B (en) * | 2020-12-23 | 2023-04-07 | 贵州电网有限责任公司电力科学研究院 | Transformer substation inspection information processing method and device, computer equipment and storage medium |
CN112693541B (en) * | 2020-12-31 | 2022-02-22 | 国网智能科技股份有限公司 | Foot type robot of transformer substation, inspection system and method |
CN112828913A (en) * | 2021-02-08 | 2021-05-25 | 深圳泰豪信息技术有限公司 | Patrol robot control method |
CN112860521A (en) * | 2021-02-24 | 2021-05-28 | 北京玄马知能科技有限公司 | Data diagnosis and analysis method and system based on multi-robot cooperative inspection operation |
CN112990310B (en) * | 2021-03-12 | 2023-09-05 | 国网智能科技股份有限公司 | Artificial intelligence system and method for serving electric robot |
CN113240132B (en) * | 2021-03-19 | 2022-07-12 | 招商局重庆交通科研设计院有限公司 | Urban public space road inspection system
CN113146628B (en) * | 2021-04-13 | 2023-03-31 | 中国铁道科学研究院集团有限公司通信信号研究所 | Brake hose picking robot system suitable for marshalling station |
CN113345016A (en) * | 2021-04-22 | 2021-09-03 | 国网浙江省电力有限公司嘉兴供电公司 | Positioning pose judgment method for binocular recognition |
CN113177918B (en) * | 2021-04-28 | 2022-04-19 | 上海大学 | Intelligent and accurate inspection method and system for electric power tower by unmanned aerial vehicle |
CN113301306A (en) * | 2021-05-24 | 2021-08-24 | 中国工商银行股份有限公司 | Intelligent inspection method and system |
CN113190019B (en) * | 2021-05-26 | 2023-05-16 | 立得空间信息技术股份有限公司 | Virtual simulation-based routing inspection robot task point arrangement method and system |
CN113421356B (en) * | 2021-07-01 | 2023-05-12 | 北京华信傲天网络技术有限公司 | Inspection system and method for equipment in complex environment |
CN113671955B (en) * | 2021-08-03 | 2023-10-20 | 国网浙江省电力有限公司嘉兴供电公司 | Inspection sequence control method based on intelligent robot of transformer substation |
CN113671966B (en) * | 2021-08-24 | 2022-08-02 | 成都杰启科电科技有限公司 | Method for realizing remote obstacle avoidance of smart grid power inspection robot based on 5G and obstacle avoidance system |
CN113504780B (en) * | 2021-08-26 | 2022-09-23 | 上海同岩土木工程科技股份有限公司 | Full-automatic intelligent inspection robot and inspection method for tunnel structure |
CN113727022B (en) * | 2021-08-30 | 2023-06-20 | 杭州申昊科技股份有限公司 | Method and device for collecting inspection image, electronic equipment and storage medium |
CN113703462B (en) * | 2021-09-02 | 2023-06-16 | 东北大学 | Unknown space autonomous exploration system based on quadruped robot |
CN113778110B (en) * | 2021-11-11 | 2022-02-15 | 山东中天宇信信息技术有限公司 | Intelligent agricultural machine control method and system based on machine learning |
CN114050649A (en) * | 2021-11-12 | 2022-02-15 | 国网山东省电力公司临朐县供电公司 | Transformer substation inspection system and inspection method thereof |
CN114067200A (en) * | 2021-11-19 | 2022-02-18 | 上海微电机研究所(中国电子科技集团公司第二十一研究所) | Intelligent inspection method of quadruped robot based on visual target detection |
CN113821941B (en) * | 2021-11-22 | 2022-03-11 | 武汉华中思能科技有限公司 | Patrol simulation verification device |
CN114186859B (en) * | 2021-12-13 | 2022-05-31 | 哈尔滨工业大学 | Multi-machine cooperative multi-target task allocation method in complex unknown environment |
CN114657874B (en) * | 2022-04-08 | 2022-11-29 | 哈尔滨工业大学 | Intelligent inspection robot for bridge structure diseases |
WO2023203367A1 (en) * | 2022-04-20 | 2023-10-26 | 博歌科技有限公司 | Automatic inspection system |
CN114784701B (en) * | 2022-04-21 | 2023-07-25 | 中国电力科学研究院有限公司 | Autonomous navigation method, system, equipment and storage medium for live working of power distribution network |
CN115686014B (en) * | 2022-11-01 | 2023-08-29 | 广州城轨科技有限公司 | Subway inspection robot based on BIM model |
CN115828125B (en) * | 2022-11-17 | 2023-06-16 | 盐城工学院 | Information entropy feature-based weighted fuzzy clustering method and system |
CN116148614B (en) * | 2023-04-18 | 2023-06-30 | 江苏明月软件技术股份有限公司 | Cable partial discharge detection system and method based on unmanned mobile carrier |
CN116824481B (en) * | 2023-05-18 | 2024-04-09 | 国网信息通信产业集团有限公司北京分公司 | Substation inspection method and system based on image recognition |
CN117608401B (en) * | 2023-11-23 | 2024-08-13 | 北京理工大学 | Digital-body-separation-based robot remote interaction system and interaction method |
CN117557931B (en) * | 2024-01-11 | 2024-04-02 | 速度科技股份有限公司 | Planning method for meter optimal inspection point based on three-dimensional scene |
CN118211741B (en) * | 2024-05-21 | 2024-08-20 | 山东道万电气有限公司 | Intelligent scheduling management method for inspection robot based on multipath inspection data |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9463574B2 (en) * | 2012-03-01 | 2016-10-11 | Irobot Corporation | Mobile inspection robot |
CN109117718A (en) * | 2018-07-02 | 2019-01-01 | 东南大学 | Three-dimensional semantic map construction and storage method for road scenes
CN109816686A (en) * | 2019-01-15 | 2019-05-28 | 山东大学 | Object-instance-matching-based robot semantic SLAM method, processor and robot
CN110614638A (en) * | 2019-09-19 | 2019-12-27 | 国网山东省电力公司电力科学研究院 | Transformer substation inspection robot autonomous acquisition method and system |
CN111210518A (en) * | 2020-01-15 | 2020-05-29 | 西安交通大学 | Topological map generation method based on visual fusion landmark |
CN111462135A (en) * | 2020-03-31 | 2020-07-28 | 华东理工大学 | Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
CN111897332A (en) * | 2020-07-30 | 2020-11-06 | 国网智能科技股份有限公司 | Semantic intelligent substation robot humanoid inspection operation method and system |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11205119B2 (en) * | 2015-12-22 | 2021-12-21 | Applied Materials Israel Ltd. | Method of deep learning-based examination of a semiconductor specimen and system thereof |
CN106443387B (en) * | 2016-10-25 | 2019-03-08 | 广东电网有限责任公司珠海供电局 | Method, apparatus and system for controlling partial discharge detection by an inspection robot
CN106506955A (en) * | 2016-11-10 | 2017-03-15 | 国网江苏省电力公司南京供电公司 | GIS-map-based substation video inspection path planning method
CN108039084A (en) * | 2017-12-15 | 2018-05-15 | 郑州日产汽车有限公司 | Automotive visibility evaluation method and system based on virtual reality |
CN208520404U (en) * | 2018-04-24 | 2019-02-19 | 北京拓盛智联技术有限公司 | Intelligent inspection system
CN108724190A (en) * | 2018-06-27 | 2018-11-02 | 西安交通大学 | Industrial robot digital twin system simulation method and device
CN110737212B (en) * | 2018-07-18 | 2021-01-01 | 华为技术有限公司 | Unmanned aerial vehicle control system and method |
CN108983729A (en) * | 2018-08-15 | 2018-12-11 | 广州易行信息技术有限公司 | Digital twin method and system for an industrial production line
CN109325605A (en) * | 2018-11-06 | 2019-02-12 | 国网河南省电力公司驻马店供电公司 | Augmented reality (AR)-based power information and communication equipment room inspection platform and inspection method
CN109461211B (en) * | 2018-11-12 | 2021-01-26 | 南京人工智能高等研究院有限公司 | Semantic vector map construction method and device based on visual point cloud and electronic equipment |
CN109764869A (en) * | 2019-01-16 | 2019-05-17 | 中国矿业大学 | Autonomous inspection robot positioning and three-dimensional map construction method fusing a binocular camera and inertial navigation
CN110134148A (en) * | 2019-05-24 | 2019-08-16 | 中国南方电网有限责任公司超高压输电公司检修试验中心 | Method for tracking along a transmission line during helicopter inspection of power transmission lines
CN110189406B (en) * | 2019-05-31 | 2023-11-28 | 创新先进技术有限公司 | Image data labeling method and device |
CN110472671B (en) * | 2019-07-24 | 2023-05-12 | 西安工程大学 | Multi-stage-based fault data preprocessing method for oil immersed transformer |
CN110991227B (en) * | 2019-10-23 | 2023-06-30 | 东北大学 | Three-dimensional object identification and positioning method based on depth type residual error network |
CN110989594A (en) * | 2019-12-02 | 2020-04-10 | 交控科技股份有限公司 | Intelligent robot inspection system and method |
CN111063051A (en) * | 2019-12-20 | 2020-04-24 | 深圳市优必选科技股份有限公司 | Communication system of inspection robot |
- 2020
- 2020-07-30 CN CN202010752208.4A patent/CN111897332B/en active Active
- 2020-12-11 WO PCT/CN2020/135608 patent/WO2022021739A1/en active Application Filing
Cited By (141)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114615344B (en) * | 2022-02-08 | 2023-07-28 | 广东智有盈能源技术有限公司 | Intelligent protocol conversion method and device for electric power instrument |
CN114615344A (en) * | 2022-02-08 | 2022-06-10 | 广东智有盈能源技术有限公司 | Intelligent protocol conversion method and device for electric power instrument |
CN114545969A (en) * | 2022-02-23 | 2022-05-27 | 平顶山天安煤业股份有限公司 | Intelligent power grid inspection method and system based on digital twins |
CN114594770A (en) * | 2022-03-04 | 2022-06-07 | 深圳市千乘机器人有限公司 | Inspection method for inspection robot without stopping |
CN114594770B (en) * | 2022-03-04 | 2024-04-26 | 深圳市千乘机器人有限公司 | Inspection method for inspection robot without stopping |
CN114821032A (en) * | 2022-03-11 | 2022-07-29 | 山东大学 | Special target abnormal state detection and tracking method based on improved YOLOv5 network |
CN114677777A (en) * | 2022-03-16 | 2022-06-28 | 中车唐山机车车辆有限公司 | Equipment inspection method, inspection system and terminal equipment |
CN114677777B (en) * | 2022-03-16 | 2023-07-21 | 中车唐山机车车辆有限公司 | Equipment inspection method, inspection system and terminal equipment |
CN114618802B (en) * | 2022-03-17 | 2023-05-05 | 国网辽宁省电力有限公司电力科学研究院 | GIS cavity operation device and GIS cavity operation method |
CN114618802A (en) * | 2022-03-17 | 2022-06-14 | 国网辽宁省电力有限公司电力科学研究院 | GIS cavity operation device and GIS cavity operation method |
CN114779679A (en) * | 2022-03-23 | 2022-07-22 | 北京英智数联科技有限公司 | Augmented reality inspection system and method |
CN114500858A (en) * | 2022-03-28 | 2022-05-13 | 浙江大华技术股份有限公司 | Parameter determination method, device, equipment and medium for preset bits |
CN114500858B (en) * | 2022-03-28 | 2022-07-08 | 浙江大华技术股份有限公司 | Parameter determination method, device, equipment and medium for preset bits |
CN114474103B (en) * | 2022-03-28 | 2023-06-30 | 西安理工大学 | Distribution network cable corridor inspection method and equipment |
CN114474103A (en) * | 2022-03-28 | 2022-05-13 | 西安理工大学 | Distribution network cable corridor inspection method and equipment |
CN114661049A (en) * | 2022-03-29 | 2022-06-24 | 联想(北京)有限公司 | Inspection method, inspection device and computer readable medium |
CN114708395A (en) * | 2022-04-01 | 2022-07-05 | 东南大学 | Ammeter identification, positioning and three-dimensional mapping method for transformer substation inspection robot |
CN114700946A (en) * | 2022-04-15 | 2022-07-05 | 山东新一代信息产业技术研究院有限公司 | Equipment vibration frequency acquisition method based on inspection robot |
CN114862620A (en) * | 2022-04-29 | 2022-08-05 | 江苏中科云墨数字科技有限公司 | Intelligent substation management system based on digital twins |
CN114848155A (en) * | 2022-04-29 | 2022-08-05 | 电子科技大学 | Verification device for delay measurement of surgical robot |
CN114848155B (en) * | 2022-04-29 | 2023-04-25 | 电子科技大学 | Verification device for time delay measurement of surgical robot |
CN114926916A (en) * | 2022-05-10 | 2022-08-19 | 上海咪啰信息科技有限公司 | 5G UAV dynamic AI inspection system
CN114905512B (en) * | 2022-05-16 | 2024-05-14 | 安徽元古纪智能科技有限公司 | Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot |
CN114627119A (en) * | 2022-05-16 | 2022-06-14 | 山东通广电子有限公司 | Visual neural network-based appearance defect intelligent identification system and identification method |
CN114627119B (en) * | 2022-05-16 | 2022-08-02 | 山东通广电子有限公司 | Visual neural network-based appearance defect intelligent identification system and identification method |
CN114905512A (en) * | 2022-05-16 | 2022-08-16 | 安徽元古纪智能科技有限公司 | Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot |
CN114783188A (en) * | 2022-05-17 | 2022-07-22 | 阿波罗智联(北京)科技有限公司 | Inspection method and device |
CN114997359A (en) * | 2022-05-17 | 2022-09-02 | 哈尔滨工业大学 | Complete set of equipment for dike hazard inspection based on a bionic robot dog
CN115035260A (en) * | 2022-05-27 | 2022-09-09 | 哈尔滨工程大学 | Indoor mobile robot three-dimensional semantic map construction method |
CN115061490A (en) * | 2022-05-30 | 2022-09-16 | 广州中科云图智能科技有限公司 | Reservoir inspection method, device and equipment based on unmanned aerial vehicle and storage medium |
CN115061490B (en) * | 2022-05-30 | 2024-04-05 | 广州中科云图智能科技有限公司 | Unmanned aerial vehicle-based reservoir inspection method, unmanned aerial vehicle-based reservoir inspection device, unmanned aerial vehicle-based reservoir inspection equipment and storage medium |
CN114842570B (en) * | 2022-06-01 | 2024-05-31 | 国网安徽省电力有限公司铜陵供电公司 | Intelligent inspection system for aerial optical cable |
CN114842570A (en) * | 2022-06-01 | 2022-08-02 | 国网安徽省电力有限公司铜陵供电公司 | Intelligent inspection system for overhead optical cable |
CN114721403A (en) * | 2022-06-02 | 2022-07-08 | 中国海洋大学 | Automatic driving control method and device based on OpenCV and storage medium |
CN115278209A (en) * | 2022-06-13 | 2022-11-01 | 上海研鼎信息技术有限公司 | Camera test system based on intelligent walking robot |
CN115278209B (en) * | 2022-06-13 | 2024-09-27 | 上海研鼎信息技术有限公司 | Camera test system based on intelligent walking robot |
CN115118008A (en) * | 2022-06-15 | 2022-09-27 | 国网山东省电力公司梁山县供电公司 | Transformer substation intelligent robot operation method |
CN115101067A (en) * | 2022-06-16 | 2022-09-23 | 陈明华 | Blockchain-based smart grid voice system
CN115101067B (en) * | 2022-06-16 | 2024-04-16 | 陈明华 | Blockchain-based smart grid voice system
CN114995449A (en) * | 2022-06-21 | 2022-09-02 | 华能(广东)能源开发有限公司海门电厂 | Robot inspection design method and system based on electronic map |
CN115185268A (en) * | 2022-06-22 | 2022-10-14 | 国网山东省电力公司鱼台县供电公司 | Transformer substation inspection path planning method and system based on bilinear interpolation |
CN114842426B (en) * | 2022-07-06 | 2022-10-04 | 广东电网有限责任公司肇庆供电局 | Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting |
CN114842426A (en) * | 2022-07-06 | 2022-08-02 | 广东电网有限责任公司肇庆供电局 | Transformer substation equipment state monitoring method and system based on accurate alignment camera shooting |
CN115426668A (en) * | 2022-07-11 | 2022-12-02 | 浪潮通信信息系统有限公司 | Intelligent operation and maintenance system of base station |
CN115171237A (en) * | 2022-07-12 | 2022-10-11 | 国网河北省电力有限公司超高压分公司 | 3D imaging inspection recorder
CN115390581A (en) * | 2022-07-13 | 2022-11-25 | 国网江苏省电力有限公司兴化市供电分公司 | Unmanned aerial vehicle optimal cruise path planning system based on power equipment single line diagram |
CN115256333A (en) * | 2022-07-26 | 2022-11-01 | 国核信息科技有限公司 | Photovoltaic engineering intelligent installation robot and working method thereof |
CN115366118A (en) * | 2022-08-09 | 2022-11-22 | 广东机电职业技术学院 | Relay detection system and method based on robot and vision technology |
CN115431266A (en) * | 2022-08-24 | 2022-12-06 | 阿里巴巴达摩院(杭州)科技有限公司 | Inspection method, inspection device and inspection robot |
CN115313658B (en) * | 2022-08-27 | 2023-09-08 | 国网湖北省电力有限公司黄石供电公司 | Intelligent operation and maintenance system of digital twin transformer substation |
CN115313658A (en) * | 2022-08-27 | 2022-11-08 | 国网湖北省电力有限公司黄石供电公司 | Intelligent operation and maintenance system of digital twin transformer substation |
CN115562332A (en) * | 2022-09-01 | 2023-01-03 | 北京普利永华科技发展有限公司 | Efficient processing method and system for airborne recorded data of unmanned aerial vehicle |
CN115562332B (en) * | 2022-09-01 | 2023-05-16 | 北京普利永华科技发展有限公司 | Efficient processing method and system for airborne record data of unmanned aerial vehicle |
CN115661965A (en) * | 2022-09-06 | 2023-01-31 | 贵州博睿科讯科技发展有限公司 | Intelligent inspection system integrated with automatic airport for highway unmanned aerial vehicle |
CN115150559A (en) * | 2022-09-06 | 2022-10-04 | 国网天津市电力公司高压分公司 | Remote vision system with self-adjusting acquisition and computational compensation, and computational compensation method
CN115661965B (en) * | 2022-09-06 | 2024-01-12 | 贵州博睿科讯科技发展有限公司 | Highway unmanned aerial vehicle intelligence inspection system of integration automatic airport |
CN115200570A (en) * | 2022-09-15 | 2022-10-18 | 国网山东省电力公司费县供电公司 | Navigation equipment for power grid inspection and navigation method thereof |
CN115597659A (en) * | 2022-09-21 | 2023-01-13 | 山东锐翊电力工程有限公司 | Intelligent safety management and control method for transformer substation
CN115597659B (en) * | 2022-09-21 | 2023-04-14 | 山东锐翊电力工程有限公司 | Intelligent safety management and control method for transformer substation |
CN115610250A (en) * | 2022-11-03 | 2023-01-17 | 北京华商三优新能源科技有限公司 | Automatic charging equipment control method and system |
CN116233219A (en) * | 2022-11-04 | 2023-06-06 | 国电湖北电力有限公司鄂坪水电厂 | Inspection method and device based on personnel positioning algorithm |
CN116233219B (en) * | 2022-11-04 | 2024-04-30 | 国电湖北电力有限公司鄂坪水电厂 | Inspection method and device based on personnel positioning algorithm |
CN115685736A (en) * | 2022-11-04 | 2023-02-03 | 合肥工业大学 | Thermal-imaging and convolutional-neural-network-based wheeled inspection robot
CN115690923B (en) * | 2022-11-17 | 2024-02-02 | 深圳市谷奇创新科技有限公司 | Physical sign distributed monitoring method and system based on optical fiber sensor |
CN115690923A (en) * | 2022-11-17 | 2023-02-03 | 深圳市谷奇创新科技有限公司 | Sign distributed monitoring method and system based on optical fiber sensor |
CN115860296A (en) * | 2022-11-26 | 2023-03-28 | 宝钢工程技术集团有限公司 | Remote inspection method and system based on 3D road network planning |
CN115816450A (en) * | 2022-11-29 | 2023-03-21 | 国电商都县光伏发电有限公司 | Robot inspection control method |
CN115861855A (en) * | 2022-12-15 | 2023-03-28 | 福建亿山能源管理有限公司 | Operation and maintenance monitoring method and system for photovoltaic power station |
CN115861855B (en) * | 2022-12-15 | 2023-10-24 | 福建亿山能源管理有限公司 | Operation and maintenance monitoring method and system for photovoltaic power station |
CN116052300A (en) * | 2022-12-22 | 2023-05-02 | 清华大学 | Digital twinning-based power inspection system and method |
CN115639842A (en) * | 2022-12-23 | 2023-01-24 | 北京中飞艾维航空科技有限公司 | Inspection method and system using unmanned aerial vehicle |
CN115980062A (en) * | 2022-12-30 | 2023-04-18 | 南通诚友信息技术有限公司 | Industrial production line whole-process vision inspection method based on 5G |
CN115847446A (en) * | 2023-01-16 | 2023-03-28 | 泉州通维科技有限责任公司 | Inspection robot in bridge compartment beam |
CN116245230A (en) * | 2023-02-03 | 2023-06-09 | 南方电网调峰调频发电有限公司运行分公司 | Operation inspection and trend analysis method and system for power station equipment |
CN116245230B (en) * | 2023-02-03 | 2024-03-19 | 南方电网调峰调频发电有限公司运行分公司 | Operation inspection and trend analysis method and system for power station equipment |
CN116048865A (en) * | 2023-02-21 | 2023-05-02 | 海南电网有限责任公司信息通信分公司 | Automatic verification method for failure elimination verification under automatic operation and maintenance |
CN116048865B (en) * | 2023-02-21 | 2024-06-07 | 海南电网有限责任公司信息通信分公司 | Automatic verification method for failure elimination verification under automatic operation and maintenance |
CN115858714A (en) * | 2023-02-27 | 2023-03-28 | 国网江西省电力有限公司电力科学研究院 | Automatic modeling management system and method for collecting GIS data by unmanned aerial vehicle |
CN115858714B (en) * | 2023-02-27 | 2023-06-16 | 国网江西省电力有限公司电力科学研究院 | Unmanned aerial vehicle collected GIS data automatic modeling management system and method |
CN116225062A (en) * | 2023-03-14 | 2023-06-06 | 广州天勤数字科技有限公司 | Unmanned aerial vehicle navigation method applied to bridge inspection and unmanned aerial vehicle |
CN116225062B (en) * | 2023-03-14 | 2024-01-16 | 广州天勤数字科技有限公司 | Unmanned aerial vehicle navigation method applied to bridge inspection and unmanned aerial vehicle |
CN115979249B (en) * | 2023-03-20 | 2023-06-20 | 西安国智电子科技有限公司 | Navigation method and device of inspection robot |
CN115979249A (en) * | 2023-03-20 | 2023-04-18 | 西安国智电子科技有限公司 | Navigation method and device of inspection robot |
CN116071368A (en) * | 2023-04-07 | 2023-05-05 | 国网山西省电力公司电力科学研究院 | Insulator pollution multi-angle image detection and fineness analysis method and device |
CN116091952B (en) * | 2023-04-10 | 2023-06-30 | 江苏智绘空天技术研究院有限公司 | Ground-air integrated intelligent cloud control management system and method based on big data |
CN116091952A (en) * | 2023-04-10 | 2023-05-09 | 江苏智绘空天技术研究院有限公司 | Ground-air integrated intelligent cloud control management system and method based on big data |
CN116310185B (en) * | 2023-05-10 | 2023-09-05 | 江西丹巴赫机器人股份有限公司 | Three-dimensional reconstruction method for farmland field and intelligent agricultural robot thereof |
CN116310185A (en) * | 2023-05-10 | 2023-06-23 | 江西丹巴赫机器人股份有限公司 | Three-dimensional reconstruction method for farmland field and intelligent agricultural robot thereof |
CN116307638A (en) * | 2023-05-18 | 2023-06-23 | 华北科技学院(中国煤矿安全技术培训中心) | Coal mine gas inspection method |
CN116307638B (en) * | 2023-05-18 | 2023-10-10 | 华北科技学院(中国煤矿安全技术培训中心) | Coal mine gas inspection method |
CN116667531A (en) * | 2023-05-19 | 2023-08-29 | 国网江苏省电力有限公司泰州供电分公司 | Acousto-optic-electric collaborative inspection method and device based on digital twin transformer substation |
CN116700247B (en) * | 2023-05-30 | 2024-03-19 | 东莞市华复实业有限公司 | Intelligent cruising management method and system for household robot |
CN116700247A (en) * | 2023-05-30 | 2023-09-05 | 东莞市华复实业有限公司 | Intelligent cruising management method and system for household robot |
CN117008602A (en) * | 2023-06-02 | 2023-11-07 | 国网山东省电力公司邹城市供电公司 | Path planning method and system for inspection robot in transformer substation |
CN116520853A (en) * | 2023-06-08 | 2023-08-01 | 江苏商贸职业学院 | Agricultural inspection robot based on artificial intelligence technology |
CN116452648B (en) * | 2023-06-15 | 2023-09-22 | 武汉科技大学 | Point cloud registration method and system based on normal vector constraint correction |
CN116452648A (en) * | 2023-06-15 | 2023-07-18 | 武汉科技大学 | Point cloud registration method and system based on normal vector constraint correction |
CN116993676B (en) * | 2023-07-03 | 2024-05-07 | 中铁九局集团电务工程有限公司 | Subway rail fastener counting and positioning method based on deep learning |
CN116993676A (en) * | 2023-07-03 | 2023-11-03 | 中铁九局集团电务工程有限公司 | Subway rail fastener counting and positioning method based on deep learning |
CN116755451B (en) * | 2023-08-16 | 2023-11-07 | 泰山学院 | Intelligent patrol robot path planning method and system |
CN116755451A (en) * | 2023-08-16 | 2023-09-15 | 泰山学院 | Intelligent patrol robot path planning method and system |
CN117030974A (en) * | 2023-08-17 | 2023-11-10 | 天津大学 | Polluted site sampling robot and automatic sampling method |
CN117196210A (en) * | 2023-09-08 | 2023-12-08 | 广州方驰信息科技有限公司 | Big data management control method based on digital twin three-dimensional scene |
CN116918593A (en) * | 2023-09-14 | 2023-10-24 | 众芯汉创(江苏)科技有限公司 | Binocular vision unmanned image-based power transmission line channel tree obstacle monitoring system |
CN116918593B (en) * | 2023-09-14 | 2023-12-01 | 众芯汉创(江苏)科技有限公司 | Binocular vision unmanned image-based power transmission line channel tree obstacle monitoring system |
CN117196480A (en) * | 2023-09-19 | 2023-12-08 | 西湾智慧(广东)信息科技有限公司 | Intelligent logistics park management system based on digital twinning |
CN117196480B (en) * | 2023-09-19 | 2024-05-03 | 西湾智慧(广东)信息科技有限公司 | Intelligent logistics park management system based on digital twinning |
CN117128975B (en) * | 2023-10-24 | 2024-03-12 | 国网山东省电力公司济南供电公司 | Navigation method, system, medium and equipment for switch cabinet inspection operation robot |
CN117128975A (en) * | 2023-10-24 | 2023-11-28 | 国网山东省电力公司济南供电公司 | Navigation method, system, medium and equipment for switch cabinet inspection operation robot |
CN117119500A (en) * | 2023-10-25 | 2023-11-24 | 国网山东省电力公司东营供电公司 | Intelligent CPE (customer premise equipment) module-based inspection robot data transmission optimization method |
CN117119500B (en) * | 2023-10-25 | 2024-01-12 | 国网山东省电力公司东营供电公司 | Intelligent CPE (customer premise equipment) module-based inspection robot data transmission optimization method |
CN117146826A (en) * | 2023-10-26 | 2023-12-01 | 国网湖北省电力有限公司经济技术研究院 | Method and device for planning hidden danger inspection path of power transmission line |
CN117146826B (en) * | 2023-10-26 | 2024-01-02 | 国网湖北省电力有限公司经济技术研究院 | Method and device for planning hidden danger inspection path of power transmission line |
CN117213468A (en) * | 2023-11-02 | 2023-12-12 | 北京亮亮视野科技有限公司 | Method and device for inspecting outside of airplane and electronic equipment |
CN117213468B (en) * | 2023-11-02 | 2024-04-05 | 北京亮亮视野科技有限公司 | Method and device for inspecting outside of airplane and electronic equipment |
CN117270545B (en) * | 2023-11-21 | 2024-03-29 | 合肥工业大学 | Convolutional neural network-based substation wheel type inspection robot and method |
CN117270545A (en) * | 2023-11-21 | 2023-12-22 | 合肥工业大学 | Convolutional neural network-based substation wheel type inspection robot and method |
CN117428774A (en) * | 2023-11-23 | 2024-01-23 | 中国船舶集团有限公司第七一六研究所 | Industrial robot control method and system for ship inspection |
CN117607636A (en) * | 2023-11-30 | 2024-02-27 | 华北电力大学 | Multispectral fusion sensing and storing calculation integrated high-voltage discharge detection method |
CN117607636B (en) * | 2023-11-30 | 2024-05-14 | 华北电力大学 | Multispectral fusion sensing and storing calculation integrated high-voltage discharge detection method |
CN117782088B (en) * | 2023-12-13 | 2024-07-19 | 深圳大学 | Collaborative target map building positioning navigation method |
CN117782088A (en) * | 2023-12-13 | 2024-03-29 | 深圳大学 | Collaborative target map building positioning navigation method |
CN117637136A (en) * | 2023-12-22 | 2024-03-01 | 南京天溯自动化控制系统有限公司 | Method and device for automatically inspecting medical equipment by robot |
CN118092654A (en) * | 2024-03-07 | 2024-05-28 | 瑞丰宝丽(北京)科技有限公司 | Virtual reality application method, system, terminal and storage medium for operation and maintenance industry |
CN118275786A (en) * | 2024-03-27 | 2024-07-02 | 云南电投绿能科技有限公司 | Operation monitoring method, device and equipment of power equipment and storage medium |
CN117944058B (en) * | 2024-03-27 | 2024-05-28 | 韦氏(苏州)医疗科技有限公司 | Scheduling method and system of self-propelled functional mechanical arm and mechanical arm |
CN117944058A (en) * | 2024-03-27 | 2024-04-30 | 韦氏(苏州)医疗科技有限公司 | Scheduling method and system of self-propelled functional mechanical arm and mechanical arm |
CN117970932A (en) * | 2024-04-01 | 2024-05-03 | 中数智科(杭州)科技有限公司 | Task allocation method for collaborative inspection of multiple robots of rail train |
CN117970932B (en) * | 2024-04-01 | 2024-06-07 | 中数智科(杭州)科技有限公司 | Task allocation method for collaborative inspection of multiple robots of rail train |
CN117984333A (en) * | 2024-04-03 | 2024-05-07 | 广东电网有限责任公司东莞供电局 | Inspection method, device and equipment for oil immersed transformer and storage medium |
CN118093706A (en) * | 2024-04-25 | 2024-05-28 | 国网瑞嘉(天津)智能机器人有限公司 | Distribution network live working robot, system and working method |
CN118115882A (en) * | 2024-04-26 | 2024-05-31 | 山东省农业机械科学研究院 | Agricultural robot inspection identification method based on multi-source perception fusion |
CN118089794A (en) * | 2024-04-26 | 2024-05-28 | 北京航宇测通电子科技有限公司 | Simulation method for self-adaptive multi-information integrated navigation based on multi-source information |
CN118181302A (en) * | 2024-05-14 | 2024-06-14 | 长春中医药大学 | Traditional Chinese medicine grabbing control management system based on artificial intelligence |
CN118226861A (en) * | 2024-05-24 | 2024-06-21 | 广州市城市排水有限公司 | Underwater intelligent robot cruise control method and system based on intelligent algorithm |
CN118274845A (en) * | 2024-05-29 | 2024-07-02 | 天津地铁智慧科技有限公司 | Subway station robot inspection system and inspection method |
CN118351157A (en) * | 2024-06-18 | 2024-07-16 | 山东广瑞电力科技有限公司 | Blind spot-free inspection method and system based on multi-perception equipment combination |
CN118372258A (en) * | 2024-06-21 | 2024-07-23 | 西湖大学 | Distributed vision cluster robot system |
CN118396125A (en) * | 2024-06-27 | 2024-07-26 | 杭州海康威视数字技术股份有限公司 | Intelligent store patrol method and device, storage medium and electronic equipment |
CN118521433A (en) * | 2024-07-24 | 2024-08-20 | 东方电子股份有限公司 | Knowledge-graph-based digital twin early warning decision method and system for transformer substation |
CN118649976A (en) * | 2024-08-20 | 2024-09-17 | 通用技术集团工程设计有限公司 | Unmanned intelligent cleaning method and system for photovoltaic panel based on improved YOLOv model |
Also Published As
Publication number | Publication date |
---|---|
CN111897332B (en) | 2022-10-11 |
CN111897332A (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022021739A1 (en) | Humanoid inspection operation method and system for semantic intelligent substation robot | |
CN111958592B (en) | Image semantic analysis system and method for transformer substation inspection robot | |
CN111958591B (en) | Autonomous inspection method and system for semantic intelligent substation inspection robot | |
CN112650255B (en) | Robot positioning navigation method based on visual and laser radar information fusion | |
WO2020192000A1 (en) | Livestock and poultry information perception robot based on autonomous navigation, and map building method | |
CN111968262B (en) | Semantic intelligent substation inspection operation robot navigation system and method | |
CN103389699B (en) | Based on the supervisory control of robot of distributed intelligence Monitoring and Controlling node and the operation method of autonomous system | |
CN108073167A (en) | A kind of positioning and air navigation aid based on depth camera and laser radar | |
CN105222760A (en) | The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method | |
CN111813130A (en) | Autonomous navigation obstacle avoidance system of intelligent patrol robot of power transmission and transformation station | |
Ding et al. | Research on computer vision enhancement in intelligent robot based on machine learning and deep learning | |
CN112734765A (en) | Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion | |
CN115200588A (en) | SLAM autonomous navigation method and device for mobile robot | |
CN214520204U (en) | Port area intelligent inspection robot based on depth camera and laser radar | |
CN111958593B (en) | Vision servo method and system for inspection operation robot of semantic intelligent substation | |
CN118020038A (en) | Two-wheeled self-balancing robot | |
CN114527763A (en) | Intelligent inspection system and method based on target detection and SLAM composition | |
CN116352722A (en) | Multi-sensor fused mine inspection rescue robot and control method thereof | |
CN115685736A (en) | Wheeled robot of patrolling and examining based on thermal imaging and convolution neural network | |
Li et al. | Depth camera based remote three-dimensional reconstruction using incremental point cloud compression | |
Wang et al. | Micro aerial vehicle navigation with visual-inertial integration aided by structured light | |
CN116957360A (en) | Space observation and reconstruction method and system based on unmanned aerial vehicle | |
CN114290313A (en) | Inspection robot, automatic navigation inspection robot system and control method | |
CN113510691A (en) | Intelligent vision system of plastering robot | |
Zhang et al. | RGBD Navigation: A 2D navigation framework for visual SLAM with pose compensation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20946850 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20946850 Country of ref document: EP Kind code of ref document: A1 |