CN112528979A - Transformer substation inspection robot obstacle distinguishing method and system - Google Patents


Info

Publication number
CN112528979A
Authority
CN
China
Prior art keywords: point cloud, detected, cloud data, aerial view, inspection robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110182326.0A
Other languages
Chinese (zh)
Other versions
CN112528979B (en)
Inventor
朱明
张葛祥
杨强
王恒
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN202110182326.0A priority Critical patent/CN112528979B/en
Publication of CN112528979A publication Critical patent/CN112528979A/en
Application granted granted Critical
Publication of CN112528979B publication Critical patent/CN112528979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V20/00 Scenes; Scene-specific elements
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects


Abstract

The invention provides an obstacle discrimination method and system for a transformer substation inspection robot, wherein the method comprises the following steps. S1: acquiring raw 3D lidar point cloud data around the inspection robot at the substation site, extracting part of the raw point cloud data according to a region of interest (ROI), converting the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view, inputting the bird's-eye view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model. S2: acquiring point cloud data to be detected in real time and converting it into a bird's-eye view of the point cloud to be detected. S3: normalizing the bird's-eye view of the point cloud to be detected and inputting it into the obstacle discrimination model to obtain the category information of the obstacle. The method is not limited by lighting conditions and enables the inspection robot to work at night, making all-weather inspection by the substation inspection robot possible.

Description

Transformer substation inspection robot obstacle distinguishing method and system
Technical Field
The invention belongs to the technical field of substation power inspection equipment, and particularly relates to an obstacle discrimination method and system for a substation inspection robot.
Background
The smart grid is the trend and direction of power grid development; the smart substation is the power transformation link of the smart grid and an important foundation and support of a strong smart grid. A substation contains numerous high-voltage devices in a complex environment, and regular inspection is required to guarantee power safety. At present most substations still rely on manual inspection. With the continuous development of robotics, more and more substations have begun to use inspection robots; replacing traditional manual work with automation reduces labor costs. However, substation roads are narrow and obstacles frequently hinder the robot, posing a new challenge to the inspection task.
With the rise of deep learning in recent years, obstacle discrimination from two-dimensional images can be applied to daily inspection. A substation inspection robot usually carries multiple sensors, such as a lidar, an ultrasonic sensor and a visible-light sensor, and most inspection robots discriminate obstacles with the visible-light sensor. However, a visible-light sensor depends on lighting conditions: at night the light is too dim for it to discriminate obstacles, so it cannot meet the requirement of all-weather inspection.
Disclosure of Invention
The invention aims to provide an obstacle discrimination method for a transformer substation inspection robot that solves the following problem: the visible-light image recognition module carried by existing substation inspection robots is easily affected by illumination during inspection, which lowers obstacle discrimination accuracy or causes the working mechanism to fail.
In order to achieve this purpose, the technical scheme of the invention is as follows. An obstacle discrimination method for a transformer substation inspection robot comprises the following steps:
s1: acquiring raw 3D lidar point cloud data around the inspection robot at the substation site, extracting part of the raw point cloud data according to a region of interest (ROI), converting the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view, inputting the bird's-eye view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model;
the method comprises the following steps of inputting an original point cloud aerial view into a deep convolution neural network:
s11: loading an SSD network, and adding a DenseNet dense block in a VGG-16 backbone network in the SSD network;
s12: meanwhile, a prediction module of the SSD is improved by replacing a feature extraction structure, designing a multi-scale fusion module and residual prediction, and then a residual block ResBlock is added to each prediction layer of the improved prediction module of the SSD;
s13: inputting the two-dimensional raw point cloud bird's-eye view into the deep convolutional neural network;
s2: acquiring point cloud data to be detected in real time and converting it into a bird's-eye view of the point cloud to be detected;
s3: normalizing the bird's-eye view of the point cloud to be detected and inputting it into the obstacle discrimination model to obtain the category information of the obstacle.
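Step S1 above trains the deep convolutional neural network with stochastic gradient descent. As a minimal illustration only (the patent does not give training code or hyperparameters, so the learning rate and flat weight list here are assumptions), one SGD update moves each weight against its gradient:

```python
def sgd_step(weights, grads, lr=0.01):
    """One stochastic gradient descent update: w <- w - lr * grad.

    `weights` and `grads` are flat lists of floats standing in for the
    network parameters; `lr` is an assumed learning rate.
    """
    return [w - lr * g for w, g in zip(weights, grads)]
```

In practice the gradients would come from backpropagation through the deep convolutional network over mini-batches of bird's-eye-view samples.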
Further, the step S2 specifically includes:
selecting a partial area in front of the inspection robot as the target discrimination area, and acquiring the 3D lidar point cloud data of the target discrimination area in real time;
extracting the ROI, then processing the 3D lidar point cloud data and converting it into a two-dimensional bird's-eye view of the point cloud to be detected.
Further, the conversion from the 3D lidar coordinate system to the two-dimensional image coordinate system is completed by mapping the Z-axis height values of the 3D lidar point cloud data to gray values of 0-255, yielding the bird's-eye view of the point cloud to be detected.
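The height-to-gray conversion can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the height bounds, grid resolution, ROI ranges, and the keep-the-highest-point rule are all assumptions; the patent only specifies that Z-axis heights map to gray values 0-255.

```python
def z_to_gray(z, z_min=-2.0, z_max=2.0):
    """Map a lidar Z (height) value to a 0-255 gray level.

    z_min/z_max are assumed height bounds of the ROI; values outside
    the range are clipped before scaling.
    """
    z = max(z_min, min(z, z_max))
    return int(round((z - z_min) / (z_max - z_min) * 255))

def points_to_bev(points, res=0.05, x_range=(0.0, 10.0), y_range=(-5.0, 5.0),
                  z_min=-2.0, z_max=2.0):
    """Rasterise (x, y, z) lidar points into a top-down gray image.

    Each grid cell keeps the gray value of the highest point falling in
    it. Returns a sparse bird's-eye view as {(row, col): gray}.
    """
    bev = {}
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # outside the region of interest
        row = int((x_range[1] - x) / res)   # forward distance -> image row
        col = int((y - y_range[0]) / res)   # lateral offset   -> image column
        g = z_to_gray(z, z_min, z_max)
        if bev.get((row, col), -1) < g:
            bev[(row, col)] = g
    return bev
```

A dense grayscale image for the network input would then be built by writing these cell values into a zero-initialized array of the chosen grid size.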
Further, before the normalization in step S3, duplicate samples in the bird's-eye view of the point cloud to be detected are removed by deduplication processing.
Another aim of the invention is to provide a substation inspection robot obstacle discrimination system that is not limited by lighting conditions and enables the inspection robot to work at night.
In order to achieve this purpose, the technical scheme of the invention is as follows. A substation inspection robot obstacle discrimination system comprises:
an obstacle discrimination model building module, used for acquiring raw 3D lidar point cloud data around the inspection robot at the substation site, extracting part of the raw point cloud data according to the ROI, converting the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view, inputting the bird's-eye view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model; the obstacle discrimination model building module further comprises an SSD network optimization unit, which adds DenseNet dense blocks to the VGG-16 backbone network of the SSD, improves the SSD prediction module by replacing the feature extraction structure and designing a multi-scale fusion module and residual prediction, and adds a residual block (ResBlock) to each prediction layer of the improved prediction module;
the to-be-detected data acquisition module is used for acquiring point cloud data to be detected in real time;
the data conversion module is connected with the to-be-detected data acquisition module and used for converting the to-be-detected point cloud data into a to-be-detected point cloud aerial view;
and the barrier distinguishing module is connected with the data conversion module and the barrier distinguishing model establishing module and is used for receiving the barrier distinguishing model established by the barrier distinguishing model establishing module and the aerial view of the point cloud to be detected, normalizing the aerial view of the point cloud to be detected and inputting the normalized aerial view into the barrier distinguishing model to obtain the category information of the barrier.
Furthermore, the to-be-detected data acquisition module comprises a to-be-detected area selection unit and a data acquisition unit;
the to-be-detected area selection unit selects a partial area in front of the inspection robot as the target discrimination area;
the data acquisition unit acquires the 3D lidar point cloud data and the corresponding camera photo in real time according to the target discrimination area selected by the to-be-detected area selection unit.
Further, the data conversion module receives the 3D lidar point cloud data, extracts the ROI, then maps the Z-axis height values of the 3D lidar point cloud data to gray values of 0-255, completing the conversion from the 3D lidar coordinate system to the two-dimensional image coordinate system and obtaining the bird's-eye view of the point cloud to be detected.
Further, the system comprises a terminal, used for receiving the obstacle category information from the obstacle discrimination module and storing and displaying it.
Compared with the prior art, the invention has the following advantages:
(1) The obstacle discrimination method and system for the substation inspection robot obtain obstacle information around the robot by acquiring the 3D lidar point cloud data sent by the terminal device, processing it in the point cloud processing and conversion module, and inputting the processed point cloud into the obstacle discrimination model trained in advance in the model training module. The three-dimensional point cloud data are converted into a two-dimensional point cloud bird's-eye view and then discriminated by deep-learning-based image recognition, and the obstacle information in the bird's-eye view is sent to the terminal device through the information sending module. This realizes automatic identification of the obstacles around the inspection robot, replaces traditional manual work with automation, reduces labor cost and human error, and improves the inspection accuracy and working efficiency of the intelligent inspection robot.
(2) The discrimination method requires only 3D lidar point cloud data and is not limited by lighting conditions. Because obstacles are discriminated solely from 3D lidar point clouds, the method solves the problem that the visible-light image recognition module carried by existing substation inspection robots is easily affected by illumination during inspection, which lowers discrimination accuracy or causes the working mechanism to fail. It thus realizes obstacle discrimination at night and makes all-weather inspection by the substation inspection robot possible.
(3) The invention optimizes and improves the obstacle discrimination target detection network:
1. Backbone network improvement: the SSD backbone is optimized with a modified DenseNet. Compared with the relatively shallow VGG-16 backbone of the original SSD, this improves the network's feature extraction capability, especially for small targets;
2. Prediction module improvement: replacing the feature extraction structure and designing a multi-scale fusion module and a residual prediction module strengthens feature fusion between different layers; adding a residual block (ResBlock) to each prediction layer keeps the gradient of the loss function from flowing directly into the backbone network and effectively reduces computation cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive exercise.
Fig. 1 is a schematic structural diagram of a barrier discrimination system of a transformer substation inspection robot according to the present invention;
FIG. 2 is a flow chart of a method for judging obstacles of a transformer substation inspection robot according to the invention;
FIG. 3 is a block diagram of an embodiment of a deep learning neural network of the present invention;
fig. 4 is a schematic diagram of the conversion from the 3D laser radar point cloud data coordinate system to the two-dimensional to-be-detected point cloud aerial view coordinate system in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The examples are given for the purpose of better illustration of the invention, but the invention is not limited to the examples. Therefore, those skilled in the art should make insubstantial modifications and adaptations to the embodiments of the present invention in light of the above teachings and remain within the scope of the invention.
It should be noted that the subscripts or superscripts of the formula or representative formula in the present invention are merely used for distinction, unless otherwise specified.
Example 1
Referring to fig. 1, a schematic structural diagram of the obstacle discrimination system of the substation inspection robot in this embodiment; specifically, the system comprises:
an obstacle discrimination model building module 1, used for acquiring raw 3D lidar point cloud data around the inspection robot at the substation site, extracting part of the raw point cloud data according to a region of interest (ROI), converting the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view, inputting the bird's-eye view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model;
in this embodiment, the obstacle discrimination model building module 1 further comprises an SSD network optimization unit, configured to add DenseNet dense blocks to the VGG-16 backbone network of the SSD, to improve the SSD prediction module by replacing the feature extraction structure and designing a prediction module comprising multi-scale fusion and residual prediction, and to add a residual block (ResBlock) to each prediction layer of the improved prediction module;
specifically, the obstacle discrimination model building module 1 acquires raw 3D lidar point cloud data around the inspection robot at the substation site and converts the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view. The raw bird's-eye-view samples are then preprocessed to match the input of the deep learning neural network: the obtained bird's-eye view is a single-channel gray-scale image, and it must be converted into a three-channel RGB image before it can be input into the deep convolutional neural network; for example, a gray value of 60 becomes an RGB pixel with three channels of 60, 60 and 60. Samples are then randomly drawn from the bird's-eye-view sample set to form a training set and a test set; the training set is input into the deep convolutional neural network, the network is trained with a stochastic gradient descent algorithm, and the obstacle discrimination model is obtained after verifying the effect on the test set;
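The single-channel-to-three-channel step above is a simple replication of the gray value into R, G and B. A minimal sketch (the list-of-lists image representation is an assumption for illustration; real code would use an image array library):

```python
def gray_to_rgb(gray_image):
    """Expand a single-channel gray image (list of rows of ints in 0-255)
    into a three-channel image by copying each gray value into R, G, B,
    so it matches the 3-channel input expected by the network."""
    return [[(g, g, g) for g in row] for row in gray_image]
```

For instance, the gray value 60 mentioned in the text becomes the pixel (60, 60, 60).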
the to-be-detected data acquisition module 2 is used for acquiring point cloud data to be detected in real time;
further, the to-be-detected data acquisition module 2 comprises a to-be-detected area selection unit and a data acquisition unit;
the to-be-detected area selection unit selects a partial area in front of the inspection robot as the target discrimination area;
the data acquisition unit acquires, in real time, the 3D lidar point cloud data and the camera photo corresponding to each frame of point cloud according to the target discrimination area selected by the to-be-detected area selection unit.
The data conversion module 3, connected with the to-be-detected data acquisition module 2, converts the point cloud data to be detected into a bird's-eye view of the point cloud to be detected;
the data conversion module 3 in this embodiment receives the 3D lidar point cloud data acquired by the data acquisition unit, first performs ROI extraction, then maps the Z-axis height values of the 3D lidar point cloud data to gray values of 0-255, completing the conversion from the 3D lidar coordinate system to the two-dimensional image coordinate system and obtaining the bird's-eye view of the point cloud to be detected.
The obstacle discrimination module 4, connected with the data conversion module 3 and the obstacle discrimination model building module 1, receives the obstacle discrimination model built by the model building module and the bird's-eye view of the point cloud to be detected, normalizes the bird's-eye view, and inputs it into the obstacle discrimination model to obtain the category information of the obstacle. The system also comprises a terminal 5, used for receiving the obstacle category information from the obstacle discrimination module and storing and displaying it.
The terminal 5 in this embodiment may be a server or a mobile terminal of the substation inspection robot. The server receives the obstacle category information and performs operations such as changing the route and monitoring according to the obstacle; the mobile terminal prompts its holder with the obstacle information, and the holder performs the corresponding operations.
Example 2
Based on the system of embodiment 1, the embodiment discloses a method for judging obstacles of a transformer substation inspection robot, and with reference to fig. 2, the method includes the following steps:
s1: acquiring raw 3D lidar point cloud data around the inspection robot at the substation site, extracting part of the raw point cloud data according to a region of interest (ROI), converting the three-dimensional raw point cloud data into a two-dimensional raw point cloud bird's-eye view, inputting the bird's-eye view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model;
specifically, in this embodiment the substation inspection robot acquires 3D lidar point cloud data of different substation scenes together with the camera photo corresponding to each frame of point cloud. The three-dimensional raw point cloud data are converted into two-dimensional raw point cloud bird's-eye views according to the selected region of interest. With reference to the camera photos, the three main types of obstacles on substation roads, namely inspection robots, pedestrians and stones, are annotated in the raw bird's-eye views to build a bird's-eye-view library of substation scene point clouds. The bird's-eye-view samples are then preprocessed, converting them from a single channel to three channels to match the input of the deep learning neural network, and random samples are drawn from the sample set to form a training set and a test set;
preferably, this embodiment further optimizes the SSD target detection network:
s11: loading an SSD network, and adding a DenseNet dense block in a VGG-16 backbone network in the SSD network;
referring to fig. 3, the structure of the optimized deep learning neural network for obstacle discrimination in this embodiment; the backbone network uses 4 Dense Blocks to extract features, and a 7 × 7 convolutional layer and 3 × 3 max pooling are placed before the first Dense Block, yielding a 75 × 75 × 64 output feature map;
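The spatial size in front of the first Dense Block can be checked with the standard convolution output-size formula. Assuming a 300 × 300 input (the common SSD input size, an assumption here since the patent does not state it) and stride-2 operations with conventional padding:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Assumed 300x300 input through the stem described in the text:
s = conv_out(300, kernel=7, stride=2, pad=3)  # 7x7 conv, stride 2 -> 150
s = conv_out(s, kernel=3, stride=2, pad=1)    # 3x3 max pool, stride 2 -> 75
```

This yields the 75 × 75 spatial size (with 64 channels from the stem convolution) entering the Dense Blocks.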
s12: improving the prediction module of the SSD by replacing the feature extraction structure and designing a prediction module comprising a multi-scale fusion module and a residual prediction module, then adding a residual block (ResBlock) to each prediction layer of the improved SSD prediction module;
in this embodiment, discrimination precision and efficiency are improved by optimizing the backbone network and prediction module of the SSD target detection network. Backbone improvement: the SSD backbone is optimized with a modified DenseNet, adding 4 Dense Blocks to extract features. The composite function of the original DenseNet is used, which contains three successive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a convolution (Conv). Compared with the relatively shallow VGG-16 backbone of the original SSD, this improves the network's feature extraction capability, especially for small targets. Prediction module improvement: replacing the feature extraction structure and redesigning the front-end prediction network, comprising a multi-scale fusion module and a residual prediction module, strengthens feature fusion and reuse between different layers. Adding a residual block (ResBlock) to each prediction layer keeps the gradient of the loss function from flowing directly into the backbone network;
preferably, the residual prediction block applies a 1 × 1 convolution kernel to predict the category scores and bounding-box offsets; using ResBlock reduces computation cost while improving detection accuracy.
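A 1 × 1 convolution is simply the same small linear map applied independently at every spatial position. The sketch below illustrates that idea in plain Python (an illustration of the operation only, not the patent's network code; real layers would also carry a bias and run on tensors):

```python
def conv1x1(feature_map, weights):
    """Apply a 1x1 convolution to a feature map given as
    rows x cols x in_channels nested lists: each position's channel
    vector is multiplied by the same (out_ch x in_ch) weight matrix,
    which is how the prediction layer maps features to class scores
    and box offsets without mixing spatial positions."""
    out = []
    for row in feature_map:
        out_row = []
        for pixel in row:  # pixel: list of in_ch values
            out_row.append([sum(w * x for w, x in zip(wrow, pixel))
                            for wrow in weights])
        out.append(out_row)
    return out
```

Because the kernel is 1 × 1, the spatial size of the prediction layer's output equals its input, and the cost is one small matrix multiply per position.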
The obstacle discrimination model is then verified with the test set. Extracting the partial point cloud data in front of the robot through the scene ROI raises the overall pixel proportion of the obstacle in the raw bird's-eye view and effectively improves recognition precision.
S13: inputting the original point cloud aerial view into a deep convolution neural network;
s2: acquiring point cloud data to be detected in real time, and converting the point cloud data to be detected into a bird's-eye view of the point cloud to be detected;
in the embodiment, a partial area in front of the inspection robot is selected as a target distinguishing area, and 3D laser radar point cloud data and a corresponding camera photo of the target distinguishing area are obtained in real time; then, ROI extraction is carried out on the 3D laser radar point cloud data, the gray value of 0-255 is corresponding to the Z-axis height value of the 3D laser radar point cloud data, a principle diagram for converting a 3D laser radar coordinate system into a two-dimensional image coordinate system in the figure 4 can be referred, and the 3D laser radar point cloud data are processed and converted into a two-dimensional to-be-detected point cloud aerial view;
s3: and inputting the aerial view of the point cloud to be detected subjected to normalization processing into the obstacle discrimination model to obtain the category information of the obstacle.
In this embodiment, the bird's-eye view of the point cloud to be detected from step S2 first undergoes deduplication to remove duplicate samples, then normalization to adjust its size so that it matches the input parameters of the obstacle discrimination model; it is finally input into the obstacle discrimination model obtained in step S1, yielding the category information of the obstacle.
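The deduplication and size normalization described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: images are lists of rows, "duplicate" means pixel-for-pixel identical, and nearest-neighbour interpolation stands in for whatever resizing the implementation actually uses.

```python
def deduplicate(samples):
    """Drop exact duplicate bird's-eye-view samples, keeping the first
    occurrence of each distinct image."""
    seen, unique = set(), []
    for img in samples:
        key = tuple(tuple(row) for row in img)  # hashable image fingerprint
        if key not in seen:
            seen.add(key)
            unique.append(img)
    return unique

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize of a 2D gray image so the bird's-eye
    view matches the input size expected by the discrimination model."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```

After these two steps each remaining image has the model's expected height and width and can be fed to the obstacle discrimination model.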
With the obstacle discrimination method for the substation inspection robot, point cloud data of the substation scene are obtained through the 3D lidar and camera data acquisition module, processed in the point cloud processing and conversion module, and input into the obstacle discrimination model trained in advance in the model training module to obtain the obstacle category information. The three-dimensional point cloud data are converted into a two-dimensional point cloud bird's-eye view, the obstacle information in the bird's-eye view is discriminated with deep-learning-based image recognition, and the result is sent to the terminal device through the information sending module. This realizes automatic recognition of the obstacles around the inspection robot, replaces traditional manual work with automation, reduces labor cost and human error, and improves the inspection accuracy and working efficiency of the intelligent inspection robot.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method for distinguishing obstacles of a transformer substation inspection robot is characterized by comprising the following steps:
s1: acquiring 3D laser radar original point cloud data around the inspection robot in the transformer substation scene, extracting partial original point cloud data according to an ROI (region of interest), converting the three-dimensional original point cloud data into a two-dimensional original point cloud aerial view, inputting the two-dimensional original point cloud aerial view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model;
wherein inputting the two-dimensional original point cloud aerial view into the deep convolutional neural network specifically comprises the following steps:
s11: loading an SSD network, and adding a DenseNet dense block in a VGG-16 backbone network in the SSD network;
s12: meanwhile, improving the prediction module of the SSD by replacing the feature extraction structure, designing a multi-scale fusion module, and adding residual prediction, and then adding a residual block ResBlock to each prediction layer of the improved SSD prediction module;
s13: inputting the two-dimensional original point cloud aerial view into a deep convolution neural network;
s2: acquiring point cloud data to be detected in real time, and converting the point cloud data to be detected into a bird's-eye view of the point cloud to be detected;
s3: and inputting the normalized aerial view of the point cloud to be detected into the obstacle discrimination model to obtain the category information of the obstacle.
2. The substation inspection robot obstacle determination method according to claim 1, wherein the step S2 specifically includes:
selecting a partial area in front of the inspection robot as a target discrimination area, and acquiring 3D laser radar point cloud data of the target discrimination area in real time;
and extracting the ROI, processing the point cloud data of the 3D laser radar and converting the point cloud data into a two-dimensional aerial view of the point cloud to be detected.
3. The substation inspection robot obstacle distinguishing method according to claim 2, wherein the conversion from the 3D laser radar coordinate system to the two-dimensional image coordinate system is completed by mapping the Z-axis height values of the 3D laser radar point cloud data to gray values of 0-255, thereby obtaining the aerial view of the point cloud to be detected.
4. The substation inspection robot obstacle distinguishing method according to claim 1, wherein before the normalization processing of the aerial view of the point cloud to be detected in step S3, deduplication processing is performed to remove duplicate samples from the aerial view of the point cloud to be detected.
5. A transformer substation inspection robot obstacle distinguishing system, characterized by comprising:
the obstacle distinguishing model establishing module is used for acquiring 3D laser radar original point cloud data around the inspection robot in the transformer substation scene, extracting partial original point cloud data according to the ROI area, converting the three-dimensional original point cloud data into a two-dimensional original point cloud aerial view, inputting the two-dimensional original point cloud aerial view into a deep convolutional neural network, and training the deep convolutional neural network model with a stochastic gradient descent algorithm to obtain an obstacle discrimination model; the obstacle distinguishing model establishing module further comprises an SSD network optimizing unit, which is used for adding a DenseNet dense block to the VGG-16 backbone network in the SSD network, improving the prediction module of the SSD by replacing the feature extraction structure, designing a multi-scale fusion module and residual prediction, and adding a residual block ResBlock to each prediction layer of the improved SSD prediction module;
the to-be-detected data acquisition module is used for acquiring point cloud data to be detected in real time;
the data conversion module is connected with the to-be-detected data acquisition module and used for converting the to-be-detected point cloud data into a to-be-detected point cloud aerial view;
and the obstacle distinguishing module is connected with the data conversion module and the obstacle distinguishing model establishing module, and is used for receiving the obstacle discrimination model established by the obstacle distinguishing model establishing module and the aerial view of the point cloud to be detected, normalizing the aerial view of the point cloud to be detected, and inputting it into the obstacle discrimination model to obtain the category information of the obstacle.
6. The substation inspection robot obstacle discrimination system according to claim 5, wherein the data acquisition module to be tested further includes a region to be tested selection unit and a data acquisition unit;
the to-be-detected area selection unit is used for selecting a partial area in front of the inspection robot as the target discrimination area;
and the data acquisition unit acquires the 3D laser radar point cloud data and the corresponding camera picture in real time according to the target discrimination area selected by the to-be-detected area selection unit.
7. The substation inspection robot obstacle discrimination system according to claim 6, wherein the data conversion module receives the 3D lidar point cloud data, performs ROI extraction on the point cloud data, then maps the Z-axis height values of the 3D lidar point cloud data to gray values of 0-255, completing the conversion from the 3D lidar coordinate system to the two-dimensional image coordinate system and obtaining the aerial view of the point cloud to be detected.
8. The substation inspection robot obstacle distinguishing system according to claim 5, further comprising a terminal for receiving the obstacle category information of the obstacle distinguishing module and storing and displaying the obstacle category information.
CN202110182326.0A 2021-02-10 2021-02-10 Transformer substation inspection robot obstacle distinguishing method and system Active CN112528979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110182326.0A CN112528979B (en) 2021-02-10 2021-02-10 Transformer substation inspection robot obstacle distinguishing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110182326.0A CN112528979B (en) 2021-02-10 2021-02-10 Transformer substation inspection robot obstacle distinguishing method and system

Publications (2)

Publication Number Publication Date
CN112528979A true CN112528979A (en) 2021-03-19
CN112528979B CN112528979B (en) 2021-05-11

Family

ID=74975678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110182326.0A Active CN112528979B (en) 2021-02-10 2021-02-10 Transformer substation inspection robot obstacle distinguishing method and system

Country Status (1)

Country Link
CN (1) CN112528979B (en)


Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103381603A (en) * 2013-06-29 2013-11-06 湖南大学 Autonomous obstacle crossing programming method of deicing and line inspecting robot for high-voltage transmission line
CN108171796A (en) * 2017-12-25 2018-06-15 燕山大学 A kind of inspection machine human visual system and control method based on three-dimensional point cloud
CN109376589A (en) * 2018-09-07 2019-02-22 中国海洋大学 ROV deformation target and Small object recognition methods based on convolution kernel screening SSD network
CN109508675A (en) * 2018-11-14 2019-03-22 广州广电银通金融电子科技有限公司 A kind of pedestrian detection method for complex scene
CN110084292A (en) * 2019-04-18 2019-08-02 江南大学 Object detection method based on DenseNet and multi-scale feature fusion
CN110232316A (en) * 2019-05-05 2019-09-13 杭州电子科技大学 A kind of vehicle detection and recognition method based on improved DSOD model
CN110554409A (en) * 2019-08-30 2019-12-10 江苏徐工工程机械研究院有限公司 Concave obstacle detection method and system
CN110610458A (en) * 2019-04-30 2019-12-24 北京联合大学 Method and system for GAN image enhancement interactive processing based on ridge regression
CN110705639A (en) * 2019-09-30 2020-01-17 吉林大学 Medical sperm image recognition system based on deep learning
CN110969600A (en) * 2019-11-12 2020-04-07 华北电力大学扬中智能电气研究中心 Product defect detection method and device, electronic equipment and storage medium
CN111461248A (en) * 2020-04-09 2020-07-28 上海城诗信息科技有限公司 Photographic composition line matching method, device, equipment and storage medium
CN111539246A (en) * 2020-03-10 2020-08-14 西安电子科技大学 Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
CN111583337A (en) * 2020-04-25 2020-08-25 华南理工大学 Omnibearing obstacle detection method based on multi-sensor fusion
CN111797650A (en) * 2019-04-09 2020-10-20 广州文远知行科技有限公司 Obstacle identification method and device, computer equipment and storage medium
CN111860493A (en) * 2020-06-12 2020-10-30 北京图森智途科技有限公司 Target detection method and device based on point cloud data
CN111913484A (en) * 2020-07-30 2020-11-10 杭州电子科技大学 Path planning method of transformer substation inspection robot in unknown environment
CN111967930A (en) * 2020-07-10 2020-11-20 西安工程大学 Clothing style recognition recommendation method based on multi-network fusion
CN112001287A (en) * 2020-08-17 2020-11-27 禾多科技(北京)有限公司 Method and device for generating point cloud information of obstacle, electronic device and medium
CN112115801A (en) * 2020-08-25 2020-12-22 深圳市优必选科技股份有限公司 Dynamic gesture recognition method and device, storage medium and terminal equipment
CN112130583A (en) * 2020-09-14 2020-12-25 国网天津市电力公司 Method and device for detecting partial discharge of unmanned aerial vehicle during night patrol
CN112150363A (en) * 2020-09-29 2020-12-29 中科方寸知微(南京)科技有限公司 Convolution neural network-based image night scene processing method, and computing module and readable storage medium for operating method
CN112183393A (en) * 2020-09-30 2021-01-05 深兰人工智能(深圳)有限公司 Laser radar point cloud target detection method, system and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LISHA CUI et al.: "MDSSD: Multi-scale Deconvolutional Single Shot Detector for Small Objects", 《arXiv:1805.07009v3 [cs.CV]》 *
ZHU CHUN: "Research and Implementation of a Real-time Object Detection Model Based on Deep Learning", 《China Master's Theses Full-text Database, Information Science and Technology》 *
HUANG HEKUN: "Multi-scale Feature Fusion Technology Based on the SSD Object Detection Algorithm", 《Modern Information Technology》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113375664A (en) * 2021-06-09 2021-09-10 成都信息工程大学 Autonomous mobile device positioning method based on dynamic point cloud map loading
CN113375664B (en) * 2021-06-09 2023-09-01 成都信息工程大学 Autonomous mobile device positioning method based on dynamic loading of point cloud map
CN113807184A (en) * 2021-08-17 2021-12-17 北京百度网讯科技有限公司 Obstacle detection method and device, electronic equipment and automatic driving vehicle
CN114998856A (en) * 2022-06-17 2022-09-02 苏州浪潮智能科技有限公司 3D target detection method, device, equipment and medium of multi-camera image
CN114998856B (en) * 2022-06-17 2023-08-08 苏州浪潮智能科技有限公司 3D target detection method, device, equipment and medium for multi-camera image

Also Published As

Publication number Publication date
CN112528979B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN107808133B (en) Unmanned aerial vehicle line patrol-based oil and gas pipeline safety monitoring method and system and software memory
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN106548182B (en) Pavement crack detection method and device based on deep learning and main cause analysis
CN111339882B (en) Power transmission line hidden danger detection method based on example segmentation
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN111458721B (en) Exposed garbage identification and positioning method, device and system
CN107492094A (en) A kind of unmanned plane visible detection method of high voltage line insulator
CN108648169A (en) The method and device of high voltage power transmission tower defects of insulator automatic identification
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN110186375A (en) Intelligent high-speed rail white body assemble welding feature detection device and detection method
CN114049356B (en) Method, device and system for detecting structure apparent crack
TW202225730A (en) High-efficiency LiDAR object detection method based on deep learning through direct processing of 3D point data to obtain a concise and fast 3D feature to solve the shortcomings of complexity and time-consuming of the current voxel network model
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN115035141A (en) Tunnel crack remote monitoring and early warning method based on image processing
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN116681979A (en) Power equipment target detection method under complex environment
CN113962973A (en) Power transmission line unmanned aerial vehicle intelligent inspection system and method based on satellite technology
CN107767366B (en) A kind of transmission line of electricity approximating method and device
CN115830302B (en) Multi-scale feature extraction fusion power distribution network equipment positioning identification method
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
CN116862712A (en) Electric power construction potential safety risk detection method and system based on thunder fusion
CN115731545A (en) Cable tunnel inspection method and device based on fusion perception
CN116385477A (en) Tower image registration method based on image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant