CN112199980A - Overhead line robot obstacle identification method - Google Patents

Overhead line robot obstacle identification method

Info

Publication number
CN112199980A
CN112199980A
Authority
CN
China
Prior art keywords
network
low
compression
obstacle
overhead line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010499203.5A
Other languages
Chinese (zh)
Other versions
CN112199980B (en)
Inventor
刘荣海
郑欣
郭新良
蔡晓斌
杨迎春
许宏伟
周静波
虞鸿江
陈国坤
焦宗寒
代克顺
何运华
孔旭晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority to CN202010499203.5A
Publication of CN112199980A
Application granted
Publication of CN112199980B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The application discloses an overhead line robot obstacle identification method. The method is first deployed on the basis of an inter-layer compression formulation with a strict closed-form guarantee, and the network is then relearned so that the responses of the original network and the compressed network remain consistent. The method proposes a new compression scheme based on low-rank decomposition that simultaneously accelerates the convolutional layers and compresses the fully connected layers with an exact closed-form solution. To further reduce the accuracy loss caused by the low-rank decomposition compression scheme at high compression rates, the method further provides an effective knowledge transfer scheme in which the output and intermediate responses of the original network are aligned with the compressed network in an "explicit" manner. The proposed knowledge transfer scheme operates on the non-linear transformation functions within and between all layers and minimizes the "local" and "global" reconstruction errors in a unified manner.

Description

Overhead line robot obstacle identification method
Technical Field
The application relates to the technical field of electrical equipment, in particular to an overhead line robot obstacle identification method.
Background
The operating condition of high-voltage overhead transmission lines directly affects the power distribution of the power system and plays a key role in its safe and stable operation, so these lines need to be inspected regularly. In recent years, the development of mobile robot technology has provided a new approach to inspecting overhead power lines, and the inspection of high-voltage overhead transmission lines has been shifting from manual inspection to inspection by special robots.
An overhead line inspection robot (hereinafter referred to as an overhead line robot) is used to detect and repair line defects. In practical use, as it crawls along the transmission line at a certain speed, it needs to identify the various types of obstacles on the high-voltage conductor, such as vibration dampers and strain clamps, so that corresponding measures can be taken to cross them.
At present, when the overhead line robot performs line patrol, a camera mounted on it captures image information of the high-voltage line in real time, and obstacle identification is then performed by a convolutional neural network model. However, training the convolutional neural network model usually consumes a large amount of processor memory and computation, which hinders fast updating of the model and thus reduces the robot's obstacle identification efficiency.
Disclosure of Invention
The application provides an overhead line robot obstacle identification method to address the problem that existing convolutional neural network models for identifying obstacles on high-voltage overhead transmission lines consume a large amount of processor memory and computation during training, which hinders fast updating of the model and thus reduces the robot's obstacle identification efficiency.
The application provides an overhead line robot obstacle identification method, which comprises the following steps:
constructing a convolutional neural network identification model;
acquiring a training set, wherein the training set is an obstacle image sample on a high-voltage overhead transmission line;
training the convolutional neural network recognition model on the training set using a low-rank decomposition learning method with knowledge transfer;
the overhead line inspection robot captures line condition image information on the high-voltage overhead transmission line in real time by using a camera carried by the overhead line inspection robot and sends the line condition image information to the trained convolutional neural network identification model;
and the trained convolutional neural network recognition model is used for recognizing the obstacles on the high-voltage overhead transmission line according to the line condition image information.
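For illustration, the following minimal Python (PyTorch) sketch outlines how these steps fit together; the network architecture, class list, and all names are assumptions made for demonstration and are not specified by this application.

```python
# Minimal pipeline sketch (illustrative only; the application does not
# specify a concrete architecture, dataset layout, or class names).
import torch
import torch.nn as nn

class ObstacleCNN(nn.Module):
    """Stand-in for the convolutional recognition model of step 1."""
    def __init__(self, num_classes=4):  # e.g. damper, strain clamp, insulator, suspension clamp
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ObstacleCNN()
# Steps 2-3: load the obstacle image samples and train with the low-rank
# decomposition + knowledge-transfer procedure detailed further below.
# Steps 4-5: at run time, frames captured by the robot's camera are fed
# through the trained (compressed) model to classify the obstacle type.
```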
Optionally, training the convolutional neural network recognition model on the training set using a low-rank decomposition learning method with knowledge transfer includes,
constructing a set of low-rank filter bases with rank 1 in the spatial domain, decomposing each convolutional layer of the convolutional neural network identification model into two new convolutional layers with rectangular filters by using the low-rank filter bases, and expressing the new convolutional layers with the rectangular filters as follows:
Figure BDA0002524057410000011
in the formula, K_{i,j,c,n} is a tensor of size d × d × C × N;
using formulas
Figure BDA0002524057410000021
solving for an approximate solution of the filter bases gamma and V, where the solved (gamma, V) is a set of low-rank constrained filters, and T_n in the formula is a filter bank;
Figure BDA0002524057410000022
using formulas
Figure BDA0002524057410000023
Solving an optimization problem to obtain an approximate low-rank subspace;
using low-rank decomposition, two small convolution kernels replace the large convolution kernel in the convolutional layer and two small weight matrices replace the large weight matrix in the fully connected layer, and the corresponding acceleration ratio S_r in the convolutional layer and compression ratio C_r in the fully connected layer are:
Figure BDA0002524057410000024
Figure BDA0002524057410000025
and transferring global and local knowledge between the original network and the compressed network by using a knowledge transfer method.
Optionally, transferring the global and local knowledge between the original network and the compressed network using a knowledge transfer method includes,
establishing a local loss function using the Euclidean distance between the outputs of the i-th-layer guide block and the base block of the neural network; suppressing vanishing gradients by using the local loss function;
learning parameters of a compression network from a fixed original network to form asymmetric connections of the original network and the compression network with different depths, wherein for the decomposed low-rank network, a loss function is as follows:
Figure BDA0002524057410000026
Figure BDA0002524057410000027
in the formula,
Figure BDA0002524057410000028
are respectively the outputs of the i-th-layer guide block and the base block of the neural network, each of size m_i;
Combining the global knowledge with the local knowledge, the compression network is trained by minimizing the global loss function, as shown in the following formula:
Figure BDA0002524057410000029
where H represents the cross-entropy loss in the knowledge transfer, L represents the guide and base blocks, and λ_i is a set of penalty parameters for balancing the global penalty and each local penalty;
training based on formula (6) and formula (8) to obtain a neural network, and in the fully connected layer of the network, performing a matrix multiplication of the input matrix
Figure BDA00025240574100000210
and the weight matrix
Figure BDA00025240574100000211
to obtain the output matrix
Figure BDA00025240574100000212
Z = WX (9).
Optionally, the process of acquiring the obstacle image sample is: an obstacle recognition positioning camera on the overhead line robot acquires an obstacle image sample.
Optionally, the obstacle of the high-voltage overhead transmission line comprises a damper, a strain clamp, an insulator and a suspension clamp.
The method aims to jointly compress the convolutional layers and the fully connected layers so as to accelerate online inference while reducing memory consumption. The method of the application is first deployed on the basis of an inter-layer compression formulation with a strict closed-form guarantee, and the network is then relearned so that the responses of the original network and the compressed network remain consistent. The method proposes a new compression scheme based on low-rank decomposition that simultaneously accelerates the convolutional layers and compresses the fully connected layers with an exact closed-form solution. To further reduce the accuracy loss caused by the low-rank decomposition compression scheme at high compression rates, the method further provides an effective knowledge transfer scheme in which the output and intermediate responses of the original network are aligned with the compressed network in an "explicit" manner. The proposed knowledge transfer scheme operates on the non-linear transformation functions within and between all layers and minimizes the "local" and "global" reconstruction errors in a unified manner.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an overhead line robot obstacle identification method according to the present application.
Detailed Description
The application provides an overhead line robot obstacle identification method, which is used for identifying obstacles on a high-voltage overhead transmission line by an overhead line robot. Fig. 1 is a flowchart of an overhead line robot obstacle identification method according to the present application, where the overhead line robot obstacle identification method includes:
and S100, constructing a convolutional neural network identification model.
And S200, acquiring a training set, wherein the training set is an obstacle image sample on the high-voltage overhead transmission line.
In this application, the process of obtaining the obstacle image samples is as follows: an obstacle recognition and positioning camera on the overhead line robot acquires the obstacle image samples. The obstacles on the high-voltage overhead transmission line include vibration dampers, strain clamps, insulators and suspension clamps.
And step S300, training the convolutional neural network recognition model by using a training set by adopting a low-order decomposition learning method with knowledge transfer.
In this application, training the convolutional neural network recognition model on the training set using a low-rank decomposition learning method with knowledge transfer comprises the following steps.
step S310, constructing the neural network convolution layer with the filter, which specifically includes the following steps.
Step S311, constructing a set of low-rank filter bases with rank 1 in the spatial domain, and decomposing each convolutional layer of the convolutional neural network identification model into two new convolutional layers with rectangular filters by using the low-rank filter bases, where the new convolutional layers with rectangular filters are represented as:
Figure BDA0002524057410000031
in the formula, K_{i,j,c,n} is a tensor of size d × d × C × N.
A convolutional neural network can be seen as a forward multi-layer network structure that maps an input image to a particular output vector. The units in a convolutional neural network are organized into a series of three-dimensional tensors with two spatial dimensions and a third "map" or "channel" dimension.
Step S312, for the formula
Figure BDA0002524057410000032
A set of low-rank filter bases in the spatial domain is constructed using a low-rank decomposition learning method, and each convolutional layer is decomposed into two new convolutional layers with rectangular filters. The convolution with the filter bank is expressed as:
Figure BDA0002524057410000041
in the formula, T_n is a filter bank.
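For illustration, the decomposition of one square convolution into two rectangular convolutions can be sketched in PyTorch as follows; the rank K and the layer sizes are illustrative assumptions, not values taken from this application.

```python
# Sketch of replacing one d x d convolution with a pair of rectangular
# (d x 1 and 1 x d) convolutions, in line with the spatial rank-1 filter
# bases described above. Sizes C, N and rank K are illustrative.
import torch.nn as nn

def decompose_conv(d=3, C=64, N=128, K=16):
    # Original layer: N filters of size d x d x C.
    original = nn.Conv2d(C, N, kernel_size=d, padding=d // 2)
    # Decomposed pair: K vertical (d x 1) filters, then N horizontal (1 x d) filters.
    vertical = nn.Conv2d(C, K, kernel_size=(d, 1), padding=(d // 2, 0))
    horizontal = nn.Conv2d(K, N, kernel_size=(1, d), padding=(0, d // 2))
    return original, nn.Sequential(vertical, horizontal)
```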
Step S313, using the formula
Figure BDA0002524057410000042
solving for an approximate solution of the filter bases gamma and V, where the solved (gamma, V) is a set of low-rank constrained filters, and T_n in the formula is a filter bank;
Figure BDA0002524057410000043
using formulas
Figure BDA0002524057410000044
Solving an optimization problem to obtain an approximate low-rank subspace;
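For illustration, one standard way to obtain such an approximate low-rank subspace in closed form is a truncated singular value decomposition of the reshaped weight matrix; the sketch below uses this generic construction, since the application's exact objective is given only in the formula images above.

```python
# Closed-form low-rank approximation via truncated SVD (generic construction,
# assumed for illustration). The weight can be a reshaped convolution kernel
# or a fully connected weight matrix.
import torch

def low_rank_factors(weight: torch.Tensor, rank: int):
    """weight: 2-D matrix, e.g. a (d*d*C) x N reshaped kernel or an FC weight."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # first factor, columns scaled by singular values
    B = Vh[:rank, :]             # second factor
    return A, B                  # weight is approximately A @ B

W = torch.randn(576, 128)        # e.g. a (3*3*64) x 128 reshaped kernel
A, B = low_rank_factors(W, rank=16)
print(torch.linalg.norm(W - A @ B) / torch.linalg.norm(W))  # relative approximation error
```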
step S320, using low rank decomposition, using two small convolution kernels for convolution layerReplacing the large matrix weight with two small matrix weights in the fully connected layer, the corresponding acceleration ratio S in the convolutional layerrAnd compression ratio in full connection layer CrComprises the following steps:
Figure BDA0002524057410000045
Figure BDA0002524057410000046
Compared with the original convolution and matrix multiplication in the classical neural network, the method requires only a fraction of the operations in the convolutional layers and a fraction of the parameters in the fully connected layers.
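For illustration, the speed-up and compression that such a decomposition can yield may be estimated from standard operation and parameter counts; the expressions in the sketch below are these generic counts and are assumptions, since the application's own ratio formulas appear only as images above.

```python
# Back-of-the-envelope speed-up and compression estimates (assumed counts).
def conv_speedup(d, C, N, K):
    # d x d conv: d*d*C*N multiply-adds per output position;
    # decomposed: d*C*K (vertical pass) + d*K*N (horizontal pass).
    return (d * d * C * N) / (d * K * (C + N))

def fc_compression(m, n, K):
    # m x n weight matrix vs. two factors of sizes m x K and K x n.
    return (m * n) / (K * (m + n))

print(conv_speedup(d=3, C=64, N=128, K=16))   # = 8.0, i.e. ~8x fewer conv operations
print(fc_compression(m=4096, n=4096, K=256))  # = 8.0, i.e. ~8x fewer FC parameters
```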
Step S330, transferring global and local knowledge between the original network and the compressed network by using a knowledge transfer method.
In order to ensure the training effect of the neural network, the compressed network pruned from the original network is usually trained together with the original network: when a weight becomes smaller than a set threshold during training, it is set to 0; the weights set to 0 are masked, and training continues after the update; after several rounds of training, pruning continues until the compressed network is finally obtained. Directly applying low-rank decomposition to a multi-layer neural network causes an approximation error in each layer, which further accumulates and propagates.
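For illustration, the threshold-based masking described above can be sketched as follows; the threshold value and where the mask is reapplied in the training loop are assumptions.

```python
# Generic sketch of threshold-based weight masking: weights whose magnitude
# falls below a set threshold are zeroed, the zeroed positions are masked,
# and training continues with the mask reapplied after each update.
import torch

def prune_step(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    mask = (weight.abs() >= threshold).float()
    weight.data.mul_(mask)      # zero out the small weights
    return mask                 # keep the mask to reapply during later updates

# inside the training loop, after optimizer.step():
#   weight.data.mul_(mask)      # keep pruned positions at zero
```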
The application constructs a local loss function using a knowledge transfer method to align the outputs of the compressed network and the original network; the function considers the local asymmetric reconstruction error and the global reconstruction error simultaneously so as to reduce the overall error at the final output layer. Within this framework, the local reconstruction errors and the global accumulated error between the compressed network and the original network are incorporated into a comprehensive objective function to represent the knowledge explicitly. The method regards the original network as the source domain and the compressed network as the target domain. To produce a better knowledge representation, the method combines the global accumulated error with the local reconstruction errors so that they share the common spatial components of the representation.
In this application, the knowledge transfer method used to transfer global and local knowledge between the original network and the compressed network comprises the following steps. Step S331, adding supervision on the hidden layers to learn local knowledge, which specifically includes:
establishing a local loss function using the Euclidean distance between the outputs of the i-th-layer guide block and the base block of the neural network; suppressing vanishing gradients by using the local loss function;
learning parameters of a compression network from a fixed original network to form asymmetric connections of the original network and the compression network with different depths, wherein for the decomposed low-rank network, a loss function is as follows:
Figure BDA0002524057410000051
Figure BDA0002524057410000052
in the formula,
Figure BDA0002524057410000053
are respectively the outputs of the i-th-layer guide block and the base block of the neural network, each of size m_i.
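For illustration, such a local loss between the i-th guide-block output of the original network and the corresponding base-block output of the compressed network can be sketched as follows; the tensor shapes are illustrative.

```python
# Local loss sketch: Euclidean (mean-squared) distance between the guide-block
# output of the fixed original network and the base-block output of the
# compressed network being trained.
import torch
import torch.nn.functional as F

def local_loss(guide_feat: torch.Tensor, base_feat: torch.Tensor) -> torch.Tensor:
    # Both tensors carry m_i values per sample; flatten and compare.
    # The guide features come from the fixed original network, so detach them.
    return F.mse_loss(base_feat.flatten(1), guide_feat.flatten(1).detach())
```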
Step S332, combining the global knowledge with the local knowledge, and training the compression network by minimizing the global loss function, as shown in the following formula:
Figure BDA0002524057410000054
where H represents the cross-entropy loss in the knowledge transfer, L represents the guide and base blocks, and λ_i is a set of penalty parameters for balancing the global penalty and each local penalty;
based on formula
Figure BDA0002524057410000055
And formula
Figure BDA0002524057410000056
a neural network is obtained by training; in the fully connected layer of the network, the input matrix
Figure BDA0002524057410000057
and the weight matrix
Figure BDA0002524057410000058
are multiplied to obtain the output matrix
Figure BDA0002524057410000059
that is, Z = WX.
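For illustration, the combined objective (cross-entropy plus λ_i-weighted local losses) and the fully connected computation Z = WX can be sketched as follows; the specific λ values and matrix sizes are assumptions.

```python
# Sketch of the combined training objective: cross-entropy on the compressed
# network's output (the "global" term H) plus lambda_i-weighted local losses.
# Only the compressed network is trained; the original network stays fixed.
import torch
import torch.nn.functional as F

def total_loss(logits, labels, local_losses, lambdas):
    loss = F.cross_entropy(logits, labels)        # global knowledge term
    for lam, l in zip(lambdas, local_losses):     # weighted local terms
        loss = loss + lam * l
    return loss

# Fully connected layer of the trained network: Z = W X.
W = torch.randn(10, 256)    # weight matrix
X = torch.randn(256, 32)    # input matrix (one column per sample)
Z = W @ X                   # output matrix
```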
And S400, capturing the line condition image information on the high-voltage overhead transmission line in real time by using a camera carried by the overhead line inspection robot, and sending the line condition image information to the trained convolutional neural network identification model.
In this application, the overhead line inspection robot captures line condition image information on the high-voltage overhead transmission line in real time using its onboard camera. High-definition cameras are installed on the body and on the operating arm of the overhead line inspection robot, so that during operation the robot can properly photograph the obstacles that need to be crossed or avoided; the viewing angle of the high-definition camera should cover the full profile of the obstacle so that as many of its features as possible are captured.
And S500, identifying the obstacle on the high-voltage overhead transmission line by the trained convolutional neural network identification model according to the line condition image information.
In this application, the trained convolutional neural network identification model identifies obstacles on the high-voltage overhead transmission line from the line condition image information as follows. Step S510, the line condition image information is input into the convolutional neural network identification model in real time. Step S520, the trained convolutional neural network identification model identifies, in real time, the type of obstacle captured by the camera.
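For illustration, the real-time recognition loop of steps S510 and S520 can be sketched as follows; the class names, preprocessing, and function names are assumptions made for demonstration.

```python
# Sketch of on-line recognition: a preprocessed camera frame is fed to the
# trained (compressed) model and the predicted obstacle class is read out.
import torch

CLASSES = ["vibration damper", "strain clamp", "insulator", "suspension clamp"]

@torch.no_grad()
def recognize(model: torch.nn.Module, frame: torch.Tensor) -> str:
    """frame: a preprocessed image tensor of shape (3, H, W)."""
    model.eval()
    logits = model(frame.unsqueeze(0))       # add a batch dimension
    return CLASSES[int(logits.argmax(dim=1))]
```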
The method aims to jointly compress the convolutional layers and the fully connected layers so as to accelerate online inference while reducing memory consumption. The method of the application is first deployed on the basis of an inter-layer compression formulation with a strict closed-form guarantee, and the network is then relearned so that the responses of the original network and the compressed network remain consistent. The method proposes a new compression scheme based on low-rank decomposition that simultaneously accelerates the convolutional layers and compresses the fully connected layers with an exact closed-form solution. To further reduce the accuracy loss caused by the low-rank decomposition compression scheme at high compression rates, the method further provides an effective knowledge transfer scheme in which the output and intermediate responses of the original network are aligned with the compressed network in an "explicit" manner. The proposed knowledge transfer scheme operates on the non-linear transformation functions within and between all layers and minimizes the "local" and "global" reconstruction errors in a unified manner.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (5)

1. An overhead line robot obstacle recognition method is characterized by comprising the following steps:
constructing a convolutional neural network identification model;
acquiring a training set, wherein the training set is an obstacle image sample on a high-voltage overhead transmission line;
training the convolutional neural network recognition model on the training set using a low-rank decomposition learning method with knowledge transfer;
the overhead line inspection robot captures line condition image information on the high-voltage overhead transmission line in real time by using a camera carried by the overhead line inspection robot and sends the line condition image information to the trained convolutional neural network identification model;
and the trained convolutional neural network recognition model is used for recognizing the obstacles on the high-voltage overhead transmission line according to the line condition image information.
2. The overhead line robot obstacle recognition method according to claim 1,
wherein training the convolutional neural network recognition model on the training set using a low-rank decomposition learning method with knowledge transfer comprises,
constructing a set of low-rank filter bases with rank 1 in the spatial domain, decomposing each convolutional layer of the convolutional neural network identification model into two new convolutional layers with rectangular filters by using the low-rank filter bases, and expressing the new convolutional layers with the rectangular filters as follows:
Figure FDA0002524057400000011
in the formula, K_{i,j,c,n} is a tensor of size d × d × C × N;
using formulas
Figure FDA0002524057400000012
solving for an approximate solution of the filter bases gamma and V, where the solved (gamma, V) is a set of low-rank constrained filters, and T_n in the formula is a filter bank;
Figure FDA0002524057400000013
using formulas
Figure FDA0002524057400000014
Solving an optimization problem to obtain an approximate low-rank subspace;
using low-rank decomposition, two small convolution kernels replace the large convolution kernel in the convolutional layer and two small weight matrices replace the large weight matrix in the fully connected layer, and the corresponding acceleration ratio S_r in the convolutional layer and compression ratio C_r in the fully connected layer are:
Figure FDA0002524057400000015
Figure FDA0002524057400000016
and transferring global and local knowledge between the original network and the compressed network by using a knowledge transfer method.
3. The overhead line robotic obstacle identification method of claim 2, wherein the global and local knowledge is transferred between the original network and the compressed network using a knowledge transfer method, comprising,
establishing a local loss function using the Euclidean distance between the outputs of the i-th-layer guide block and the base block of the neural network; suppressing vanishing gradients by using the local loss function;
learning parameters of a compression network from a fixed original network to form asymmetric connections of the original network and the compression network with different depths, wherein for the decomposed low-rank network, a loss function is as follows:
Figure FDA0002524057400000021
Figure FDA0002524057400000022
in the formula,
Figure FDA0002524057400000023
are respectively the outputs of the i-th-layer guide block and the base block of the neural network, each of size m_i;
Combining the global knowledge with the local knowledge, the compression network is trained by minimizing the global loss function, as shown in the following formula:
Figure FDA0002524057400000024
where H represents the cross-entropy loss in the knowledge transfer, L represents the guide and base blocks, and λ_i is a set of penalty parameters for balancing the global penalty and each local penalty;
training based on formula (6) and formula (8) to obtain a neural network, and in the fully connected layer of the network, performing a matrix multiplication of the input matrix
Figure FDA0002524057400000025
and the weight matrix
Figure FDA0002524057400000026
to obtain the output matrix
Figure FDA0002524057400000027
Z = WX (9).
4. The overhead line robot obstacle recognition method of claim 1, wherein the process of acquiring obstacle image samples is: an obstacle recognition positioning camera on the overhead line robot acquires an obstacle image sample.
5. The overhead line robot obstacle recognition method of claim 1, wherein the obstacle of the high voltage overhead transmission line comprises a damper, a strain clamp, an insulator, and a suspension clamp.
CN202010499203.5A 2020-06-04 2020-06-04 Overhead line robot obstacle recognition method Active CN112199980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499203.5A CN112199980B (en) 2020-06-04 2020-06-04 Overhead line robot obstacle recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010499203.5A CN112199980B (en) 2020-06-04 2020-06-04 Overhead line robot obstacle recognition method

Publications (2)

Publication Number Publication Date
CN112199980A true CN112199980A (en) 2021-01-08
CN112199980B CN112199980B (en) 2024-01-23

Family

ID=74006022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499203.5A Active CN112199980B (en) 2020-06-04 2020-06-04 Overhead line robot obstacle recognition method

Country Status (1)

Country Link
CN (1) CN112199980B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113193855A (en) * 2021-04-25 2021-07-30 西南科技大学 Robust adaptive filtering method for identifying low-rank acoustic system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247030A (en) * 2007-08-01 2008-08-20 北京深浪电子技术有限公司 Overhead network obstacle detouring inspection robot and its obstacle detouring control method
CN110977973A (en) * 2019-12-11 2020-04-10 国电南瑞科技股份有限公司 Automatic obstacle crossing device of overhead transmission line inspection robot

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247030A (en) * 2007-08-01 2008-08-20 北京深浪电子技术有限公司 Overhead network obstacle detouring inspection robot and its obstacle detouring control method
CN110977973A (en) * 2019-12-11 2020-04-10 国电南瑞科技股份有限公司 Automatic obstacle crossing device of overhead transmission line inspection robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
肖晓晖, 史铁林, 杜娥: "Dynamic modeling of inspection robots for high-voltage power transmission lines", 机械与电子 (Machinery & Electronics), no. 10 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113193855A (en) * 2021-04-25 2021-07-30 西南科技大学 Robust adaptive filtering method for identifying low-rank acoustic system

Also Published As

Publication number Publication date
CN112199980B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN109492822B (en) Air pollutant concentration time-space domain correlation prediction method
Alaloul et al. Data processing using artificial neural networks
CN110473592B (en) Multi-view human synthetic lethal gene prediction method
CN112445823A (en) Searching method of neural network structure, image processing method and device
CN110738309B (en) DDNN training method and DDNN-based multi-view target identification method and system
CN108921893A (en) A kind of image cloud computing method and system based on online deep learning SLAM
DE112020003498T5 (en) GENERATION OF TRAINING AND VALIDATION DATA FOR MACHINE LEARNING
CN110991027A (en) Robot simulation learning method based on virtual scene training
CN113449864B (en) Feedback type impulse neural network model training method for image data classification
CN111158401B (en) Distributed unmanned aerial vehicle path planning system and method for encouraging space-time data exploration
CN108573303A (en) It is a kind of that recovery policy is improved based on the complex network local failure for improving intensified learning certainly
CN109635763B (en) Crowd density estimation method
CN111209832B (en) Auxiliary obstacle avoidance training method, equipment and medium for substation inspection robot
CN115223049B (en) Knowledge distillation and quantification method for large model compression of electric power scene edge calculation
CN111401547A (en) Passenger flow analysis-oriented HTM design method based on cyclic learning unit
CN113486078A (en) Distributed power distribution network operation monitoring method and system
CN112308322A (en) Multi-wind-field space-time wind speed prediction method and device and electronic equipment
CN115457006B (en) Unmanned aerial vehicle inspection defect classification method and device based on similarity consistency self-distillation
US20220188658A1 (en) Method for automatically compressing multitask-oriented pre-trained language model and platform thereof
CN115471016B (en) Typhoon prediction method based on CISSO and DAED
CN115272981A (en) Cloud-edge co-learning power transmission inspection method and system
CN112269729A (en) Intelligent load analysis method for large-scale server cluster of online shopping platform
CN112784920A (en) Cloud-side-end-coordinated dual-anti-domain self-adaptive fault diagnosis method for rotating part
CN113657207B (en) Cloud-side cooperative power distribution station fire light intelligent monitoring method and system
CN112199980A (en) Overhead line robot obstacle identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant