CN108734225A - A kind of transmission line construction subject image detection method based on deep learning - Google Patents
A kind of transmission line construction subject image detection method based on deep learning
- Publication number
- CN108734225A (application CN201810584363.2A)
- Authority
- CN
- China
- Prior art keywords
- candidate region
- construction object
- transmission line
- prediction
- construction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention discloses a deep-learning-based method for detecting construction objects in transmission line images. Transmission line monitoring images of several known construction-object labels are acquired; a transmission line construction object detection neural network is built; the images and their corresponding construction-object labels are input into the network, which is trained with the SGD algorithm with momentum. The trained network then processes monitoring images of unknown construction-object labels and outputs detection results for the construction objects they contain. The method detects construction objects in transmission line monitoring images accurately and is robust to changes in illumination and weather.
Description
Technical field
The present invention relates to image object detection methods, and in particular to a deep-learning-based method for detecting construction objects in transmission line images.
Background technology
With the rapid development of the national economy, demand for electric power from all industries keeps growing and transmission lines are laid out ever more densely. At the same time, engineering construction is increasing all over the country, and the building of highways, railways and other structures inevitably brings large-scale mechanical construction into line protection areas, which considerably increases the security risk to transmission lines.
Compared with excluding potential transmission line hazards by manual line patrol, video monitoring passes monitoring images back from the field to a monitoring center and greatly reduces the resources that manual patrol consumes. At the same time, however, the massive volume of video data greatly increases the workload of the monitoring personnel, and traditional image-processing methods cannot locate construction objects in monitoring images accurately, are easily affected by changes in illumination and weather, and therefore have poor applicability.
Deep learning, with its superior learning and representation ability, has made breakthrough progress in large-scale object detection. For this method, more than 2000 transmission line monitoring images were collected and annotated with construction objects, which makes it possible to train a deep neural network for construction object detection.
Invention content
The purpose of the present invention is to provide a deep-learning-based transmission line construction object image detection method. The parameters of the network are trained with a large amount of data. At test time, an image of unknown construction-object labels is passed directly through one forward propagation of the neural network to obtain the detection results for the construction objects. The method detects with high precision and is robust to changes in illumination and weather.
The method first collects a batch of transmission line monitoring images and annotates the construction objects in them that may cause external damage. A neural network is then trained with the annotated data to predict the positions and classes of the construction objects in monitoring images.
The technical solution adopted by the present invention includes the following steps:
(1) Acquire transmission line monitoring images I of several known construction-object labels. Each construction-object label is [c_i, x_i, y_i, w_i, h_i], where i indexes the i-th construction object, c_i denotes its class, and x_i, y_i, w_i and h_i denote the x coordinate and y coordinate of its center point and the width and height of its bounding box;
(2) Build the transmission line construction object detection neural network, input the images I and their corresponding construction-object labels into it, and train the neural network with the SGD algorithm with momentum;
(3) Process the monitoring images of unknown construction-object labels with the trained neural network to obtain the detection results for the construction objects in those images.
The invention builds a dedicated transmission line construction object detection neural network and processes the image data specifically for transmission line construction objects, so that construction objects near transmission lines can be accurately detected from the images.
The transmission line construction objects specifically include cranes, tower cranes and excavators.
The transmission line construction object detection neural network in step (2) specifically includes a convolution (Convs) module, a candidate region network (RPN), a region-of-interest pooling (RoI Pooling) module, a classification and coordinate regression network (CRN), a candidate-region loss module L_R and a detection loss module L_D. The image I is input to the convolution module, which outputs a feature map F_I; the feature map F_I is then input to the candidate region network to generate a candidate region set R_o; the feature map F_I and the candidate region set R_o are input to the region-of-interest pooling module, which pools each candidate region on F_I according to R_o to obtain the corresponding candidate-region feature F_o; the candidate-region feature F_o is input to the classification and coordinate regression network to obtain the detection result D of candidate-region classification and coordinate regression. The candidate region set R_o is input to the candidate-region loss module L_R to compute the loss value of the predicted candidate regions (this loss comprises not only a classification loss but also a loss on the predicted offsets of the candidate regions relative to the reference boxes), and the detection result D is input to the detection loss module L_D to compute the loss value of the predicted target boxes. The loss values computed by the candidate-region loss module L_R and the detection loss module L_D are back-propagated in iterative computations to train the neural network.
The convolution module comprises the convolution, activation and pooling operations of a deep neural network. It mainly consists of five sub-modules, each formed by a convolutional layer, an activation layer and a pooling layer connected in sequence. The pooling layers of the first four sub-modules halve the resolution of the processed image, so the feature map F_I finally output by the Convs module has 1/16 the resolution of the initial input image.
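The downsampling arithmetic of the five sub-modules can be sketched as follows. This is an illustration of the stated resolution ratios only; the actual layer weights, kernel sizes and channel counts are not given in the text.

```python
def convs_feature_map_size(height, width):
    """Spatial size of the feature map F_I produced by the Convs module.

    Each of the five sub-modules is conv -> activation -> pool, and only
    the pooling layers of the first four sub-modules halve the
    resolution, so F_I is 1/16 of the input in each dimension.
    """
    for _ in range(4):                 # pooling layers of the first four sub-modules
        height, width = height // 2, width // 2
    return height, width               # the fifth sub-module keeps the resolution
```

For a 640 × 480 monitoring image this gives a 40 × 30 feature map, consistent with the stated 1/16 ratio.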
The candidate region network presets reference boxes of B different scales; for each pixel position on the feature map F_I it builds B reference boxes of different scales and predicts B candidate regions at that position. Each candidate region is described by a prediction value set [s, t_x, t_y, t_w, t_h], where s denotes the predicted probability that the candidate region contains a construction object, t_x and t_y denote the predicted offsets of the candidate region's center relative to the reference box center (x_a, y_a), and t_w and t_h denote the predicted offsets of the candidate region's width w and height h relative to the reference box width w_a and height h_a. With the reference box center (x_a, y_a), width w_a and height h_a predefined, the specific position and size of the candidate region are computed with the following formulas:
x_r = t_x · w_a + x_a
y_r = t_y · h_a + y_a
w_r = exp(t_w) · w_a
h_r = exp(t_h) · h_a
where x_r, y_r, w_r and h_r denote the center coordinates (x and y), width and height of the candidate region, x_a, y_a, w_a and h_a denote the center coordinates, width and height of the reference box, and exp(·) denotes the exponential function;
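The four formulas above can be applied in vectorised form. The NumPy sketch below is an illustration only; the patent does not prescribe an implementation, and the (N, 4) array layout is an assumption for compactness.

```python
import numpy as np

def decode_candidate_regions(t, anchors):
    """Turn predicted offsets [t_x, t_y, t_w, t_h] relative to reference
    boxes (anchors) [x_a, y_a, w_a, h_a] into candidate boxes
    [x_r, y_r, w_r, h_r], following the formulas in the text.

    t, anchors: float arrays of shape (N, 4).
    """
    tx, ty, tw, th = t.T
    xa, ya, wa, ha = anchors.T
    xr = tx * wa + xa                  # x_r = t_x * w_a + x_a
    yr = ty * ha + ya                  # y_r = t_y * h_a + y_a
    wr = np.exp(tw) * wa               # w_r = exp(t_w) * w_a
    hr = np.exp(th) * ha               # h_r = exp(t_h) * h_a
    return np.stack([xr, yr, wr, hr], axis=1)
```

Zero offsets reproduce the reference box exactly, which is a quick sanity check on this parameterisation.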
During training of the neural network, the initial values of [s, t_x, t_y, t_w, t_h] are generated randomly. In each later iteration, after the candidate regions at all pixel positions are obtained, the N_o candidate regions with the highest predicted probability s of containing a construction object are selected from all candidate regions to form the output candidate region set R_o.
The region pooling module pools, for each candidate region in the set R_o, the corresponding region on the feature map F_I into a feature F_o of fixed length, specifically:
First, the candidate region is divided into k × k sub-boxes, and the feature of each sub-box is computed with the following formula, forming the k × k-dimensional candidate-region feature:
F_o(i, j) = Σ_{p ∈ bin(i,j)} F_I(p) / n_ij
where p denotes a pixel inside sub-box bin(i, j), n_ij denotes the total number of pixels inside sub-box bin(i, j), and i and j denote the row and column indices of the sub-box, with i, j = 1 ~ k; F_I(p) denotes the feature of the feature map F_I at pixel p, and F_o(i, j) denotes the feature of sub-box bin(i, j);
Then, the k × k-dimensional candidate-region features are passed through two fully connected layers, each with N_p output neurons, so that each candidate region yields a fixed-length candidate-region feature F_o of length N_p.
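The averaging step F_o(i, j) = Σ_{p ∈ bin(i,j)} F_I(p) / n_ij can be sketched directly. A single-channel feature map and integer bin-edge rounding are simplifying assumptions here; the patent does not specify how fractional bin boundaries are handled.

```python
import numpy as np

def roi_average_pool(feature_map, box, k):
    """Pool one candidate region into a k x k grid by averaging each bin.

    feature_map: (H, W) array (a single channel for simplicity);
    box: (x0, y0, x1, y1) in feature-map coordinates.
    """
    x0, y0, x1, y1 = box
    xs = np.linspace(x0, x1, k + 1).round().astype(int)   # column bin edges
    ys = np.linspace(y0, y1, k + 1).round().astype(int)   # row bin edges
    pooled = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            cell = feature_map[ys[i]:max(ys[i + 1], ys[i] + 1),
                               xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[i, j] = cell.mean()   # average over the n_ij pixels of bin(i, j)
    return pooled
```

Whatever the size of the input region, the output is always k × k, which is what makes the subsequent fully connected layers possible.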
The classification and coordinate regression network mainly consists of two sub-modules, classification and coordinate regression, in parallel.
The classification sub-module mainly consists of a fully connected layer with (N_c + 1) output neurons and a Softmax layer, where N_c denotes the total number of construction-object classes. The candidate-region feature F_o passes through the classification sub-module to obtain a probability distribution of length (N_c + 1) over the classes (background included) to which the construction object in the candidate region belongs.
The coordinate regression sub-module mainly consists of a fully connected layer with 4(N_c + 1) output neurons, where N_c denotes the total number of construction-object classes. The coordinate regression sub-module predicts, for the candidate region, the offsets of the predicted target box relative to the candidate region under each construction-object class, giving the offset set [b_0, b_1, ..., b_{N_c}], where b_i = [b_xi, b_yi, b_wi, b_hi] denotes, under the i-th construction-object class (i = 0 ~ N_c), the offsets of the predicted target box's center coordinates, width and height relative to the candidate region. Given the candidate region's position and size [x_r, y_r, w_r, h_r] and the predicted coordinate offsets [b_x, b_y, b_w, b_h], the predicted target box of the construction object corresponding to the candidate region is obtained with the following formulas:
x = w_r · b_x + x_r
y = h_r · b_y + y_r
w = exp(b_w) · w_r
h = exp(b_h) · h_r
where x, y, w and h denote the center coordinates, width and height of the predicted target box, b_x, b_y, b_w and b_h denote the predicted offsets of the target box relative to the candidate region, and exp(·) denotes the exponential function.
The candidate-region loss module L_R computes its loss value from a classification term and a regression term, where q denotes the true probability that the candidate region contains a construction object, obtained from the known construction-object labels, with q ∈ {0, 1}: q = 1 indicates that a construction object really exists in the candidate region, and q = 0 that none does; s denotes the predicted probability that the candidate region contains a construction object; t* = [t*_x, t*_y, t*_w, t*_h] denotes the actual offsets of the candidate region relative to the reference box (center coordinates, width and height), obtained from the known construction-object labels; and t = [t_x, t_y, t_w, t_h] denotes the predicted offsets of the candidate region relative to the reference box.
After the candidate-region loss module L_R computes its loss value, the training of the neural network proceeds by iterative back-propagation.
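A minimal sketch of a loss with the structure described above (objectness classification plus offset regression for positive regions). The smooth-L1 regression penalty is an assumption in the spirit of Faster R-CNN-style detectors: the patent's own formula image is not reproduced in this text, and only the roles of q, s, t and t* come from it.

```python
import numpy as np

def smooth_l1(diff):
    """Smooth-L1 regression penalty (an assumed choice, see lead-in)."""
    d = np.abs(diff)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)

def candidate_region_loss(q, s, t, t_star):
    """L_R sketch: binary cross-entropy between the true label q in
    {0, 1} and the predicted objectness s, plus a regression term on the
    predicted offsets t against the actual offsets t*, counted only for
    regions that really contain a construction object (q = 1)."""
    cls_term = -(q * np.log(s) + (1 - q) * np.log(1.0 - s))
    reg_term = q * smooth_l1(np.asarray(t) - np.asarray(t_star)).sum()
    return cls_term + reg_term
```

For a negative region (q = 0) only the classification term contributes, matching the remark that the regression loss applies to the offsets of real construction objects.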
The detection loss module L_D computes its loss value from a classification term and a regression term, where δ_i denotes the label indicating that the candidate region belongs to the i-th construction-object class, obtained from the known construction-object labels: δ_i is 1 when the candidate region belongs to the i-th class and 0 otherwise; p_i denotes the predicted probability that the candidate region belongs to the i-th class; b_i denotes the predicted offsets of the predicted target box relative to the candidate region under the i-th class; b*_i = [b*_xi, b*_yi, b*_wi, b*_hi] denotes the actual offsets of the predicted target box relative to the candidate region under the i-th class (center coordinates, width and height), obtained from the known construction-object labels; and N_c denotes the total number of construction-object classes.
After the detection loss module L_D computes its loss value, the training of the neural network proceeds by iterative back-propagation.
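The detection loss can be sketched analogously. As with L_R, the source formula image is not reproduced, so the multi-class cross-entropy plus smooth-L1 form below is an assumption; only the meanings of δ_i, p_i, b_i and b*_i come from the text.

```python
import numpy as np

def detection_loss(delta, p, b, b_star):
    """L_D sketch: -sum_i delta_i * log(p_i) over the N_c + 1 classes
    (background included), plus a regression term between the predicted
    per-class offsets b_i and the actual offsets b*_i, counted only for
    the true class selected by the one-hot labels delta.

    delta, p: (C,) one-hot labels and predicted class probabilities;
    b, b_star: (C, 4) per-class predicted and actual offsets.
    """
    delta = np.asarray(delta, dtype=float)
    p = np.asarray(p, dtype=float)
    cls_term = -(delta * np.log(p)).sum()
    d = np.abs(np.asarray(b, dtype=float) - np.asarray(b_star, dtype=float))
    reg_term = (delta[:, None] * np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)).sum()
    return cls_term + reg_term
```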
Step (3) is specifically: a transmission line monitoring image of unknown construction-object labels is input into the trained neural network with the two loss modules L_D and L_R removed, and the network predicts the final candidate region set R_o, the final probability distribution of each candidate region over the construction-object classes and the final predicted target boxes. For each candidate region, the class with the maximum probability is selected as its predicted class, and finally the predicted target boxes whose predicted class probability exceeds the threshold η are selected as the final prediction results.
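The inference-time selection rule can be sketched as follows. Treating column 0 as the background class and η = 0.5 as a default are assumptions; the patent leaves the threshold value unspecified.

```python
import numpy as np

def select_detections(class_probs, target_boxes, eta=0.5):
    """For each candidate region, take the class of maximum probability
    as its predicted class, then keep only predicted target boxes whose
    class probability exceeds the threshold eta.

    class_probs: (N, C) probabilities, column 0 assumed background;
    target_boxes: (N, C, 4) per-class predicted boxes [x, y, w, h].
    Returns a list of (class_index, confidence, box).
    """
    cls = class_probs.argmax(axis=1)
    conf = class_probs[np.arange(class_probs.shape[0]), cls]
    results = []
    for n in range(class_probs.shape[0]):
        if cls[n] != 0 and conf[n] > eta:   # drop background / low confidence
            results.append((int(cls[n]), float(conf[n]), target_boxes[n, cls[n]]))
    return results
```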
The beneficial effects of the invention are as follows:
The method is trained on more than 2000 transmission line monitoring images annotated with construction-object detection targets, so the parameters of the network are fully learned. The method detects construction objects in monitoring images accurately and is robust to changes in illumination and weather.
Description of the drawings
Fig. 1 is the logic diagram of the transmission line construction object detection method of the present invention.
Fig. 2 shows examples from the embodiment of the present invention. The first-row left image is an input monitoring image containing an excavator; the model accurately locates the excavator present in it. The first-row right image is an input monitoring image containing tower cranes; the model accurately locates the two tower cranes present in it. The second-row left image is an input snowy-day monitoring image containing an excavator and a crane; the model accurately locates both. The second-row right image is an input snowy-day monitoring image containing tower cranes and excavators; the model accurately locates the one tower crane and two excavators present in it. The third-row left image is an input foggy-day monitoring image containing a tower crane and an excavator; the model accurately locates both. The third-row right image is an input monitoring image at dawn containing a tower crane; the model accurately locates it.
Specific implementation mode
The invention is further explained below.
The following embodiment and its implementation process are carried out in full accordance with the content of the invention:
(1) Acquire a transmission line monitoring image I of known construction-object labels [c_i, x_i, y_i, w_i, h_i], where i indexes the i-th construction object, c_i denotes its class, and x_i, y_i, w_i and h_i denote the x coordinate and y coordinate of its center point and the width and height of its bounding box.
(2) Build the deep-learning-based transmission line construction object detection neural network, mainly comprising the convolution (Convs) module, the candidate region network (RPN), the region-of-interest pooling (RoI Pooling) module, the classification and coordinate regression network (CRN), the candidate-region loss module L_R and the detection loss module L_D.
(3) Input the image I and its corresponding construction-object labels into the transmission line construction object detection neural network and train it with the SGD algorithm with momentum to obtain the network parameters;
In the specific implementation, the momentum is set to 0.9 and the network is trained for 70000 iterations in total, with a learning rate of 0.01 for the first 50000 iterations and 0.001 for the last 20000. After training, the parameters of the neural network are saved.
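The training configuration above amounts to a step learning-rate schedule driving a momentum update. The scalar sketch below shows both; real training applies the update element-wise to every network parameter.

```python
def learning_rate(step):
    """Schedule from the embodiment: 0.01 for the first 50000 of the
    70000 iterations, then 0.001 for the last 20000."""
    return 0.01 if step < 50000 else 0.001

def sgd_momentum_step(param, grad, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update (momentum 0.9 as in the embodiment).
    velocity accumulates a decaying sum of past gradients, smoothing the
    descent direction."""
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity
```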
(4) Process test images of unknown construction-object labels with the trained neural network to obtain the construction-object detection results. Fig. 2 shows the detection results of the embodiment.
Finally, the embodiment is tested on the collected transmission line monitoring image data set, which contains 3 construction-object types in total: crane, tower crane and excavator. A randomly selected 75% of the data set is used for training and the remaining 25% for testing. The standard object detection criteria AP and mAP are used for evaluation; Table 1 gives the AP and mAP values of the method on the test set, where mAP is the mean of the per-class AP values. Larger AP and mAP values indicate better performance.
Table 1: evaluation results of this method on the collected transmission line monitoring image test set

Model | Crane | Tower crane | Excavator | mAP
---|---|---|---|---
This method | 91.5 | 90.1 | 88.7 | 90.1
As the table shows, the mAP of the method reaches 90.1, which demonstrates that the method can accurately detect construction objects near transmission lines.
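The mAP in Table 1 is, as stated, the mean of the per-class AP values, which can be checked directly:

```python
def mean_average_precision(per_class_ap):
    """mAP as used in Table 1: the mean of the per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)

# per-class APs from Table 1: crane, tower crane, excavator
table1_aps = [91.5, 90.1, 88.7]
```

Averaging the three per-class APs reproduces the table's mAP of 90.1.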
Claims (8)
1. A deep-learning-based transmission line construction object image detection method, characterized in that it includes the following steps:
(1) acquiring transmission line monitoring images I of several known construction-object labels, each construction-object label being [c_i, x_i, y_i, w_i, h_i], where i indexes the i-th construction object, c_i denotes its class, and x_i, y_i, w_i and h_i denote the x coordinate and y coordinate of its center point and the width and height of its bounding box;
(2) building a transmission line construction object detection neural network, inputting the images I and their corresponding construction-object labels into the network, and training the neural network with the SGD algorithm with momentum;
(3) processing the monitoring images of unknown construction-object labels with the trained neural network to obtain the detection results for the construction objects in those images.
2. The deep-learning-based transmission line construction object image detection method according to claim 1, characterized in that: the transmission line construction object detection neural network in step (2) specifically includes a convolution (Convs) module, a candidate region network (RPN), a region-of-interest pooling (RoI Pooling) module, a classification and coordinate regression network (CRN), a candidate-region loss module L_R and a detection loss module L_D; the image I is input to the convolution module, which outputs a feature map F_I; the feature map F_I is input to the candidate region network to generate a candidate region set R_o; the region-of-interest pooling module pools each candidate region on the feature map F_I according to the candidate region set R_o to obtain the corresponding candidate-region feature F_o; the candidate-region feature F_o is input to the classification and coordinate regression network to obtain the detection result D of candidate-region classification and coordinate regression; the candidate region set R_o is input to the candidate-region loss module L_R to compute the loss value of the predicted candidate regions, and the detection result D is input to the detection loss module L_D to compute the loss value of the predicted target boxes; the loss values computed by the candidate-region loss module L_R and the detection loss module L_D are back-propagated in iterative computations to train the neural network.
3. The deep-learning-based transmission line construction object image detection method according to claim 2, characterized in that: the convolution module mainly consists of five sub-modules, each formed by a convolutional layer, an activation layer and a pooling layer connected in sequence; the pooling layers of the first four sub-modules halve the resolution of the processed image, so the feature map F_I finally output by the Convs module has 1/16 the resolution of the initial input image.
4. The deep-learning-based transmission line construction object image detection method according to claim 2, characterized in that: the candidate region network presets reference boxes of B different scales; for each pixel position on the feature map F_I it builds B reference boxes of different scales and predicts B candidate regions at that position; each candidate region is described by a prediction value set [s, t_x, t_y, t_w, t_h], where s denotes the predicted probability that the candidate region contains a construction object, t_x and t_y denote the predicted offsets of the candidate region's center relative to the reference box center (x_a, y_a), and t_w and t_h denote the predicted offsets of the candidate region's width w and height h relative to the reference box width w_a and height h_a; the specific position and size of the candidate region are computed with the following formulas:
x_r = t_x · w_a + x_a
y_r = t_y · h_a + y_a
w_r = exp(t_w) · w_a
h_r = exp(t_h) · h_a
where x_r, y_r, w_r and h_r denote the center coordinates, width and height of the candidate region, x_a, y_a, w_a and h_a denote the center coordinates, width and height of the reference box, and exp(·) denotes the exponential function;
after the candidate regions at all pixel positions are obtained, the N_o candidate regions with the highest predicted probability s of containing a construction object are selected from all candidate regions to form the output candidate region set R_o.
5. The deep-learning-based transmission line construction object image detection method according to claim 2, characterized in that: the region pooling module pools, for each candidate region in the set R_o, the corresponding region on the feature map F_I into a feature F_o of fixed length, specifically:
first, the candidate region is divided into k × k sub-boxes, and the feature of each sub-box is computed with the following formula, forming the k × k-dimensional candidate-region feature:
F_o(i, j) = Σ_{p ∈ bin(i,j)} F_I(p) / n_ij
where p denotes a pixel inside sub-box bin(i, j), n_ij denotes the total number of pixels inside sub-box bin(i, j), and i and j denote the row and column indices of the sub-box, with i, j = 1 ~ k; F_I(p) denotes the feature of the feature map F_I at pixel p, and F_o(i, j) denotes the feature of sub-box bin(i, j);
then, the k × k-dimensional candidate-region features are passed through two fully connected layers, each with N_p output neurons, so that each candidate region yields a fixed-length candidate-region feature F_o of length N_p.
6. The deep-learning-based transmission line construction object image detection method according to claim 2, characterized in that: the classification and coordinate regression network mainly consists of two sub-modules, classification and coordinate regression;
the classification sub-module mainly consists of a fully connected layer with (N_c + 1) output neurons and a Softmax layer, where N_c denotes the total number of construction-object classes; the candidate-region feature F_o passes through the classification sub-module to obtain a probability distribution of length (N_c + 1) over the classes (background included) to which the construction object in the candidate region belongs;
the coordinate regression sub-module mainly consists of a fully connected layer with 4(N_c + 1) output neurons, where N_c denotes the total number of construction-object classes; the coordinate regression sub-module predicts, for the candidate region, the offsets of the predicted target box relative to the candidate region under each construction-object class, giving the offset set [b_0, b_1, ..., b_{N_c}], where b_i = [b_xi, b_yi, b_wi, b_hi] denotes, under the i-th construction-object class (i = 0 ~ N_c), the offsets of the predicted target box's center coordinates, width and height relative to the candidate region; the predicted target box of the construction object corresponding to the candidate region is obtained with the following formulas:
x = w_r · b_x + x_r
y = h_r · b_y + y_r
w = exp(b_w) · w_r
h = exp(b_h) · h_r
where x, y, w and h denote the center coordinates, width and height of the predicted target box, b_x, b_y, b_w and b_h denote the predicted offsets of the target box relative to the candidate region, and exp(·) denotes the exponential function.
7. a kind of transmission line construction subject image detection method based on deep learning according to claim 2, special
Sign is:The candidate region loss function module LRUsing following formula counting loss functional value:
Wherein, q indicates to include the true probability of construction object in candidate region, and q ∈ { 0,1 }, q=1 are indicated in candidate region
Necessary being construction object, q=0 indicate that there is no construction objects in candidate region;S is indicated in candidate region comprising construction object
Prediction probability;Indicate prediction drift value set of the candidate region with respect to reference block, Respectively
Indicate candidate region with respect to reference block transverse and longitudinal coordinate actual shifts value and wide and long actual shifts value;T indicates candidate region phase
To the prediction drift value set of reference block, t=[tx,ty,tw,th];
the detection loss function module L_D computes its loss value with the following formula:
wherein δ_i denotes the label indicating whether the candidate region belongs to the i-th construction object class: δ_i is 1 when the candidate region belongs to the i-th class, and 0 otherwise; p_i denotes the predicted probability that the candidate region belongs to the i-th construction object class; b_i denotes the predicted offsets of the predicted target frame relative to the candidate region under the i-th construction object class; b*_i = [b*_{i,x}, b*_{i,y}, b*_{i,w}, b*_{i,h}] denotes the corresponding actual offsets of the predicted target frame relative to the candidate region under the i-th class, whose components are the actual offsets of the horizontal and vertical coordinates and of the width and height; N_c denotes the total number of construction object classes.
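The formula images for L_R and L_D are not reproduced in this text. As a hedged sketch consistent with the symbol definitions above, a common Faster R-CNN-style instantiation (binary cross-entropy plus smooth-L1 regression for L_R; class cross-entropy plus class-specific smooth-L1 regression for L_D — an assumption, not the patent's exact formulas) would be:

```python
import math

def smooth_l1(d):
    """Smooth-L1 (Huber-like) penalty commonly used for box regression."""
    a = abs(d)
    return 0.5 * d * d if a < 1.0 else a - 0.5

def candidate_region_loss(q, s, t, t_star):
    """L_R sketch: objectness cross-entropy (q in {0,1} ground truth,
    s predicted probability) plus offset regression against t*,
    applied only when a construction object is truly present (q == 1)."""
    cls = -(q * math.log(s) + (1 - q) * math.log(1 - s))
    reg = q * sum(smooth_l1(ti - tsi) for ti, tsi in zip(t, t_star))
    return cls + reg

def detection_loss(delta, p, b, b_star):
    """L_D sketch: cross-entropy over the Nc construction object classes
    (delta the one-hot label, p the predicted distribution) plus box
    regression restricted to the labeled class via delta."""
    cls = -sum(d * math.log(pi) for d, pi in zip(delta, p) if d > 0)
    reg = sum(d * smooth_l1(bij - bsij)
              for d, bi, bsi in zip(delta, b, b_star)
              for bij, bsij in zip(bi, bsi))
    return cls + reg
```

Note that both losses gate the regression term by the label (q or δ_i), so background regions contribute only a classification penalty.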
8. The deep-learning-based transmission line construction object image detection method according to claim 2, characterized in that: step (4) specifically comprises: for a transmission line monitoring image with unknown construction object labels, inputting the image into the trained neural network from which the two loss function modules L_D and L_R have been removed, and predicting the final candidate region set R_o together with, for each candidate region, its final probability distribution over the construction object classes and its final predicted target frame; for each candidate region, selecting the class with the maximum probability as that region's predicted class; and finally selecting the predicted target frames whose predicted class probability exceeds the threshold η as the final prediction result.
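The inference step above (argmax over class probabilities, then thresholding by η) can be sketched as follows. The dictionary field names are illustrative assumptions, not from the patent text:

```python
def select_detections(regions, eta=0.5):
    """Sketch of step (4): each candidate region carries per-class
    probabilities ('probs') and per-class predicted target frames
    ('boxes'). Pick the argmax class for each region, then keep only
    predictions whose class probability exceeds the threshold eta."""
    results = []
    for r in regions:
        probs = r["probs"]
        i = max(range(len(probs)), key=probs.__getitem__)  # predicted class
        if probs[i] > eta:
            results.append({"cls": i, "score": probs[i], "box": r["boxes"][i]})
    return results
```

Raising η trades recall for precision: fewer, more confident construction object detections survive the filter.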
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810584363.2A CN108734225A (en) | 2018-06-08 | 2018-06-08 | A kind of transmission line construction subject image detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108734225A true CN108734225A (en) | 2018-11-02 |
Family
ID=63932437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810584363.2A Pending CN108734225A (en) | 2018-06-08 | 2018-06-08 | A kind of transmission line construction subject image detection method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108734225A (en) |
- 2018-06-08: CN application CN201810584363.2A filed (published as CN108734225A, status: Pending)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106326858A (en) * | 2016-08-23 | 2017-01-11 | Beihang University | Road traffic sign automatic identification and management system based on deep learning |
CN106919978A (en) * | 2017-01-18 | 2017-07-04 | Southwest Jiaotong University | A high-speed railway catenary support device component recognition and detection method |
CN107563412A (en) * | 2017-08-09 | 2018-01-09 | Zhejiang University | A deep-learning-based real-time detection method for power equipment in infrared images |
Non-Patent Citations (2)
Title |
---|
M. FARENZENA ET AL.: "System's Nonlinearity Measurement Based on the RPN Concept", IFAC Proceedings Volumes * |
WU WEIMING: "Research on the Object Detection Algorithm Based on Faster R-CNN", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110109535A (en) * | 2019-03-18 | 2019-08-09 | 国网浙江省电力有限公司信息通信分公司 | Augmented reality generation method and device |
CN109902974A (en) * | 2019-04-11 | 2019-06-18 | 北京拓疆者智能科技有限公司 | A kind of generation method and device of arrangement and method for construction |
CN110705542A (en) * | 2019-04-15 | 2020-01-17 | 中国石油大学(华东) | Crane intrusion detection mechanism under power transmission scene based on HDNet |
CN110070530A (en) * | 2019-04-19 | 2019-07-30 | 山东大学 | A kind of powerline ice-covering detection method based on deep neural network |
CN110070530B (en) * | 2019-04-19 | 2020-04-10 | 山东大学 | Transmission line icing detection method based on deep neural network |
CN110705414A (en) * | 2019-09-24 | 2020-01-17 | 智洋创新科技股份有限公司 | Power transmission line construction machinery hidden danger detection method based on deep learning |
CN111325708A (en) * | 2019-11-22 | 2020-06-23 | 济南信通达电气科技有限公司 | Power transmission line detection method and server |
CN111859779A (en) * | 2020-06-05 | 2020-10-30 | 北京市燃气集团有限责任公司 | Early warning method and device for preventing third-party construction damage risk of gas pipe network |
CN111859779B (en) * | 2020-06-05 | 2024-04-12 | 北京市燃气集团有限责任公司 | Method and device for early warning of third party construction damage risk of gas pipe network |
CN111881760A (en) * | 2020-06-30 | 2020-11-03 | 深圳金三立视频科技股份有限公司 | Transmission line external damage prevention identification method and terminal |
CN111881760B (en) * | 2020-06-30 | 2021-10-08 | 深圳金三立视频科技股份有限公司 | Transmission line external damage prevention identification method and terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734225A (en) | A kind of transmission line construction subject image detection method based on deep learning | |
CN109919108A (en) | Remote sensing images fast target detection method based on depth Hash auxiliary network | |
CN107316064B (en) | Asphalt pavement crack classification and identification method based on convolutional neural network | |
Arora et al. | Comparative evaluation of geospatial scenario-based land change simulation models using landscape metrics | |
CN107103754A (en) | A kind of road traffic condition Forecasting Methodology and system | |
Rahaman et al. | An efficient multilevel thresholding based satellite image segmentation approach using a new adaptive cuckoo search algorithm | |
Alkhasawneh et al. | Determination of important topographic factors for landslide mapping analysis using MLP network | |
CN106874688A (en) | Intelligent lead compound based on convolutional neural networks finds method | |
Maithani | A neural network based urban growth model of an Indian city | |
CN108229425A (en) | A kind of identifying water boy method based on high-resolution remote sensing image | |
CN104408481B (en) | Classification of Polarimetric SAR Image method based on depth wavelet neural network | |
CN107092870A (en) | A kind of high resolution image semantics information extracting method and system | |
CN103198480B (en) | Based on the method for detecting change of remote sensing image of region and Kmeans cluster | |
CN112561796B (en) | Laser point cloud super-resolution reconstruction method based on self-attention generation countermeasure network | |
CN105549009B (en) | A kind of SAR image CFAR object detection methods based on super-pixel | |
CN105205453A (en) | Depth-auto-encoder-based human eye detection and positioning method | |
CN113408423A (en) | Aquatic product target real-time detection method suitable for TX2 embedded platform | |
CN106991666A (en) | A kind of disease geo-radar image recognition methods suitable for many size pictorial informations | |
CN113505510B (en) | Ecological safety pattern recognition method fusing landscape index and random walk model | |
Zhou et al. | Concrete roadway crack segmentation using encoder-decoder networks with range images | |
CN105005983A (en) | SAR image background clutter modeling and target detection method | |
CN108460336A (en) | A kind of pedestrian detection method based on deep learning | |
CN114387270B (en) | Image processing method, image processing device, computer equipment and storage medium | |
Arif et al. | Adaptive deep learning detection model for multi-foggy images | |
CN115457001A (en) | Photovoltaic panel foreign matter detection method, system, device and medium based on VGG network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181102 ||