CN110532894A - Remote sensing target detection method based on boundary constraint CenterNet - Google Patents

Remote sensing target detection method based on boundary constraint CenterNet

Info

Publication number
CN110532894A
CN110532894A (application CN201910718858.4A)
Authority
CN
China
Prior art keywords
network
constraint
boundary constraint
training sample
centernet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910718858.4A
Other languages
Chinese (zh)
Other versions
CN110532894B (en)
Inventor
冯婕
曾德宁
李迪
焦李成
张向荣
曹向海
刘若辰
尚荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Electronic Science and Technology
Original Assignee
Xian University of Electronic Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Electronic Science and Technology filed Critical Xian University of Electronic Science and Technology
Priority to CN201910718858.4A priority Critical patent/CN110532894B/en
Publication of CN110532894A publication Critical patent/CN110532894A/en
Application granted granted Critical
Publication of CN110532894B publication Critical patent/CN110532894B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention proposes a remote sensing target detection method based on a boundary-constrained CenterNet, to solve the technical problem that the detection accuracy and recall of dense small objects in the prior art are low. The steps are: obtain a training sample set; construct the boundary-constrained CenterNet network; obtain the prediction labels and embedding vectors of the training sample set; calculate the loss of the boundary-constrained CenterNet network; train the network; and obtain object detection results with the trained network. By performing max pooling within a constrained pooling region through corner-constrained pooling layers, the invention extracts fine-grained features around each object, effectively improving the detection accuracy and recall of dense small objects; at the same time, the boundary constraint labels generated by the boundary-constraint convolutional network are used to constrain the prediction boxes, yielding more accurate object prediction boxes and further improving detection accuracy.

Description

Remote sensing target detection method based on boundary constraint CenterNet
Technical field
The invention belongs to the technical field of machine vision and relates to an image object detection method, in particular to an object detection method based on a boundary-constrained CenterNet, which can be used for object detection in remote sensing images.
Background technique
Object detection is one of the core research topics of machine vision. It is a technique that extracts and processes image features, then regresses and classifies all objects of interest in an image to determine their positions and categories; it is widely applied to object detection in remote sensing images. The technical indicators of an object detection method include detection accuracy, recall, and detection speed. Remote sensing images are affected by image resolution and contain large numbers of dense small objects. Because of the low resolution of the images themselves, these dense small objects occupy only a small proportion of the whole image, so that when objects must be detected and localized quickly, dense small objects are difficult to detect accurately, degrading detection accuracy and recall.
Object detection methods are divided into single-stage and two-stage methods. Single-stage methods can be further divided into two classes: those that regress object rectangles and those that regress object keypoints. Single-stage methods based on rectangle regression regress the object rectangle directly and classify it to obtain the position and category of each object. Such methods require many network hyperparameters to be set when designing the network, which complicates the design process; they also need to make predictions at multiple scales, which slows down both network training and detection.
Single-stage methods based on keypoint regression first regress object keypoints, then obtain the position and category of each object from the positions and classes of the keypoints. These methods need neither network hyperparameter tuning nor multi-scale prediction, and therefore have few hyperparameters, fast training, and fast detection. However, an object prediction box obtained from only two corner points produces many false detections, and the features extracted by global corner pooling are unfavorable for detecting dense small objects. For example, the paper "CenterNet: Keypoint Triplets for Object Detection" by Kaiwen Duan et al. (arXiv:1904.08189) proposes an object detection algorithm that regresses two corner points and a center point. The method generates high-quality top-left corners, bottom-right corners, and center points through cascaded global corner pooling layers, obtains the prediction box of each object from the top-left corner, bottom-right corner, and center point, and takes the position and classification of the prediction box as the position and category of the object. Adding the notion of a center point improves detection accuracy and recall, but the method has shortcomings: because the global corner pooling it uses pools over the entire feature map, it extracts global feature information, which degrades the detection accuracy and recall of dense small objects; moreover, mismatched corner pairs generate false prediction boxes, degrading detection accuracy.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the invention is to propose an object detection method based on a boundary-constrained CenterNet, to solve the technical problem that the detection accuracy and recall of dense small objects in the prior art are low.
To achieve the above object, the technical solution adopted by the invention comprises the following steps:
(1) Obtain a training sample set:
Randomly select N images of pixel size W × H × c from a remote sensing image dataset as the training sample set, where N ≥ 1000;
(2) Construct the boundary-constrained CenterNet network:
(2a) Construct a feature extraction network, a boundary-constraint convolutional network, and a keypoint generation network, in which:
the feature extraction network comprises multiple convolutional layers, multiple down-sampling layers, and multiple up-sampling layers stacked in cascade; the boundary-constraint convolutional network comprises multiple convolutional layers stacked in cascade; the keypoint generation network comprises a corner-point boundary-constraint network and a center-point boundary-constraint network connected in parallel, each of which is a cascade of multiple convolutional layers for extracting deep semantic features and multiple corner-constrained pooling layers that perform keypoint boundary-constrained pooling on the feature map to be pooled;
(2b) Take the output of the feature extraction network as the input of both the boundary-constraint convolutional network and the keypoint generation network, and take the output of the boundary-constraint convolutional network as the pooling kernels of the corner-constrained pooling layers in the keypoint generation network, obtaining the boundary-constrained CenterNet network;
(3) Obtain the prediction label z1 and embedding vector e of the training sample set:
Input the training sample set into the boundary-constrained CenterNet network to obtain the prediction label z1 and the embedding vector e of the training sample set, where z1 comprises a boundary constraint prediction label, a heatmap prediction label, and an offset prediction label;
(4) Calculate the loss L of the boundary-constrained CenterNet network:
Calculate the loss L1 between the prediction label z1 and the true label z'1 of the training sample set and the distance loss L2 of the embedding vector e of the training sample set, and take the sum of L1 and L2 as the loss L of the boundary-constrained CenterNet network: L = L1 + L2;
(5) Train the boundary-constrained CenterNet network:
Perform k iterations of optimization training on the boundary-constrained CenterNet network with gradient descent, guided by L, to obtain the trained boundary-constrained CenterNet network, where k ≥ 5000;
(6) Obtain object detection results with the trained boundary-constrained CenterNet network:
(6a) Input an image to be detected of the same type as the training samples into the trained boundary-constrained CenterNet network to obtain the boundary constraint prediction label Z1, heatmap prediction label Z2, offset prediction label Z3, and embedding vector E of the image;
(6b) Generate object prediction boxes from Z2, Z3, and E, constrain the confidence s of each prediction box through Z1 to obtain the constrained confidence s', and take the positions and classes of the prediction boxes with s' > sth as the positions and categories of the objects, where sth is a confidence threshold and the constraint formula for s is:
where α is the constraint rate, s' is the constrained confidence, wt and wb are the width constraint values of the top and bottom corners in Z1, ht and hb are the height constraint values of the top and bottom corners in Z1, tx and ty are the (x, y) coordinates of the top corner of the two corner points of the prediction box, and bx and by are the (x, y) coordinates of the bottom corner.
Compared with the prior art, the invention has the following advantages:
1. The invention constructs a boundary-constrained CenterNet network. When extracting keypoint features, a corner-constrained pooling layer is constructed that constrains the pooling region, so that the extracted object features are finer and the features around each object are fully used for detection, yielding accurate object positions and categories. This overcomes the problem in the prior art that extracting global features from the feature map leads to poor detection of dense small objects, and effectively improves the detection accuracy and recall of dense small objects.
2. The invention constructs a boundary-constrained CenterNet network that makes full use of the boundary constraint prediction label generated by the boundary-constraint convolutional network to constrain and reject false prediction boxes, obtaining more accurate object prediction boxes. This solves the problem in the prior art that mismatched corner pairs lower detection accuracy, further improving object detection accuracy.
Detailed description of the invention
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 is a simulation comparison of the object detection results of the invention and of the prior art.
Specific embodiment
The invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1) Obtain a training sample set:
Randomly select N images of pixel size W × H × c from a remote sensing image dataset as the training sample set, where N = 10000, W = H = 511, c = 3;
Step 2) Construct the boundary-constrained CenterNet network:
(2a) Construct a feature extraction network, a boundary-constraint convolutional network, and a keypoint generation network, in which:
the feature extraction network comprises, stacked in sequence: a first input layer, a first down-sampling convolutional layer, a first convolutional layer, a second down-sampling convolutional layer, a second convolutional layer, a third down-sampling convolutional layer, a fourth down-sampling convolutional layer, a fifth down-sampling convolutional layer, a third convolutional layer, a first up-sampling convolutional layer, a second up-sampling convolutional layer, and a third up-sampling convolutional layer;
the boundary-constraint convolutional network comprises, stacked in sequence: a second input layer, a fourth convolutional layer, and a fifth convolutional layer;
the keypoint generation network comprises a corner-point boundary-constraint network with a first corner-constrained pooling layer and a second corner-constrained pooling layer arranged in parallel, where the first corner-constrained pooling layer connects to a sixth convolutional layer and the second corner-constrained pooling layer connects to a seventh convolutional layer; and a center-point boundary-constraint network comprising, stacked in sequence, a third corner-constrained pooling layer, an eighth convolutional layer, a fourth corner-constrained pooling layer, and a ninth convolutional layer;
To improve the detection accuracy and recall of dense small objects, the corner-constrained pooling layers are constructed to constrain the pooling region, extract the fine-grained features around each object, and make full use of those features for detection. The first and third corner-constrained pooling layers are the pooling layers of the top-left corner, and the second and fourth corner-constrained pooling layers are the pooling layers of the bottom-right corner; their max pooling formulas are respectively:
where tlx,y and brx,y are the features at coordinate (x, y) of the pooled top-left and bottom-right corner maps respectively, ti,j is the feature at coordinate (i, j) of the feature map to be pooled, wkt and wkb are the width pooling-kernel values of the top and bottom corners respectively, and hkt and hkb are the height pooling-kernel values of the top and bottom corners respectively;
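The constrained pooling described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions (the patent's exact pooling formulas are given in its figures): unlike CenterNet-style global corner pooling, which takes a max over an entire row or column, each position pools only over a window whose extent is the predicted kernel size. Function and parameter names here are hypothetical, not from the patent.

```python
import numpy as np

def constrained_corner_pool(feat, w_k, h_k, corner="top_left"):
    """Max-pool each position of a 2-D feature map over a constrained region.

    feat : 2-D feature map of shape (H, W)
    w_k, h_k : width/height of the pooling kernel (the constraint values)
    corner : "top_left" pools rightward/downward, "bottom_right" leftward/upward,
             mirroring the scan directions of corner pooling.
    """
    H, W = feat.shape
    out = np.empty_like(feat)
    for y in range(H):
        for x in range(W):
            if corner == "top_left":
                # a top-left corner looks right and down, but only h_k x w_k far
                region = feat[y:min(y + h_k, H), x:min(x + w_k, W)]
            else:
                # a bottom-right corner looks left and up
                region = feat[max(y - h_k + 1, 0):y + 1,
                              max(x - w_k + 1, 0):x + 1]
            out[y, x] = region.max()
    return out
```

With w_k = W and h_k = H this degenerates to pooling over the whole map, which illustrates why an unconstrained kernel loses the local detail that dense small objects depend on.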
(2b) Take the output of the feature extraction network as the input of both the boundary-constraint convolutional network and the keypoint generation network, and take the output of the boundary-constraint convolutional network as the pooling kernels of the corner-constrained pooling layers in the keypoint generation network, obtaining the boundary-constrained CenterNet network;
Step 3) Obtain the prediction label z1 and embedding vector e of the training sample set:
(3a) Extract features from the training sample set through the feature extraction network to obtain the feature map A of the training sample set;
(3b) Perform feature mapping on A through the boundary-constraint convolutional network to obtain the boundary constraint prediction label of the training sample set;
(3c) Take A as the input of the keypoint generation network, take the boundary constraint prediction label of the training sample set as the pooling kernels of the corner-constrained pooling layers in the keypoint generation network, and pool A to obtain the heatmap prediction label, the offset prediction label, and the embedding vector e of the training sample set; the boundary constraint prediction label, heatmap prediction label, and offset prediction label together form the prediction label z1 of the training sample set;
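The data flow of steps (3a)–(3c) can be sketched with simple numpy stand-ins for the three sub-networks. This only illustrates the wiring (the backbone output A feeds both the boundary-constraint network and the keypoint network, and the former's output acts as per-position pooling kernels); the function bodies are toy placeholders, not the trained convolutional networks, and all names are hypothetical.

```python
import numpy as np

def feature_extraction(img):
    # stand-in for the cascaded conv / down-sampling / up-sampling backbone;
    # the real network outputs a feature map A
    return img.mean(axis=2)                      # (H, W, C) -> (H, W)

def boundary_constraint_net(A):
    # stand-in for the cascaded conv layers: maps features to per-position
    # pooling-kernel sizes (the "boundary constraint prediction label")
    return np.clip((A * 10).astype(int) % 3 + 1, 1, 3)

def keypoint_net(A, kernels):
    # stand-in for the corner/center branches: max-pool A inside the region
    # dictated by the predicted kernels, yielding keypoint scores
    H, W = A.shape
    out = np.empty_like(A)
    for y in range(H):
        for x in range(W):
            k = kernels[y, x]
            out[y, x] = A[y:y + k, x:x + k].max()
    return out

img = np.random.rand(8, 8, 3)
A = feature_extraction(img)                      # step (3a)
kernels = boundary_constraint_net(A)             # step (3b)
heat = keypoint_net(A, kernels)                  # step (3c)
```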
Step 4) Calculate the loss L of the boundary-constrained CenterNet network:
(4a) Calculate the loss L1 between the prediction label z1 and the true label z'1 of the training sample set and the distance loss L2 of the embedding vector e of the training sample set, with the formulas:
L1 = Lk1 + Lk2 + Lk3
where Lk1 is the boundary constraint loss, Lk2 is the heatmap loss, Lk3 is the offset loss, N3 is the number of objects in the training sample set, etk is the embedding vector corresponding to the top corner of the k-th object in the embedding vector of the training sample set, and ebk is the embedding vector corresponding to the bottom corner of the k-th object; the formulas for Lk1, Lk2, and Lk3 are respectively:
where W = H = 511, bcij is the value at coordinate (c, i, j) of the boundary constraint prediction label in z1, b'cij is the value at coordinate (c, i, j) of the boundary constraint true label in z'1, N1 and N2 are the numbers of elements of the boundary constraint true label that are greater than 0 and equal to 0 respectively, C = 12 is the number of object categories, N3 is the number of objects in the training sample set, β = 4, ycij is the value at coordinate (c, i, j) of the heatmap prediction label in z1, y'cij is the value at coordinate (c, i, j) of the heatmap true label in z'1, ocij is the value at coordinate (c, i, j) of the offset prediction label in z1, and o'cij is the value at coordinate (c, i, j) of the offset true label in z'1;
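The component formulas for Lk1–Lk3 and L2 appear in the patent's figures and are not reproduced in this text, but the symbols around L2 (per-object top/bottom corner embeddings etk and ebk, their mean, and a margin hyperparameter Δ) suggest the CornerNet-style "pull/push" embedding loss: each object's two corner embeddings are pulled toward their mean, and the means of different objects are pushed at least Δ apart. The sketch below is a hedged reconstruction under that assumption, not the patent's exact formula, and its names are illustrative.

```python
import numpy as np

def embedding_distance_loss(e_top, e_bot, delta=1.0):
    """Pull/push embedding loss for N3 objects.

    e_top, e_bot : 1-D arrays of top/bottom corner embeddings, one per object
    delta : margin hyperparameter (the patent's experimentally set Δ)
    """
    e_top = np.asarray(e_top, dtype=float)
    e_bot = np.asarray(e_bot, dtype=float)
    n = len(e_top)
    e_mean = (e_top + e_bot) / 2.0
    # pull: corners of the same object toward their mean embedding
    pull = np.mean((e_top - e_mean) ** 2 + (e_bot - e_mean) ** 2)
    # push: mean embeddings of different objects at least delta apart
    push = 0.0
    if n > 1:
        diff = np.abs(e_mean[:, None] - e_mean[None, :])
        off_diag = ~np.eye(n, dtype=bool)
        push = np.mean(np.maximum(0.0, delta - diff[off_diag]))
    return pull + push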
(4b) Take the sum of L1 and L2 as the loss L of the boundary-constrained CenterNet network: L = L1 + L2;
Step 5) Train the boundary-constrained CenterNet network:
Perform k iterations of optimization training on the boundary-constrained CenterNet network with gradient descent, guided by L, to obtain the trained boundary-constrained CenterNet network, where k = 150000;
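Step 5) is plain iterative gradient descent on the combined loss L. A toy illustration of the update rule on a scalar stand-in loss (the real training differentiates the network loss with respect to all weights; names and the toy loss are illustrative only):

```python
def gradient_descent(loss_grad, theta, lr=0.1, k=100):
    # k iterations of: theta <- theta - lr * dL/dtheta
    for _ in range(k):
        theta -= lr * loss_grad(theta)
    return theta

# stand-in loss L(theta) = (theta - 3)^2, with gradient 2 * (theta - 3);
# its minimizer theta = 3 plays the role of the trained network weights
theta_star = gradient_descent(lambda t: 2.0 * (t - 3.0), theta=0.0)
```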
Step 6) Obtain object detection results with the trained boundary-constrained CenterNet network:
(6a) Input an image to be detected of the same type as the training samples into the trained boundary-constrained CenterNet network to obtain the boundary constraint prediction label Z1, heatmap prediction label Z2, offset prediction label Z3, and embedding vector E of the image;
(6b1) From all top-corner positions and bottom-corner positions in the heatmap prediction label Z2, synthesize multiple horizontal rectangular boxes, retain those that contain a center point of Z2, and calculate their confidence s1:
s1 = (st + sb + sc)/3
where st, sb, and sc are, respectively, the confidences of the top corner, bottom corner, and center point of the uncorrected horizontal rectangular box;
(6b2) Correct the horizontal rectangular boxes according to the values of the offset prediction label Z3 to obtain corrected horizontal rectangular boxes, with the correction formula:
where (x', y') is the (x, y) coordinate of the horizontal rectangular box, (x'', y'') is the (x, y) coordinate of the corrected box, and the correction terms are the values at coordinate (x, y) of the offset prediction label Z3;
(6b2) Filter the corrected horizontal rectangular boxes according to the embedding vector E: set to zero the confidence of any corrected box whose corner embeddings satisfy |e't − e'b| ≥ eth (i.e., whose top and bottom corners do not belong to the same object), and retain the corrected boxes with s1 > sth, obtaining object prediction boxes with confidence s, where e't and e'b are, respectively, the embedding vectors in E of the top and bottom corners of the corrected box, eth = 1, and sth = 0.4.
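Steps (6b1)–(6b2) can be sketched as a single post-processing routine: pair top and bottom corners into horizontal boxes, keep a box only if a predicted center point falls inside it, average the three confidences, and reject pairs whose embeddings are farther apart than eth. This is an illustrative sketch (the offset correction of step (6b2) is omitted for brevity); all names are hypothetical.

```python
def synthesize_boxes(corners_tl, corners_br, centers, e_th=1.0, s_th=0.4):
    """Each corner/center is a tuple (x, y, score, embedding).

    Returns boxes (tx, ty, bx, by, s1) that survive the center-point check,
    the embedding-distance filter, and the confidence threshold.
    """
    boxes = []
    for tx, ty, s_t, e_t in corners_tl:
        for bx, by, s_b, e_b in corners_br:
            if bx <= tx or by <= ty:
                continue                  # corners do not form a valid box
            if abs(e_t - e_b) >= e_th:
                continue                  # corners belong to different objects
            # keep the box only if some predicted center point lies inside it
            inside = [s_c for cx, cy, s_c, _ in centers
                      if tx < cx < bx and ty < cy < by]
            if not inside:
                continue
            s1 = (s_t + s_b + max(inside)) / 3.0   # step (6b1)
            if s1 > s_th:
                boxes.append((tx, ty, bx, by, s1))
    return boxes
```

Usage: with one top-left corner at (0, 0), one bottom-right corner at (10, 10), and a center at (5, 5), all with score 0.9 and matching embeddings, a single box with confidence 0.9 survives; raising the bottom corner's embedding past eth removes it.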
(6b3) To further improve detection accuracy, constrain the confidence s of each object prediction box through Z1, reducing the confidence of false prediction boxes, to obtain the constrained confidence s'; take the positions and classes of the prediction boxes with s' > sth as the positions and categories of the objects, obtaining more accurate detection results, where sth = 0.4 and the constraint formula for s is:
where α = 1 is the constraint rate, s' is the constrained confidence, wt and wb are the width constraint values of the top and bottom corners in Z1, ht and hb are the height constraint values of the top and bottom corners in Z1, tx and ty are the (x, y) coordinates of the top corner of the two corner points of the prediction box, and bx and by are the (x, y) coordinates of the bottom corner.
The effect of the invention is further described below with reference to simulation experiments.
1. Simulation conditions:
The hardware platform of the simulation experiments is: an Intel i7 5930k CPU at 3.5 GHz with 48 GB of memory and an Nvidia GTX1080Ti 11G graphics card; the software platform is the Windows 10 operating system and Python 3.6.
2. Simulation content and result analysis:
The detection results of the invention and of the prior-art CenterNet object detection method are compared by simulation; the results are shown in Fig. 2.
Referring to Fig. 2: Fig. 2(a) is an image from the test set of the VisDrone 2019 dataset, Fig. 2(b) shows the reference positions and categories of the test image in Fig. 2(a), Fig. 2(c) is the simulation result of the prior-art CenterNet object detection method on Fig. 2(a), and Fig. 2(d) is the simulation result of the method of the invention on Fig. 2(a).
Comparing Fig. 2(c) and Fig. 2(d) shows that, compared with the prior-art CenterNet object detection method, the invention detects essentially all of the dense small objects in the image with few erroneous prediction boxes, indicating that the invention detects dense small objects better, with higher detection accuracy and recall.
To evaluate the performance of the two methods, the detection results are also evaluated with six metrics (AP, AP50, AP70, AR100, AR500, Time), as follows:
AP denotes the average detection precision of prediction boxes whose intersection over union (IoU) with the true box lies in the interval [0.50, 0.95]; the larger the value, the better the detection. AP50 denotes the average detection precision of prediction boxes with an IoU of 0.50 with the true box; the larger the value, the better the detection. AP70 denotes the average detection precision of prediction boxes with an IoU of 0.70 with the true box; the larger the value, the better the detection. AR100 denotes the average recall of prediction boxes with IoU in [0.50, 0.95], given at most 100 detections; the larger the value, the better the detection. AR500 denotes the average recall of prediction boxes with IoU in [0.50, 0.95], given at most 500 detections; the larger the value, the better the detection. Time denotes the time from image input to detection output; the smaller the value, the faster the detection.
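All of the AP/AR metrics above are defined in terms of the intersection over union between a prediction box and the true box. A minimal IoU computation for axis-aligned boxes in (x1, y1, x2, y2) form (function name is illustrative):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, AP50 counts a prediction as correct when iou(pred, truth) reaches 0.50, while AP averages the precision over IoU thresholds from 0.50 to 0.95.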
The metrics of the object detection obtained by simulation for the invention and the prior art are listed in the following table:

Algorithm       AP     AP50   AP70   AR100  AR500  Time
Prior art       0.221  0.410  0.208  0.269  0.311  150 ms
The invention   0.240  0.437  0.228  0.297  0.336  100 ms

As the comparison of average precision and average recall in the table shows, compared with the prior art, the invention improves mean precision by about 2 percentage points, improves average recall by about 2.5 percentage points, and reduces detection time by 50 ms.
In conclusion the object detection method proposed by the present invention based on boundary constraint CenterNet, building is based on boundary The corner constraint pond layer of constraint constrains pond regional scope, extracts the fine-feature around target, effectively improves intensive small The detection accuracy and recall rate of target, while void is filtered out using the boundary constraint prediction label that boundary constraint convolutional network generates False prediction block obtains more accurately target prediction frame, the effective detection accuracy for improving target.

Claims (6)

1. A remote sensing target detection method based on a boundary-constrained CenterNet, characterized by comprising the following steps:
(1) Obtain a training sample set:
Randomly select N images of pixel size W × H × c from a remote sensing image dataset as the training sample set, where N ≥ 1000;
(2) Construct the boundary-constrained CenterNet network:
(2a) Construct a feature extraction network, a boundary-constraint convolutional network, and a keypoint generation network, in which:
the feature extraction network comprises multiple convolutional layers, multiple down-sampling layers, and multiple up-sampling layers stacked in cascade; the boundary-constraint convolutional network comprises multiple convolutional layers stacked in cascade; the keypoint generation network comprises a corner-point boundary-constraint network and a center-point boundary-constraint network connected in parallel, each of which is a cascade of multiple convolutional layers for extracting deep semantic features and multiple corner-constrained pooling layers that perform keypoint boundary-constrained pooling on the feature map to be pooled;
(2b) Take the output of the feature extraction network as the input of both the boundary-constraint convolutional network and the keypoint generation network, and take the output of the boundary-constraint convolutional network as the pooling kernels of the corner-constrained pooling layers in the keypoint generation network, obtaining the boundary-constrained CenterNet network;
(3) Obtain the prediction label z1 and embedding vector e of the training sample set:
Input the training sample set into the boundary-constrained CenterNet network to obtain the prediction label z1 and the embedding vector e of the training sample set, where z1 comprises a boundary constraint prediction label, a heatmap prediction label, and an offset prediction label;
(4) Calculate the loss L of the boundary-constrained CenterNet network:
Calculate the loss L1 between the prediction label z1 and the true label z'1 of the training sample set and the distance loss L2 of the embedding vector e of the training sample set, and take the sum of L1 and L2 as the loss L of the boundary-constrained CenterNet network: L = L1 + L2;
(5) Train the boundary-constrained CenterNet network:
Perform k iterations of optimization training on the boundary-constrained CenterNet network with gradient descent, guided by L, to obtain the trained boundary-constrained CenterNet network, where k ≥ 5000;
(6) Obtain object detection results with the trained boundary-constrained CenterNet network:
(6a) Input an image to be detected of the same type as the training samples into the trained boundary-constrained CenterNet network to obtain the boundary constraint prediction label Z1, heatmap prediction label Z2, offset prediction label Z3, and embedding vector E of the image;
(6b) Generate object prediction boxes from Z2, Z3, and E, constrain the confidence s of each prediction box through Z1 to obtain the constrained confidence s', and take the positions and classes of the prediction boxes with s' > sth as the positions and categories of the objects, where sth is a confidence threshold and the constraint formula for s is:
where α is the constraint rate, s' is the constrained confidence, wt and wb are the width constraint values of the top and bottom corners in Z1, ht and hb are the height constraint values of the top and bottom corners in Z1, tx and ty are the (x, y) coordinates of the top corner of the two corner points of the prediction box, and bx and by are the (x, y) coordinates of the bottom corner.
2. The object detection method based on a boundary-constrained CenterNet according to claim 1, characterized in that the feature extraction network, boundary-constraint convolutional network, and keypoint generation network in step (2a) have the following specific structures:
the feature extraction network comprises, stacked in sequence: a first input layer, a first down-sampling convolutional layer, a first convolutional layer, a second down-sampling convolutional layer, a second convolutional layer, a third down-sampling convolutional layer, a fourth down-sampling convolutional layer, a fifth down-sampling convolutional layer, a third convolutional layer, a first up-sampling convolutional layer, a second up-sampling convolutional layer, and a third up-sampling convolutional layer;
the boundary-constraint convolutional network comprises, stacked in sequence: a second input layer, a fourth convolutional layer, and a fifth convolutional layer;
the keypoint generation network comprises a corner-point boundary-constraint network with a first corner-constrained pooling layer and a second corner-constrained pooling layer arranged in parallel, where the first corner-constrained pooling layer connects to a sixth convolutional layer and the second corner-constrained pooling layer connects to a seventh convolutional layer; and a center-point boundary-constraint network comprising, stacked in sequence, a third corner-constrained pooling layer, an eighth convolutional layer, a fourth corner-constrained pooling layer, and a ninth convolutional layer.
3. The object detection method based on a boundary-constrained CenterNet according to claim 1, characterized in that the keypoint boundary-constrained pooling of the feature map to be pooled in step (2a) comprises the following specific steps: take the pooling-kernel values of the corner-constrained pooling layer as the size of the pooling region on the feature map to be pooled, and perform max pooling within that region to obtain the pooled feature map.
4. The object detection method based on boundary-constraint CenterNet according to claim 1, wherein the prediction label z1 of the training sample set and the embedding vector e of the training sample set in step (3) are obtained through the following steps:
(3a) extracting features from the training sample set through the feature extraction network to obtain a feature map A of the training sample set;
(3b) performing feature mapping on A through the boundary-constraint convolutional network to obtain the boundary-constraint prediction label of the training sample set;
(3c) taking A as the input of the keypoint generation network, and pooling A with the corner-constraint pooling layers of the keypoint generation network, whose pooling kernels are given by the boundary-constraint prediction label of the training sample set, to obtain the heatmap prediction label, the offset prediction label and the embedding vector e of the training sample set; the boundary-constraint prediction label, the heatmap prediction label and the offset prediction label of the training sample set together form the prediction label z1 of the training sample set.
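Steps (3a)-(3c) amount to composing the three sub-networks. A schematic sketch, in which the sub-networks are assumed to be opaque callables and all names are illustrative:

```python
def forward(x, feature_net, boundary_net, keypoint_net):
    """Schematic of steps (3a)-(3c): produce the prediction label z1 and the
    embedding vector e from an input sample x."""
    A = feature_net(x)                # (3a) feature map of the training sample
    boundary = boundary_net(A)        # (3b) boundary-constraint prediction label
    # (3c) the keypoint network pools A with kernels given by the boundary label
    heatmap, offset, e = keypoint_net(A, boundary)
    z1 = (boundary, heatmap, offset)  # prediction label of the sample
    return z1, e
```

With stub callables in place of the real networks, the composition can be exercised end to end before any training code exists.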
5. The object detection method based on boundary-constraint CenterNet according to claim 1, wherein the loss L1 between the prediction label z1 of the training sample set and the true label z′1, and the distance loss L2 of the embedding vector e of the training sample set, in step (4) are computed as follows:
L1 = Lk1 + Lk2 + Lk3
wherein Lk1 is the boundary-constraint loss, Lk2 is the heatmap loss and Lk3 is the offset loss; N3 is the number of targets in the training sample set; etk is the embedding vector corresponding to the upper corner of the k-th target in the embedding vectors of the training sample set, and ebk is the embedding vector corresponding to the lower corner of the k-th target; Δ is an experimentally set hyperparameter. The calculation formulas of Lk1, Lk2 and Lk3 are respectively as follows:
wherein W and H are respectively the width and height of the image; bcij is the value at coordinate (c, i, j) in the boundary-constraint prediction label of z1, and b′cij is the value at coordinate (c, i, j) in the boundary-constraint true label of z′1; N1 and N2 are respectively the numbers of elements greater than 0 and equal to 0 in the boundary-constraint true label; C is the number of target categories; β is an experimentally set hyperparameter; ycij is the value at coordinate (c, i, j) in the heatmap prediction label of z1, and y′cij is the value at coordinate (c, i, j) in the heatmap true label of z′1; ocij is the value at coordinate (c, i, j) in the offset prediction label of z1, and o′cij is the value at coordinate (c, i, j) in the offset true label of z′1.
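The closed-form expressions of the losses appear as images in the original publication and are not reproduced in this text. For orientation only: the quantities this claim names for L2 (a per-target upper-corner embedding, a per-target lower-corner embedding, the target count, and a margin hyperparameter Δ) match the standard CornerNet-style pull/push embedding loss. The sketch below implements that standard form as an assumption, not as the patent's exact definition; embeddings are taken as scalars and all names are illustrative.

```python
def embedding_distance_loss(e_top, e_bottom, delta=1.0):
    """Pull/push embedding loss sketch (CornerNet-style, assumed): pull the
    two corner embeddings of each target toward their mean, and push the
    means of different targets at least `delta` apart."""
    n = len(e_top)
    means = [(t + b) / 2 for t, b in zip(e_top, e_bottom)]
    # pull term: corners of the same target should carry similar embeddings
    pull = sum((t - m) ** 2 + (b - m) ** 2
               for t, b, m in zip(e_top, e_bottom, means)) / n
    # push term: embeddings of different targets should stay separated
    if n > 1:
        push = sum(max(0.0, delta - abs(means[k] - means[j]))
                   for k in range(n) for j in range(n) if j != k) / (n * (n - 1))
    else:
        push = 0.0
    return pull + push

print(embedding_distance_loss([0.0, 2.0], [0.0, 2.0]))  # → 0.0 (tight pairs, well separated)
```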
6. The object detection method based on boundary-constraint CenterNet according to claim 1, wherein the target prediction boxes in step (6b) are generated from the heatmap prediction label Z2, the offset prediction label Z3 and the embedding vector E through the following steps:
(6b1) synthesizing multiple horizontal bounding boxes from all upper-corner and lower-corner positions in the heatmap prediction label Z2, retaining the horizontal bounding boxes that contain a center point of the heatmap prediction label Z2, and computing the confidence s1 of each retained horizontal bounding box:
s1 = (st + sb + sc)/3
wherein st, sb and sc are respectively the confidence of the upper corner, the confidence of the lower corner and the confidence of the center point of the horizontal bounding box;
(6b2) correcting the horizontal bounding boxes according to the values of the offset prediction label Z3 to obtain corrected horizontal bounding boxes, the correction formula being:
x″ = x′ + ôx,  y″ = y′ + ôy
wherein (x′, y′) is the (x, y) coordinate of the horizontal bounding box, (x″, y″) is the (x, y) coordinate of the corrected horizontal bounding box, and ôx and ôy are the values of the offset prediction label Z3 at the (x, y) coordinate;
(6b3) filtering the corrected horizontal bounding boxes according to the embedding vector E: setting the confidence of corrected horizontal bounding boxes with |e′t − e′b| < eth to zero, and retaining the corrected horizontal bounding boxes with s1 > sth, to obtain target prediction boxes with confidence s1, wherein e′t and e′b are respectively the embedding vectors, in the embedding vector E, of the upper corner and the lower corner of the corrected horizontal bounding box, and eth is a distance threshold.
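Steps (6b1)-(6b3) can be sketched as a small decoding routine. This is an illustration under stated assumptions: candidate boxes are plain dicts with hypothetical field names, corner embeddings are scalars, and the embedding filter uses the usual CornerNet-style grouping direction (boxes whose two corner embeddings differ by more than the distance threshold are suppressed).

```python
def decode_boxes(candidates, s_th=0.3, e_th=0.5):
    """Score, offset-correct and filter candidate horizontal boxes,
    following steps (6b1)-(6b3). Field names are illustrative."""
    detections = []
    for c in candidates:
        # (6b1) box confidence s1 = mean of upper-corner, lower-corner
        # and center-point confidences
        s1 = (c["st"] + c["sb"] + c["sc"]) / 3
        # (6b2) offset correction per corner: x'' = x' + o_x, y'' = y' + o_y
        box = (c["x1"] + c["o_x1"], c["y1"] + c["o_y1"],
               c["x2"] + c["o_x2"], c["y2"] + c["o_y2"])
        # (6b3) zero the confidence when the two corner embeddings disagree
        # (assumed grouping direction), then keep boxes above the score threshold
        if abs(c["et"] - c["eb"]) > e_th:
            s1 = 0.0
        if s1 > s_th:
            detections.append((box, s1))
    return detections
```

A candidate whose corner embeddings disagree is dropped even when its raw corner confidences are high, which is what makes the embedding vector E the grouping signal between independent corner detections.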
CN201910718858.4A 2019-08-05 2019-08-05 Remote sensing target detection method based on boundary constraint CenterNet Active CN110532894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910718858.4A CN110532894B (en) 2019-08-05 2019-08-05 Remote sensing target detection method based on boundary constraint CenterNet

Publications (2)

Publication Number Publication Date
CN110532894A true CN110532894A (en) 2019-12-03
CN110532894B CN110532894B (en) 2021-09-03

Family

ID=68661423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910718858.4A Active CN110532894B (en) 2019-08-05 2019-08-05 Remote sensing target detection method based on boundary constraint CenterNet

Country Status (1)

Country Link
CN (1) CN110532894B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018111940A1 (en) * 2016-12-12 2018-06-21 Danny Ziyi Chen Segmenting ultrasound images
CN109446992A (en) * 2018-10-30 2019-03-08 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method and system, storage medium, electronic equipment based on deep learning
CN109685152A (en) * 2018-12-29 2019-04-26 北京化工大学 A kind of image object detection method based on DC-SPP-YOLO
CN110008882A (en) * 2019-03-28 2019-07-12 华南理工大学 Vehicle checking method based on mask and the loss of frame similitude

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAIWEN DUAN ET AL.: "CenterNet: Keypoint Triplets for Object Detection", arXiv *
HU XIANGYUN: "Automatic detection of man-made objects in remote sensing images by the variational method", Acta Geodaetica et Cartographica Sinica *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242088B (en) * 2020-01-22 2023-11-28 上海商汤临港智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN111242088A (en) * 2020-01-22 2020-06-05 上海商汤临港智能科技有限公司 Target detection method and device, electronic equipment and storage medium
WO2021160184A1 (en) 2020-02-14 2021-08-19 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium
EP4104096A4 (en) * 2020-02-14 2023-07-19 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium
CN111553348A (en) * 2020-04-26 2020-08-18 中南大学 Anchor-based target detection method based on centernet
CN111640089A (en) * 2020-05-09 2020-09-08 武汉精立电子技术有限公司 Defect detection method and device based on feature map center point
CN111640089B (en) * 2020-05-09 2023-08-15 武汉精立电子技术有限公司 Defect detection method and device based on feature map center point
CN111652106A (en) * 2020-05-28 2020-09-11 韶关学院 Target monitoring method and device, electronic equipment and storage medium thereof
CN111652106B (en) * 2020-05-28 2024-02-02 韶关学院 Target monitoring method and device, electronic equipment and storage medium thereof
CN111915628A (en) * 2020-06-24 2020-11-10 浙江大学 Single-stage instance segmentation method based on prediction target dense boundary points
CN111753732A (en) * 2020-06-24 2020-10-09 佛山市南海区广工大数控装备协同创新研究院 Vehicle multi-target tracking method based on target center point
CN111915628B (en) * 2020-06-24 2023-11-24 浙江大学 Single-stage instance segmentation method based on prediction target dense boundary points
CN111931764A (en) * 2020-06-30 2020-11-13 华为技术有限公司 Target detection method, target detection framework and related equipment
CN111931764B (en) * 2020-06-30 2024-04-16 华为云计算技术有限公司 Target detection method, target detection frame and related equipment
CN111832479A (en) * 2020-07-14 2020-10-27 西安电子科技大学 Video target detection method based on improved self-adaptive anchor R-CNN
CN111832479B (en) * 2020-07-14 2023-08-01 西安电子科技大学 Video target detection method based on improved self-adaptive anchor point R-CNN
CN112257609A (en) * 2020-10-23 2021-01-22 重庆邮电大学 Vehicle detection method and device based on self-adaptive key point heat map
CN112336342B (en) * 2020-10-29 2023-10-24 深圳市优必选科技股份有限公司 Hand key point detection method and device and terminal equipment
CN112336342A (en) * 2020-10-29 2021-02-09 深圳市优必选科技股份有限公司 Hand key point detection method and device and terminal equipment
CN112364734A (en) * 2020-10-30 2021-02-12 福州大学 Abnormal dressing detection method based on yolov4 and CenterNet
CN112270278A (en) * 2020-11-02 2021-01-26 重庆邮电大学 Key point-based blue top house detection method
CN112580529A (en) * 2020-12-22 2021-03-30 上海有个机器人有限公司 Mobile robot perception identification method, device, terminal and storage medium
CN112884742B (en) * 2021-02-22 2023-08-11 山西讯龙科技有限公司 Multi-target real-time detection, identification and tracking method based on multi-algorithm fusion
CN112884742A (en) * 2021-02-22 2021-06-01 山西讯龙科技有限公司 Multi-algorithm fusion-based multi-target real-time detection, identification and tracking method

Similar Documents

Publication Publication Date Title
CN110532894A (en) Remote sensing target detection method based on boundary constraint CenterNet
CN110287932B (en) Road blocking information extraction method based on deep learning image semantic segmentation
JP6830707B1 (en) Person re-identification method that combines random batch mask and multi-scale expression learning
CN110276269B (en) Remote sensing image target detection method based on attention mechanism
CN104867126B (en) Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN108334848A Small-face recognition method based on generative adversarial network
CN110766051A (en) Lung nodule morphological classification method based on neural network
Liu et al. Spatial feature fusion convolutional network for liver and liver tumor segmentation from CT images
CN109711288A (en) Remote sensing ship detecting method based on feature pyramid and distance restraint FCN
CN109919961A (en) A kind of processing method and processing device for aneurysm region in encephalic CTA image
CN108764063A (en) A kind of pyramidal remote sensing image time critical target identifying system of feature based and method
CN108334847A (en) A kind of face identification method based on deep learning under real scene
CN109284704A (en) Complex background SAR vehicle target detection method based on CNN
CN109241871A Crowd tracking method for public areas based on video data
CN106934795A (en) The automatic testing method and Forecasting Methodology of a kind of glue into concrete beam cracks
CN109800629A (en) A kind of Remote Sensing Target detection method based on convolutional neural networks
CN110473196A (en) A kind of abdominal CT images target organ method for registering based on deep learning
CN109558902A (en) A kind of fast target detection method
CN107274399A Lung nodule segmentation method based on Hessian matrix and 3D shape index
CN110232387A (en) A kind of heterologous image matching method based on KAZE-HOG algorithm
CN110399800A (en) Detection method of license plate and system, storage medium based on deep learning VGG16 frame
CN108122221A (en) The dividing method and device of diffusion-weighted imaging image midbrain ischemic area
CN109461163A (en) A kind of edge detection extraction algorithm for magnetic resonance standard water mould
CN108460336A (en) A kind of pedestrian detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant