CN108830130A - A typical-target detection method for polarized hyperspectral low-altitude reconnaissance images - Google Patents
- Publication number
- CN108830130A, CN201810274928.7A, CN201810274928A
- Authority
- CN
- China
- Prior art keywords
- target
- cnn
- dpcl
- polarization
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
Abstract
Combining the CNN framework, polarized hyperspectral imaging, and recent advances in discriminative dictionary learning, the present invention proposes a new typical-target detection method for polarized hyperspectral low-altitude reconnaissance images under a simulated low-altitude target-detection environment. Building on the mature Faster R-CNN framework, a dictionary-pair-driven CNN classifier is proposed for target detection; dictionary pair back propagation (DPBP) is used for end-to-end learning of the dictionary pair classifier and the CNN feature representation; a sample-weighting method is used to improve localization performance; and a multi-task loss is used for joint training of the DPCL and the bounding-box regression. Polarized hyperspectral images are introduced into target detection, and three classes of typical targets are chosen for experiments that preliminarily verify the effectiveness of the model and samples. The advantages of this method are that it can be combined with different CNN frameworks and is therefore flexible; it enhances the contrast between target and background in polarized hyperspectral images, reduces background complexity to a certain extent, makes the target more prominent, and benefits the detection results. It is of great significance for improving target detection and recognition in polarization imaging.
Description
Technical field
The present invention belongs to the fields of polarization imaging detection and computer vision, and relates to a new target detection method applicable to typical-target detection in polarized hyperspectral low-altitude reconnaissance images.
Background art
In recent years, unmanned aerial vehicles (UAVs) have been increasingly applied to battlefield reconnaissance and strikes, and fast, automatic recognition of reconnaissance targets is one of the key performance indicators and development trends in the UAV field. While the quality of target images keeps improving, improving the target detection algorithms can further raise the detection efficiency of low-altitude platforms such as UAVs.
Owing to the development of deep convolutional neural networks (Convolutional Neural Network, CNN) and the continuously growing scale of training datasets, target detection has achieved breakthroughs in recent years. State-of-the-art detection methods generally adopt a region-based CNN framework comprising three components: region proposal, feature extraction, and target category classification. To date, many region-proposal methods and deep CNN architectures have been proposed, but the category classification methods remain relatively simple, mostly based on SVM/softmax classifiers. Although these improve detection precision and robustness, they still learn only a direct mapping from CNN features to an optimum, and lack the ability to explicitly mine the complex structure of deep features.
Discriminative dictionary learning (DDL) has achieved great success over the past decade. DDL aims to learn a dictionary that balances representation precision and discriminative power, which makes it better suited as a classifier for target categories. Existing DDL methods have two main shortcomings: first, they use conventional hand-crafted features (e.g., SIFT and HOG); second, they involve heavy "L0" or "L1" norm regularization to generate sparse coding vectors, which limits their use in scenarios with high feature dimensionality and massive data. To address this problem, the projective dictionary pair learning (Projective Dictionary Pair Learning, P-DPL) method was proposed, greatly improving computational efficiency. On the basis of the P-DPL model, the present invention designs a dictionary pair classifier layer (Dictionary Pair Classifier Layer, DPCL) for target detection. The deep features required by the DPCL are produced by a deep CNN, and the combination of the CNN framework and the DPCL improves the image classification and target detection performance of unmanned reconnaissance platforms.
Regarding the classification and recognition of typical low-altitude targets, existing work includes a principal component analysis (Principal Components Analysis, PCA) whitened convolutional neural network for large-scale military-target image classification, and deep-learning-based recognition ideas that combine deep features with spatial pyramid pooling to realize automatic detection of military targets. However, the target images used in the above work are traditional visible-light color images. Polarized hyperspectral detection extends the information of acquired images to multiple dimensions and increases the contrast between target and background, which is more conducive to target detection. Therefore, polarized hyperspectral images have good prospects and broad application potential in the field of military reconnaissance.
Summary of the invention
Based on this, the present invention proposes a typical-target detection method for polarized hyperspectral low-altitude reconnaissance images, comprising the following steps:
Step 1: acquiring an image sample set under multiple scenes, the image sample set comprising a test sample set and a training sample set;
Step 2: sending the sample set to an optimization system for processing;
Step 3: outputting the detection result from the optimization system.
The acquisition of the image sample set under multiple scenes includes using a polarized hyperspectral low-altitude target-detection simulation platform to collect the sample set.
The optimization system comprises a deep convolutional neural network (CNN) module and a dictionary pair classifier layer (DPCL) module. The CNN module consists of convolutional layers, pooling layers, and fully connected layers, and is used to extract image features and evaluate a score to judge whether a region is a target. The DPCL module performs classification and localization of targets based on the image features extracted by the CNN; it is divided into a target DPCL and a category DPCL, and is used to compute the score for a specific target category.
Sending the sample set to the optimization system for processing includes:
Step 2.1: using a joint training mechanism for feature learning and classifier learning to optimize the CNN parameters and the DPCL;
Step 2.2: extracting features through the CNN, replicating the features, and passing them simultaneously to the target DPCL layer and the category DPCL layer;
Step 2.3: computing the target category score and determining the target category;
Step 2.4: computing the position of the target bounding box through bounding-box regression.
Using the joint training mechanism for feature learning and classifier learning to optimize the CNN parameters and the DPCL includes:
First, the DPCL is defined as follows:
where λ>0 and κ>0 are scalar constants, \bar{X}_k denotes the complementary data matrix of X_k, and an additional constraint term is included;
Second, the dictionary pair (D_k, P_k) is optimized separately, and the partial derivatives with respect to {P_k, D_k} are defined as:
Then the partial derivative with respect to X_k is obtained:
After all partial derivatives with respect to X_k are obtained, back propagation is executed to update the CNN parameters.
Extracting features through the CNN, replicating the features, and passing them simultaneously to the target DPCL layer and the category DPCL layer includes:
Given a candidate region I in a test image, the CNN feature x is first extracted from I, and the reconstruction residual of the k-th category is then defined:
The classification rule of the DPCL is as follows:
When y ≠ 0, bounding-box regression is further used to adjust the initial position of the target. After the features are extracted by the CNN layers, they are replicated and passed simultaneously to the target DPCL layer and the category DPCL layer.
Computing the target category score and determining the target category includes:
First, the target score Q(x) of the input-region feature x is defined as:
where T controls the trade-off between detection precision and background recall; it is set to 0.5 in the present invention according to experience on the validation set, and the background is identified based on whether Q(x) is 0;
Second, the category score S(x, k) is defined as:
where K is the number of target categories and β is set to 0.003. Finally, the target score and the category score are fused using the product rule, and the score that x belongs to the k-th category is defined as:
Let φ denote the CNN layer function and I_i denote an input region with class label y_i; the feature is x = φ(I, ω), and the final classification loss is defined as:
where 1{·} ∈ {0,1} is the indicator function and R{ω, D, P} denotes the regularization term on the parameters of the CNN and the two DPCLs.
Computing the position of the target bounding box through bounding-box regression includes:
Let the predicted bounding box and the ground-truth bounding box of candidate region I be given, where k indicates that I belongs to the k-th target category; the bounding-box regression loss is then defined as:
where H_1(z) is the Huber loss, which is robust to outliers:
According to the sum rule, L_cls and L_loc are merged, and the multi-task loss is defined as:
where an indicator denotes whether I_i is a target.
Finally, the object detection result is output.
Beneficial effects of the present invention:
The method can be combined with different CNN frameworks and is therefore flexible. It enhances the contrast between target and background in polarized hyperspectral images, reduces background complexity to a certain extent, makes the target more prominent, and benefits the detection results. It is of great significance for improving target detection and recognition in polarization imaging.
Description of the drawings
Fig. 1 is the target-detection framework diagram of the present invention
Fig. 2 is the model detection processing flow chart of the present invention
Fig. 3 shows the image acquisition equipment and scale models of the present invention
Fig. 4 is the detection-effect comparison of the two kinds of images under the Faster R-CNN framework
Fig. 5 is the CNN+DPCL detection-effect figure of the present invention
Fig. 6 shows the target images acquired under different simulated scenes of the present invention
Specific embodiments
The application is described in further detail below with reference to the accompanying drawings. It should be noted that the following embodiments only serve to explain the application in more detail and should not be understood as limiting the protection scope of the application; technicians in the field may make some non-essential modifications and adaptations to the application according to the above content.
The target-detection framework is shown in Fig. 1. The whole process comprises three stages: image data acquisition, network model training, and target sample detection. In the image data acquisition stage, the polarized hyperspectral low-altitude target-detection simulation platform is used to obtain the training sample set of target images under multiple scenes. In the second stage, a joint training mechanism for feature learning and classifier learning is used, and the DPBP algorithm realizes end-to-end optimization of the CNN+DPCL framework. In the third stage, features are extracted by the CNN, replicated, and passed simultaneously to the DPCLs to determine the target category and obtain the detection result.
The model framework consists of two parts: the CNN module and the DPCL module. The CNN module consists of convolutional layers, pooling layers, and fully connected layers and is used to extract image features. The DPCL module performs classification and localization of targets based on the image features extracted by the CNN; it is divided into a target DPCL, which evaluates a score to judge whether a region is a target, and a category DPCL, which computes the score for a specific target category. The three values in the 3 × 1 grid correspond to three input regions, and the final score of each image region is a combination of the target score and the category score. The detection process is shown in Fig. 2.
First, the image sample set under multiple scenes is acquired; the image sample set comprises a test sample set and a training sample set. In the present invention, the sample set is collected using the polarized hyperspectral low-altitude target-detection simulation platform.
Second, the sample set is sent to the optimization system for processing. In the present invention, the optimization system comprises a deep convolutional neural network (CNN) module and a dictionary pair classifier layer (DPCL) module. The CNN module consists of convolutional layers, pooling layers, and fully connected layers and is used to extract image features and evaluate a score to judge whether a region is a target. The DPCL module performs classification and localization of targets based on the image features extracted by the CNN; it is divided into a target DPCL and a category DPCL and is used to compute the score for a specific target category.
For an input image region I, let X = [X_0, ..., X_k, ..., X_K] (n_k is the number of training samples of the k-th category) denote the d-dimensional outputs of the previous layer from the K+1 categories. The DPCL aims to find a structured analysis dictionary P = [P_0, ..., P_k, ..., P_K] ∈ R^{m(K+1)×d} (P_k ∈ R^{m×d}) and a structured synthesis dictionary D = [D_0, ..., D_k, ..., D_K] ∈ R^{d×m(K+1)} (D_k ∈ R^{d×m}) to analytically encode and reconstruct the features X, where m is the number of dictionary atoms. The sub-dictionaries P_k and D_k form the dictionary pair for the k-th category. Given P_k and D_k, the coding coefficients A_k can be obtained as A_k = P_k X_k. Compared with the cumbersome nonlinear sparse coding with the L0 norm or L1 norm used by most existing DDL methods, solving the coding A_k that represents X_k in DPL is much more efficient. The DPL model for learning such an analysis dictionary P and synthesis dictionary D is formulated as:
where Y denotes the class-label matrix of the samples in X, and Φ{P, D, X, Y} is a discrimination term used to promote the discriminative power of D and P.
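The coding step described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the dictionaries are random stand-ins, and the point is only that DPL obtains the coding by a single linear projection rather than by L0/L1 sparse coding.

```python
import numpy as np

# Sketch of DPL analysis coding and synthesis reconstruction for one class k.
# Dimensions follow the text: X_k is d x n_k, P_k is m x d, D_k is d x m.
rng = np.random.default_rng(0)
d, m, n_k = 16, 8, 5
X_k = rng.standard_normal((d, n_k))
P_k = rng.standard_normal((m, d))   # analysis sub-dictionary (random stand-in)
D_k = rng.standard_normal((d, m))   # synthesis sub-dictionary (random stand-in)

A_k = P_k @ X_k                     # analysis coding: one matrix product
X_rec = D_k @ A_k                   # synthesis (reconstruction) of the features
residual = np.linalg.norm(X_k - X_rec, 'fro') ** 2  # squared Frobenius residual
```

Because the coding is a plain matrix product, it is orders of magnitude cheaper than iterative sparse-coding solvers, which is the efficiency advantage the text attributes to DPL.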
The original DPL does not consider that different training samples may differ in importance during model training. Therefore, a diagonal weight matrix W_k is introduced for the k-th class of training samples. W_k is introduced to improve localization performance: the model uses the intersection-over-union ratio (IoU, Intersection over Union) between a candidate region and the ground-truth boundary of the k-th object category to define W_k. Higher weights are assigned to better-localized samples; samples with higher weights are expected to have lower reconstruction residuals, so better localizations can be found using the reconstruction residual.
In the present invention, the target-detection framework consists of two parts, network training and target detection. A pre-trained network initializes the CNN parameters to optimize L_mt. After obtaining the partial derivatives of L_mt with respect to D_b, P_b, D_o, P_o, D_k, P_k, and X_k, the extended DPBP fine-tunes CNN+DPCL to update the dictionary pairs, the CNN parameters, and the bounding-box regression. The dictionary pairs are initialized with the dictionary pair learning algorithm, and CNN+DPCL is further optimized end-to-end with the DPBP algorithm. The target-detection algorithms are as follows:
Algorithm 1: Model learning process
Algorithm 2: Target detection algorithm
Specifically, using the joint training mechanism for feature learning and classifier learning to optimize the CNN parameters and the DPCL includes defining the DPCL as follows:
where λ>0 and κ>0 are scalar constants and \bar{X}_k denotes the complementary data matrix of X_k. To avoid the trivial solution P_k = 0, an additional constraint term is added.
Using an alternating minimization algorithm, dictionary pair learning is realized in combination with the coding coefficient matrix A; the formula is as follows:
where τ is a scalar constant. All terms of the above objective function are characterized by squared Frobenius norms, so equation (3) can be solved effectively by the alternating minimization algorithm. After initializing P and D with random matrices of unit Frobenius norm, equation (3) is alternately minimized by the following three steps:
(1) Fix {D, P, X} and update A:
(2) Fix {D, A, X} and update P:
where the constant γ is set to 0.0001 according to experience on the validation set.
(3) Fix {A, P, X} and update D:
Since all steps for {A, P, D} have closed-form solutions, the three-step minimization is very effective. The iteration stops when the difference between two adjacent iterations is below a threshold, which the present invention sets to 0.01.
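The three-step loop above can be sketched for one class as follows. This is a hedged sketch under a DPL-style objective ||X − DA||² + τ||PX − A||² + λ||P X̄||²: the update formulas are derived from that assumed objective, the patent's atom-norm constraint on D is approximated by simple column normalization instead of a constrained solver, and the function name `dpl_alternate` is illustrative.

```python
import numpy as np

def dpl_alternate(X, Xbar, m, tau=0.01, lam=0.01, gamma=1e-4, tol=0.01, iters=50):
    """Alternating minimization sketch for one class: X (d x n) class features,
    Xbar (d x n') complementary data, m dictionary atoms."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    P = rng.standard_normal((m, d)); P /= np.linalg.norm(P)  # unit Frobenius norm init
    D = rng.standard_normal((d, m)); D /= np.linalg.norm(D)
    prev = np.inf
    for _ in range(iters):
        # (1) fix {D, P, X}, update A: ridge-type least squares in A
        A = np.linalg.solve(D.T @ D + tau * np.eye(m), D.T @ X + tau * P @ X)
        # (2) fix {D, A, X}, update P (gamma is the small ridge term, 0.0001)
        P = tau * A @ X.T @ np.linalg.inv(
            tau * X @ X.T + lam * Xbar @ Xbar.T + gamma * np.eye(d))
        # (3) fix {A, P, X}, update D: least squares, then normalize columns
        D = X @ A.T @ np.linalg.inv(A @ A.T + gamma * np.eye(m))
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        obj = (np.linalg.norm(X - D @ A, 'fro') ** 2
               + tau * np.linalg.norm(P @ X - A, 'fro') ** 2
               + lam * np.linalg.norm(P @ Xbar, 'fro') ** 2)
        if abs(prev - obj) < tol:   # stop when adjacent iterations differ < 0.01
            break
        prev = obj
    return D, P, A

X = np.random.default_rng(1).standard_normal((16, 20))
Xbar = np.random.default_rng(2).standard_normal((16, 40))
D, P, A = dpl_alternate(X, Xbar, m=8)
```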
The model proposes the DPBP algorithm to learn the parameters of the DPCL and the CNN jointly in an end-to-end manner. The dictionary pairs (D_k, P_k) of the DPCL model can be optimized separately, so formula (2) can be decomposed into the following K+1 subproblems:
In DPBP, the partial derivatives with respect to {D_k, P_k} are defined as:
Then the partial derivative with respect to X_k is obtained:
After all partial derivatives with respect to X_k are obtained, back propagation is executed to update the CNN parameters.
Given a candidate region I in a test image, the CNN feature x is first extracted from I, and the reconstruction residual of the k-th category is then defined:
The classification rule of the DPCL is as follows:
When y ≠ 0, bounding-box regression is further used to adjust the initial position of the target.
The DPCL is a category classification method but is not well suited to the localization task. To improve localization performance, a multi-task loss can be used to balance classification and localization. In that approach, each candidate region is classified as background or as one of the target categories, which may not distinguish background from the target categories well. To solve this problem, the new model further decomposes the classification task into two related problems: as shown in Fig. 2, after the features are extracted by the CNN layers, they are replicated and passed simultaneously to the target DPCL layer and the category DPCL layer.
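The residual-based decision rule described above can be sketched as follows. The dictionaries here are random stand-ins (not learned ones), and the convention that class 0 plays the role of background is an assumption for illustration; the text only states that y ≠ 0 triggers bounding-box refinement.

```python
import numpy as np

def dpcl_classify(x, pairs):
    """pairs: list of (D_k, P_k) dictionary pairs.
    Returns the label with the smallest reconstruction residual
    r_k(x) = ||x - D_k P_k x||^2, plus all residuals."""
    residuals = [float(np.sum((x - D @ (P @ x)) ** 2)) for D, P in pairs]
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(3)
d, m, num_classes = 16, 8, 4        # e.g. 3 target classes + background class 0
pairs = [(rng.standard_normal((d, m)), rng.standard_normal((m, d)))
         for _ in range(num_classes)]
x = rng.standard_normal(d)          # CNN feature of one candidate region
y, residuals = dpcl_classify(x, pairs)
# y == 0 would mean background; otherwise bounding-box regression refines the box
```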
The model defines objectness as the target score of covering a target of any category. To measure the objectness of an input region, the target dictionary pair (ODP) layer uses two dictionary pairs, {D_o, P_o} and {D_b, P_b}, to represent targets of any category and the background, respectively. If the region feature x can be better represented by the background dictionary pair {D_b, P_b}, the image region is unlikely to contain a target. Rather than directly recognizing the background according to formula (11), the ODP uses a threshold T to screen out regions that are clearly background and then performs further target detection. Combining the reconstruction residual defined by formula (10), the target score Q(x) of the input-region feature x is defined as:
where T controls the trade-off between detection precision and background recall (the larger T is, the higher the precision and the lower the recall); the present invention sets it to 0.5 according to experience on the validation set. The model therefore identifies the background based on whether Q(x) is 0.
The category score S(x, k) indicates the possibility that the feature x belongs to the k-th class. To compute the category of a target, the category dictionary pair (CDP) layer consists of K dictionary pairs, where K is the number of target categories. Given the feature x of an input region, the CDP encodes x on the K category-specific dictionary pairs {D_k, P_k} and outputs the reconstruction residual of each dictionary pair. The present invention uses the reconstruction residual to define the category score S(x, k):
where the constant β is empirically set to 0.003.
Finally, the target score and the category score are fused using the product rule, and the score that x belongs to the k-th category is defined as:
Let φ denote the CNN layer function and I_i denote an input region with class label y_i; then the feature is x = φ(I, ω). Combining the category score, the final classification loss is defined as:
where 1{·} ∈ {0,1} is the indicator function and R{ω, D, P} denotes the regularization term on the parameters of the CNN and the two DPCLs.
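The product-rule fusion described above can be sketched as follows. The Q and S values below are placeholders, not outputs of the patent's ODP/CDP layers; the sketch only shows the stated fusion rule, in which Q(x) = 0 for a background region zeroes every fused class score.

```python
import numpy as np

def fuse_scores(q, s):
    """Product-rule fusion: q is the scalar target (objectness) score Q(x),
    s is the length-K vector of category scores S(x, k)."""
    return q * np.asarray(s, dtype=float)

s = [0.1, 0.7, 0.2]                  # placeholder category scores for K = 3
likely_target = fuse_scores(0.8, s)  # fused scores for a likely target region
background = fuse_scores(0.0, s)     # background region: every class score is 0
```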
Bounding-box regression loss. The multi-task loss defined by the model easily accommodates other relevant losses, such as robust losses [12]. Let the predicted bounding box and the ground-truth bounding box of candidate region I be given, where k indicates that I belongs to the k-th target category. The bounding-box regression loss is then defined as:
where H_1(z) is the Huber loss, which is robust to outliers:
According to the sum rule, L_cls and L_loc are merged, and the multi-task loss is defined as:
where an indicator denotes whether I_i is a target.
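The robust loss named above can be sketched as follows. This uses the standard smooth-L1 form of the Huber loss (quadratic near zero, linear in the tails); the patent's exact parameterization is not shown in the text, so the δ = 1 threshold here is an assumption.

```python
import numpy as np

def huber(z, delta=1.0):
    """Standard Huber loss: 0.5*z^2 for |z| <= delta, else delta*(|z| - 0.5*delta).
    Linear tails make the bounding-box regression robust to outliers."""
    z = np.asarray(z, dtype=float)
    quad = 0.5 * z ** 2
    lin = delta * (np.abs(z) - 0.5 * delta)
    return np.where(np.abs(z) <= delta, quad, lin)

def bbox_regression_loss(t_pred, t_gt):
    """Sum of Huber losses over the 4 box regression targets (x, y, w, h)."""
    return float(np.sum(huber(np.asarray(t_pred) - np.asarray(t_gt))))

loss = bbox_regression_loss([0.1, -0.2, 2.0, 0.0], [0.0, 0.0, 0.0, 0.0])
```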
The present invention gives several experimental examples. In the experiments, a polarized hyperspectral camera and target scale models are selected as the simulation experiment devices, and an indoor simulated UAV platform performs whole-scene simulation of low-altitude reconnaissance of ground military targets; the acquired image data are used for experimental verification, as shown in Fig. 3. The experimental environment is a Dell PRECISION TOWER 5810 workstation with the following parameters: Intel(R) Xeon(R) E5-1660 v4 3.2 GHz, 32.0 GB RAM, 8 GB SGRAM, Windows 7 Professional + MATLAB R2016a.
The acquired data samples are divided into three classes, namely tanks, troop carriers, and assault vehicles, and target image acquisition is simulated under different scenes. To reduce data scale and complexity, relatively uniform posture scenes are chosen for experimental verification, as shown in Fig. 6. From the 4500 image samples obtained, 4200 are randomly selected for model training and 300 for testing. The experiments use the ZF model to initialize the network parameters. After the image samples are obtained, features are extracted with the ZF network and exported to feature files; model training is performed with the label files, and target detection is performed with the trained model.
In the experiments of the present invention, {τ, λ, κ, β, γ, T, m} are set to {0.01, 0.01, 0.001, 0.003, 0.0001, 0.5, 64}. Faster R-CNN with ZF serves as the baseline model. The CNN parameters are first pre-trained on ImageNet and then fine-tuned on the VOC-format training and validation sets of ground vehicles produced for the present invention. On this basis, the softmax classification layer is replaced with the proposed DPCL, and DPBP is started to fine-tune the network; the learning rate is set to 0.00001 and the momentum is set to 0.9. During fine-tuning, all regions with IoU < 0.5 are regarded as background, and regions with IoU ≥ 0.5 are regarded as positives of the corresponding target category; the weights of these positive regions are defined by their IoU with the ground-truth bounding box.
The experiments first verify the influence of the characteristics of polarized hyperspectral images on the detection effect. Under the Faster R-CNN framework, common RGB targets and polarized hyperspectral targets are detected separately; each kind of target is divided into 10 groups of 10 samples each, as shown in Fig. 4.
It can be seen from the figure that polarized hyperspectral images enhance the contrast between target and background and reduce the complexity of the background to a certain extent, making the target more prominent; in this case the detection scores are overall higher than those of RGB images. Table 2 shows the overall accuracy and per-category accuracy of the experiment; the results show that the high-contrast characteristics of polarized hyperspectral images benefit the detection results. Next, for the same polarized hyperspectral targets, experiments are conducted with the improved CNN+DPCL model; the detection by the CNN+DPCL model is shown in Fig. 5.
Table 2: Comparison of detection results for the two kinds of images
It can be seen from the figure that CNN+DPCL calibrates the bounding boxes more accurately, and the target scores are also generally higher. Table 3 compares the detection results of the two models: compared with Faster R-CNN with a softmax classifier, CNN+DPCL achieves a modest improvement in detection effect in this experiment.
Table 3: Comparison of polarized hyperspectral image detection results for the two models
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be interpreted as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (8)
1. a kind of polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method, its step are as follows:
Step 1: acquiring the image pattern collection under more scenes, described image sample set includes test sample collection and training sample set;
It is handled Step 2: sending optimization system for the sample set;
Step 3: by the optimization system output test result.
2. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 1, which is characterized in that institute
It states and acquires the image pattern collection under more scenes including the use of polarization EO-1 hyperion low target detection analog platform progress sample set
Acquisition.
3. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 1, which is characterized in that institute
Stating optimization system includes deep convolutional neural networks CNN module and doubledictionary classifier layer DPCL module, and the CNN module is by convolution
Layer, pond layer and full articulamentum are constituted, and for extracting characteristics of image, assessment score judges whether it is target, the DPCL module
The classification and positioning that target is carried out based on the characteristics of image that CNN is extracted, are divided into target DPCL and classification DPCL, for calculating conduct
The score of specific objective classification.
4. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 3, which is characterized in that institute
It states and sends optimization system for the sample set and handle, including:
Step 2.1 learns joint training mechanism using feature learning and classifier, optimizes CNN parameter and DPCL;
Step 2.2 is extracted feature by CNN, and is replicated to the feature, while passing to DPCL layers of target and classification
DPCL layers;
Step 2.3 calculates target category score, and determines target category;
Step 2.4 returns the position for calculating object boundary frame by bounding box.
5. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 4, which is characterized in that institute
It states using feature learning and classifier study joint training mechanism, optimizes CNN parameter and DPCL, including:
Firstly, it is as follows to define DPCL:
Wherein, λ>0,κ>0, it is scalar constant,Indicate XkComplementary data matrix,For bound term;
Secondly, to doubledictionary (Dk, Pk) be separately optimized,
{Pk,DkPartial derivative be defined as:
According toObtain XkPartial derivative:
OwnedLater, it executes backpropagation and updates CNN parameter.
6. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 4, which is characterized in that institute
It states and feature is extracted by CNN, and the feature is replicated, while passing to DPCL layers and classification DPCL layers of target, including:
A candidate region I in given test image, extracts CNN feature x from I first, then defines the reconstructed residual of kth classification:
The classifying rules of DPCL is as follows:
As y ≠ 0, further uses bounding box and return the position that adjustment target initially positions, after CNN layers of extraction feature,
Feature is replicated and passes to DPCL layers and classification DPCL layers of target simultaneously.
7. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 5, which is characterized in that institute
It states and calculates the target category score, and determine target category, including:
Firstly, the target fractional Q (x) of input area feature x is defined as:
The wherein precision of T control detection and detection background recall rate, the present invention is set as 0.5 according to verifying collection experience, and is based on Q
It (x) whether is 0 to identify background;
Secondly, classification score S (x, k) is defined as:
Wherein, K is the quantity of target category, and β is set as 0.003, finally merges target fractional and classification using product rule
Score, x belong to the classification score of kth classIt is defined as:
φ, which is arranged, indicates CNN layer functions, IiIt indicates to have class label yiInput area, feature x=φ (I, ω), then finally
Classification Loss is defined as:
Wherein | ∈ { 0,1 } is target function, and R { ω, D, P } indicates the regularization term about the parameter of CNN and two DPCL.
8. polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method according to claim 4, which is characterized in that institute
The position for returning by bounding box and calculating object boundary frame is stated, including,
It enablesWithIt is prediction and the ground truth bounding box of candidate region I, wherein k table
Show that I belongs to k-th of target category, bounding box is then returned into loss and is defined as:
where H1(z) is the Huber loss, which is robust to outliers:
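A minimal implementation of the Huber loss H1(z) follows; its piecewise quadratic/linear form gives the robustness to outliers noted above. The transition point `delta` is a hypothetical parameter (commonly 1), not specified in the patent text.

```python
def huber(z, delta=1.0):
    """Huber loss: quadratic for |z| <= delta, linear beyond it, so large
    regression errors grow only linearly (robust to outliers)."""
    a = abs(z)
    return 0.5 * z * z if a <= delta else delta * (a - 0.5 * delta)
```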
According to the sum rule, Lcls and Lloc are merged, and the multitask loss is defined as:
where the indicator variable denotes whether Ii is a target.
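The sum-rule multitask loss can be sketched for a single region as follows; the balancing weight `lam` and the per-region formulation are assumptions, with the indicator gating the localization term as in the claim.

```python
def multitask_loss(l_cls, l_loc, is_target, lam=1.0):
    """Sum-rule multitask loss for one candidate region.

    The classification loss always contributes; the bounding-box regression
    loss contributes only when the region is a target (indicator = 1)."""
    return l_cls + lam * (l_loc if is_target else 0.0)
```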
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810274928.7A CN108830130A (en) | 2018-03-30 | 2018-03-30 | A kind of polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108830130A true CN108830130A (en) | 2018-11-16 |
Family
ID=64154291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810274928.7A Pending CN108830130A (en) | 2018-03-30 | 2018-03-30 | A kind of polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108830130A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103051509A (en) * | 2012-08-03 | 2013-04-17 | 北京航空航天大学 | Tree-structure-based initialization method |
CN105987754A (en) * | 2015-03-04 | 2016-10-05 | 中国人民解放军电子工程学院 | Imager integrating hyperspectral and polarization hyperspectral detectability |
CN107133616A (en) * | 2017-04-02 | 2017-09-05 | 南京汇川图像视觉技术有限公司 | A kind of non-division character locating and recognition methods based on deep learning |
Non-Patent Citations (1)
Title |
---|
KEZE WANG et al.: "Dictionary Pair Classifier Driven Convolutional Neural Networks for Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934255A (en) * | 2019-01-22 | 2019-06-25 | 小黄狗环保科技有限公司 | A kind of Model Fusion method for delivering object Classification and Identification suitable for beverage bottle recycling machine |
CN109858546A (en) * | 2019-01-28 | 2019-06-07 | 北京工业大学 | A kind of image-recognizing method based on rarefaction representation |
CN109858546B (en) * | 2019-01-28 | 2021-03-30 | 北京工业大学 | Image identification method based on sparse representation |
CN110009090A (en) * | 2019-04-02 | 2019-07-12 | 北京市商汤科技开发有限公司 | Neural metwork training and image processing method and device |
CN110163161A (en) * | 2019-05-24 | 2019-08-23 | 西安电子科技大学 | Multiple features fusion pedestrian detection method based on Scale invariant |
CN111104877A (en) * | 2019-12-06 | 2020-05-05 | 中国人民解放军陆军炮兵防空兵学院 | Universal momentum method and target detection and identification method based on universal momentum method |
CN111369455A (en) * | 2020-02-27 | 2020-07-03 | 复旦大学 | Highlight object measuring method based on polarization image and machine learning |
CN111369455B (en) * | 2020-02-27 | 2022-03-18 | 复旦大学 | Highlight object measuring method based on polarization image and machine learning |
CN111368712A (en) * | 2020-03-02 | 2020-07-03 | 四川九洲电器集团有限责任公司 | Hyperspectral image disguised target detection method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108830130A (en) | A kind of polarization EO-1 hyperion low-altitude reconnaissance image typical target detection method | |
CN108537742B (en) | Remote sensing image panchromatic sharpening method based on generation countermeasure network | |
CN105740894B (en) | Semantic annotation method for hyperspectral remote sensing image | |
CN113065558A (en) | Lightweight small target detection method combined with attention mechanism | |
CN103093444B (en) | Image super-resolution reconstruction method based on self-similarity and structural information constraint | |
CN108038846A (en) | Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks | |
CN108764308A (en) | A kind of recognition methods again of the pedestrian based on convolution loop network | |
CN109145992A (en) | Cooperation generates confrontation network and sky composes united hyperspectral image classification method | |
CN110674741B (en) | Gesture recognition method in machine vision based on double-channel feature fusion | |
CN107145836B (en) | Hyperspectral image classification method based on stacked boundary identification self-encoder | |
CN108090447A (en) | Hyperspectral image classification method and device under double branch's deep structures | |
CN107330357A (en) | Vision SLAM closed loop detection methods based on deep neural network | |
CN108090472B (en) | Pedestrian re-identification method and system based on multi-channel consistency characteristics | |
CN106203523A (en) | The classification hyperspectral imagery of the semi-supervised algorithm fusion of decision tree is promoted based on gradient | |
CN107423747B (en) | A kind of conspicuousness object detection method based on depth convolutional network | |
CN105989336B (en) | Scene recognition method based on deconvolution deep network learning with weight | |
CN109271895A (en) | Pedestrian's recognition methods again based on Analysis On Multi-scale Features study and Image Segmentation Methods Based on Features | |
CN108734199A (en) | High spectrum image robust classification method based on segmentation depth characteristic and low-rank representation | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN112347888A (en) | Remote sensing image scene classification method based on bidirectional feature iterative fusion | |
CN102750385A (en) | Correlation-quality sequencing image retrieval method based on tag retrieval | |
CN108776777A (en) | The recognition methods of spatial relationship between a kind of remote sensing image object based on Faster RCNN | |
CN112837315A (en) | Transmission line insulator defect detection method based on deep learning | |
Lu et al. | A CNN-transformer hybrid model based on CSWin transformer for UAV image object detection | |
CN109919246A (en) | Pedestrian's recognition methods again based on self-adaptive features cluster and multiple risks fusion |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20181219 | TA01 | Transfer of patent application right | Effective date of registration: 20181219. Address after: 230000 No. 555 Wangjiangxi Road, Hefei High-tech Zone, Anhui Province; Applicant after: ANHUI XINHUA University. Address before: 230000 Room 703, Building 71, 451 Huangshan Road, Hefei City, Anhui Province; Applicant before: Xu Guoming |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181116 |