CN112884007B - SAR image classification method for pixel-level statistical description learning - Google Patents
- Publication number
- CN112884007B (application CN202110093797.4A)
- Authority
- CN
- China
- Prior art keywords
- sar image
- pixel
- formula
- target
- description
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G06F18/24 — Pattern recognition; classification techniques
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The invention discloses an SAR image classification method for pixel-level statistical description learning, which comprises the following steps: S1, inputting the target SAR image into an SAR image classification model; S2, extracting the pixel-level statistical description features of the target SAR image by a discrimination sub-network in the SAR image classification model; S3, extracting the structural pattern description features of the target SAR image by a pattern sub-network in the SAR image classification model; S4, fusing the pixel-level statistical description features and the structural pattern description features by a fusion module in the SAR image classification model to obtain the image description features of the target SAR image; and S5, generating a classification result of the target SAR image based on the image description features by a Softmax layer in the SAR image classification model. The method can solve the problems of low generalization capability and insufficient robustness in SAR image analysis.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an SAR image classification method for pixel-level statistical description learning.
Background
Synthetic aperture radar (SAR), an early-developed research direction in the field of modern radar signal processing, is an active microwave remote-sensing imaging system widely applied in both the military and civil fields. An SAR system can simultaneously obtain echo signals of surface features containing amplitude and phase information, which carry rich target information. SAR image interpretation is an important means of analyzing SAR image information, and with the continuous development of SAR imaging systems and the growth of data acquisition capacity, the analysis and interpretation of massive SAR data has become a research hotspot.
The traditional SAR image classification methods include image statistical modeling, texture analysis, and other approaches. Image statistical modeling methods generally realize SAR image classification within a Bayesian decision-theoretic framework, typically using the Weibull distribution, the log-normal distribution, the K distribution, and so on. However, owing to their limited degrees of freedom, such distributions still fall short in fitting non-Gaussian-distributed SAR data. Texture analysis methods realize image classification by extracting texture features of the SAR image; however, the features obtained by traditional texture analysis usually carry only low-level or mid-level semantics, and their image description capability is limited.
In recent years, deep learning has become the mainstream of image processing development. The convolutional neural network (CNN), one of the most popular deep learning models at present, dispenses with manual feature selection and instead automatically extracts image features for tasks such as classification, semantic segmentation, and target detection through multilayer processing primitives such as convolution, nonlinear activation, and pooling. By virtue of feature parameters with extremely strong expressive capability, convolutional neural networks have achieved breakthroughs on the image classification problem. However, such methods do not simultaneously consider the pixel-level statistical characteristics and the high-level structural semantic characteristics of the SAR image, have low universality in the field of SAR image analysis, and struggle with problems such as speckle noise and image distortion caused by the coherent imaging mechanism, so the classification effect is unsatisfactory.
In summary, a new technical solution is urgently needed to solve the problems of low generalization capability and insufficient robustness in the SAR image analysis.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses an SAR image classification method based on pixel-level statistical description learning, which solves the problems of low generalization capability and insufficient robustness in SAR image analysis, effectively handles the speckle noise and image distortion caused by the coherent imaging mechanism, and improves the classification effect.
In order to solve the technical problems, the invention adopts the following technical scheme:
a SAR image classification method of pixel level statistical description learning comprises the following steps:
s1, inputting the target SAR image into the SAR image classification model;
S2, extracting the pixel-level statistical description feature z_ps of the target SAR image by a discrimination sub-network in the SAR image classification model;
S3, extracting the structural pattern description feature z_pa of the target SAR image by a pattern sub-network in the SAR image classification model;
S4, fusing the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa by a fusion module in the SAR image classification model to obtain the image description feature z of the target SAR image;
and S5, generating a classification result of the target SAR image based on the image description characteristics by a Softmax layer in the SAR image classification model.
Preferably, step S2 includes:
S201, extracting the pixel value mean μ_x and the pixel value standard deviation σ_x of the target SAR image based on the following formulas:

μ_x = (1/n) Σ_{i=1}^{n} x_i

σ_x = √( (1/n) Σ_{i=1}^{n} (x_i − μ_x)² )

In the formulas, x_i denotes the pixel value of the i-th pixel point of the target SAR image, and n denotes the number of pixel points of the target SAR image;

S202, performing scale and translation transformation on μ_x and σ_x based on the following formulas to obtain the corresponding features z_μ and z_σ:

z_μ = w_μ μ_x + b_μ

z_σ = w_σ σ_x + b_σ

In the formulas, z_μ ∈ V, z_σ ∈ V, and V is a high-dimensional mapping space; w_μ and b_μ respectively denote the scale and translation transformation vectors corresponding to μ_x, w_μ = [w_μ1, w_μ2, ..., w_μD]^T, b_μ = [b_μ1, b_μ2, ..., b_μD]^T, where w_μd and b_μd respectively denote the scale and translation transformation parameters of the d-th dimension, d = 1, 2, ..., D, and D denotes the dimension of the mapping space; w_σ and b_σ respectively denote the scale and translation transformation vectors corresponding to σ_x, w_σ = [w_σ1, w_σ2, ..., w_σD]^T, b_σ = [b_σ1, b_σ2, ..., b_σD]^T, where w_σd and b_σd respectively denote the scale and translation transformation parameters of the d-th dimension;

S203, performing adaptive optimization and nonlinear processing on z_μ and z_σ based on the following formula to generate the pixel-level statistical description feature z_ps:

z_ps = ReLU( W̃_ps [z_μ; z_σ] + b̃_ps )

In the formula, ReLU(·) is the rectified linear unit activation function; W̃_ps ∈ R^(M×2D) and b̃_ps ∈ R^M respectively denote the weight matrix and bias vector corresponding to z_ps, acting on the concatenation [z_μ; z_σ] of z_μ and z_σ; R^M denotes an M-dimensional linear space, and R^(M×2D) denotes an M×2D-dimensional linear space.
Preferably, the pattern sub-network includes 4 convolutional layers whose activation functions are all ReLU functions, and the processing formula of the k-th convolutional layer is:

z_k = H(W_k z_{k−1} + b_k)

In the formula, z_k denotes the output of the k-th convolutional layer, k = 1, 2, 3, 4, z_0 is the target SAR image, H(·) denotes the composite function of the ReLU activation mapping and the pooling function, W_k and b_k respectively denote the weight matrix and bias vector corresponding to z_k, and z_pa = z_4 is the structural pattern description feature.
Preferably, in step S4, the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa are fused based on the following formula to obtain the image description feature z of the target SAR image:

z = ReLU(W_ps z_ps + W_pa z_pa)

In the formula, ReLU(·) is the rectified linear unit activation function, and W_ps and W_pa are the weight matrices corresponding to z_ps and z_pa, respectively.
Preferably, the method further comprises the following steps:
S6, after the SAR image classification model training is finished each time, optimizing the parameters of the SAR image classification model by using a stochastic gradient descent algorithm, where the optimization target is to minimize the average loss function.
Preferably, in step S6, the parameter optimization of the pattern sub-network is performed based on the following formula:

min (1/M) Σ_{n'=1}^{M} L(y_{n'}, ŷ_{n'})

In the formula, X_{n'} denotes the n'-th sample, y_{n'} and ŷ_{n'} are respectively the true label and the estimated label of X_{n'}, and L(·,·) is the loss function:

L(y_{n'}, ŷ_{n'}) = −⟨Y_{n'}, ln a_{n'}⟩

In the formula, ⟨·⟩ and ln(·) respectively denote the inner product and logarithm operations, Y_{n'} denotes the One-Hot-encoded label vector of the true label y_{n'}, a_{n'} denotes the output vector of X_{n'} after passing through the Softmax layer, and the j-th output a_j of the Softmax layer is:

a_j = exp(z_j) / Σ_{m=1}^{M} exp(z_m)

In the formula, z_j is the j-th input of the Softmax layer, and M denotes the number of inputs of the Softmax layer;

the update formula of the weight parameter of the k-th layer convolution in the pattern sub-network is:

W_k^(i+1) = W_k^(i) − (η/M) Σ_{n'=1}^{M} ∂L/∂W_k^(i)

In the formula, W_k^(i+1) denotes the weight parameter of the k-th layer convolution after the (i+1)-th optimization, W_k^(i) denotes the weight parameter of the k-th layer convolution after the i-th optimization, ∂L/∂W_k^(i) is the gradient of the loss function with respect to the weight parameter of the k-th layer convolution, M denotes the number of training samples, and η denotes the learning rate.
In summary, the invention discloses an SAR image classification method for pixel-level statistical description learning, which combines the statistical characteristics of an SAR image with the learning capability of a convolutional neural network. Based on statistical description learning theory, a discrimination sub-network is used to extract pixel-level first-order and high-order statistical primitives of the SAR image, and a discriminative pixel-level statistical description is learned through the high-dimensional mapping of the network under the constraint of minimized classification error. In addition, the structural pattern description of the SAR image is hierarchically learned with a multilayer network structure, extracting the hierarchical correlation between local pixel points of the SAR image. Finally, under the constraint of minimized classification error, the pixel-level statistical description and the structural pattern description are jointly optimized to generate the final description features of the SAR image. Compared with the prior art, the invention can solve the problems of low generalization capability and insufficient robustness in SAR image analysis, effectively handle the speckle noise and image distortion caused by the coherent imaging mechanism, and improve the classification effect.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a schematic diagram illustrating the principle of a SAR image classification method by pixel level statistics description learning disclosed in the present invention.
Fig. 2 is a schematic diagram of a SAR image classification model in the present invention.
FIG. 3 is a diagram illustrating the extraction of image mean and standard deviation statistical primitives in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the invention discloses a method for classifying an SAR image through pixel-level statistical description learning, comprising the following steps:
s1, inputting the target SAR image into the SAR image classification model;
S2, extracting the pixel-level statistical description feature z_ps of the target SAR image by a discrimination sub-network in the SAR image classification model;
S3, extracting the structural pattern description feature z_pa of the target SAR image by a pattern sub-network in the SAR image classification model;
S4, fusing the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa by a fusion module in the SAR image classification model to obtain the image description feature z of the target SAR image;
and S5, generating a classification result of the target SAR image based on the image description characteristics by a Softmax layer in the SAR image classification model.
The methods in the prior art are difficult to handle the speckle noise, image distortion, and complex target scattering mechanisms caused by the coherent imaging mechanism, so the classification effect is not ideal. The low generalization capability mentioned in the invention refers to the need to adopt different training models for different types of SAR images. The SAR image classification method of pixel-level statistical description learning disclosed by the invention is based on statistical description learning theory, establishes a unified model of SAR image bottom-layer statistical description learning and high-layer structural pattern description learning, and enhances the effective description of SAR image patterns with random and structural characteristics, which not only improves the classification precision but also improves the universality of the network.
In specific implementation, step S2 includes:
S201, extracting the pixel value mean μ_x and the pixel value standard deviation σ_x of the target SAR image based on the following formulas:

μ_x = (1/n) Σ_{i=1}^{n} x_i

σ_x = √( (1/n) Σ_{i=1}^{n} (x_i − μ_x)² )

In the formulas, x_i denotes the pixel value of the i-th pixel point of the target SAR image, and n denotes the number of pixel points of the target SAR image;

S202, performing scale and translation transformation on μ_x and σ_x based on the following formulas to obtain the corresponding features z_μ and z_σ:

z_μ = w_μ μ_x + b_μ

z_σ = w_σ σ_x + b_σ

In the formulas, z_μ ∈ V, z_σ ∈ V, and V is a high-dimensional mapping space; w_μ and b_μ respectively denote the scale and translation transformation vectors corresponding to μ_x, w_μ = [w_μ1, w_μ2, ..., w_μD]^T, b_μ = [b_μ1, b_μ2, ..., b_μD]^T, where w_μd and b_μd respectively denote the scale and translation transformation parameters of the d-th dimension, d = 1, 2, ..., D, and D denotes the dimension of the mapping space; w_σ and b_σ respectively denote the scale and translation transformation vectors corresponding to σ_x, w_σ = [w_σ1, w_σ2, ..., w_σD]^T, b_σ = [b_σ1, b_σ2, ..., b_σD]^T, where w_σd and b_σd respectively denote the scale and translation transformation parameters of the d-th dimension;

S203, performing adaptive optimization and nonlinear processing on z_μ and z_σ based on the following formula to generate the pixel-level statistical description feature z_ps:

z_ps = ReLU( W̃_ps [z_μ; z_σ] + b̃_ps )

In the formula, ReLU(·) is the rectified linear unit activation function; W̃_ps ∈ R^(M×2D) and b̃_ps ∈ R^M respectively denote the weight matrix and bias vector corresponding to z_ps, acting on the concatenation [z_μ; z_σ] of z_μ and z_σ; R^M denotes an M-dimensional linear space, and R^(M×2D) denotes an M×2D-dimensional linear space.
Firstly, the first-order and high-order statistical primitives of the input SAR image are extracted, with the formulas implemented as shown in FIG. 3, specifically including the following steps:

1) A spatial pyramid pooling (SPP) module performs single-scale average pooling on the input image to obtain the pixel value mean μ_x of the target SAR image;

2) The pixel value standard deviation σ_x is obtained by combining the SPP module with the Power and Eltwise modules.

After extracting the mean and standard deviation statistical primitives, the discrimination sub-network (DiscNet) maps the statistical primitives to a high-dimensional space through autonomous scale and translation transformation. The high-dimensional features corresponding to the mean and standard deviation after this mapping are respectively denoted as z_μ ∈ V and z_σ ∈ V, where V is the high-dimensional mapping space. After the high-dimensional mapping, adaptive optimization and nonlinear processing are performed on the interaction between z_μ and z_σ under the constraint of minimized classification error, finally generating the pixel-level statistical description feature.

The discrimination sub-network is used for extracting pixel-level statistical primitives of the input SAR image and generating a discriminative bottom-layer pixel-level statistical description of the SAR image through linear and nonlinear transformations. Learning the pixel-level statistical description through linear and nonlinear high-dimensional mapping after extracting the mean and standard deviation statistical primitives with the discrimination sub-network works better than directly describing the pixel-level statistical characteristics of the SAR image with the raw mean and standard deviation primitives, and the finally trained model achieves higher accuracy.
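The discrimination sub-network's pipeline (steps S201–S203) can be traced with a minimal NumPy sketch: extract the mean/standard-deviation primitives, apply the scale-and-translation transforms, then concatenate and pass through a ReLU-activated linear map. All function and parameter names here are illustrative assumptions, not the patent's reference implementation; the concatenation of z_μ and z_σ before the final linear map is inferred from the M×2D shape of the weight matrix.

```python
import numpy as np

def pixel_level_statistical_description(x, w_mu, b_mu, w_sig, b_sig, W_ps, b_ps):
    """Sketch of S201-S203 under illustrative parameter names."""
    mu_x = x.mean()                       # S201: first-order statistical primitive
    sigma_x = x.std()                     # S201: second-order statistical primitive
    z_mu = w_mu * mu_x + b_mu             # S202: scale/translation transform, shape (D,)
    z_sig = w_sig * sigma_x + b_sig       # S202: same for the standard deviation
    z_cat = np.concatenate([z_mu, z_sig])            # assumed concatenation, shape (2D,)
    return np.maximum(W_ps @ z_cat + b_ps, 0.0)      # S203: ReLU(W z + b), shape (M,)
```

With D = 4 and M = 8, the call maps a 16×16 image patch to an 8-dimensional non-negative statistical description vector.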
In specific implementation, the pattern sub-network includes 4 convolutional layers whose activation functions are all ReLU functions, and the processing formula of the k-th convolutional layer is:

z_k = H(W_k z_{k−1} + b_k)

In the formula, z_k denotes the output of the k-th convolutional layer, k = 1, 2, 3, 4, z_0 is the target SAR image, H(·) denotes the composite function of the ReLU activation mapping and the pooling function, W_k and b_k respectively denote the weight matrix and bias vector corresponding to z_k, and z_pa = z_4 is the structural pattern description feature.

In the present invention, as shown in FIG. 2, the pattern sub-network includes 4 convolutional layers, where the convolution kernels of the first, second, and third convolutional layers are all 3 × 3 with step size 1, and their activation functions are all ReLU functions. The convolution kernel of the fourth convolutional layer is 1 × 1 with step size 1, and its activation function is the ReLU function.
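The recursion z_k = H(W_k z_{k−1} + b_k) can be sketched as follows, with H taken as the composite of ReLU and 2×2 average pooling. This is a single-channel "valid" convolution for illustration only; the patent's network would use learned multi-channel convolutions, and the pooling type and padding are assumptions.

```python
import numpy as np

def conv2d(x, w, b):
    """Minimal single-channel 'valid' 2D convolution (illustrative)."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def avg_pool2(x):
    """2x2 average pooling; odd borders are truncated."""
    H2, W2 = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:H2, :W2]
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def mode_subnetwork(z0, weights, biases):
    """z_k = H(W_k z_{k-1} + b_k): each layer applies conv, then ReLU, then pooling."""
    z = z0
    for w, b in zip(weights, biases):
        z = avg_pool2(np.maximum(conv2d(z, w, b), 0.0))
    return z
```

Feeding a 32×32 input through three 3×3 layers and one 1×1 layer, each followed by ReLU and 2×2 pooling, yields a 1×1 structural pattern response in this sketch.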
In specific implementation, in step S4, the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa are fused based on the following formula to obtain the image description feature z of the target SAR image:

z = ReLU(W_ps z_ps + W_pa z_pa)

In the formula, ReLU(·) is the rectified linear unit activation function, and W_ps and W_pa are the weight matrices corresponding to z_ps and z_pa, respectively.
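The fusion formula is a one-liner in NumPy; the function name is illustrative. Note that the two weight matrices let the two descriptions have different dimensions while the fused feature z has a single common dimension.

```python
import numpy as np

def fuse_features(z_ps, z_pa, W_ps, W_pa):
    """Step S4: z = ReLU(W_ps z_ps + W_pa z_pa)."""
    return np.maximum(W_ps @ z_ps + W_pa @ z_pa, 0.0)
```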
In specific implementation, the method further includes:

S6, after each round of network training is finished, optimizing the parameters of the whole network by using a stochastic gradient descent algorithm, where the optimization aim is to minimize the average loss function.
In specific implementation, in step S6, the parameters of the pattern sub-network are optimized based on the following formula:

min (1/M) Σ_{n'=1}^{M} L(y_{n'}, ŷ_{n'})

In the formula, X_{n'} denotes the n'-th sample, y_{n'} and ŷ_{n'} are respectively the true label and the estimated label of X_{n'}, and L(·,·) is the loss function:

L(y_{n'}, ŷ_{n'}) = −⟨Y_{n'}, ln a_{n'}⟩

In the formula, ⟨·⟩ and ln(·) respectively denote the inner product and logarithm operations, Y_{n'} denotes the One-Hot-encoded label vector of the true label y_{n'}, a_{n'} denotes the output vector of X_{n'} after passing through the Softmax layer (the output of sample X_{n'} through the Softmax layer is a vector of numbers, and j indexes this vector), and the j-th output a_j of the Softmax layer is:

a_j = exp(z_j) / Σ_{m=1}^{M} exp(z_m)

In the formula, z_j is the j-th input of the Softmax layer, and M denotes the number of inputs of the Softmax layer;

the update formula of the weight parameter of the k-th layer convolution in the pattern sub-network is:

W_k^(i+1) = W_k^(i) − (η/M) Σ_{n'=1}^{M} ∂L/∂W_k^(i)

In the formula, W_k^(i+1) denotes the weight parameter of the k-th layer convolution after the (i+1)-th optimization, W_k^(i) denotes the weight parameter of the k-th layer convolution after the i-th optimization, ∂L/∂W_k^(i) is the gradient of the loss function with respect to the weight parameter of the k-th layer convolution, M represents the number of training samples, and η represents the learning rate.
In this way, the parameters are updated along the gradient of the loss function, the direction in which it changes fastest, with the advantages of easy computation, low time consumption, and faster convergence on large datasets.
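The Softmax output, cross-entropy loss, and averaged-gradient update above can be sketched in a few lines of NumPy; the function names are illustrative, and the gradients are assumed to be supplied per sample (e.g. by backpropagation, which is not shown).

```python
import numpy as np

def softmax(z):
    """a_j = exp(z_j) / sum_m exp(z_m); shifted by max(z) for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(Y, a):
    """L = -<Y, ln a> for a One-Hot label vector Y."""
    return -float(np.dot(Y, np.log(a)))

def sgd_update(W, per_sample_grads, eta):
    """W^(i+1) = W^(i) - (eta / M) * sum of per-sample gradients."""
    return W - eta * np.mean(per_sample_grads, axis=0)
```

For example, softmax of [1, 2, 3] sums to 1 with its largest mass on the third component, and a uniform per-sample gradient of mean 2 with η = 0.5 moves a weight of 1 to 0.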
Finally, it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and that, while the invention has been described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A SAR image classification method of pixel level statistical description learning is characterized by comprising the following steps:
s1, inputting the target SAR image into the SAR image classification model;
S2, extracting the pixel-level statistical description feature z_ps of the target SAR image by a discrimination sub-network in the SAR image classification model; step S2 includes:

S201, extracting the pixel value mean μ_x and the pixel value standard deviation σ_x of the target SAR image based on the following formulas:

μ_x = (1/n) Σ_{i=1}^{n} x_i

σ_x = √( (1/n) Σ_{i=1}^{n} (x_i − μ_x)² )

In the formulas, x_i denotes the pixel value of the i-th pixel point of the target SAR image, and n denotes the number of pixel points of the target SAR image;

S202, performing scale and translation transformation on μ_x and σ_x based on the following formulas to obtain the corresponding features z_μ and z_σ:

z_μ = w_μ μ_x + b_μ

z_σ = w_σ σ_x + b_σ

In the formulas, z_μ ∈ V, z_σ ∈ V, and V is a high-dimensional mapping space; w_μ and b_μ respectively denote the scale and translation transformation vectors corresponding to μ_x, w_μ = [w_μ1, w_μ2, ..., w_μD]^T, b_μ = [b_μ1, b_μ2, ..., b_μD]^T, where w_μd and b_μd respectively denote the scale and translation transformation parameters of the d-th dimension, d = 1, 2, ..., D, and D denotes the dimension of the mapping space; w_σ and b_σ respectively denote the scale and translation transformation vectors corresponding to σ_x, w_σ = [w_σ1, w_σ2, ..., w_σD]^T, b_σ = [b_σ1, b_σ2, ..., b_σD]^T, where w_σd and b_σd respectively denote the scale and translation transformation parameters of the d-th dimension;

S203, performing adaptive optimization and nonlinear processing on z_μ and z_σ based on the following formula to generate the pixel-level statistical description feature z_ps:

z_ps = ReLU( W̃_ps [z_μ; z_σ] + b̃_ps )

In the formula, ReLU(·) is the rectified linear unit activation function; W̃_ps ∈ R^(M×2D) and b̃_ps ∈ R^M respectively denote the weight matrix and bias vector corresponding to z_ps, acting on the concatenation [z_μ; z_σ] of z_μ and z_σ; R^M denotes an M-dimensional linear space, and R^(M×2D) denotes an M×2D-dimensional linear space;

S3, extracting the structural pattern description feature z_pa of the target SAR image by a pattern sub-network in the SAR image classification model; the pattern sub-network includes 4 convolutional layers whose activation functions are all ReLU functions, and the processing formula of the k-th convolutional layer is:

z_k = H(W_k z_{k−1} + b_k)

In the formula, z_k denotes the output of the k-th convolutional layer, k = 1, 2, 3, 4, z_0 is the target SAR image, H(·) denotes the composite function of the ReLU activation mapping and the pooling function, W_k and b_k respectively denote the weight matrix and bias vector corresponding to z_k, and z_pa = z_4 is the structural pattern description feature;

S4, fusing the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa by a fusion module in the SAR image classification model to obtain the image description feature z of the target SAR image;
and S5, generating a classification result of the target SAR image based on the image description characteristics by a Softmax layer in the SAR image classification model.
2. The SAR image classification method of pixel-level statistical description learning according to claim 1, wherein in step S4, the pixel-level statistical description feature z_ps and the structural pattern description feature z_pa are fused based on the following formula to obtain the image description feature z of the target SAR image:

z = ReLU(W_ps z_ps + W_pa z_pa)

In the formula, ReLU(·) is the rectified linear unit activation function, and W_ps and W_pa are the weight matrices corresponding to z_ps and z_pa, respectively.
3. The SAR image classification method of pixel-level statistical description learning according to claim 1, further comprising:

S6, after the SAR image classification model training is finished each time, optimizing the parameters of the SAR image classification model by using a stochastic gradient descent algorithm, where the optimization target is to minimize the average loss function.
4. The SAR image classification method of pixel-level statistical description learning according to claim 3, wherein in step S6, the parameter optimization of the pattern sub-network is performed based on the following formula:

min (1/M) Σ_{n'=1}^{M} L(y_{n'}, ŷ_{n'})

In the formula, X_{n'} denotes the n'-th sample, y_{n'} and ŷ_{n'} are respectively the true label and the estimated label of X_{n'}, and L(·,·) is the loss function:

L(y_{n'}, ŷ_{n'}) = −⟨Y_{n'}, ln a_{n'}⟩

In the formula, ⟨·⟩ and ln(·) respectively denote the inner product and logarithm operations, Y_{n'} denotes the One-Hot-encoded label vector of the true label y_{n'}, a_{n'} denotes the output vector of X_{n'} after passing through the Softmax layer, and the j-th output a_j of the Softmax layer is:

a_j = exp(z_j) / Σ_{m=1}^{M} exp(z_m)

In the formula, z_j is the j-th input of the Softmax layer, and M denotes the number of inputs of the Softmax layer;

the update formula of the weight parameter of the k-th layer convolution in the pattern sub-network is:

W_k^(i+1) = W_k^(i) − (η/M) Σ_{n'=1}^{M} ∂L/∂W_k^(i)

In the formula, W_k^(i+1) denotes the weight parameter of the k-th layer convolution after the (i+1)-th optimization, W_k^(i) denotes the weight parameter of the k-th layer convolution after the i-th optimization, ∂L/∂W_k^(i) is the gradient of the loss function with respect to the weight parameter of the k-th layer convolution, M denotes the number of training samples, and η denotes the learning rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110093797.4A CN112884007B (en) | 2021-01-22 | 2021-01-22 | SAR image classification method for pixel-level statistical description learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112884007A CN112884007A (en) | 2021-06-01 |
CN112884007B true CN112884007B (en) | 2022-08-09 |
Family
ID=76051781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110093797.4A Expired - Fee Related CN112884007B (en) | 2021-01-22 | 2021-01-22 | SAR image classification method for pixel-level statistical description learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884007B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184309A (en) * | 2015-08-12 | 2015-12-23 | 西安电子科技大学 | Polarization SAR image classification based on CNN and SVM |
CN107403434A (en) * | 2017-07-28 | 2017-11-28 | 西安电子科技大学 | SAR image semantic segmentation method based on two-phase analyzing method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408030B (en) * | 2016-09-28 | 2019-06-25 | 武汉大学 | SAR image classification method based on middle layer semantic attribute and convolutional neural networks |
CN106934397B (en) * | 2017-03-13 | 2020-09-01 | 北京市商汤科技开发有限公司 | Image processing method and device and electronic equipment |
CN107766794B (en) * | 2017-09-22 | 2021-05-14 | 天津大学 | Image semantic segmentation method with learnable feature fusion coefficient |
CN108446716B (en) * | 2018-02-07 | 2019-09-10 | 武汉大学 | The PolSAR image classification method merged is indicated with sparse-low-rank subspace based on FCN |
CN110020693B (en) * | 2019-04-15 | 2021-06-08 | 西安电子科技大学 | Polarimetric SAR image classification method based on feature attention and feature improvement network |
CN110533683B (en) * | 2019-08-30 | 2022-04-29 | 东南大学 | Image omics analysis method fusing traditional features and depth features |
CN111612066B (en) * | 2020-05-21 | 2022-03-08 | 成都理工大学 | Remote sensing image classification method based on depth fusion convolutional neural network |
CN112101410B (en) * | 2020-08-05 | 2021-08-06 | 中国科学院空天信息创新研究院 | Image pixel semantic segmentation method and system based on multi-modal feature fusion |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220809 |