CN115984223A - Image oil spill detection method based on PCANet and multi-classifier fusion


Info

Publication number: CN115984223A
Application number: CN202310011074.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: matrix, classifier, image, vector
Inventors: 魏雪云, 江蒋伟, 张贞凯, 郑威, 靳标, 奚彩萍, 尚尚
Current and original assignee: Jiangsu University of Science and Technology
Application filed 2023-01-05 by Jiangsu University of Science and Technology; priority to CN202310011074.4A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A20/00: Water conservation; Efficient water supply; Efficient water use
    • Y02A20/20: Controlling water pollution; Waste water treatment
    • Y02A20/204: Keeping clear the surface of open water from oil spills


Abstract

The invention relates to the field of synthetic aperture radar (SAR) image oil spill segmentation, and in particular to an image oil spill detection method based on PCANet and multi-classifier fusion. The method realizes SAR image oil spill identification by combining a PCANet network with shallow classifiers. On the basis of an analysis of the algorithm principle and structure of the PCANet network, the network is applied to SAR image feature extraction under different polarizations, effectively enriching the feature dimension of the data; decision-level fusion of three shallow classifiers, namely a support vector machine, K-nearest neighbor, and SoftMax, then improves image identification accuracy and mitigates classifier bias.

Description

Image oil spill detection method based on PCANet and multi-classifier fusion
Technical Field
The invention relates to the field of synthetic aperture radar image oil spill segmentation, in particular to an image oil spill detection method based on PCANet and multi-classifier fusion.
Background
In recent years, marine oil spill accidents have occurred frequently, causing great losses to the economy and the ecological environment, and effective oil spill detection methods have therefore become a research hotspot at home and abroad. Synthetic aperture radar (SAR) operates day and night, in all weather, over large areas, and with high precision, providing an effective means for oil spill detection. Deep learning is a machine learning approach that has developed rapidly in recent years and can perform oil spill detection by learning sample features to build a model; the convolutional neural network (CNN), a model widely used in deep learning, has been successfully applied to target detection and related tasks.
Among convolutional neural networks, PCANet is a simplified deep learning model based on the CNN that contains only very basic data processing components: cascaded principal component analysis (PCA) convolution layers, binarization, and histogram operations. PCANet employs principal component analysis to learn a multi-stage filter bank, followed by simple binary hashing and block histogram operations for indexing and pooling. The network can be designed and trained very easily and efficiently. Although PCANet is a simple lightweight deep network designed with reference to other advanced deep learning networks, it has produced many surprises in subsequent large-scale comparison experiments: in most texture image classification tasks its classification performance matches that of many current deep learning methods, and on some data sets it even outperforms more complex networks.
The existing literature on SAR automatic target recognition mainly develops research along two lines: feature extraction and classifier design.
the feature extraction is an important step of an SAR target automatic identification algorithm, and aims to extract information reflecting the self-class characteristics of a target from an SAR image and eliminate interference of information irrelevant to the class of the target, so that the difficulty of effective classification and identification of a classifier in the subsequent steps is reduced. In the aspect of SAR target feature extraction, the current research on oil spill detection mostly focuses on the aspects of image analysis, segmentation, classification and the like, and deep research is not carried out on the features and scattering mechanism of oil spill, and the complexity and uncertainty of 'dark' feature areas in SAR images make the precision of oil spill and oil spill-like distinguishing lower and the difficulty increased. Therefore, how to extract the oil spill information quickly and accurately is still a difficult topic in the future.
The classifier also suffers from a bias problem during model learning, because the number of oil spill image samples is small, the training set is imbalanced, and the classifier itself has defects.
Disclosure of Invention
The invention aims to solve the above technical problems and provides an image oil spill detection method based on PCANet and multi-classifier fusion.
The invention adopts PCANet, which has a simple architecture and a strong ability to adapt to different environments, in place of a traditional deep learning model to extract SAR image features, greatly reducing model complexity. A new weighting algorithm is adopted for decision-level fusion, effectively mitigating classifier bias and improving classification accuracy.
In order to achieve the above purpose, the invention provides the following technical scheme:
the image oil spill detection method based on PCANet and multi-classifier fusion comprises the following steps:
s1, selecting an SAR oil spilling image data set as a training sample, inputting the SAR oil spilling image data set into a principal component analysis neural network, and preprocessing an image matrix;
s2, generating a corresponding filter according to the image data processed in the step S1, and performing convolution operation on the filter serving as a convolution kernel and the image to further enrich shallow features of the image data;
and S3, processing the image data output in the step S2 by using a hash function, and reducing the overall complexity of the image data. Then using histogram statistics to generate an expanded histogram feature vector;
s4, building a plurality of classifiers based on SVM, K neighbor and SoftMax algorithm, and classifying the feature vectors output in the step S3;
and S5, summing output classification vectors of different classifiers by adopting a weighted voting mode to obtain a final prediction classification vector, and comparing the final prediction classification vector with an actual sample to obtain a model error rate.
Further, the specific process of step S1 is as follows:
s11, preprocessing each picture, partitioning the matrix by taking k1 multiplied by k2 as the size according to the pixels of the picture, expanding each partitioned matrix into column vectors according to a column priority principle, and recombining the column vectors into a matrix from left to right;
s12, the picture pixel is m multiplied by n, and the line step length is b when the block is divided 1 Column step size of b 2 Moving along the picture matrix, and calculating the size of the matrix according to the formula:
line: r = k 1 ×k 2
The method comprises the following steps: c = ((m-k) 1 )/b 1 +1)×((m-k 2 )/b 2 +1)。
Further, the step S2 specifically includes the following steps:
s21, the principal component analysis is divided into two stages, and in the first stage, the column average is subtracted from each column of the matrix to obtain the matrix
Figure BDA0004038495540000037
Then the same processing is carried out on n training pictures to obtain->
Figure BDA0004038495540000031
The row size of matrix x is r, the columns are c × n, PCA algorithm is used for the obtained matrix, and L is taken as the front 1 The eigenvector corresponding to the largest eigenvalue is used as a filter, and the mathematical expression is as follows: />
Figure BDA0004038495540000032
S22, in the second stage, the N images I_i output by the first stage are each convolved with the L1 filters of the first stage to obtain L1 × N images. The subsequent processing is similar to the first stage: the matrix boundary is zero-padded before convolution so that the matrix obtained by convolving the expanded matrix has the same size as I_i.
Further, the specific process of step S3 is as follows:
s31, for each image in the second stage
Figure BDA0004038495540000033
The following calculations were made:
Figure BDA0004038495540000034
then, histogram matrix statistics is carried out on the matrix, and the range of the histogram is
Figure BDA0004038495540000036
Vectorizing the histogram matrix to obtain a row vector, then performing histogram processing on all the matrixes, and then cascading to obtain the block expansion histogram characteristics.
Further, the specific process of step S4 is as follows:
s41, the SVM classifier establishes a hyperplane based on sample data to enable the data interval to be classified to be the maximum, then the sample data is segmented to become a convex quadratic problem to be solved, and the hyperplane equation is as follows:
g(x)=w·x+b
w is the weight vector in the discriminant function, w = (w) 1 ,w 2 ,…,w n ) T B is a constant, w · x is the inner product of the weight vector and the sample vector;
s42, K neighbor classifier, byCalculating the distance between the unknown sample and the given class according to the known data and the given class, and then distinguishing the class to which the unknown sample belongs, wherein the distance calculation expression is as follows:
Figure BDA0004038495540000035
s43, the SoftMax classifier can map the output value to the probability of the prediction result by using a hypothesis function, wherein the specific hypothesis function is as follows:
Figure BDA0004038495540000041
wherein k is a category label serial number and is also a dimension of a final output probability vector; w is a parameter matrix of the classifier model, which can also be expressed as w = (w) 1 ,w 2 ,…,w k ) (ii) a The function s is expressed as s = f (x) (i) ;w)=w T X (i)
Further, the step S5 specifically includes the following steps:
s51, because a plurality of classifiers are introduced into the model, multi-classifier fusion is carried out in a weighted voting mode, the outputs of different classifiers are weighted and summed to obtain a final prediction vector, and the weighted voting formula is as follows:
P = Σ_i β_i P_i
[Equation defining the weight β_i in terms of the classifier error rate e_i; rendered as an image in the original]
where P_i is the classification result vector after the feature vector has passed through the i-th classifier; in the one-hot coding of P_i the predicted label is 1 and the remaining labels are 0; β_i is the weight parameter of the i-th classifier and e_i is the corresponding classifier error rate.
The invention has the following beneficial effects: the method introduces SVM, SoftMax, PCA, CNN, and related methods into the field of oil spill detection; it improves the CNN model and increases the feature dimension of the data through the PCANet feature extraction network, which is useful when the data have few and hard-to-distinguish feature values; and the constructed multi-classifier performs decision-level fusion, which lengthens the experiment time but mitigates classifier bias and improves classification accuracy.
Drawings
FIG. 1 is a flow chart of the method implementation of the present invention.
Fig. 2 is a flow chart of implementation of decision-level fusion.
Fig. 3 shows the experimental results of the different classifiers on a sample set: statistics of the spatial overlap between the oil slick area extracted by each classifier and the oil slick area interpreted by an expert.
Fig. 4 compares detection results on a SAR oil spill image: the left image shows the oil slick (dark image region) interpreted by an expert, and the right image shows the oil slick (dark image region) extracted by the neural network.
Detailed Description
For the purpose of enhancing understanding of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and examples, which are provided for illustration only and are not intended to limit the scope of the present invention.
Embodiment: as shown in Fig. 1, an image oil spill detection method based on PCANet and multi-classifier fusion includes the following steps:
s1, selecting an SAR oil spilling image data set as a training sample, inputting the SAR oil spilling image data set into a principal component analysis neural network, and preprocessing an image matrix;
s2, generating a corresponding filter according to the image data processed in the step S1, and performing convolution operation on the filter and the image by taking the filter as a convolution kernel to further enrich the shallow feature of the image data;
and S3, processing the image data output in the step S2 by utilizing a Hash function, and reducing the overall complexity of the image data. Then using histogram statistics to generate an expanded histogram feature vector;
s4, as shown in FIG. 2, building a plurality of classifiers based on SVM, K neighbor and SoftMax algorithm, and classifying the feature vectors output in the step S3;
and S5, summing the output classification vectors of different classifiers by adopting a weighted voting mode to obtain a final prediction classification vector, and comparing the final prediction classification vector with an actual sample to obtain a model error rate and a semantic segmentation result. The detection precision is shown in fig. 3, and is a statistic of the spatial overlapping degree of the oil spot area extracted by the classifier and the oil spot area interpreted by the expert, the semantic segmentation result is shown in fig. 4, the left side is the oil spilling oil spot (image dark region) interpreted by the expert, and the right side is the oil spilling oil spot (image dark region) extracted by the neural network.
In this embodiment, step S1 selects as training samples the marine oil spill SAR image data sets covering the Gulf of Mexico and the Persian Gulf oil spill areas, produced and shared by the HPSCIL team at China University of Geosciences and the RS-IDEA team at Wuhan University; these are input into the principal component analysis neural network, and the image matrix is preprocessed and the corresponding features are extracted. The specific implementation is as follows:
in the initial stage, each picture is preprocessed, the matrix is partitioned according to the pixels of the picture by taking the size of k1 multiplied by k2, each partitioned matrix is expanded into column vectors according to the column priority principle, and the column vectors are recombined into a matrix from left to right. Picture pixel is mxn, and line step length is b when partitioning 1 Column step size of b 2 Moving along the picture matrix, the matrix size is calculated as:
line: r = k 1 ×k 2
The method comprises the following steps: c = ((m-k) 1 )/b 1 +1)×((m-k 2 )/b 2 +1)。
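As an illustrative sketch only (not part of the original description), the blocking step can be expressed in Python with NumPy; the block size k1 × k2, the steps b1 and b2, and the example image size below are assumed values:

    import numpy as np

    def extract_blocks(img, k1, k2, b1, b2):
        # Partition an m x n image into k1 x k2 blocks (row step b1, column
        # step b2), unfold each block in column-major order, and stack the
        # column vectors left to right, giving a matrix of shape (k1*k2, c).
        m, n = img.shape
        cols = []
        for i in range(0, m - k1 + 1, b1):
            for j in range(0, n - k2 + 1, b2):
                block = img[i:i + k1, j:j + k2]
                cols.append(block.flatten(order="F"))  # column-major unfolding
        return np.stack(cols, axis=1)

    # e.g. a 64 x 64 image with 7 x 7 blocks and unit steps gives
    # c = ((64 - 7)/1 + 1) * ((64 - 7)/1 + 1) = 58 * 58 columns
    X_i = extract_blocks(np.random.rand(64, 64), 7, 7, 1, 1)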
Further, the specific process of step S2 is as follows:
the principal component analysis in the first stage, taking the ith picture as an example, obtains x after being partitioned i And subtracting the column average from each column of the matrix to obtain the matrix
Figure BDA0004038495540000061
Then the same processing is carried out on n training pictures to obtain->
Figure BDA0004038495540000062
The row size of the matrix x is r, the column is c x n, the obtained matrix uses PCA algorithm, and the front L is taken 1 And the eigenvector corresponding to the largest eigenvalue is used as a filter. The mathematical expression is:
Figure BDA0004038495540000063
formula (la) in (1)
Figure BDA0004038495540000064
Meaning that a matrix which is mapped to a matrix size k1 × k2 is->
Figure BDA0004038495540000065
q l (xx T ) Denotes xx T Is obtained, thus corresponding to L 1 A k1 × k2 filter, using the L for each picture 1 The filters perform convolution.
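Continuing the sketch above (extract_blocks and the NumPy import are assumed), the first-stage filter learning could look as follows; the function name is hypothetical:

    def pca_filters(X, k1, k2, L):
        # X: the (k1*k2, c*n) matrix built by concatenating the block
        # matrices of all training pictures. Returns the L filters
        # mat(q_l(X X^T)) as k1 x k2 arrays, ordered by descending eigenvalue.
        Xbar = X - X.mean(axis=0, keepdims=True)   # remove the column mean
        eigvals, eigvecs = np.linalg.eigh(Xbar @ Xbar.T)
        order = np.argsort(eigvals)[::-1][:L]
        return [eigvecs[:, l].reshape(k1, k2, order="F") for l in order]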
The second-stage principal component analysis forms a cascaded principal component analysis with the first stage. First, the N images I_i output by the first stage are each convolved with the L1 filters of the first stage to obtain the L1 × N pictures to be trained in the second stage. The subsequent process is similar to the first stage: the matrix boundary is zero-padded before convolution so that the matrix obtained by convolving the expanded matrix has the same size as I_i.
The block results are then combined into a matrix and the column mean is removed to obtain Y^l, the result of convolving the N pictures with the l-th filter and then blocking. All filter outputs are combined as Y = [Y^1, Y^2, …, Y^{L1}]. After blocking, principal component analysis is performed on the matrix Y, the eigenvalues and eigenvectors are solved, and the eigenvectors corresponding to the first L2 eigenvalues, taken in descending order of eigenvalue, are used as filters, namely:
W_ℓ^2 = mat_{k1,k2}(q_ℓ(YY^T)), ℓ = 1, 2, …, L2
These filters are convolved with each image to obtain the input images of the next step.
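For illustration, one stage of the cascade can be sketched as below; scipy.signal.convolve2d with mode="same" (zero-filled boundary) stands in for the zero-padding described above, and the variable names are assumptions:

    from scipy.signal import convolve2d

    def pcanet_stage(images, filters):
        # Convolve every image with every filter; "same" mode zero-pads so
        # each output map keeps the size of its input image.
        return [convolve2d(img, f, mode="same") for img in images for f in filters]

    # stage 1: N images -> L1*N maps; stage 2: L1*N maps -> L2*L1*N maps
    # stage1_maps = pcanet_stage(train_images, W1)
    # stage2_maps = pcanet_stage(stage1_maps, W2)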
Step S3: the image data output in step S2 are processed with a hash function to reduce their overall complexity, and histogram statistics are then used to generate the expanded histogram feature vector. The specific implementation is as follows:
For the images output by the second stage, the following calculation is made:
T_i^l = Σ_{ℓ=1}^{L2} 2^(ℓ−1) H(I_i^l ∗ W_ℓ^2)
Each picture is convolved with each of the L2 filters. The function H(·) sets every element of the matrix to 0 or 1 without changing the original shape of the matrix: an element of the original matrix greater than 0 becomes 1 at the corresponding position of the new matrix, and 0 otherwise. Each calculated matrix is multiplied by its corresponding weight and the results are added, the weights corresponding to the filters one to one.
The obtained matrix T_i^l is partitioned into B blocks to obtain a block matrix of size k1k2 × B, and histogram statistics are computed on the block matrix, the range of the histogram being [0, 2^(L2) − 1], so that the histogram matrix has size 2^(L2) × B. The histogram matrix is vectorized to obtain a row vector; all the matrices are processed with the same histogram operation and then concatenated, finally yielding the block expansion histogram features.
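A minimal sketch of the hashing and block-histogram step, assuming NumPy is imported as np, that the L2 stage-2 maps belonging to one stage-1 map are collected in a list, and a hypothetical block size:

    def binary_hash(maps):
        # Binarize each of the L2 maps (H(x) = 1 if x > 0, else 0) and sum
        # them weighted by powers of two, giving values in [0, 2^L2 - 1].
        return sum((m > 0).astype(np.int64) << l for l, m in enumerate(maps))

    def block_histogram(t_map, L2, block=8):
        # Split the hashed map into blocks and concatenate the 2^L2-bin
        # histogram of each block into one feature vector.
        H, W = t_map.shape
        feats = []
        for i in range(0, H - block + 1, block):
            for j in range(0, W - block + 1, block):
                hist, _ = np.histogram(t_map[i:i + block, j:j + block],
                                       bins=2 ** L2, range=(0, 2 ** L2))
                feats.append(hist)
        return np.concatenate(feats)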
Step S4: multiple classifiers are built based on the SVM, K-nearest neighbor, and SoftMax algorithms, and the feature vectors output in step S3 are classified. The specific implementation is as follows:
the SVM classifier establishes a hyperplane based on sample data to maximize the data interval to be classified, and then divides the sample data to form a convex quadratic problem solution, wherein the hyperplane equation is as follows:
g(x)=w·x+b
where w is the weight vector in the discriminant function, w = (w) 1 ,w 2 ,…,w n ) T B is a constant, w · x is the inner product of the weight vector and the sample vector. When the two types of samples are classified, the classification decision function expression constructed by the symbolic function is as follows:
f(x)=sgn(w·x+b)
wherein sgn represents the constructor with a value of 1 if the input value is greater than 0, a value less than 0 is output as-1, and the resulting sample labels are "+1" and "-1".
In order to better classify the distortion types, a Gaussian sum function is used as a kernel function of the SVM classifier. The gaussian kernel function is applicable to multi-classification problems and is a block of convergence rates in the computation process. The Gaussian kernel function has two parameters which are an adjustable parameter y and a normalization coefficient c respectively, wherein the adjustable parameter y influences the smoothness of a model in the SVM, the smaller the value of y is, the more gradual the model of the SVM classifier is, and c is an important parameter of a loss function in the gradient descending process in the SVM. The SVM classification model can cause the over-fitting phenomenon along with the continuous increase of the parameter c, and the complexity and high deviation of the hyperplane model are influenced; conversely, if the parameter c is too small, this may also cause an under-fitting phenomenon to the SVM classifier.
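Such a classifier can be sketched with scikit-learn, where the RBF kernel's gamma and C parameters play the roles of γ and C above; the numeric values are placeholders, not the settings of the invention:

    from sklearn.svm import SVC

    # Gaussian (RBF) kernel SVM; gamma controls model smoothness, C the penalty
    svm_clf = SVC(kernel="rbf", gamma=0.01, C=10.0)
    # svm_clf.fit(train_features, train_labels)
    # svm_pred = svm_clf.predict(test_features)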
The K-nearest neighbor classifier calculates the distance between an unknown sample and the known data of given classes, and then determines the class to which the unknown sample belongs. The distance calculation expression is:
d(x, y) = √(Σ_{j=1}^{n} (x_j − y_j)^2)
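For example, scikit-learn's KNeighborsClassifier uses exactly this Euclidean (Minkowski, p = 2) distance by default; the neighbor count is a placeholder:

    from sklearn.neighbors import KNeighborsClassifier

    knn_clf = KNeighborsClassifier(n_neighbors=5)  # Euclidean distance by default
    # knn_clf.fit(train_features, train_labels)
    # knn_pred = knn_clf.predict(test_features)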
the SoftMax classifier can map the output values to probabilities of the predicted outcome using a hypothesis function, with the final output probability values summed to equal 1. For sample data feature x (i) And category label y (i) Let us assume a function h W (x) The probability value that each data feature belongs to different types can be calculated, and the probability distribution is output, wherein the specific assumption function is as follows:
Figure BDA0004038495540000081
wherein k is a category label serial number and is also a dimension of a final output probability vector; w is a parameter matrix of the classifier model, which can also be expressed as w = (w) 1 ,w 2 ,…,w k ) (ii) a The function s is expressed as: s = f (x) (i) ;w)=w T X (i)
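A minimal NumPy sketch of this hypothesis function; the weight matrix W is assumed to have been trained already (for example by minimizing cross-entropy):

    import numpy as np

    def softmax_hypothesis(W, x):
        # h_W(x): map the scores s = W^T x to a k-dimensional probability vector
        s = W.T @ x
        e = np.exp(s - s.max())   # subtract the max for numerical stability
        return e / e.sum()        # components are non-negative and sum to 1

    # predicted_class = softmax_hypothesis(W, feature_vector).argmax()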
Step S5: the output classification vectors of the different classifiers are summed by weighted voting to obtain the final prediction classification vector, which is compared with the actual samples to obtain the model error rate. The specific implementation is as follows:
Since multiple classifiers are introduced into the model and the classifiers perform differently, the final result cannot be obtained by simple voting. Multi-classifier fusion is therefore performed by weighted voting: the outputs of the different classifiers are weighted and summed to obtain the final prediction vector. The weighted voting formula is:
P = Σ_i β_i P_i
[Equation defining the weight β_i in terms of the classifier error rate e_i; rendered as an image in the original]
where P_i is the classification result vector after the feature vector has passed through the i-th classifier; in the one-hot coding of P_i the predicted label is 1 and the remaining labels are 0; β_i is the weight parameter of the i-th classifier and e_i is the corresponding classifier error rate.
In summary, among current deep learning networks, PCANet uses the most basic PCA filters as its convolutional-layer filters, applies binary hash coding in the nonlinear layer, and uses block expansion histograms assisted by binary hash coding in the resampling layer, taking the output of the resampling layer as the final feature extraction result of the whole PCANet network. This effectively reduces the resources required for network training and improves training efficiency.
The above description is only an example of the present invention, and the common general knowledge of the known specific structures and characteristics in the schemes is not described herein. It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (7)

1. An image oil spill detection method based on PCANet and multi-classifier fusion is characterized by comprising the following steps:
S1, selecting a SAR oil spill image data set as training samples, inputting the training samples into a principal component analysis neural network, and preprocessing the image matrix;
S2, generating the corresponding filters from the image data processed in step S1, and convolving the filters, as convolution kernels, with the images to further enrich the shallow features of the image data;
S3, processing the image data output in step S2 with a hash function to reduce its overall complexity, then using histogram statistics to generate the expanded histogram feature vector;
S4, building multiple classifiers based on the SVM, K-nearest neighbor, and SoftMax algorithms, and classifying the feature vectors output in step S3;
S5, summing the output classification vectors of the different classifiers by weighted voting to obtain the final prediction classification vector, and comparing it with the actual samples to obtain the model error rate.
2. The image oil spill detection method based on PCANet and multi-classifier fusion as claimed in claim 1, wherein the specific process of the step S1 is as follows:
s11, preprocessing each picture, partitioning the matrix by taking k1 multiplied by k2 as the size according to the pixels of the picture, unfolding each partitioned matrix into column vectors according to a column priority principle, and recombining the column vectors into a matrix from left to right;
s12, the picture pixel is m multiplied by n, and the line step length is b during blocking 1 Column step size of b 2 Moving along the picture matrix, and calculating the size of the matrix according to the formula:
line: r = k 1 ×k 2
The method comprises the following steps: c = ((m-k) 1 )/b 1 +1)×((m-k 2 )/b 2 +1)。
3. The method for detecting oil spill of image based on PCANet and multi-classifier fusion as claimed in claim 2, wherein the specific process of the step S2 is as follows:
s21, the principal component analysis is divided into two stages, and in the first stage, the column average is subtracted from each column of the matrix to obtain the matrix
Figure FDA0004038495530000011
Then the same processing is carried out on n training pictures to obtain->
Figure FDA0004038495530000012
The row size of matrix x is r, the columns are c × n, PCA algorithm is used for the obtained matrix, and L is taken as the front 1 The eigenvector corresponding to the largest eigenvalue is used as a filter, and the mathematical expression is as follows: />
Figure FDA0004038495530000021
S22, outputting N images I to the first stage in the second stage i Respectively with L of the first stage 1 Convolving the filter to obtain L 1 XN images, which are subsequently similar to the first phase, are filled with a boundary 0 before convolution of the matrix, in order to convolve the expanded matrix, the matrix sum I that can be obtained i The sizes are the same.
4. The image oil spill detection method based on PCANet and multi-classifier fusion as claimed in claim 3, wherein the specific process of the step S3 is as follows:
for each image I_i^l output by the second stage, the following calculation is made:
T_i^l = Σ_{ℓ=1}^{L2} 2^(ℓ−1) H(I_i^l ∗ W_ℓ^2)
histogram statistics are then computed on the matrix, the range of the histogram being [0, 2^(L2) − 1]; the histogram matrix is vectorized to obtain a row vector, the same histogram processing is applied to all the matrices, and the results are concatenated to finally obtain the block expansion histogram features.
5. The image oil spill detection method based on PCANet and multi-classifier fusion as claimed in claim 4, further characterized in that the specific process of the step S4 is as follows:
s41, the SVM classifier establishes a hyperplane based on sample data to enable the data interval to be classified to be the maximum, then the sample data is segmented to become a convex quadratic problem to be solved, and the hyperplane equation is as follows:
g(x)=w·x+b
w is the weight vector in the discriminant function, w = (w) 1 ,w 2 ,…,w n ) T B is a constant, w · x is the inner product of the weight vector and the sample vector;
s42, the K neighbor classifier calculates the distance between the unknown sample and the known data through the known data and the given category, then judges the category to which the unknown sample belongs, and the distance calculation expression is as follows:
Figure FDA0004038495530000025
s43, the SoftMax classifier can map the output value to the probability of the prediction result by using a hypothesis function, wherein the specific hypothesis function is as follows:
Figure FDA0004038495530000031
wherein k is a category label serial number and is also a dimension of a final output probability vector; w is a parameter matrix of the classifier model, which can also be expressed as w = (w) 1 ,w 2 ,…,w k ) (ii) a The function s is expressed as s = f (x) (i) ;w)=w T X (i)
6. The method according to claim 5, wherein in step S41 of step S4, when classifying two classes of samples, the classification decision function constructed with the sign function is expressed as follows:
f(x) = sgn(w·x + b)
where sgn denotes the sign function, whose value is 1 if the input is greater than 0 and −1 if the input is less than 0, so the resulting sample labels are "+1" and "−1".
7. The image oil spill detection method based on PCANet and multi-classifier fusion as claimed in claim 6, wherein the specific process of the step S5 is as follows:
s51, because a plurality of classifiers are introduced into the model, multi-classifier fusion is carried out in a weighted voting mode, the outputs of different classifiers are weighted and summed to obtain a final prediction vector, and the weighted voting formula is as follows:
Figure FDA0004038495530000032
Figure FDA0004038495530000033
wherein P is i For the classification result vector, P, after the feature vector has passed through the ith classifier i The predicted label in the vector is 1 in a one-hot coding mode, and the rest labels are 0; beta is a i As a weight parameter of the ith classifier, e i The corresponding classifier error rate.
Priority and Publication Data

Application CN202310011074.4A: Image oil spill detection method based on PCANet and multi-classifier fusion. Priority date and filing date: 2023-01-05. Published as CN115984223A (en) on 2023-04-18; status: pending. Family ID: 85972144. Country: CN.


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117355038A (en) * 2023-11-10 2024-01-05 江西红板科技股份有限公司 X-shaped hole processing method and system for circuit board soft board
CN117355038B (en) * 2023-11-10 2024-03-19 江西红板科技股份有限公司 X-shaped hole processing method and system for circuit board soft board


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination