CN110738249A - aurora image clustering method based on deep neural network - Google Patents

aurora image clustering method based on deep neural network Download PDF

Info

Publication number
CN110738249A
CN110738249A (application CN201910950408.8A)
Authority
CN
China
Prior art keywords
aurora
spectralnet
dcae
vgg
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910950408.8A
Other languages
Chinese (zh)
Other versions
CN110738249B (en)
Inventor
Yang Qiuju (杨秋菊)
Liu Chang (刘畅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN201910950408.8A priority Critical patent/CN110738249B/en
Publication of CN110738249A publication Critical patent/CN110738249A/en
Application granted granted Critical
Publication of CN110738249B publication Critical patent/CN110738249B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an aurora image clustering method based on a deep neural network, which mainly solves the problem that the existing automatic classification of aurora images is basically supervised classification.

Description

aurora image clustering method based on deep neural network
Technical Field
The invention belongs to the technical field of image processing, relates to deep learning algorithms and image clustering methods, and particularly relates to an aurora image clustering method based on a deep neural network, which can be used for clustering large-scale aurora images.
Background
The aurora is the only macroscopic high-altitude atmospheric phenomenon in the Earth's high-latitude polar regions that directly reflects the connection between changes in the geomagnetic field and solar activity.
Early aurora image classification was carried out by visual inspection: scientists manually labeled the images according to the classification criteria established at the time. The number of aurora images from the Arctic Yellow River Station alone reaches the million level every year, and the traditional manual classification method cannot meet the demand of analyzing such large amounts of aurora data. With the development of computer technology, image processing and machine learning techniques began to be applied to the study of aurora image classification. In 2004, Syrjäsuo et al., in the document "Syrjäsuo M T, Donovan E F. Diurnal auroral occurrence statistics obtained via machine vision. Ann. Geophys., 2004, 22(4):1103-1113", applied machine vision to the automatic analysis of auroral occurrence in all-sky images. In 2010, Wang et al., in the document "Wang Q, Liang J, Hu Z J, et al. Spatial texture based automatic classification of dayside aurora in all-sky images. J. Atmos. Sol.-Terr. Phys., 2010, 72:498-508", proposed a spatial-texture-based representation method to characterize the global shape and local texture of an aurora image and classified the dayside aurora images observed during overwintering at the Yellow River Station.
The concept of "deep learning" was proposed in the document "Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets. Neural Comput., 2006, 18:1527-1554". It was not until 2012, when AlexNet won the ImageNet Large-Scale Visual Recognition Challenge, that the deep convolutional neural network (DCNN) gradually began to replace traditional machine learning methods. In 2018, Clausen and Nickisch used an Inception-V4 deep neural network to divide aurora images into 6 categories: no aurora, cloudy, moon, arc aurora, diffuse aurora and discrete aurora. In recent years, although aurora research based on hand-crafted features has continued, the classical hand-crafted features have gradually been replaced by intermediate feature layers of deep convolutional neural networks, which achieve better performance than the traditional hand-crafted features.
However, the aurora image classification mechanisms are not uniform, the workload of labeling data is large and tedious, and it is difficult for manual labeling to guarantee the accuracy and objectivity of the labels.
Disclosure of Invention
Aiming at the shortcomings of deep-learning-based supervised aurora image classification methods, the invention provides an unsupervised aurora image clustering network, which does not need a manually specified classification mechanism or data labels, can automatically classify aurora images, and greatly improves aurora classification efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
An aurora image clustering method based on a deep neural network, comprising the following steps:
step 1: pre-training a VGGNet 16-based depth convolution self-encoder DCAE _ VGG using all aurora images to be clustered;
step 2: extracting a feature vector of each aurora image by using the trained DCAE _ VGG;
and step 3: the feature vectors are clustered using a clustering network consisting of a Siamese network, SpectralNet, and k-means clustering.
Further, step 1 specifically includes:
step 1.1: combining VGGNet16 with an auto-encoder to build the deep convolutional auto-encoder DCAE_VGG model structure;
step 1.2: resizing the aurora images to be clustered to 64 x 64 and inputting them into DCAE_VGG, and training DCAE_VGG until the loss function converges, wherein DCAE_VGG optimizes its parameter values by stochastic gradient descent and uses the mean squared error as the loss function;
step 1.3: and saving the model structure and the model parameters of the DCAE _ VGG.
Further, the DCAE_VGG model structure in step 1.1 is composed of an encoding part, a hidden layer and a decoding part, wherein the encoding part retains the first five convolutional blocks of VGGNet16 and their corresponding pooling layers, a fully connected layer of length 512 replaces the original three fully connected layers and the softmax layer, the hidden layer is a fully connected layer of length 10 used to represent the low-dimensional feature vector of an aurora image, and the decoding part is composed of a fully connected layer of length 512 and four deconvolution layers.
Further, step 2 calls the saved model structure and model parameters of DCAE_VGG, using only the model structure and parameters of the encoding part and the hidden layer of DCAE_VGG; the n aurora images to be clustered are input into the encoding part of DCAE_VGG, and the feature vectors {x_1, x_2, ..., x_n} of these images are obtained from the hidden layer of DCAE_VGG.
Further, step 3 specifically includes:
step 3.1: training a Siamese network by constructing a positive/negative similarity pair training set, wherein the trained Siamese network is used for calculating a similarity matrix W in the SpectralNet in step 3.2;
step 3.2: training SpectralNet to obtain, for the inputs {x_1, x_2, ..., x_n}, the corresponding mapped outputs {y_1, y_2, ..., y_n}, i.e. y = F_θ(x), where θ denotes the parameters of SpectralNet;
step 3.3: clustering {y_1, y_2, ..., y_n} with the k-means clustering method to obtain the clustering result.
Further, step 3.1 specifically includes:
step 3.1.1: constructing a training set of positive/negative similarity pairs for the Siamese network by computing the Euclidean distance between feature vectors; the k nearest neighbors of each point are marked as positive pairs, and an equal number of randomly selected non-neighboring points are marked as negative pairs; the value of k is determined by the chosen number of Siamese similarity pairs;
step 3.1.2: training the Siamese network to obtain the mapping z_i = G_θSiamese(x_i), where θSiamese denotes the parameters of the Siamese network, which maps each feature vector x_i to a point z_i in another space; the mapping is determined by minimizing a loss function during training, and the loss function L_Siamese of the Siamese network is defined as L_Siamese(z_i, z_j) = ||z_i - z_j||^2 for a positive pair (x_i, x_j) and L_Siamese(z_i, z_j) = max(c - ||z_i - z_j||, 0)^2 for a negative pair, where c is a constant; after the Siamese network is trained, ||z_i - z_j|| is used instead of the Euclidean distance ||x_i - x_j|| when calculating the similarity matrix W in SpectralNet in step 3.2.
Further, step 3.2 specifically includes:
step 3.2.1: from the feature vectors {x_1, x_2, ..., x_n} of all aurora images, randomly drawing a mini-batch of m feature samples {x_1, x_2, ..., x_m}; after these m aurora-image feature samples pass through the orthogonal layer of SpectralNet, the corresponding mapped outputs Ỹ = {ỹ_1, ỹ_2, ..., ỹ_m} are obtained;
Step 3.2.2: computing Cholesky factorization
Figure BDA0002225609740000044
Step 3.2.3: set the weights of the SpectralNet orthogonal layers to
Figure BDA0002225609740000045
Step 3.2.4: feature vector from all aurora images { x1,x2,...,xn groups of m small-batch feature samples { x } are randomly extracted1,x2,...,xmInputting the characteristic samples of the m aurora images into a trained Siemese network to obtain a similarity matrix W with the output of m multiplied by m;
step 3.2.5: inputting the feature samples {x_1, x_2, ..., x_m} of the aurora images into SpectralNet and calculating the SpectralNet loss function L_SpectralNet(θ), whose gradient is used to adjust the weights of all layers of SpectralNet except the orthogonal layer; L_SpectralNet(θ) is defined as L_SpectralNet(θ) = (1/m^2) Σ_{i,j} W_ij ||y_i - y_j||^2;
step 3.2.6: repeating steps 3.2.1-3.2.5; when the loss function L_SpectralNet(θ) converges, the parameters and weights of SpectralNet are determined and the training of SpectralNet is finished;
step 3.2.7: inputting the feature vectors {x_1, x_2, ..., x_n} of the aurora images to be clustered into the trained SpectralNet to obtain the corresponding mapped outputs {y_1, y_2, ..., y_n}.
Compared with the prior art, the invention has the following beneficial technical effects:
First, interdisciplinarity: the invention uses the dayside aurora observation data of the Arctic Yellow River Station and automatically classifies aurora images by means of computer image processing and deep learning technology; it applies deep learning technology to aurora image clustering for the first time and is helpful for further study of the generation mechanisms of the various kinds of aurora.
Secondly, the method does not depend on any prior classification mechanism and does not require manually labeled data. Aurora image classification based on deep learning is currently supervised, and part of the data must be labeled manually according to a prior classification mechanism for model training; the labeling workload is large and tedious, and manual labeling makes it difficult to guarantee the objectivity and accuracy of the labels. In the invention, features of the aurora images are automatically extracted by DCAE_VGG, and a clustering network consisting of a Siamese network, SpectralNet and k-means clustering performs unsupervised classification of the aurora images, which overcomes the problems of the current supervised classification of aurora images and greatly improves the aurora image classification efficiency.
Thirdly, the optimal classification of aurora images is explored. There is currently no systematic aurora classification mechanism; the invention extracts the latent features of aurora images through a deep neural network, performs unsupervised clustering on them, and automatically groups aurora images with high morphological similarity into the same class.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic diagram of a deep convolutional auto-encoder DCAE _ VGG network structure based on VGGNet16 in the present invention;
FIG. 3 is a schematic view of a visualization of a feature vector for DCAE _ VGG extraction;
FIG. 4 is an aurora occurrence time distribution based on the 2-class labels;
FIG. 5 is an aurora occurrence time distribution based on the 2-class clustering results;
fig. 6 is an exemplary diagram of a clustering result.
Detailed Description
The following describes the implementation steps and technical effects of the present invention in detail with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the invention are as follows:
step 1: the depth convolution self-encoder DCAE _ VGG based on VGGNet16 was pre-trained using all the aurora images to be clustered.
1.1) VGGNet16 is combined with an auto-encoder to build the network structure of the deep convolutional auto-encoder DCAE_VGG shown in FIG. 2 (a code sketch of this structure is given after step 1.3 below). DCAE_VGG is composed of an encoding part, a hidden layer (feature vector) and a decoding part: the encoding part retains the first five convolutional blocks of VGGNet16 and their corresponding pooling layers, a fully connected layer of length 512 replaces the original three fully connected layers and the softmax layer, the hidden layer is a fully connected layer of length 10 used to represent the low-dimensional feature vector of an aurora image, and the decoding part is composed of a fully connected layer of length 512 and four deconvolution layers.
1.2) All aurora images to be clustered are resized to 64 x 64 and input into DCAE_VGG, and DCAE_VGG is trained until the loss function converges. DCAE_VGG optimizes its parameter values with stochastic gradient descent and uses the mean squared error as the loss function.
1.3) saving the model structure and model parameters of the DCAE _ VGG.
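By way of illustration, the following is a minimal Keras sketch of a DCAE_VGG-style auto-encoder consistent with the description above. The single-channel grayscale input, the exact filter counts of the five convolutional blocks, the 4 x 4 x 32 reshape in front of the deconvolution layers, and the learning rate are assumptions not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcae_vgg(input_shape=(64, 64, 1)):
    inp = layers.Input(shape=input_shape)

    # Encoding part: the first five VGG16-style convolutional blocks, each followed by max pooling.
    x = inp
    for filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
        for _ in range(n_convs):
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)                      # 64 -> 32 -> 16 -> 8 -> 4 -> 2

    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)            # fully connected layer of length 512
    code = layers.Dense(10, name="hidden")(x)              # hidden layer: 10-d feature vector

    # Decoding part: a fully connected layer of length 512 followed by four deconvolution layers.
    y = layers.Dense(512, activation="relu")(code)
    y = layers.Reshape((4, 4, 32))(y)                      # assumed reshape (4 * 4 * 32 = 512)
    for filters in (256, 128, 64):
        y = layers.Conv2DTranspose(filters, 3, strides=2, padding="same", activation="relu")(y)
    out = layers.Conv2DTranspose(input_shape[-1], 3, strides=2,
                                 padding="same", activation="sigmoid")(y)  # back to 64 x 64

    return models.Model(inp, out, name="DCAE_VGG")

dcae = build_dcae_vgg()
dcae.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")
# dcae.fit(images, images, batch_size=64, epochs=100)      # train until the MSE loss converges
# dcae.save("dcae_vgg.h5")                                 # save the model structure and parameters
```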
Step 2: and extracting a feature vector of each aurora image by using the trained DCAE _ VGG. Call securityThe model structure and model parameters of the stored DCAE _ VGG are only used when the feature vector is extracted, and the model structure and model parameters of the coding part and the hidden layer part of the DCAE _ VGG are only used when the feature vector is extracted. All n aurora images to be clustered are input into a coding part of DCAE _ VGG, and feature vectors { x ] of the images are obtained through a hidden layer of the DCAE _ VGG1,x2,...,xn}。
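A short sketch of this feature-extraction step, assuming the model and the layer name "hidden" from the sketch above and an `images` array holding the n resized aurora images:

```python
from tensorflow.keras import models

dcae = models.load_model("dcae_vgg.h5")                              # saved structure + parameters
encoder = models.Model(dcae.input, dcae.get_layer("hidden").output)  # encoding part + hidden layer only

features = encoder.predict(images)                                   # shape (n, 10): {x_1, x_2, ..., x_n}
```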
And step 3: the feature vectors are clustered using a clustering network consisting of siemes, SpectralNet, and k-means clustering.
3.1) training the Siamese network by constructing a training set of positive/negative similarity pairs, the trained Siamese network will be used to calculate the similarity matrix W in SpectralNet in step 3.2:
3.1.1) A training set of positive/negative similarity pairs is constructed for the Siamese network by computing the Euclidean distance between feature vectors: the k nearest neighbors of each point are marked as positive pairs, and an equal number of randomly selected non-neighboring points are marked as negative pairs; the value of k is determined by the chosen number of Siamese similarity pairs, which is set to 3200 in the experiment.
3.1.2) The Siamese network is trained; it maps each feature vector x_i to a point z_i in another space, i.e. z_i = G_θSiamese(x_i), where θSiamese denotes the parameters of the Siamese network. The network structure size of the Siamese network is 512-4. The mapping is determined by minimizing a loss function during training; the loss function L_Siamese of the Siamese network is defined as L_Siamese(z_i, z_j) = ||z_i - z_j||^2 for a positive pair and L_Siamese(z_i, z_j) = max(c - ||z_i - z_j||, 0)^2 for a negative pair, where the constant c is set to 1. After the Siamese network is trained, ||z_i - z_j|| is used instead of the Euclidean distance ||x_i - x_j|| when calculating the similarity matrix W in SpectralNet in step 3.2, with W_ij = exp(-||z_i - z_j||^2 / (2σ^2)), where σ is set to 25.
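The following sketch illustrates one possible way to build the positive/negative pair training set and train a Siamese embedding network with the contrastive loss described above. The pair-construction details (value of k, subsampling), the layer sizes and the optimizer are assumptions; only the margin c = 1 and the total of 3200 pairs come from the text, and `features` is the array from the feature-extraction sketch.

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import NearestNeighbors
from tensorflow.keras import layers, models

def make_pairs(features, n_pairs=3200, k=2):
    """k nearest neighbours give candidate positive pairs; an equal number of random
    non-neighbours give negative pairs; n_pairs pairs are kept in total."""
    n = len(features)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nbrs.kneighbors(features)                     # idx[i, 0] is the point itself
    pos = [(i, j) for i in range(n) for j in idx[i, 1:]]
    np.random.shuffle(pos)
    pos = pos[: n_pairs // 2]
    neg = []
    while len(neg) < len(pos):
        i, j = np.random.randint(n, size=2)
        if j not in idx[i]:                                # not a neighbour (and not the point itself)
            neg.append((i, j))
    pairs = pos + neg
    labels = np.array([1.0] * len(pos) + [0.0] * len(neg), dtype="float32")
    left = features[[p[0] for p in pairs]]
    right = features[[p[1] for p in pairs]]
    return left, right, labels

def contrastive_loss(y_true, dist, c=1.0):
    """Positive pairs are pulled together, negative pairs pushed beyond the margin c."""
    y_true = tf.reshape(tf.cast(y_true, dist.dtype), [-1])
    dist = tf.reshape(dist, [-1])
    return y_true * tf.square(dist) + (1.0 - y_true) * tf.square(tf.maximum(c - dist, 0.0))

emb = models.Sequential([layers.Dense(512, activation="relu", input_shape=(10,)),
                         layers.Dense(4)])                 # shared embedding network G_theta
xa, xb = layers.Input(shape=(10,)), layers.Input(shape=(10,))
dist = layers.Lambda(lambda t: tf.sqrt(
    tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True) + 1e-9))([emb(xa), emb(xb)])
siamese = models.Model([xa, xb], dist)
siamese.compile(optimizer="adam", loss=contrastive_loss)

left, right, labels = make_pairs(features)
siamese.fit([left, right], labels, batch_size=128, epochs=50)
```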
3.2) training SpectralNet:
the network structure size of the SpectralNet is 512-4, and the number of cluster clusters is set to be 2. Input { x is obtained by SpectralNet1,x2,...,xnThe corresponding mapping output y1,y2,...,ynI.e. y ═ Fθ(x) And theta is a parameter of SpectralNet.
3.2.1) From the feature vectors {x_1, x_2, ..., x_n} of all aurora images, a mini-batch of m feature samples {x_1, x_2, ..., x_m} is randomly drawn; after these m aurora-image feature samples pass through the orthogonal layer of SpectralNet, the corresponding mapped outputs Ỹ = {ỹ_1, ỹ_2, ..., ỹ_m} are obtained;
3.2.2) the Cholesky factorization LLᵀ = ỸᵀỸ is computed, where L is a lower triangular matrix;
3.2.3) the weights of the orthogonal layer of SpectralNet are set to √m(L⁻¹)ᵀ;
3.2.4) From the feature vectors {x_1, x_2, ..., x_n} of all aurora images, a mini-batch of m feature samples {x_1, x_2, ..., x_m} is randomly drawn; these m aurora-image feature samples are input into the trained Siamese network and the m x m similarity matrix W is calculated;
3.2.5) The feature samples {x_1, x_2, ..., x_m} of the aurora images are input into SpectralNet and the SpectralNet loss function L_SpectralNet(θ) is calculated; its gradient is used to adjust the weights of all layers of SpectralNet except the orthogonal layer. L_SpectralNet(θ) is defined as L_SpectralNet(θ) = (1/m^2) Σ_{i,j} W_ij ||y_i - y_j||^2 (a code sketch of one such training step is given after step 3.2.7 below);
3.2.6) Steps 3.2.1-3.2.5 are repeated; when the loss function L_SpectralNet(θ) converges, the parameters and weights of SpectralNet are determined and the training of SpectralNet is finished;
3.2.7) The feature vectors {x_1, x_2, ..., x_n} of the aurora images to be clustered are input into the trained SpectralNet to obtain the corresponding mapped outputs {y_1, y_2, ..., y_n}.
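A schematic TensorFlow sketch of one SpectralNet training step (steps 3.2.1-3.2.5) under the reconstruction above, i.e. Cholesky-based orthogonalization followed by a gradient step on the spectral loss. The layer sizes, the mini-batch size m, the optimizer and the reuse of the Siamese embedding network `emb` from the earlier sketch are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

k_clusters, m, sigma = 2, 256, 25.0      # cluster count and sigma from the text; batch size m assumed

# SpectralNet: trainable body F_theta plus a final linear "orthogonal" layer whose
# weights are set analytically (not by gradient descent) at every training step.
body = models.Sequential([layers.Dense(512, activation="relu", input_shape=(10,)),
                          layers.Dense(512, activation="relu"),
                          layers.Dense(k_clusters)])
ortho = layers.Dense(k_clusters, use_bias=False)
ortho.build((None, k_clusters))
opt = tf.keras.optimizers.Adam(1e-3)

def affinity(z):
    """Gaussian similarity matrix W_ij = exp(-||z_i - z_j||^2 / (2 sigma^2)) from Siamese embeddings."""
    d2 = tf.reduce_sum(tf.square(z[:, None, :] - z[None, :, :]), axis=-1)
    return tf.exp(-d2 / (2.0 * sigma ** 2))

def train_step(x_batch):
    # Steps 3.2.1-3.2.3: orthogonalization from the Cholesky factor of Y~^T Y~.
    y_tilde = body(x_batch, training=False)
    L = tf.linalg.cholesky(tf.matmul(y_tilde, y_tilde, transpose_a=True))
    ortho.set_weights([np.sqrt(m) * tf.transpose(tf.linalg.inv(L)).numpy()])

    # Step 3.2.4: similarity matrix W from the trained Siamese embedding network `emb`.
    W = affinity(emb(x_batch))

    # Step 3.2.5: spectral loss; gradients update the body only, not the orthogonal layer.
    with tf.GradientTape() as tape:
        y = ortho(body(x_batch, training=True))
        d2 = tf.reduce_sum(tf.square(y[:, None, :] - y[None, :, :]), axis=-1)
        loss = tf.reduce_sum(W * d2) / float(m ** 2)
    grads = tape.gradient(loss, body.trainable_variables)
    opt.apply_gradients(zip(grads, body.trainable_variables))
    return float(loss)

# Step 3.2.6: iterate over random mini-batches of size m until the loss converges;
# step 3.2.7: ortho(body(features)) then maps all n feature vectors for clustering.
```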
3.3) {y_1, y_2, ..., y_n} are clustered with the k-means clustering method to obtain the clustering result.
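A brief sketch of step 3.3, assuming the `body`, `ortho` and `features` objects from the sketches above:

```python
from sklearn.cluster import KMeans

y_all = ortho(body(features)).numpy()                      # map all n feature vectors with SpectralNet
cluster_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y_all)
```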
The effects of the present invention can be further described by the following simulation experiments.
Experiment 1: the DCAE _ VGG model extracts a feature vector of the aurora image.
The experimental conditions are as follows: 4000 aurora images acquired during the 2003-2008 overwintering observations at the Arctic Yellow River Station are used for the experiment.
The experimental contents are as follows: the DCAE_VGG model is trained with the 4000 aurora images, and the structure and model parameters of the DCAE_VGG model are saved after training. The feature vector of each aurora image is then extracted with the trained DCAE_VGG model and visualized with the t-SNE algorithm; the visualization result is shown in FIG. 3.
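A small sketch of the t-SNE visualization used in this experiment, assuming the `features` array of 10-dimensional DCAE_VGG feature vectors from the earlier sketch (perplexity and other t-SNE settings are not specified in the text):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb2d = TSNE(n_components=2, random_state=0).fit_transform(features)   # 10-d features -> 2-d
plt.scatter(emb2d[:, 0], emb2d[:, 1], s=3)
plt.title("t-SNE of DCAE_VGG feature vectors")
plt.show()
```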
As can be seen from FIG. 3, the feature vectors are distributed in two clusters, which indicates that the latent features of the aurora images fall into 2 classes; in addition, the two groups of features are well separated, which indicates that DCAE_VGG extracts highly distinguishable representations of the aurora images.
Experiment 2: the aurora images are grouped into 2 types by the invention.
The experimental conditions are as follows: the experiment uses the feature vectors of the 4000 aurora images obtained in experiment 1. Based on the dayside aurora classification mechanism for the Arctic Yellow River Station proposed by Hu et al. in the document "Hu Z J, Yang H, Huang D, et al. Synoptic distribution of dayside aurora: Multiple-wavelength all-sky observation at Yellow River Station in Ny-Ålesund, Svalbard. J. Atmos. Sol.-Terr. Phys., 2009, 71:794-804", the 4000 images comprise 1000 images each of arc, drapery corona, hot-spot and radial corona aurora. These four aurora types are further merged into 2 classes: the arc and hot-spot aurora images are mixed and labeled as one large class (class 1), and the drapery corona and radial corona aurora images are mixed and labeled as the other large class (class 2). An aurora occurrence time distribution based on these 2-class labels is plotted, as detailed in FIG. 4.
The experimental contents are as follows: referring to the visualization result of the feature vectors in experiment 1, the 4000 aurora images were grouped into 2 classes using the method proposed by the invention. The aurora occurrence time distribution based on the 2-class clustering results is plotted, as detailed in FIG. 5. From the clustering results, 20 aurora images of each class were randomly selected for frame-by-frame comparison, as shown in FIG. 6.
Comparing FIG. 4 and FIG. 5, it can be seen that the curves of the two aurora occurrence time distributions are basically consistent. From the clustering result in FIG. 5, the first class of aurora shows a "bimodal" distribution before and after noon and rarely occurs near noon, while the second class mainly occurs before and around noon and rarely occurs on the dusk side. From FIG. 6, the aurora images grouped into the first class show morphologies such as single-arc, multi-arc and bright-spot structures, while the aurora images grouped into the second class are darker overall, with weak and uniform auroral intensity.

Claims (7)

1. An aurora image clustering method based on a deep neural network, characterized by comprising the following steps:
step 1: pre-training a VGGNet 16-based depth convolution self-encoder DCAE _ VGG using all aurora images to be clustered;
step 2: extracting a feature vector of each aurora image by using the trained DCAE _ VGG;
and step 3: the feature vectors are clustered using a clustering network consisting of a Siamese network, SpectralNet, and k-means clustering.
2. The aurora image clustering method based on the deep neural network as claimed in claim 1, wherein step 1 specifically includes:
step 1.1: combining VGGNet16 with an auto-encoder to build a DCAE _ VGG model structure of the deep convolutional auto-encoder;
step 1.2: resizing the aurora images to be clustered to 64 x 64 and inputting them into DCAE_VGG, and training DCAE_VGG until the loss function converges, wherein DCAE_VGG optimizes its parameter values by stochastic gradient descent and uses the mean squared error as the loss function;
step 1.3: and saving the model structure and the model parameters of the DCAE _ VGG.
3. The aurora image clustering method based on the deep neural network as claimed in claim 2, wherein in step 1.1 the DCAE_VGG model structure is composed of an encoding part, a hidden layer and a decoding part, wherein the encoding part retains the first five convolutional blocks of VGGNet16 and their corresponding pooling layers, a fully connected layer of length 512 replaces the original three fully connected layers and the softmax layer, the hidden layer is a fully connected layer of length 10 used to represent the low-dimensional feature vector of an aurora image, and the decoding part is composed of a fully connected layer of length 512 and four deconvolution layers.
4. The aurora image clustering method based on the deep neural network as claimed in claim 3, wherein step 2 calls the saved model structure and model parameters of DCAE_VGG, using only the model structure and parameters of the encoding part and the hidden layer of DCAE_VGG; the n aurora images to be clustered are input into the encoding part of DCAE_VGG, and the feature vectors {x_1, x_2, ..., x_n} of these images are obtained from the hidden layer of DCAE_VGG.
5. The aurora image clustering method based on the deep neural network as claimed in claim 4, wherein step 3 specifically includes:
step 3.1: training a Siamese network by constructing a positive/negative similarity pair training set, wherein the trained Siamese network is used for calculating a similarity matrix W in the SpectralNet in step 3.2;
step 3.2: training SpectralNet to obtain, for the inputs {x_1, x_2, ..., x_n}, the corresponding mapped outputs {y_1, y_2, ..., y_n}, i.e. y = F_θ(x), where θ denotes the parameters of SpectralNet;
step 3.3: clustering {y_1, y_2, ..., y_n} with the k-means clustering method to obtain the clustering result.
6. The aurora image clustering method based on the deep neural network as claimed in claim 5, wherein step 3.1 specifically includes:
step 3.1.1: constructing a training set of positive/negative similarity pairs for the Siamese network by computing the Euclidean distance between feature vectors; the k nearest neighbors of each point are marked as positive pairs, and an equal number of randomly selected non-neighboring points are marked as negative pairs; the value of k is determined by the chosen number of Siamese similarity pairs;
step 3.1.2: training the Siamese network to obtain the mapping z_i = G_θSiamese(x_i), wherein θSiamese denotes the parameters of the Siamese network, which maps each feature vector x_i to a point z_i in another space; the mapping is determined by minimizing a loss function during training, and the loss function L_Siamese of the Siamese network is defined as L_Siamese(z_i, z_j) = ||z_i - z_j||^2 for a positive pair (x_i, x_j) and L_Siamese(z_i, z_j) = max(c - ||z_i - z_j||, 0)^2 for a negative pair, wherein c is a constant; after the Siamese network is trained, ||z_i - z_j|| is used instead of the Euclidean distance ||x_i - x_j|| when calculating the similarity matrix W in SpectralNet in step 3.2.
7. The aurora image clustering method based on the deep neural network as claimed in claim 6, wherein step 3.2 specifically includes:
step 3.2.1: from the feature vectors {x_1, x_2, ..., x_n} of all aurora images, randomly drawing a mini-batch of m feature samples {x_1, x_2, ..., x_m}; after these m aurora-image feature samples pass through the orthogonal layer of SpectralNet, the corresponding mapped outputs Ỹ = {ỹ_1, ỹ_2, ..., ỹ_m} are obtained;
step 3.2.2: computing the Cholesky factorization LLᵀ = ỸᵀỸ, wherein L is a lower triangular matrix;
step 3.2.3: setting the weights of the orthogonal layer of SpectralNet to √m(L⁻¹)ᵀ;
step 3.2.4: from the feature vectors {x_1, x_2, ..., x_n} of all aurora images, randomly drawing a mini-batch of m feature samples {x_1, x_2, ..., x_m} and inputting these m aurora-image feature samples into the trained Siamese network to obtain an m x m similarity matrix W;
step 3.2.5: inputting the feature samples {x_1, x_2, ..., x_m} of the aurora images into SpectralNet and calculating the SpectralNet loss function L_SpectralNet(θ), whose gradient is used to adjust the weights of all layers of SpectralNet except the orthogonal layer, wherein L_SpectralNet(θ) is defined as L_SpectralNet(θ) = (1/m^2) Σ_{i,j} W_ij ||y_i - y_j||^2;
step 3.2.6: repeating steps 3.2.1-3.2.5; when the loss function L_SpectralNet(θ) converges, the parameters and weights of SpectralNet are determined and the training of SpectralNet is finished;
step 3.2.7: inputting the feature vectors {x_1, x_2, ..., x_n} of the aurora images to be clustered into the trained SpectralNet to obtain the corresponding mapped outputs {y_1, y_2, ..., y_n}.
CN201910950408.8A 2019-10-08 2019-10-08 Aurora image clustering method based on deep neural network Active CN110738249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910950408.8A CN110738249B (en) 2019-10-08 2019-10-08 Aurora image clustering method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910950408.8A CN110738249B (en) 2019-10-08 2019-10-08 Aurora image clustering method based on deep neural network

Publications (2)

Publication Number Publication Date
CN110738249A true CN110738249A (en) 2020-01-31
CN110738249B CN110738249B (en) 2022-08-26

Family

ID=69269893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910950408.8A Active CN110738249B (en) 2019-10-08 2019-10-08 Aurora image clustering method based on deep neural network

Country Status (1)

Country Link
CN (1) CN110738249B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170372201A1 (en) * 2016-06-22 2017-12-28 Massachusetts Institute Of Technology Secure Training of Multi-Party Deep Neural Network
CN108364006A (en) * 2018-01-17 2018-08-03 超凡影像科技股份有限公司 Medical Images Classification device and its construction method based on multi-mode deep learning
CN108613802A (en) * 2018-05-10 2018-10-02 重庆大学 A kind of mechanical failure diagnostic method based on depth mixed network structure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
半个夏天1314: "SpectralNet : spectral clustering using deep neural networks", 《HTTPS://BLOG.CSDN.NET/LEMON759597/ARTICLE/DETAILS/81564163》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778208A (en) * 2023-08-24 2023-09-19 吉林大学 Image clustering method based on depth network model
CN116778208B (en) * 2023-08-24 2023-11-10 吉林大学 Image clustering method based on depth network model

Also Published As

Publication number Publication date
CN110738249B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN112507793B (en) Ultra-short term photovoltaic power prediction method
CN107622104B (en) Character image identification and marking method and system
Liang et al. Detection and evaluation method of transmission line defects based on deep learning
CN108304826A (en) Facial expression recognizing method based on convolutional neural networks
CN108629338B (en) Face beauty prediction method based on LBP and convolutional neural network
CN108229550B (en) Cloud picture classification method based on multi-granularity cascade forest network
CN110827260B (en) Cloth defect classification method based on LBP characteristics and convolutional neural network
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
CN110457982A (en) A kind of crop disease image-recognizing method based on feature transfer learning
CN111127360B (en) Gray image transfer learning method based on automatic encoder
CN114694038A (en) High-resolution remote sensing image classification method and system based on deep learning
CN108595558B (en) Image annotation method based on data equalization strategy and multi-feature fusion
CN110853070A (en) Underwater sea cucumber image segmentation method based on significance and Grabcut
CN105138975B (en) A kind of area of skin color of human body dividing method based on degree of depth conviction network
CN109711411B (en) Image segmentation and identification method based on capsule neurons
CN111709443B (en) Calligraphy character style classification method based on rotation invariant convolution neural network
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
CN110334584A (en) A kind of gesture identification method based on the full convolutional network in region
CN111259733A (en) Point cloud image-based ship identification method and device
CN104036294A (en) Spectral tag based adaptive multi-spectral remote sensing image classification method
CN110738249B (en) Aurora image clustering method based on deep neural network
CN114329031A (en) Fine-grained bird image retrieval method based on graph neural network and deep hash
CN116933141B (en) Multispectral laser radar point cloud classification method based on multicore graph learning
CN113378962A (en) Clothing attribute identification method and system based on graph attention network
CN111191510B (en) Relation network-based remote sensing image small sample target identification method in complex scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant