CN113723455B - Strong gravitational lens system classification method and device based on metric learning - Google Patents


Info

Publication number
CN113723455B
CN113723455B (application CN202110853811A)
Authority
CN
China
Prior art keywords
layer
image
lens
gravitation
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110853811.6A
Other languages
Chinese (zh)
Other versions
CN113723455A (en)
Inventor
邹志强
张芷瑞
吴家皋
韩杨
洪舒欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202110853811.6A priority Critical patent/CN113723455B/en
Publication of CN113723455A publication Critical patent/CN113723455A/en
Application granted granted Critical
Publication of CN113723455B publication Critical patent/CN113723455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a strong gravitational lens system classification method and device based on metric learning. The method comprises the following steps: acquiring a gravitational lens image sample set; preprocessing the image samples in the sample set; pairing the preprocessed image samples to obtain a training sample set containing both same-category image pairs and different-category image pairs; and taking the paired data as model input, constructing a feature extraction model and a similarity metric calculation model, and training them. Building on the theory of metric learning, the application determines the category of the data from the similarity between the feature vectors extracted from the two images, so that the gravitational lens classification task can be completed with fewer training samples, avoiding the model-dependence problem of deep-learning image classification.

Description

Strong gravitational lens system classification method and device based on metric learning
Technical Field
The application relates to a strong gravitational lens system classification method and device based on metric learning, and belongs to the technical field of intelligent astrophysical data processing.
Background
Astronomy is an observational science with a long history; as technology has developed and observing equipment has been continuously upgraded, human knowledge of the universe has expanded from near to far and from shallow to deep. Dark matter is a theoretically proposed, invisible substance that may exist in the universe; it may be the main component of cosmic matter, yet it is not any known substance that makes up visible celestial bodies. On large scales, the only force capable of sustaining the motion of matter is gravity, and perfectly uniformly distributed matter produces no net force that could set it in motion. All of today's cosmic structure should therefore originate from tiny fluctuations in the very early matter distribution, which left traces in the Cosmic Microwave Background (CMB). If we understood the nature of dark matter, which makes up almost a quarter of the universe, and knew its distribution and fluctuations in the very early universe, we could better understand the universe and its evolution. Dark matter and dark energy are therefore of great importance to the astronomical community. However, since dark matter emits no radiation, it cannot be observed directly; its spatial distribution can instead be analyzed through the gravitational lensing effect, which makes gravitational lensing an important method for probing cosmic dark matter and dark energy. In the era when astronomical image data were scarce, gravitational lens systems were mainly identified by eye. With the establishment of projects such as LSST (the Large Synoptic Survey Telescope, a U.S. wide-field time-domain survey), EUCLID (the European Union's Euclid space survey mission) and CSST (the Chinese Space Station Telescope survey project), however, astronomy is entering the era of large-scale sky surveys.
These surveys will produce tens of thousands of astronomical images, and in order to study and explore dark matter and dark energy it is necessary to identify the gravitational lens systems in these images. For exponentially growing data sets, identification by eye leads to very low search efficiency, so automated searching becomes the solution that will qualitatively change the study of dark matter and dark energy in the coming large-scale survey era. Automated searching also aids the discovery of special and unknown celestial bodies in the universe, serving humanity's goal of exploring the mysteries of the cosmos.
Existing gravitational lens classification models are based on traditional deep learning, which depends strongly on both the data set and the model: a large amount of data is required to train a model with good classification performance. However, confirmed positive samples of strong gravitational lenses are very scarce (only a few hundred have been observed so far), which is insufficient to train a deep model. In addition, existing gravitational lens classification models place requirements on the authenticity of the training set: if simulated samples are used for training, the resulting model differs in performance from one trained on real data.
Disclosure of Invention
The application aims to provide a strong gravitational lens system classification method and device based on metric learning that solve the data and model dependence of deep-learning methods in the gravitational lens classification task, so that a deep model can be trained with a small number of training samples.
In order to solve the above technical problem, in one aspect the present application provides a method for classifying strong gravitational lens systems based on metric learning, comprising:
preprocessing a gravitational lens image to be classified and a gravitational lens image of known category, and inputting the preprocessed images as an image pair into a pre-trained gravitational lens classification model, which outputs whether the two image categories are the same or different;
wherein the gravitational lens classification model comprises two feature extraction models and a similarity metric calculation model, the outputs of the two feature extraction models are connected to the input of the similarity metric calculation model, and the two images are input into the two feature extraction models respectively.
Further, the two feature extraction models output a pair of feature vectors, and the similarity metric calculation model computes the Euclidean distance between the two images from this pair of feature vectors; the Euclidean distance value indicates whether the two images belong to the same category.
Further, if the computed Euclidean distance value is less than 0.5, the two images belong to the same category; if it is greater than 0.5, they do not.
Further, the feature extraction model comprises, connected in sequence: an input layer, a first convolution layer, a first normalization layer, a first activation layer, a first max-pooling layer, a first dropout layer, a second convolution layer, a second normalization layer, a second activation layer, a second max-pooling layer, a second dropout layer, a third convolution layer, a third normalization layer, a third activation layer, a third max-pooling layer, a third dropout layer, a flattening layer, a first fully connected layer, a fourth activation layer and a second fully connected layer, wherein the first, second, third and fourth activation layers all use the ReLU activation function.
Further, the first convolution layer has a 3×3 kernel, 32 channels, stride 1 and 'same' padding;
the first activation layer sets negative outputs of the first normalization layer to 0 and passes positive outputs through unchanged;
the first dropout layer randomly discards 50% of the neurons of the first convolution block;
the second convolution layer has a 3×3 kernel, 64 channels, stride 1 and 'same' padding;
the second activation layer sets negative outputs of the second normalization layer to 0 and passes positive outputs through unchanged;
the second dropout layer randomly discards 50% of the neurons of the second convolution block;
the third convolution layer has a 3×3 kernel, 256 channels, stride 1 and 'same' padding;
the third activation layer sets negative outputs of the third normalization layer to 0 and passes positive outputs through unchanged;
the third dropout layer randomly discards 50% of the neurons of the third convolution block;
and the flattening layer reshapes the data into one dimension.
Further, the training method of the gravitational lens classification model comprises the following steps:
acquiring a gravitational lens image sample set comprising positive samples and negative samples;
preprocessing the image samples in the gravitational lens image sample set;
pairing the preprocessed image samples to obtain a training sample set containing both same-category image pairs and different-category image pairs;
inputting each image pair of the training sample set into the two feature extraction models and extracting a pair of feature vectors;
inputting the paired feature vectors into the similarity metric calculation model to obtain the Euclidean distance between the images of the pair;
and training the gravitational lens classification model on the training sample set with a loss function and an optimization algorithm, with the goal that the Euclidean distance approaches 1 for image pairs of different categories and approaches 0 for image pairs of the same category.
Further, preprocessing the image samples in the gravitational lens image sample set comprises:
uniformly scaling the pixels of the image samples to the interval [0, 1], and cropping the scaled samples according to the position of the gravitational lens in the image to obtain an image that contains the complete feature region.
Further, pairing the preprocessed image samples comprises:
computing the minimum of the numbers of positive and negative samples in the preprocessed gravitational lens image sample set and taking this minimum as the reference m;
taking the i-th and (i+1)-th positive samples as a data pair with label 0, where i = 1 … n′ and n′ denotes the total number of positive samples in the gravitational lens image sample set; and taking the i-th positive sample together with a negative sample indexed by a random number in 0–m as a data pair with label 1.
In another aspect, the present application provides a strong gravitational lens system classification device based on metric learning, comprising:
two feature extraction models configured to extract features from the preprocessed gravitational lens image to be classified and the gravitational lens image of known category, obtaining a pair of feature vectors;
and a similarity metric calculation model configured to output, based on the paired feature vectors, whether the two image categories are the same or different.
Further, the strong gravitational lens system classification device based on metric learning further comprises:
a preprocessing module configured to scale the pixels of the gravitational lens image to be classified and the gravitational lens image of known category, and to crop the scaled images according to the position of the gravitational lens in the image, obtaining images that contain the complete feature region.
The application achieves the following beneficial technical effects: addressing the scarcity of gravitational lens samples, the application builds a gravitational lens classification model, preprocesses and pairs a small number of image samples to obtain a paired training set, and trains the model on it. The gravitational lens classification task can thus be completed with fewer training samples, the dependence of existing gravitational lens classification models on the data set and model is resolved, and the performance gap between models trained on real data and on generated samples is reduced.
Drawings
FIG. 1 is a flow chart of the method for classifying strong gravitational lens systems based on metric learning according to an embodiment of the present application;
FIG. 2 is a flow chart of gravitational lens image preprocessing in an embodiment of the present application;
FIG. 3 is a block diagram of the feature extraction model and the similarity metric calculation model in an embodiment of the application.
Detailed Description
The application is further described below with reference to specific embodiments. The following examples serve only to illustrate the technical solution of the present application more clearly and are not intended to limit its scope of protection.
As described above, existing gravitational lens classification models are all based on traditional deep learning, which depends strongly on the data set and the model and requires a large amount of training data to obtain a model with good classification performance; since positive samples of strong gravitational lenses are very scarce, training a deep model in this way is not feasible.
To this end, in one embodiment the present application provides a method of classifying strong gravitational lens systems based on metric learning. As shown in fig. 1, the method includes:
Step 1, acquiring a gravitational lens image sample set;
The acquired gravitational lens image sample set comprises positive samples and negative samples, where the positive samples all belong to one category and the negative samples belong to categories different from the positive samples.
Step 2, preprocessing the image samples in the gravitational lens image sample set;
The gravitational lens data in the sample set are preprocessed by standardization: the image pixels are uniformly scaled to the interval [0, 1], and the image is cropped according to the position of the gravitational lens so that the feature region is fully retained while the image is kept as small as possible.
In one embodiment, as shown in fig. 2, the preprocessing specifically includes:
input: gravitation lens system sample set g= { S 1 ,S 2 ,S 3 ,…,S n },S i Representing the ith attractive lens system sample;
and (3) outputting: the preprocessed gravitational lens system image.
a1. Traversing the gravitation lens system sample set, setting a circulation variable i from 1 to n, wherein n represents the total number of the real gravitation lens system data, and i=1 at the beginning;
a2. traversing the data of each gravitation lens system, carrying out normalization processing on each pixel of each gravitation lens system, wherein the input size of the gravitation lens is 110 x 3, compressing each pixel value from [0,255] to [0,1], and then cutting the image to the data size of 64 x 3;
a3. performing i=i+1;
a4. and when i < n, jumping to a2, otherwise finishing preprocessing of the gravitation lens system data.
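The preprocessing loop a1–a4 can be sketched in NumPy as follows; the function names are illustrative, and the center placement of the crop is an assumption (the patent only specifies scaling to [0, 1] and cropping around the lens position):

```python
import numpy as np

def preprocess_lens_image(img, crop_size=64):
    """Normalize a lens image to [0, 1] and center-crop it to crop_size x crop_size.

    `img` is assumed to be an (H, W, 3) uint8 array with pixel values in
    [0, 255]; the crop is taken around the image center, where the lens
    is assumed to sit.
    """
    img = img.astype(np.float32) / 255.0          # compress [0, 255] -> [0, 1]
    h, w = img.shape[:2]
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return img[top:top + crop_size, left:left + crop_size, :]

def preprocess_sample_set(samples, crop_size=64):
    """Apply the per-image preprocessing to every sample in the set (steps a1-a4)."""
    return [preprocess_lens_image(s, crop_size) for s in samples]
```

With the patent's stated sizes, a 110×110×3 input comes out as a 64×64×3 array with values in [0, 1].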
Step 3, pairing the preprocessed image samples to obtain a training sample set;
the application provides a method based on metric learning, which is different from the traditional deep learning method, wherein the input of the method is two pairs of data, and if the data types are the same, the label is 0; if the data categories are different, the tag is 1.
In one embodiment, pairing the preprocessed image samples includes:
input: preprocessed gravitational lens system dataset G 1 ={S 1 ′,S 2 ′,S 3 ′,…,S n ′},S i ' represents the ith attractive lens system sample;
and (3) outputting: paired gravitational lens system dataset, training sample set G 2 ={{x 11 ,x 12 ,y 1 },{x 21 ,x 22 ,y 2 }{x 31 ,x 32 ,y 3 },...,{x n1 ,x n2 ,y n }, where x i1 ,x i2 Representing the i-th group of input data pairs, y i The label of the data pair is input for the i-th group.
b1. Traverse the data set G1 with a loop variable i running from 1 to n′, where n′ denotes the number of positive samples in the gravitational lens system data set; initially i = 1;
b2. Compute the minimum of the numbers of positive and negative samples and take it as the reference m;
b3. Take the i-th and (i+1)-th positive samples as a data pair with label 0; take the i-th positive sample and the negative sample indexed by a random number in 0–m as a data pair with label 1;
b4. Set i = i + 1;
b5. If i < n′, jump to b3; otherwise the pairing of the gravitational lens system data is complete.
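A minimal sketch of the pairing procedure b1–b5, assuming NumPy arrays and drawing the negative index from 0..m−1 so it stays in bounds (the function name and seed are illustrative):

```python
import numpy as np

def pair_samples(positives, negatives, rng=None):
    """Pair preprocessed samples as in steps b1-b5.

    Consecutive positives form a same-class pair (label 0); each positive
    matched with a randomly chosen negative (index drawn from 0..m-1, where
    m is the smaller of the two class sizes) forms a different-class pair
    (label 1).
    """
    rng = rng or np.random.default_rng(0)
    m = min(len(positives), len(negatives))    # reference m from step b2
    pairs = []
    for i in range(len(positives) - 1):
        # consecutive positives -> same-class pair, label 0
        pairs.append((positives[i], positives[i + 1], 0))
        # positive + random negative -> different-class pair, label 1
        j = int(rng.integers(0, m))
        pairs.append((positives[i], negatives[j], 1))
    return pairs
```

This yields a balanced training set: one label-0 pair and one label-1 pair per positive sample traversed.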
Step 4, constructing an gravitation lens classification model;
the gravity lens classification model comprises two feature extraction models and a similarity measurement calculation model, wherein the two feature extraction models are twin network models, and the output ends of the two feature extraction models are connected with the input end of the similarity measurement calculation model.
4.1, building the feature extraction model:
Input: gravitational lens system training samples
Output: feature vectors after feature extraction
1) Readjust the data structure through a Reshape layer;
2) Build the feature extraction model from convolution layers, pooling layers, a flattening layer, fully connected layers, and so on;
3) Reduce the data reaching the final fully connected layer to 128 dimensions, obtaining a 1×128 feature vector.
As shown in fig. 3, the feature extraction model specifically comprises:
First part (input layer): the input data is a paired training sample comprising a data pair x1, x2 of size (64, 64, 3) and a label y;
Second part (first convolution layer): a convolution layer with kernel size 3, 32 channels, stride 1 and 'same' padding; the resulting data is 64×64×32;
Third part (first normalization layer): a batch normalization (BatchNormalization) layer, to avoid gradient explosion;
Fourth part (first activation layer): a ReLU (Rectified Linear Unit) activation that maps negative values to 0 and leaves positive values unchanged;
Fifth part (first max-pooling layer): a 2×2 max-pooling layer that compresses the data from 64×64×32 to 32×32×32;
Sixth part (first dropout layer): a Dropout layer that randomly discards 50% of the neurons of the preceding convolution block, controlling the model size;
Seventh part (second convolution layer): a convolution layer with kernel size 3, 64 channels, stride 1 and 'same' padding; the resulting data is 32×32×64;
Eighth part (second normalization layer): a batch normalization (BatchNormalization) layer, to avoid gradient explosion;
Ninth part (second activation layer): a ReLU activation that maps negative values to 0 and leaves positive values unchanged;
Tenth part (second max-pooling layer): a 2×2 max-pooling layer that compresses the data from 32×32×64 to 16×16×64;
Eleventh part (second dropout layer): a Dropout layer that randomly discards 50% of the neurons of the preceding convolution block, controlling the model size;
Twelfth part (third convolution layer): a convolution layer with kernel size 3, 256 channels, stride 1 and 'same' padding; the resulting data is 16×16×256;
Thirteenth part (third normalization layer): a batch normalization (BatchNormalization) layer, to avoid gradient explosion;
Fourteenth part (third activation layer): a ReLU activation that maps negative values to 0 and leaves positive values unchanged;
Fifteenth part (third max-pooling layer): a 2×2 max-pooling layer that compresses the data from 16×16×256 to 8×8×256;
Sixteenth part (third dropout layer): a Dropout layer that randomly discards 50% of the neurons of the preceding convolution block, controlling the model size;
Seventeenth part (flattening layer): a Flatten layer that reshapes the data into one dimension of size 16384×1;
Eighteenth part (first fully connected layer): a fully connected layer that reduces the data to 512×1;
Nineteenth part (fourth activation layer): a ReLU activation unit;
Twentieth part (second fully connected layer): a fully connected layer that compresses the data to 128×1 as the output.
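The layer stack described above can be sanity-checked by tracing tensor shapes in pure Python (no deep-learning framework needed); the trace below reproduces the stated sizes, including the 16384-element flattened vector:

```python
def trace_feature_extractor_shapes(input_shape=(64, 64, 3)):
    """Trace tensor shapes through the described stack: three blocks of
    [3x3 'same' conv, stride 1 -> BN -> ReLU -> 2x2 max-pool -> dropout],
    then flatten and two dense layers (512 -> 128).

    'same' convolutions keep H and W unchanged; each 2x2 pool halves them.
    """
    h, w, _ = input_shape
    shapes = []
    for channels in (32, 64, 256):
        # conv with 'same' padding, stride 1: spatial size unchanged
        shapes.append((h, w, channels))
        # 2x2 max pooling: spatial size halved
        h, w = h // 2, w // 2
        shapes.append((h, w, channels))
    flat = h * w * shapes[-1][2]
    shapes.append((flat,))     # flatten layer
    shapes.append((512,))      # first fully connected layer
    shapes.append((128,))      # output embedding
    return shapes
```

Running the trace confirms the intermediate sizes 64×64×32 → 32×32×32 → … → 8×8×256 and the flattened length 8·8·256 = 16384.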
4.2, building the similarity metric calculation model
Input: the paired feature vectors produced by the feature extraction models of step 4.1
Output: the similarity metric distance between the paired samples
1) Insert the two feature vectors into the Euclidean distance formula;
2) Through the loss function and the optimization method, drive the Euclidean distance toward 1 for image pairs with label 1 and toward 0 for image pairs with label 0.
As shown in fig. 3, the similarity metric calculation model specifically comprises:
First part: the input is two 128×1 feature vectors;
Second part: the two feature vectors are inserted into the Euclidean distance formula;
Third part: a distance value is obtained that determines whether the input data pair belongs to the same category: a distance value below 0.5 indicates the same category, and a distance value above 0.5 indicates different categories.
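The distance computation and the 0.5 decision threshold can be sketched as follows (function names are illustrative):

```python
import numpy as np

def euclidean_distance(f1, f2):
    """Euclidean distance between two 128-d embedding vectors."""
    return float(np.sqrt(np.sum((np.asarray(f1) - np.asarray(f2)) ** 2)))

def same_category(f1, f2, threshold=0.5):
    """Distance below the threshold -> same category (label 0);
    distance above it -> different categories (label 1)."""
    return euclidean_distance(f1, f2) < threshold
```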
Step 5, training the gravitational lens classification model using the training sample set obtained in step 3.
The method specifically comprises the following steps:
5.1, input each image pair of the training sample set into the two feature extraction models and extract the paired feature vectors;
5.2, input the paired feature vectors into the similarity metric calculation model to obtain the Euclidean distance between the images of the pair;
5.3, train the gravitational lens classification model with a loss function and an optimization algorithm, with the goal that the Euclidean distance approaches 1 for image pairs of different categories and approaches 0 for image pairs of the same category.
Specifically, after the model is constructed it is trained with a batch size of 64 using the contrastive loss function; the activation function is the rectified linear unit, which supplies the nonlinear transformation; parameters are optimized with the RMSprop algorithm using a learning rate of 0.01, a decay term of 1e-08 and a momentum of 0.9; and the number of iterations is set to 100 to obtain the optimal model.
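The patent names the contrastive loss but does not give its formula; one common form consistent with the stated targets (distance → 0 for label 0, distance pushed out to a margin of 1 for label 1) is sketched below as an assumption:

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    """A common form of the contrastive loss (the exact formula used by the
    patent is not stated, so this particular form is an assumption).

    y = 0 for same-category pairs: their distance d is pulled toward 0;
    y = 1 for different-category pairs: d is pushed out to at least `margin`.
    """
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    same_term = (1.0 - y) * d ** 2                       # penalize distant same pairs
    diff_term = y * np.maximum(margin - d, 0.0) ** 2     # penalize close different pairs
    return float(np.mean(same_term + diff_term) / 2.0)
```

A same-category pair at distance 0 and a different-category pair at the margin both incur zero loss, which matches the training targets of step 5.3.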
After the set number of training iterations, a classification model with trained weights is obtained.
After the optimal gravitational lens classification model has been trained, it can be used directly to classify gravitational lens systems, as follows:
preprocess the gravitational lens image to be classified and a gravitational lens image of known category, input the preprocessed images as an image pair into the two feature extraction models to obtain a pair of feature vectors, input the paired feature vectors into the similarity metric calculation model, and output whether the two image categories are the same or different.
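The inference step can be sketched end to end as follows, with `embed` standing in for the trained feature extraction model (a placeholder, not the patent's actual network):

```python
import numpy as np

def classify_pair(embed, img_a, img_b, threshold=0.5):
    """Sketch of the inference step: embed both preprocessed images with the
    shared-weight feature extractor `embed`, compare the embeddings by
    Euclidean distance, and report whether the pair is the same category.
    `embed` is a placeholder for the trained model.
    """
    fa, fb = np.asarray(embed(img_a)), np.asarray(embed(img_b))
    dist = float(np.sqrt(np.sum((fa - fb) ** 2)))
    return {"distance": dist, "same_class": dist < threshold}
```

For example, with a toy embedding that maps an image to a constant 128-d vector, identical inputs land below the 0.5 threshold and clearly different inputs land above it.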
In another embodiment, the present application provides a strong gravitational lens system classification device based on metric learning, comprising:
two feature extraction models configured to extract features from the preprocessed gravitational lens image to be classified and the gravitational lens image of known category, obtaining a pair of feature vectors;
and a similarity metric calculation model configured to output, based on the paired feature vectors, whether the two image categories are the same or different.
Further, the strong gravitational lens system classification device based on metric learning further comprises:
a preprocessing module configured to scale the pixels of the gravitational lens image to be classified and the gravitational lens image of known category, and to crop the scaled images according to the position of the gravitational lens in the image, obtaining images that contain the complete feature region.
The gravitational lens classification method of the application fuses metric learning with few-shot learning: addressing the scarcity of gravitational lens samples, it proposes a Siamese network model from metric learning and determines the category of the data from the similarity between the extracted feature vectors, thereby enabling training and classification of the model on small data sets and avoiding the model-dependence problem of deep-learning image classification.
The present application has been disclosed in terms of preferred embodiments, but it is not limited thereto; technical solutions obtained by equivalent substitution or equivalent transformation fall within the scope of protection of the present application.

Claims (8)

1. A method for classifying a strong gravitational lens system based on metric learning, comprising:
preprocessing a gravitational lens image to be classified and a gravitational lens image of known class, and inputting the preprocessed images as an image pair into a pre-trained gravitational lens classification model, wherein the gravitational lens classification model outputs whether the classes of the two images are the same or different;
the gravitational lens classification model comprises two feature extraction models and a similarity measure calculation model, wherein the output ends of the two feature extraction models are connected to the input end of the similarity measure calculation model, and the two images are input into the two feature extraction models respectively;
the feature extraction model comprises an input layer, a first convolution layer, a first normalization layer, a first activation layer, a first maximum pooling layer, a first dropout layer, a second convolution layer, a second normalization layer, a second activation layer, a second maximum pooling layer, a second dropout layer, a third convolution layer, a third normalization layer, a third activation layer, a third maximum pooling layer, a third dropout layer, a flattening layer, a first fully connected layer, a fourth activation layer and a second fully connected layer connected in sequence, wherein the first, second, third and fourth activation layers all use the ReLU activation function;
the training method of the gravitational lens classification model comprises the following steps:
acquiring a gravitational lens image sample set comprising positive samples and negative samples;
preprocessing the image samples in the gravitational lens image sample set;
pairing the preprocessed image samples to obtain a training sample set comprising same-class image pairs and different-class image pairs;
inputting each image pair in the training sample set into the two feature extraction models to extract paired feature vectors;
inputting the paired feature vectors into the similarity measure calculation model to obtain the Euclidean distance between the images of the pair;
and training the gravitational lens classification model on the training sample set with a loss function and an optimization algorithm, with the objective that the Euclidean distance approaches 1 for images of different classes and approaches 0 for images of the same class.
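The distance, training objective, and decision threshold described in claims 1–3 can be sketched as follows. This is a minimal NumPy sketch; the contrastive-loss form with margin 1 and the 1/2 weighting are assumptions, since the claims only state that same-class distances should approach 0 and different-class distances should approach 1.

```python
import numpy as np

def euclidean_distance(f1, f2):
    # Euclidean distance between the paired feature vectors (claim 2)
    return float(np.sqrt(np.sum((f1 - f2) ** 2)))

def contrastive_loss(d, label, margin=1.0):
    # label 0: same class      -> pull the distance toward 0
    # label 1: different class -> push the distance toward the margin (1)
    same = (1 - label) * d ** 2
    diff = label * max(margin - d, 0.0) ** 2
    return 0.5 * (same + diff)

def is_same_class(d, threshold=0.5):
    # claim 3: a distance below 0.5 means the two images share a class
    return d < threshold
```

With this objective, a perfectly trained model incurs zero loss when same-class pairs sit at distance 0 and different-class pairs at distance 1, matching the training target stated in claim 1.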
2. The method of claim 1, wherein the two feature extraction models output paired feature vectors, and the similarity measure calculation model calculates the Euclidean distance between the two images from the paired feature vectors, the Euclidean distance indicating whether the two images belong to the same class.
3. The method of claim 2, wherein the two images belong to the same class if the calculated Euclidean distance is less than 0.5, and to different classes if it is greater than 0.5.
4. The strong gravitational lens system classification method based on metric learning of claim 1, wherein:
the first convolution layer has a kernel size of 3, 32 channels, a stride of 1, and 'same' padding;
the first activation layer sets negative outputs of the first normalization layer to 0 and passes positive outputs through unchanged;
the first dropout layer randomly discards 50% of the neurons of the first convolution layer;
the second convolution layer has a kernel size of 3, 64 channels, a stride of 1, and 'same' padding;
the second activation layer sets negative outputs of the second normalization layer to 0 and passes positive outputs through unchanged;
the second dropout layer randomly discards 50% of the neurons of the second convolution layer;
the third convolution layer has a kernel size of 3, 256 channels, a stride of 1, and 'same' padding;
the third activation layer sets negative outputs of the third normalization layer to 0 and passes positive outputs through unchanged;
the third dropout layer randomly discards 50% of the neurons of the third convolution layer;
the flattening layer flattens the data into a one-dimensional vector.
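The branch architecture of claims 1 and 4 might be sketched in PyTorch as below. The 64×64 single-channel input size, the 512-unit first fully connected layer, and the 128-dimensional embedding are assumptions not fixed by the claims; the conv → normalization → ReLU → max-pool → dropout stage order and the channel counts 32/64/256 follow the claims.

```python
import torch
import torch.nn as nn

def _stage(c_in, c_out):
    # One claimed stage: conv (kernel 3, stride 1, 'same' padding),
    # normalization, ReLU, 2x2 max pooling, 50% dropout (claim 4).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Dropout(0.5),
    )

class FeatureExtractor(nn.Module):
    # Hypothetical sketch of one Siamese branch; two weight-sharing
    # copies of this module would feed the similarity measure model.
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            _stage(1, 32),    # 64x64 -> 32x32
            _stage(32, 64),   # 32x32 -> 16x16
            _stage(64, 256),  # 16x16 -> 8x8
            nn.Flatten(),     # claim 4: flatten to a one-dimensional vector
            nn.Linear(256 * 8 * 8, 512),
            nn.ReLU(),        # the fourth activation layer
            nn.Linear(512, embed_dim),
        )

    def forward(self, x):
        return self.net(x)
```

Because each of the three pooling layers halves the spatial size, an assumed 64×64 input is flattened to 256 × 8 × 8 = 16384 features before the fully connected layers.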
5. The method of claim 1, wherein preprocessing the image samples in the gravitational lens image sample set comprises:
uniformly scaling the pixels of the image samples in the gravitational lens image sample set to the interval [0, 1], and cropping the scaled image samples according to the position of the gravitational lens in the image to obtain an image containing all characteristic regions.
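A minimal sketch of the preprocessing in claim 5. Min-max scaling, the 48-pixel crop size, and cropping about the image centre (standing in for the lens position, which the claim does not specify how to locate) are all assumptions made for illustration.

```python
import numpy as np

def preprocess(img, crop=48):
    # Scale pixel values into [0, 1] (min-max scaling is assumed).
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # Crop around the image centre, taken here as the lens position,
    # to keep the region containing all characteristic features.
    h, w = scaled.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    return scaled[top:top + crop, left:left + crop]
```

On a 64×64 input this yields a 48×48 patch with all values in [0, 1], ready to be paired and fed to the feature extraction models.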
6. The strong gravitational lens system classification method based on metric learning of claim 1, wherein said pairing the preprocessed image samples comprises:
calculating the minimum of the numbers of positive and negative samples in the preprocessed gravitational lens image sample set, and taking this minimum as a reference m;
taking the ith and (i+1)th data in the positive samples as a data pair and adding the tag 0 to the data pair, wherein i = … n' and n' denotes the total number of positive samples in the gravitational lens image sample set; and taking the ith data in the positive samples together with the data at a random index in 0–m in the negative samples as a data pair, and adding the tag 1 to that data pair.
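The pairing scheme of claim 6 might be sketched as follows. Adjacent positive samples form tag-0 (same-class) pairs, and each positive sample paired with a randomly indexed negative sample forms a tag-1 (different-class) pair; emitting exactly one negative pair per positive pair, and the fixed random seed, are assumptions for illustration.

```python
import random

def build_pairs(pos, neg, seed=0):
    # m is the smaller of the positive and negative sample counts,
    # used as the bound for the random negative index (claim 6).
    rng = random.Random(seed)
    m = min(len(pos), len(neg))
    pairs = []
    for i in range(len(pos) - 1):
        # same-class pair: ith and (i+1)th positive samples, tag 0
        pairs.append((pos[i], pos[i + 1], 0))
        # different-class pair: ith positive with a random negative, tag 1
        pairs.append((pos[i], neg[rng.randrange(m)], 1))
    return pairs
```

This produces a balanced training sample set of same-class and different-class image pairs, matching the tag convention used by the contrastive training objective.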
7. A strong gravitational lens system classification device based on metric learning, comprising:
two feature extraction models configured to perform feature extraction on the preprocessed gravitational lens image to be classified and the gravitational lens image of known class to obtain paired feature vectors;
a similarity measure calculation model configured to output, based on the paired feature vectors, whether the two image classes are the same or different;
the device being used to implement the strong gravitational lens system classification method based on metric learning of any of claims 1-6.
8. The strong gravitational lens system classification device based on metric learning of claim 7, further comprising:
a preprocessing module configured to scale the pixels of the gravitational lens image to be classified and the gravitational lens image of known class, and to crop the scaled images according to the position of the gravitational lens in the image to obtain an image containing all characteristic regions.
CN202110853811.6A 2021-07-28 2021-07-28 Strong gravitation lens system classification method and device based on metric learning Active CN113723455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110853811.6A CN113723455B (en) 2021-07-28 2021-07-28 Strong gravitation lens system classification method and device based on metric learning


Publications (2)

Publication Number Publication Date
CN113723455A CN113723455A (en) 2021-11-30
CN113723455B true CN113723455B (en) 2023-10-13

Family

ID=78674112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110853811.6A Active CN113723455B (en) 2021-07-28 2021-07-28 Strong gravitation lens system classification method and device based on metric learning

Country Status (1)

Country Link
CN (1) CN113723455B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240249B (en) * 2022-07-07 2023-06-06 湖北大学 Feature extraction classification metric learning method, system and storage medium for face recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852292A (en) * 2019-11-18 2020-02-28 南京邮电大学 Sketch face recognition method based on cross-modal multi-task depth measurement learning
CN111723675A (en) * 2020-05-26 2020-09-29 河海大学 Remote sensing image scene classification method based on multiple similarity measurement deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091270A1 (en) * 2015-09-30 2017-03-30 Linkedln Corporation Organizational url enrichment



Similar Documents

Publication Publication Date Title
CN111259930B (en) General target detection method of self-adaptive attention guidance mechanism
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108052911B (en) Deep learning-based multi-mode remote sensing image high-level feature fusion classification method
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN108108764B (en) Visual SLAM loop detection method based on random forest
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN112633350B (en) Multi-scale point cloud classification implementation method based on graph convolution
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN104599275A (en) Understanding method of non-parametric RGB-D scene based on probabilistic graphical model
CN111626267B (en) Hyperspectral remote sensing image classification method using void convolution
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
CN116206185A (en) Lightweight small target detection method based on improved YOLOv7
CN110991284A (en) Optical remote sensing image statement description generation method based on scene pre-classification
CN113723455B (en) Strong gravitation lens system classification method and device based on metric learning
CN114842264A (en) Hyperspectral image classification method based on multi-scale spatial spectral feature joint learning
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
Polewski et al. Combining active and semisupervised learning of remote sensing data within a renyi entropy regularization framework
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN114627424A (en) Gait recognition method and system based on visual angle transformation
CN114170446A (en) Temperature and brightness characteristic extraction method based on deep fusion neural network
CN117593666A (en) Geomagnetic station data prediction method and system for aurora image
CN113837046A (en) Small sample remote sensing image scene classification method based on iterative feature distribution learning
CN117132910A (en) Vehicle detection method and device for unmanned aerial vehicle and storage medium
CN115272412B (en) Edge calculation-based low-small slow target detection method and tracking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant