CN116128048A - Optical remote sensing image cloud detection model training method, detection method and device - Google Patents

Optical remote sensing image cloud detection model training method, detection method and device

Info

Publication number
CN116128048A
CN116128048A (Application CN202310181292.2A)
Authority
CN
China
Prior art keywords
model
feature
remote sensing
optical remote
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310181292.2A
Other languages
Chinese (zh)
Inventor
闫志远 (Yan Zhiyuan)
杨竹君 (Yang Zhujun)
李俊希 (Li Junxi)
刁文辉 (Diao Wenhui)
戴威 (Dai Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202310181292.2A priority Critical patent/CN116128048A/en
Publication of CN116128048A publication Critical patent/CN116128048A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778Active pattern-learning, e.g. online learning of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an optical remote sensing image cloud detection model training method, comprising: inputting optical remote sensing images into an untrained cloud detection model and training it to obtain a teacher model; with the parameters of the teacher model fixed, respectively inputting the optical remote sensing images into the teacher model and a preset student model, which respectively output a first feature map and a first prediction probability distribution, and a second feature map and a second prediction probability distribution; determining first feature relevance knowledge from the first feature map and second feature relevance knowledge from the second feature map; calculating a loss value for the preset student model from the first feature relevance knowledge, the second feature relevance knowledge, the first and second prediction probability distributions, a prediction probability value calculated from the first prediction probability distribution, and the truth label; and updating the parameters of the preset student model according to the loss value until it converges, the converged preset student model being taken as the target optical remote sensing image cloud detection model.

Description

Optical remote sensing image cloud detection model training method, detection method and device
Technical Field
The disclosure relates to the technical field of optical remote sensing, and in particular to an optical remote sensing image cloud detection model training method, a detection method and a detection device.
Background
With the rapid development of remote sensing technology, satellite images are widely used in fields such as meteorological early warning, disaster monitoring and military surveillance. However, because sensors are strongly affected by atmospheric density, cloud-layer variation and the like, many images suffer from cloud occlusion. Cloud detection is therefore key to the subsequent identification, classification and interpretation of these images, and is one of the foundations of remote sensing image restoration. Cloud detection methods in the related art achieve high-precision prediction with deep neural networks; however, the deep neural network models obtained by related-art training methods have drawbacks such as a large number of parameters, a high floating-point computation cost, and heavy consumption of storage and computing resources.
Disclosure of Invention
In view of the above, the present disclosure provides an optical remote sensing image cloud detection model training method, a detection method and a device for improving cloud detection efficiency.
According to a first aspect of the present disclosure, there is provided an optical remote sensing image cloud detection model training method, including:
inputting an optical remote sensing image from the optical remote sensing image dataset into an untrained deep-neural-network-based cloud detection model, and training it to obtain a trained deep-neural-network-based cloud detection model as a teacher model;
with the model parameters of the teacher model fixed, respectively inputting optical remote sensing images from the optical remote sensing image dataset into the teacher model and a preset student model, and respectively outputting a first feature map and a second feature map extracted by the models, and a first prediction probability distribution and a second prediction probability distribution predicted by the models, wherein a prediction probability value calculated from the first prediction probability distribution characterizes the prediction capability knowledge of the teacher model;
determining first feature relevance knowledge from the first feature map, and second feature relevance knowledge from the second feature map;
calculating a loss value for the preset student model according to the first feature relevance knowledge, the second feature relevance knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value calculated from the first prediction probability distribution, and the truth label of the optical remote sensing image;
and updating the parameters of the preset student model according to the loss value until the preset student model converges, taking the converged preset student model as the target optical remote sensing image cloud detection model.
According to an embodiment of the disclosure, determining the first feature relevance knowledge from the first feature map, and the second feature relevance knowledge from the second feature map, includes:
downsampling the truth label to match the dimensions of the first feature map and the second feature map, wherein the optical remote sensing image dataset comprises the optical remote sensing images and truth labels of manually annotated cloud regions;
determining feature mappings of the non-cloud region and the cloud region of the teacher model from the first feature map; determining feature mappings of the non-cloud region and the cloud region of the preset student model from the second feature map;
computing, with global average pooling, the feature centers of the non-cloud region and the cloud region of the teacher model, and the feature centers of the non-cloud region and the cloud region of the preset student model, respectively;
calculating the first feature relevance knowledge from the cosine distances between the feature mappings of the teacher model and the corresponding feature centers; and calculating the second feature relevance knowledge from the feature mappings of the preset student model and the corresponding feature centers.
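The steps above can be sketched numerically. In this illustrative example (the function and variable names are ours, not the patent's, and the cosine measure is written as a similarity rather than a distance), the downsampled truth mask splits the feature-map pixels into cloud and non-cloud regions, global average pooling yields each region's feature center, and the relevance knowledge is the cosine term between every pixel feature and its own region center:

```python
import numpy as np

def feature_relevance(feat, mask):
    """Per-pixel cosine term between each region's pixel features and that
    region's feature center (obtained by global average pooling).

    feat: (C, H, W) feature map; mask: (H, W) truth label downsampled to
    the feature-map size (1 = cloud, 0 = non-cloud).
    """
    C = feat.shape[0]
    flat = feat.reshape(C, -1)              # (C, H*W) pixel features
    m = mask.reshape(-1).astype(bool)

    def region(cols):
        if cols.shape[1] == 0:              # region absent in this patch
            return np.empty(0)
        center = cols.mean(axis=1, keepdims=True)  # GAP -> feature center
        num = (cols * center).sum(axis=0)
        den = np.linalg.norm(cols, axis=0) * np.linalg.norm(center) + 1e-8
        return num / den                    # cosine similarity per pixel

    # non-cloud knowledge first, then cloud knowledge, combined
    return np.concatenate([region(flat[:, ~m]), region(flat[:, m])])
```

The same routine would be applied to the teacher's and the student's feature maps to obtain the first and second feature relevance knowledge respectively.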
According to an embodiment of the present disclosure, calculating the first feature relevance knowledge from the cosine distances between the feature mappings of the teacher model and the corresponding feature centers includes:
calculating the feature relevance knowledge of the non-cloud region of the teacher model from the cosine distance between the first feature mapping of the non-cloud region of the teacher model and the corresponding feature center;
calculating the feature relevance knowledge of the cloud region of the teacher model from the cosine distance between the second feature mapping of the cloud region of the teacher model and the corresponding feature center;
combining the feature relevance knowledge of the non-cloud region of the teacher model with that of the cloud region of the teacher model as the first feature relevance knowledge; and
calculating the second feature relevance knowledge from the feature mappings of the preset student model and the corresponding feature centers includes:
calculating the feature relevance knowledge of the non-cloud region of the preset student model from the cosine distance between the third feature mapping of the non-cloud region of the preset student model and the corresponding feature center;
calculating the feature relevance knowledge of the cloud region of the preset student model from the cosine distance between the fourth feature mapping of the cloud region of the preset student model and the corresponding feature center;
and combining the feature relevance knowledge of the non-cloud region of the preset student model with that of the cloud region of the preset student model as the second feature relevance knowledge.
According to an embodiment of the present disclosure, the loss value includes: a feature relevance loss value, a pixel-adaptive distillation loss value, and a cross entropy loss value;
the feature relevance loss value is calculated by applying the first feature relevance knowledge and the second feature relevance knowledge to a KL divergence, the KL divergence serving as the loss function;
the pixel-adaptive distillation loss value is calculated as follows: computing the inner product of the first prediction probability distribution and the one-hot truth label to obtain a prediction probability value; using the KL divergence between the first prediction probability distribution and the second prediction probability distribution as the loss function, multiplying the prediction probability value by the KL divergence to obtain a weighted loss value, the weighted loss value serving as the pixel-adaptive distillation loss value; and
the cross entropy loss value is calculated as follows: inputting the optical remote sensing images in the optical remote sensing image dataset into the preset student model and outputting the second prediction probability distribution of the preset student model; and calculating the cross entropy loss value from the truth label of the optical remote sensing image and the second prediction probability distribution.
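The last two loss terms admit a minimal numerical sketch (function names are ours and the discrete KL formula is the standard one; the patent does not fix an implementation). Per-pixel class probabilities are laid out as a (classes, pixels) array; the teacher's probability on the true class is obtained as the inner product with the one-hot label and weights the per-pixel teacher-student KL divergence:

```python
import numpy as np

def pixel_adaptive_distill_loss(p_t, p_s, y, eps=1e-8):
    """p_t, p_s: (K, N) teacher / student per-pixel class probabilities;
    y: (N,) integer truth labels. Pixels the teacher predicts confidently
    contribute more to the distillation loss."""
    K = p_t.shape[0]
    one_hot = np.eye(K)[y].T                        # (K, N)
    w = (p_t * one_hot).sum(axis=0)                 # teacher prob of true class
    kl = (p_t * (np.log(p_t + eps) - np.log(p_s + eps))).sum(axis=0)
    return float((w * kl).mean())                   # weighted per-pixel KL

def cross_entropy_loss(p_s, y, eps=1e-8):
    """Standard per-pixel cross entropy of the student prediction."""
    return float(-np.log(p_s[y, np.arange(p_s.shape[1])] + eps).mean())
```

When teacher and student agree exactly, the distillation term vanishes and only the cross entropy against the truth label remains.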
According to an embodiment of the present disclosure, the teacher model is composed of a deep neural network feature extractor and a predictor.
According to an embodiment of the present disclosure, fixing the model parameters of the teacher model includes:
fixing the network structure and weights of the teacher model.
A second aspect of the present disclosure provides an optical remote sensing image cloud detection method, including:
inputting the optical remote sensing image dataset into a target optical remote sensing image cloud detection model, and outputting an optical remote sensing image cloud detection result;
the target optical remote sensing image cloud detection model is obtained through training according to the optical remote sensing image cloud detection model training method.
A third aspect of the present disclosure provides an optical remote sensing image cloud detection model training apparatus, including:
a training module, used for inputting the optical remote sensing images in the optical remote sensing image dataset into an untrained deep-neural-network-based cloud detection model, and training it to obtain a trained deep-neural-network-based cloud detection model as a teacher model;
a feature extraction module, used for, with the model parameters of the teacher model fixed, respectively inputting the optical remote sensing images in the optical remote sensing image dataset into the teacher model and a preset student model, and respectively outputting a first feature map and a second feature map extracted by the models, and a first prediction probability distribution and a second prediction probability distribution predicted by the models, wherein a prediction probability value calculated from the first prediction probability distribution characterizes the prediction capability knowledge of the teacher model;
a determining module, used for determining first feature relevance knowledge from the first feature map, and second feature relevance knowledge from the second feature map;
a calculation module, used for calculating a loss value for the preset student model according to the first feature relevance knowledge, the second feature relevance knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value calculated from the first prediction probability distribution, and the truth label of the optical remote sensing image;
and an updating module, used for updating the parameters of the preset student model according to the loss value until the preset student model converges, taking the converged preset student model as the target optical remote sensing image cloud detection model.
A fourth aspect of the present disclosure provides an optical remote sensing image cloud detection apparatus, including:
the processing module is used for inputting the optical remote sensing image dataset into a target optical remote sensing image cloud detection model and outputting an optical remote sensing image cloud detection result;
the target optical remote sensing image cloud detection model is obtained through training according to the optical remote sensing image cloud detection model training method.
A fifth aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the optical remote sensing image cloud detection model training method and the optical remote sensing image cloud detection method.
According to the embodiments of the present disclosure, a powerful teacher model is trained in advance; feature relevance knowledge and model prediction capability knowledge are constructed from the teacher model's feature output and prediction probability distribution respectively, and are migrated into a preset student model as supervision information from the teacher, yielding a lightweight cloud detection model. While maintaining high cloud detection accuracy and efficiency, the lightweight cloud detection model obtained with this training method overcomes the drawbacks of related-art deep models, namely their large parameter counts, high floating-point computation cost and heavy storage and computing resource requirements.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of an optical remote sensing image cloud detection model training method, detection method, apparatus, and device according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an optical remote sensing image cloud detection model training method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of optical remote sensing image cloud detection in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a block diagram of an optical remote sensing image cloud detection model training apparatus according to an embodiment of the disclosure;
fig. 5 schematically illustrates a block diagram of a configuration of an optical remote sensing image cloud detection apparatus according to an embodiment of the present disclosure;
fig. 6 schematically illustrates a block diagram of an electronic device adapted to implement an optical remote sensing image cloud detection model training method, according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression such as "at least one of A, B and C" is used, it should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
At present, many small lightweight models have been widely studied, but their cloud detection performance is far inferior to that of deep neural network models. Realizing a lightweight yet efficient optical remote sensing image cloud detection model is therefore of great significance.
Knowledge distillation is a common model compression method. Unlike pruning and quantization, knowledge distillation trains a small lightweight model under the supervision information of a larger, better-performing model, so that the small model achieves better performance and accuracy. It was first proposed by Hinton in 2015 and applied to classification tasks; the large model is called the teacher model and the small model the student model. The supervision information output by the teacher model is referred to as knowledge, and the process by which the student model learns to migrate this supervision information from the teacher model is referred to as distillation.
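The classic formulation can be sketched in a few lines (an illustrative example of Hinton-style soft-target distillation, not the method claimed here): the teacher's logits are softened with a temperature T, and the student is penalized by the KL divergence to that softened distribution, conventionally scaled by T squared to keep gradient magnitudes comparable:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; larger T gives a softer distribution."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(logits_t, logits_s, T=4.0, eps=1e-8):
    """KL divergence from the student's softened prediction to the
    teacher's, averaged over samples and scaled by T**2."""
    p_t = softmax(logits_t, T)
    p_s = softmax(logits_s, T)
    kl = (p_t * (np.log(p_t + eps) - np.log(p_s + eps))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

In practice this soft-target term is combined with the usual hard-label cross entropy on the student.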
Based on the above, the present disclosure provides an optical remote sensing image cloud detection model training method. It first trains a teacher model with strong cloud detection performance using a deep neural network, then models the knowledge of cloud targets in optical remote sensing images and distills the constructed feature relevance knowledge and model prediction capability knowledge, so that a lightweight student model can imitate the feature output and network output of the teacher model, improving the cloud detection performance of the lightweight cloud detection model.
It should be noted that, unless an execution order between operations is stated or is required by the technical implementation, different operations in the embodiments of the disclosure may be executed in a different order, and multiple operations may also be executed simultaneously.
Fig. 1 schematically illustrates an application scenario diagram of an optical remote sensing image cloud detection model training method, detection method, apparatus and device according to an embodiment of the disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105.
The terminal devices 101, 102, 103 may be various electronic devices with communication capabilities, including, but not limited to, smartphones, tablet computers, laptop and desktop computers, and the like. The terminal devices 101, 102, 103 may have a memory in which an optical remote sensing image dataset for model training may be stored; alternatively, the terminal devices 101, 102, 103 may be connected to an external storage device in which an optical remote sensing image dataset for model training may be stored.
The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The server 105 may be a server providing computing resource support for various computing tasks, e.g., the server 105 may provide computing resource support for a training process of a student model when the user is training the student model using the terminal devices 101, 102, 103.
It should be noted that, the optical remote sensing image cloud detection model training method provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the optical remote sensing image cloud detection model training apparatus provided in the embodiments of the present disclosure may be generally disposed in the server 105. The optical remote sensing image cloud detection model training method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the optical remote sensing image cloud detection model training apparatus provided by the embodiments of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the optical remote sensing image cloud detection model training method provided by the embodiment of the present disclosure may also be performed by the terminal device 101, 102, or 103, or may also be performed by other terminal devices different from the terminal device 101, 102, or 103. Accordingly, the optical remote sensing image cloud detection model training apparatus provided by the embodiments of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or disposed in another terminal device different from the terminal device 101, 102, or 103.
For example, the optical remote sensing image dataset may originally be stored in any one of the terminal devices 101, 102, or 103 (for example, but not limited to, the terminal device 101), or stored on an external storage device from which it may be imported into the terminal device 101. The terminal device 101 may then locally perform the optical remote sensing image cloud detection model training method provided by the embodiments of the present disclosure to train the student network with the optical remote sensing image dataset and obtain the target optical remote sensing image cloud detection model; alternatively, the terminal device 101 may send the optical remote sensing image dataset to other terminal devices, servers, or server clusters, which then perform the training method on the received dataset to train the student network and obtain the target optical remote sensing image cloud detection model.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flowchart of an optical remote sensing image cloud detection model training method according to an embodiment of the disclosure.
As shown in fig. 2, the method may include operations S210 to S250.
In operation S210, an optical remote sensing image in the optical remote sensing image dataset is input into an untrained deep-neural-network-based cloud detection model, which is trained to obtain a trained deep-neural-network-based cloud detection model as the teacher model.
In operation S220, with the model parameters of the teacher model fixed, the optical remote sensing images in the optical remote sensing image dataset are respectively input into the teacher model and the preset student model, which respectively output the first feature map and the second feature map extracted by the models, and the first prediction probability distribution and the second prediction probability distribution predicted by the models, wherein a prediction probability value calculated from the first prediction probability distribution characterizes the prediction capability knowledge of the teacher model.
In operation S230, first feature relevance knowledge is determined from the first feature map, and second feature relevance knowledge is determined from the second feature map.
In operation S240, a loss value for the preset student model is calculated according to the first feature relevance knowledge, the second feature relevance knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value calculated from the first prediction probability distribution, and the truth label of the optical remote sensing image.
In operation S250, the parameters of the preset student model are updated according to the loss value using a stochastic gradient descent algorithm until the preset student model converges, and the converged preset student model is taken as the target optical remote sensing image cloud detection model.
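Operations S210 to S250 can be mimicked end to end on toy data. Everything in this sketch (the linear "student", the fixed "teacher" probability table, the loss weighting and learning rate) is illustrative rather than taken from the patent; the student is updated by stochastic gradient descent on cross entropy plus a KL distillation term, using the closed-form softmax gradients:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                  # 64 "pixels", 5 features each
y = (X[:, 0] > 0).astype(int)                 # two classes: clear / cloud
# Fixed "teacher" probabilities standing in for the trained model of S210/S220:
teacher_p = np.stack([0.9 - 0.8 * y, 0.1 + 0.8 * y], axis=1)

W = np.zeros((5, 2))                          # linear "student" model
for step in range(500):                       # S250: update until convergence
    p_s = softmax(X @ W)
    one_hot = np.eye(2)[y]
    # Gradients w.r.t. the logits: d(CE)/dz = p_s - one_hot,
    # d(KL(p_t || p_s))/dz = p_s - p_t; their sum drives the SGD step.
    grad = (p_s - one_hot) + (p_s - teacher_p)
    W -= 0.1 * (X.T @ grad) / len(X)

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

With a teacher that agrees with the truth labels, the student reaches high training accuracy after a few hundred updates, illustrating the convergence criterion of S250.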
According to the embodiments of the present disclosure, a powerful teacher model is trained in advance; feature relevance knowledge and model prediction capability knowledge are constructed from the teacher model's feature output and prediction probability distribution respectively, and are migrated into a preset student model as supervision information from the teacher, yielding a lightweight cloud detection model. While maintaining high cloud detection accuracy and efficiency, the lightweight cloud detection model obtained with this training method overcomes the drawbacks of related-art deep models, namely their large parameter counts, high floating-point computation cost and heavy storage and computing resource requirements.
According to an embodiment of the disclosure, determining the first feature relevance knowledge from the first feature map and determining the second feature relevance knowledge from the second feature map includes:
downsampling the truth label to match the dimensions of the first feature map and the second feature map, wherein the optical remote sensing image dataset comprises the optical remote sensing images and truth labels of manually annotated cloud regions;
determining feature mappings of the non-cloud area and the cloud area of the teacher model according to the first feature map, and determining feature mappings of the non-cloud area and the cloud area of the preset student model according to the second feature map;
respectively calculating, using global average pooling, the feature centers of the non-cloud area and the cloud area of the teacher model and the feature centers of the non-cloud area and the cloud area of the preset student model; and
calculating the first feature relevance knowledge according to the cosine distances between the feature mappings of the teacher model and the corresponding feature centers, and calculating the second feature relevance knowledge according to the cosine distances between the feature mappings of the preset student model and the corresponding feature centers.
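A minimal NumPy sketch of the per-region relevance computation described above, assuming the truth label has already been downsampled to the feature-map size and both regions are non-empty; the function and variable names are hypothetical. The same routine would be applied to the teacher's first feature map and the student's second feature map:

```python
import numpy as np

def region_relevance(feat, cloud_mask, eps=1e-8):
    """Cosine similarity of each pixel feature to the two region feature centers.

    feat: (C, H, W) feature map; cloud_mask: (H, W) binary mask (1 = cloud),
    i.e. the truth label already downsampled to the feature-map size.
    Returns (sim_to_cloud_center, sim_to_clear_center), each of shape (H*W,).
    """
    C = feat.shape[0]
    f = feat.reshape(C, -1)                  # (C, H*W) per-pixel feature mapping
    m = cloud_mask.reshape(-1).astype(bool)
    # feature centers: global average pooling over each region
    center_cloud = f[:, m].mean(axis=1)
    center_clear = f[:, ~m].mean(axis=1)

    def cosine(vecs, center):
        num = (vecs * center[:, None]).sum(axis=0)
        den = np.linalg.norm(vecs, axis=0) * np.linalg.norm(center) + eps
        return num / den

    return cosine(f, center_cloud), cosine(f, center_clear)
```

Cosine distance is simply one minus this similarity; either form carries the same relevance information, and the teacher's and student's results form the first and second feature relevance knowledge respectively.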
According to an embodiment of the present disclosure, calculating the first feature relevance knowledge according to the cosine distance between the feature mapping of the teacher model and the corresponding feature center includes:
according to the cosine distance between the first feature mapping of the non-cloud area of the teacher model and the corresponding feature center, calculating to obtain feature relevance knowledge of the non-cloud area of the teacher model;
according to the cosine distance between the second feature mapping of the cloud area of the teacher model and the corresponding feature center, calculating to obtain feature relevance knowledge of the cloud area of the teacher model;
combining the feature relevance knowledge of the non-cloud area of the teacher model with the feature relevance knowledge of the cloud area of the teacher model as the first feature relevance knowledge; and
calculating the second feature relevance knowledge according to the feature mapping of the preset student model and the corresponding feature center includes:
according to the cosine distance between the third feature mapping of the non-cloud area of the preset student model and the corresponding feature center, calculating to obtain feature relevance knowledge of the non-cloud area of the preset student model;
according to the cosine distance between the fourth feature mapping of the cloud area of the preset student model and the corresponding feature center, calculating to obtain feature relevance knowledge of the cloud area of the preset student model;
combining the feature relevance knowledge of the non-cloud area of the preset student model with the feature relevance knowledge of the cloud area of the preset student model as the second feature relevance knowledge.
According to an embodiment of the present disclosure, the loss value includes: a feature relevance loss value, a pixel-adaptive distillation loss value, and a cross entropy loss value;
the feature relevance loss value is calculated by applying KL divergence, used as the loss function, to the first feature relevance knowledge and the second feature relevance knowledge;
the pixel-adaptive distillation loss value is calculated as follows: an inner product of the first prediction probability distribution and the one-hot truth label is calculated to obtain the prediction probability value; the KL divergence between the first prediction probability distribution and the second prediction probability distribution is used as the loss function, and the prediction probability value is multiplied by the KL divergence to obtain a weighted loss value, which is used as the pixel-adaptive distillation loss value; and
the cross entropy loss value is calculated as follows: the optical remote sensing images in the optical remote sensing image dataset are input into the preset student model, and the second prediction probability distribution of the preset student model is output; the cross entropy loss value is calculated using the truth label of the optical remote sensing image and the second prediction probability distribution.
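Assuming the per-pixel probability maps have been flattened to (N, K) arrays, the pixel-adaptive distillation loss and cross entropy loss described above might look like the following NumPy sketch (function names are illustrative assumptions, not the patent's notation):

```python
import numpy as np

def pixel_adaptive_distill_loss(p_teacher, p_student, onehot, eps=1e-8):
    """p_teacher, p_student: (N, K) per-pixel probability distributions;
    onehot: (N, K) one-hot truth labels."""
    # prediction probability value: inner product of teacher probs and one-hot truth
    conf = (p_teacher * onehot).sum(axis=1)
    # per-pixel KL(teacher || student) as the base loss function
    kl = (p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps))).sum(axis=1)
    # weight the KL by the teacher's per-pixel confidence and average
    return (conf * kl).mean()

def cross_entropy_loss(p_student, onehot, eps=1e-8):
    # standard cross entropy between the student distribution and the truth label
    return -(onehot * np.log(p_student + eps)).sum(axis=1).mean()
```

The confidence weighting means pixels the teacher predicts correctly and confidently contribute more to the distillation term, which is the "pixel-adaptive" behavior described above.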
According to embodiments of the present disclosure, the teacher model is composed of a deep neural network feature extractor and a predictor; the preset student model likewise comprises a feature extractor and a predictor, where the feature extractor of the preset student model is a neural network with few parameters and few floating-point operations, such as ResNet18 or MobileNet.
According to an embodiment of the present disclosure, fixing the model parameters of the teacher model includes: fixing the network structure and the weights of the teacher model.
According to an embodiment of the disclosure, the data in the optical remote sensing image dataset is acquired from an optical remote sensing satellite and includes optical remote sensing images and truth labels of manually annotated cloud regions, where the labels are stored as grayscale images.
Fig. 3 schematically illustrates a flowchart of an optical remote sensing image cloud detection method according to an embodiment of the disclosure.
As shown in fig. 3, the optical remote sensing image cloud detection method may include step S310.
In step S310, inputting the optical remote sensing image dataset into a target optical remote sensing image cloud detection model, and outputting an optical remote sensing image cloud detection result; the target optical remote sensing image cloud detection model is obtained through training by the optical remote sensing image cloud detection model training method.
Based on the above-mentioned optical remote sensing image cloud detection model training method, the present disclosure further provides an optical remote sensing image cloud detection model training device, which will be described in detail below with reference to fig. 4.
Fig. 4 schematically illustrates a block diagram of an optical remote sensing image cloud detection model training apparatus according to an embodiment of the present disclosure.
As shown in fig. 4, the optical remote sensing image cloud detection model training apparatus 400 of this embodiment includes a training module 410, a feature extraction module 420, a determination module 430, a calculation module 440, and an update module 450.
The training module 410 is configured to input the optical remote sensing images in the optical remote sensing image dataset into an untrained deep-neural-network-based cloud detection model, and to obtain, through training, a trained deep-neural-network-based cloud detection model as the teacher model.
The feature extraction module 420 is configured to input the optical remote sensing images in the optical remote sensing image dataset into the teacher model and the preset student model respectively, and to output the first feature map and the second feature map extracted by the two models, together with the first prediction probability distribution and the second prediction probability distribution predicted by the two models, where the prediction probability value calculated using the first prediction probability distribution characterizes the prediction capability knowledge of the teacher model.
A determining module 430, configured to determine first feature relevance knowledge according to the first feature map; and determining second feature relevance knowledge from the second feature map.
The calculating module 440 is configured to calculate a loss value of the preset student model according to the first feature correlation knowledge, the second feature correlation knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value calculated by using the first prediction probability distribution, and the truth label of the optical remote sensing image.
And the updating module 450 is configured to update parameters of the preset student model according to the loss value until the preset student model converges, and take the converged preset student model as the target optical remote sensing image cloud detection model.
According to an embodiment of the present disclosure, the determination module 430 includes a processing sub-module, a determination sub-module, a first calculation sub-module, and a second calculation sub-module.
The processing sub-module is configured to downsample the truth label to match the sizes of the first feature map and the second feature map, where the optical remote sensing image dataset includes the optical remote sensing images and truth labels of manually annotated cloud regions.
The determining sub-module is configured to determine feature mappings of the non-cloud area and the cloud area of the teacher model according to the first feature map, and to determine feature mappings of the non-cloud area and the cloud area of the preset student model according to the second feature map.
The first computing sub-module is used for respectively computing and obtaining feature centers of a non-cloud area and a cloud area of the teacher model and feature centers of the non-cloud area and the cloud area of the preset student model by utilizing global average pooling.
The second computing sub-module is used for computing to obtain first feature relevance knowledge according to the cosine distance between the feature mapping of the teacher model and the corresponding feature center; and calculating to obtain second feature relevance knowledge according to the feature mapping of the preset student model and the corresponding feature center.
According to an embodiment of the present disclosure, the second computing submodule includes a first computing unit and a second computing unit.
The first calculation unit is used for calculating and obtaining first feature relevance knowledge according to the cosine distance between the feature mapping of the teacher model and the corresponding feature center.
And the second calculation unit is used for calculating and obtaining second feature relevance knowledge according to the feature mapping of the preset student model and the corresponding feature center.
According to an embodiment of the present disclosure, the first computing unit includes a first computing subunit, a second computing subunit, and a first joint subunit.
The first calculating subunit is configured to calculate, according to the cosine distance between the first feature map of the non-cloud area of the teacher model and the corresponding feature center, obtain feature relevance knowledge of the non-cloud area of the teacher model. And the second calculating subunit is used for calculating to obtain the feature relevance knowledge of the cloud area of the teacher model according to the cosine distance between the second feature mapping of the cloud area of the teacher model and the corresponding feature center.
And the first combination subunit is used for combining the feature association knowledge of the non-cloud area of the teacher model with the feature association knowledge of the cloud area of the teacher model to serve as first feature association knowledge.
According to an embodiment of the present disclosure, the second computing unit includes a third computing subunit, a fourth computing subunit, and a second merging subunit.
And the third computing subunit is used for computing to obtain feature relevance knowledge of the non-cloud area of the preset student model according to the cosine distance between the third feature mapping of the non-cloud area of the preset student model and the corresponding feature center.
And the fourth calculating subunit is used for calculating to obtain the feature relevance knowledge of the cloud area of the preset student model according to the cosine distance between the fourth feature map of the cloud area of the preset student model and the corresponding feature center.
And the second combining subunit is used for combining the characteristic relevance knowledge of the non-cloud area of the preset student model with the characteristic relevance knowledge of the cloud area of the preset student model to serve as second characteristic relevance knowledge.
According to an embodiment of the present disclosure, the calculation module 440 includes a third calculation sub-module, a fourth calculation sub-module, and a fifth calculation sub-module.
The third calculation sub-module is configured to calculate the feature relevance loss value by applying KL divergence to the first feature relevance knowledge and the second feature relevance knowledge.
And the fourth calculation sub-module is used for calculating the pixel self-adaptive distillation loss value according to the first prediction probability distribution and the second prediction probability distribution.
And a fifth calculation sub-module, configured to calculate a cross entropy loss value according to the second prediction probability distribution and the truth label.
According to an embodiment of the present disclosure, the fourth calculation submodule includes a third calculation unit and a fourth calculation unit.
The third calculation unit is configured to calculate an inner product of the first prediction probability distribution and the one-hot truth label to obtain the prediction probability value.
The fourth calculation unit is configured to use the KL divergence between the first prediction probability distribution and the second prediction probability distribution as the loss function, multiply the prediction probability value by the KL divergence to obtain a weighted loss value, and use the weighted loss value as the pixel-adaptive distillation loss value.
According to an embodiment of the present disclosure, the fifth calculation submodule includes a first output unit and a fifth calculation unit.
The first output unit is used for inputting the optical remote sensing images in the optical remote sensing image dataset into a preset student model and outputting second prediction probability distribution of the preset student model.
And the fifth calculation unit is used for calculating the cross entropy loss value by utilizing the truth label of the optical remote sensing image and the second prediction probability distribution.
Based on the above-mentioned optical remote sensing image cloud detection method, the present disclosure further provides an optical remote sensing image cloud detection device, which will be described in detail below with reference to fig. 5.
Fig. 5 schematically illustrates a block diagram of a configuration of an optical remote sensing image cloud detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the optical remote sensing image cloud detection apparatus 500 of this embodiment includes a processing module 510.
The processing module 510 is configured to input the optical remote sensing image dataset into a target optical remote sensing image cloud detection model, and output an optical remote sensing image cloud detection result; the target optical remote sensing image cloud detection model is obtained through training according to the optical remote sensing image cloud detection model training method.
According to an embodiment of the present disclosure, any of the training module 410, the feature extraction module 420, the determination module 430, the calculation module 440, and the update module 450, or the processing module 510, may be combined into one module for implementation, or any one of them may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the training module 410, the feature extraction module 420, the determination module 430, the calculation module 440, the update module 450, and the processing module 510 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the training module 410, the feature extraction module 420, the determination module 430, the calculation module 440, the update module 450, and the processing module 510 may be at least partially implemented as a computer program module which, when executed, performs the corresponding function.
It should be noted that, in the embodiments of the present disclosure, the optical remote sensing image cloud detection model training apparatus portion corresponds to the optical remote sensing image cloud detection model training method portion; for details of the training apparatus, and of the training of the target optical remote sensing image cloud detection model used in the optical remote sensing image cloud detection apparatus, reference may be made to the description of the training method, which is not repeated here.
Fig. 6 schematically illustrates a block diagram of an electronic device adapted to implement an optical remote sensing image cloud detection model training method, according to an embodiment of the disclosure. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present disclosure includes a processor 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 601 may also include on-board memory for caching purposes. The processor 601 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are stored. The processor 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. The processor 601 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or the RAM 603. Note that the program may be stored in one or more memories other than the ROM 602 and the RAM 603. The processor 601 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, the input/output (I/O) interface 605 also being connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 602 and/or RAM 603 and/or one or more memories other than ROM 602 and RAM 603 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product runs in a computer system, the program code is used for enabling the computer system to realize the optical remote sensing image cloud detection model training method provided by the embodiment of the disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 601. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication section 609, and/or installed from the removable medium 611. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (10)

1. An optical remote sensing image cloud detection model training method comprises the following steps:
inputting an optical remote sensing image in the optical remote sensing image dataset into an untrained cloud detection model based on a deep neural network, and training to obtain a trained cloud detection model based on the deep neural network as a teacher model;
under the condition of fixing model parameters of the teacher model, respectively inputting the optical remote sensing images in the optical remote sensing image dataset into the teacher model and a preset student model, and respectively outputting a first feature map and a second feature map extracted by the models, and a first prediction probability distribution and a second prediction probability distribution predicted by the models, wherein a prediction probability value calculated using the first prediction probability distribution characterizes prediction capability knowledge of the teacher model;
Determining first feature relevance knowledge according to the first feature map; determining second feature relevance knowledge according to the second feature map;
calculating a loss value of the preset student model according to the first feature relevance knowledge, the second feature relevance knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value calculated using the first prediction probability distribution, and the truth label of the optical remote sensing image;
and updating parameters of the preset student model according to the loss value until the preset student model converges, and taking the converged preset student model as a target optical remote sensing image cloud detection model.
2. The training method of claim 1, wherein the determining first feature relevance knowledge from the first feature map and determining second feature relevance knowledge from the second feature map comprises:
downsampling a truth label to match the dimensions of the first feature map and the second feature map, wherein the optical remote sensing image dataset comprises the optical remote sensing images and truth labels of manually annotated cloud regions;
determining feature mappings of the non-cloud area and the cloud area of the teacher model according to the first feature map, and determining feature mappings of the non-cloud area and the cloud area of the preset student model according to the second feature map;
respectively calculating, using global average pooling, the feature centers of the non-cloud area and the cloud area of the teacher model and the feature centers of the non-cloud area and the cloud area of the preset student model; and
calculating the first feature relevance knowledge according to the cosine distances between the feature mappings of the teacher model and the corresponding feature centers, and calculating the second feature relevance knowledge according to the feature mappings of the preset student model and the corresponding feature centers.
3. The training method of claim 2, wherein,
according to the cosine distance between the feature mapping of the teacher model and the corresponding feature center, calculating to obtain the first feature relevance knowledge, including:
according to the cosine distance between the first feature mapping of the non-cloud area of the teacher model and the corresponding feature center, calculating to obtain feature relevance knowledge of the non-cloud area of the teacher model;
According to the cosine distance between the second feature mapping of the cloud area of the teacher model and the corresponding feature center, calculating to obtain feature relevance knowledge of the cloud area of the teacher model;
combining the feature relevance knowledge of the non-cloud area of the teacher model with the feature relevance knowledge of the cloud area of the teacher model as the first feature relevance knowledge; and
according to the feature mapping and the corresponding feature center of the preset student model, calculating to obtain the second feature relevance knowledge, including:
according to the cosine distance between the third feature mapping of the non-cloud area of the preset student model and the corresponding feature center, calculating to obtain feature relevance knowledge of the non-cloud area of the preset student model;
according to the cosine distance between the fourth feature mapping of the cloud area of the preset student model and the corresponding feature center, calculating to obtain feature relevance knowledge of the cloud area of the preset student model;
and combining the feature relevance knowledge of the non-cloud area of the preset student model with the feature relevance knowledge of the cloud area of the preset student model to serve as the second feature relevance knowledge.
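The region-wise construction in claims 2 and 3 can be sketched as follows. This is an illustrative NumPy reading, not the patent's implementation: it assumes a feature map of shape (C, H, W), a binary cloud mask separating the cloud and non-cloud areas, and cosine similarity to each area's globally average-pooled feature center as the "feature correlation knowledge"; the function name and the 1e-8 stabilizer are my own.

```python
import numpy as np

def feature_correlation_knowledge(feat, cloud_mask):
    """Per-pixel feature correlation knowledge for one model (teacher or student).

    feat:       (C, H, W) feature map extracted by the model.
    cloud_mask: (H, W) boolean mask, True on cloud pixels.
    Returns an (H, W) array where each pixel holds the cosine similarity
    between its feature vector and the feature center of its area.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)          # (C, H*W) pixel feature vectors
    mask = cloud_mask.reshape(-1)       # (H*W,)

    knowledge = np.zeros(H * W)
    for region in (mask, ~mask):        # cloud area, then non-cloud area
        if not region.any():
            continue
        f = flat[:, region]                                   # (C, n) area features
        center = f.mean(axis=1)                               # global average pooling
        num = center @ f                                      # dot products with center
        den = np.linalg.norm(center) * np.linalg.norm(f, axis=0) + 1e-8
        knowledge[region] = num / den                         # cosine similarities
    return knowledge.reshape(H, W)
```

Computed once for the teacher's first feature map and once for the student's second feature map, the two resulting maps play the roles of the first and second feature correlation knowledge.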
4. The training method of claim 1, wherein the loss value comprises a feature correlation loss value, a pixel-adaptive distillation loss value, and a cross-entropy loss value, wherein:
the feature correlation loss value is computed by applying a KL-divergence loss function to the first feature correlation knowledge and the second feature correlation knowledge;
the pixel-adaptive distillation loss value is computed by: computing an inner product of the first prediction probability distribution and a one-hot truth label to obtain a prediction probability value; and, using the KL divergence between the first prediction probability distribution and the second prediction probability distribution as a loss function, multiplying the prediction probability value by the KL divergence to obtain a weighted loss value, which serves as the pixel-adaptive distillation loss value; and
the cross-entropy loss value is computed by: inputting the optical remote sensing images in the optical remote sensing image dataset into the preset student model and outputting the second prediction probability distribution of the preset student model; and computing the cross-entropy loss value using the truth labels of the optical remote sensing images and the second prediction probability distribution.
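The pixel-adaptive distillation and cross-entropy terms of claim 4 can be sketched numerically as below. This is a hedged NumPy reading: the per-pixel weight is the teacher's probability on the true class (the inner product with the one-hot truth label), the KL divergence is taken from teacher to student, and the mean over pixels, the function names, and the eps stabilizer are my own assumptions.

```python
import numpy as np

def pixel_adaptive_distill_loss(p_teacher, p_student, onehot):
    """Pixel-adaptive distillation loss (illustrative sketch of claim 4).

    p_teacher, p_student: (N, K) per-pixel predicted probability distributions.
    onehot:               (N, K) one-hot truth labels.
    """
    eps = 1e-8
    # Inner product with the one-hot truth = teacher's probability on the
    # true class, used as the per-pixel prediction probability value (weight).
    w = (p_teacher * onehot).sum(axis=1)                                   # (N,)
    # KL divergence between teacher and student distributions, per pixel.
    kl = (p_teacher * (np.log(p_teacher + eps)
                       - np.log(p_student + eps))).sum(axis=1)             # (N,)
    return float((w * kl).mean())          # weighted loss value

def cross_entropy_loss(p_student, onehot):
    """Standard cross entropy between student predictions and truth labels."""
    eps = 1e-8
    return float(-(onehot * np.log(p_student + eps)).sum(axis=1).mean())
```

When student and teacher agree, the distillation term vanishes; pixels the teacher classifies confidently and correctly contribute most, which is what makes the distillation "pixel-adaptive".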
5. The training method of claim 1, wherein the teacher model comprises a deep neural network feature extractor and a predictor.
6. The training method of claim 1, wherein fixing the model parameters of the teacher model comprises:
fixing the network structure and weights of the teacher model.
7. An optical remote sensing image cloud detection method, comprising:
inputting an optical remote sensing image dataset into a target optical remote sensing image cloud detection model and outputting an optical remote sensing image cloud detection result;
wherein the target optical remote sensing image cloud detection model is obtained by training according to the optical remote sensing image cloud detection model training method of any one of claims 1 to 6.
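Inference in claim 7 reduces to a forward pass of the distilled student followed by a per-pixel decision. A minimal sketch, assuming the model emits a (K, H, W) per-pixel probability map and that class index 1 denotes "cloud" (both assumptions mine, not stated in the patent):

```python
import numpy as np

def cloud_detection_result(prob_map):
    """Convert a student model's per-pixel class probabilities into a cloud mask.

    prob_map: (K, H, W) probability map; class 1 is assumed to be 'cloud'.
    Returns a boolean (H, W) mask, True where cloud is detected.
    """
    return prob_map.argmax(axis=0) == 1
```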
8. An optical remote sensing image cloud detection model training device, comprising:
a training module configured to input the optical remote sensing images in an optical remote sensing image dataset into an untrained deep-neural-network-based cloud detection model and train it to obtain a trained deep-neural-network-based cloud detection model as a teacher model;
a feature extraction module configured to, with the model parameters of the teacher model fixed, input the optical remote sensing images in the optical remote sensing image dataset into the teacher model and a preset student model respectively, and output a first feature map and a second feature map extracted by the models and a first prediction probability distribution and a second prediction probability distribution predicted by the models, wherein a prediction probability value computed from the first prediction probability distribution represents the prediction capability knowledge of the teacher model;
a determining module configured to determine first feature correlation knowledge according to the first feature map and determine second feature correlation knowledge according to the second feature map;
a calculation module configured to calculate a loss value of the preset student model according to the first feature correlation knowledge, the second feature correlation knowledge, the first prediction probability distribution, the second prediction probability distribution, the prediction probability value computed from the first prediction probability distribution, and the truth labels of the optical remote sensing images; and
an updating module configured to update the parameters of the preset student model according to the loss value until the preset student model converges, and to take the converged preset student model as the target optical remote sensing image cloud detection model.
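The updating module's loop can be sketched generically. This is an assumed skeleton only: `step_fn` stands in for one full pass of the calculation and updating modules (compute the combined loss, apply a gradient step to the student) and returns the loss, and the successive-loss tolerance is my own convergence criterion, since the patent does not specify one.

```python
def train_until_converged(step_fn, tol=1e-4, max_iters=1000):
    """Iterate student updates until the loss stabilizes (claim 8's update module).

    step_fn: callable performing one parameter update on the preset student
             model and returning the resulting loss value.
    Returns the number of steps taken; the student after the loop is the
    target optical remote sensing image cloud detection model.
    """
    prev = float("inf")
    for i in range(max_iters):
        loss = step_fn()
        if abs(prev - loss) < tol:      # assumed convergence test
            return i + 1
        prev = loss
    return max_iters
```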
9. An optical remote sensing image cloud detection device, comprising:
a processing module configured to input an optical remote sensing image dataset into a target optical remote sensing image cloud detection model and output an optical remote sensing image cloud detection result;
wherein the target optical remote sensing image cloud detection model is obtained by training the optical remote sensing image cloud detection model training method according to any one of claims 1 to 6.
10. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 7.
CN202310181292.2A 2023-02-27 2023-02-27 Optical remote sensing image cloud detection model training method, detection method and device Pending CN116128048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310181292.2A CN116128048A (en) 2023-02-27 2023-02-27 Optical remote sensing image cloud detection model training method, detection method and device

Publications (1)

Publication Number Publication Date
CN116128048A 2023-05-16

Family

ID=86302813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310181292.2A Pending CN116128048A (en) 2023-02-27 2023-02-27 Optical remote sensing image cloud detection model training method, detection method and device

Country Status (1)

Country Link
CN (1) CN116128048A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117521848A * 2023-11-10 2024-02-06 Aerospace Information Research Institute of CAS Remote sensing basic model light-weight method and device for resource-constrained scene
CN117521848B * 2023-11-10 2024-05-28 Aerospace Information Research Institute of CAS Remote sensing basic model light-weight method and device for resource-constrained scene

Similar Documents

Publication Publication Date Title
CN106204522B (en) Joint depth estimation and semantic annotation of a single image
US11392792B2 (en) Method and apparatus for generating vehicle damage information
CN110276346B (en) Target area recognition model training method, device and computer readable storage medium
WO2019129032A1 (en) Remote sensing image recognition method and apparatus, storage medium and electronic device
CN111523640B (en) Training method and device for neural network model
CN112258512B (en) Point cloud segmentation method, device, equipment and storage medium
US11768876B2 (en) Method and device for visual question answering, computer apparatus and medium
CN113822428A (en) Neural network training method and device and image segmentation method
US11379718B2 (en) Ground truth quality for machine learning models
CN111340220B (en) Method and apparatus for training predictive models
US20220101199A1 (en) Point-of-interest recommendation
CN112329762A (en) Image processing method, model training method, device, computer device and medium
CN113781493A (en) Image processing method, image processing apparatus, electronic device, medium, and computer program product
CN116128048A (en) Optical remote sensing image cloud detection model training method, detection method and device
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN113762454A (en) Track abnormity detection method and device
CN115375899A (en) Point cloud semantic segmentation network training method, point cloud semantic segmentation method and point cloud semantic segmentation device
CN111291715A (en) Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium
CN113111684B (en) Training method and device for neural network model and image processing system
CN116155628B (en) Network security detection method, training device, electronic equipment and medium
CN116468970A (en) Model training method, image processing method, device, equipment and medium
CN116343169A (en) Path planning method, target object motion control device and electronic equipment
CN115937691A (en) Remote sensing image fine-grained classification method and device based on small sample continuous learning
CN113706705B (en) Image processing method, device, equipment and storage medium for high-precision map
CN113255819B (en) Method and device for identifying information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination