CN114792116B - Crop remote sensing classification method based on a time-series deep convolutional network - Google Patents

Crop remote sensing classification method based on a time-series deep convolutional network

Info

Publication number
CN114792116B
CN114792116B (application CN202210586722.4A)
Authority
CN
China
Prior art keywords
classification
remote sensing
image
model
temporal
Prior art date
Legal status
Active
Application number
CN202210586722.4A
Other languages
Chinese (zh)
Other versions
CN114792116A (en)
Inventor
李华朋 (Li Huapeng)
刘焕军 (Liu Huanjun)
张树清 (Zhang Shuqing)
Current Assignee
Northeast Institute of Geography and Agroecology of CAS
Original Assignee
Northeast Institute of Geography and Agroecology of CAS
Priority date
Filing date
Publication date
Application filed by Northeast Institute of Geography and Agroecology of CAS
Priority to CN202210586722.4A
Publication of CN114792116A
Application granted
Publication of CN114792116B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a crop remote sensing classification method based on a time-series deep convolutional network, which comprises the following steps: step one, image acquisition and preprocessing of a multi-temporal remote sensing dataset; step two, parameter definition of the time-series deep convolutional network model; step three, joint information mining of the multi-temporal remote sensing dataset; step four, single-scene image information mining of the multi-temporal remote sensing dataset; step five, model prediction and classification; step six, final classification map output. Compared with existing multi-temporal crop remote sensing classification methods, the method adds single-scene image information mining on top of traditional joint information mining and fully mines and utilizes the unique crop-classification information of each single-scene image in the multi-temporal image dataset; it can simultaneously mine and exploit both the classification-relevant joint information of the multi-temporal image dataset and the unique information of the single-scene images within it, thereby achieving full and thorough mining and utilization of multi-temporal remote sensing information.

Description

Crop remote sensing classification method based on a time-series deep convolutional network
Technical Field
The invention relates to the technical field of remote sensing classification, and in particular to a crop remote sensing classification method based on a time-series deep convolutional network.
Background
Seasonality is one of the most distinctive characteristics of crops, so using multi-temporal remote sensing imagery is one of the main means of improving crop classification accuracy. Seasonal differences in the growth of different crops provide effective discriminating information: images acquired at different dates differ in their ability to separate crops, so the information in multi-temporal images is complementary and can raise crop identification accuracy. The most advanced crop remote sensing classification method at present is the deep convolutional network (CNN), which automatically learns multi-level features from remote sensing images through convolution windows and fully mines the spectral and spatial context information of the imagery; it therefore has advantages over traditional machine learning methods in crop remote sensing classification mapping. However, the standard CNN model takes stacked multi-temporal images as its input data source: it can extract the joint information of the multi-temporal image sequence, but it cannot mine the unique, classification-relevant information of each single-scene remote sensing image. In other words, the standard CNN cannot fully and thoroughly mine the rich information of multi-temporal remote sensing imagery.
At present, crop remote sensing classification is mostly performed with machine learning methods, and traditional remote sensing classifiers such as support vector machines and random forests struggle to obtain good crop classification results, mainly because these are shallow classifiers with three or fewer layers and cannot cope with the difficult problem of complex nonlinear spatio-temporal classification.
Disclosure of Invention
The invention aims to provide a crop remote sensing classification method based on a time-series deep convolutional network, so as to solve the problems raised in the background art above.
To achieve the above purpose, the invention provides the following technical solution: a crop remote sensing classification method based on a time-series deep convolutional network, comprising the following steps: step one, image acquisition and preprocessing of a multi-temporal remote sensing dataset; step two, parameter definition of the time-series deep convolutional network model; step three, joint information mining of the multi-temporal remote sensing dataset; step four, single-scene image information mining of the multi-temporal remote sensing dataset; step five, model prediction and classification; step six, final classification map output;
In step one, a remote sensing satellite first images the crops in the area to be classified, photographing the study area at different times; the captured multi-temporal image dataset is then preprocessed, and after processing a standard multi-temporal image dataset is obtained for later use;
In step two, suppose n scenes of remote sensing imagery covering the study area are acquired in total and m crop types need to be distinguished. Let M = (M_1, M_2, ..., M_i, ..., M_n) denote the multi-temporal remote sensing image dataset, where i and n denote the i-th scene and the total number of images in the dataset, respectively; the multi-temporal images share the same spatial extent and spatial resolution so that they can be spatially stacked at the pixel level. Let O = (o_1, o_2, ..., o_j, ..., o_u) denote the set of objects segmented from the imagery M, where o_j and u denote the j-th object and the total number of segmented objects, respectively, and let T = (t_1, t_2, ..., t_k, ..., t_v) denote the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network model (OCNN) is selected as the classifier of the remote sensing imagery, that is, classification mapping of the entire remote sensing image is completed by identifying each segmented object of the image;
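For concreteness, the notation above can be mirrored in a short Python sketch. The sketch is illustrative only and not part of the patent: the scene count, raster shape, object count, and all variable names are assumptions.

```python
import numpy as np

n = 6                    # n: number of single-date scenes (assumed)
m = 4                    # m: number of crop types to distinguish (assumed)
H, W, B = 512, 512, 4    # assumed raster height, width, and band count
u = 1000                 # assumed total number of segmented objects

# M = (M_1, ..., M_n): co-registered scenes with identical extent and
# resolution, each an (H, W, B) array so they can be stacked pixel-wise.
M = [np.zeros((H, W, B), dtype=np.float32) for _ in range(n)]

# O = (o_1, ..., o_u): segmentation encoded as a label raster in which
# every pixel carries the id j of the object o_j it belongs to.
rng = np.random.default_rng(0)
segments = rng.integers(0, u, size=(H, W)).astype(np.int32)

# T = (t_1, ..., t_v): training samples as (object_id, crop_label) pairs.
T = [(0, 1), (17, 3), (42, 0)]   # placeholder samples
```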
In step three, the images of the standard multi-temporal dataset obtained in step one are stacked band-wise into M_stack, which is input as the data source to the initial OCNN model OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
The joint information of the multi-temporal image dataset is extracted with this initial model, and the initial classification probability P(X)_ori is calculated with the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
The above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
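A minimal sketch of step three under the same assumptions, reusing M, segments, and T from the sketch above; the OCNN class here is a stand-in stub that models only the Train/Predict interface of the formulas, not the patent's actual network:

```python
import numpy as np

class OCNN:
    """Stub object-oriented CNN: a real implementation would wrap a deep
    convolutional network and return learned per-object probabilities."""
    def train(self, image, segments, samples):
        self.m = 1 + max(label for _, label in samples)   # class count
    def predict(self, image, segments):
        u = int(segments.max()) + 1                       # object count
        p = np.random.rand(u, self.m)                     # dummy scores
        return p / p.sum(axis=1, keepdims=True)           # per-object P(X)

M_stack = np.concatenate(M, axis=-1)          # pixel-level stack, (H, W, n*B)
ocnn_ori = OCNN()
ocnn_ori.train(M_stack, segments, T)          # OCNN_ori = OCNN.Train(M_stack, T)
P_ori = ocnn_ori.predict(M_stack, segments)   # P(X)_ori, shape (u, m)
```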
In step four, after the joint information mining of step three is completed, the crop classification information of the single-scene images in the dataset is further mined on the basis of the joint information of the multi-temporal dataset. In each model iteration, the single-scene images of the multi-temporal dataset are fed into the time-series OCNN model (TS-OCNN) scene by scene in order of imaging time, so that the number of model iterations equals the number of images in the dataset; the classification probability of the previous iteration, P(X)_{i-1} (P(X)_ori when i = 1), is spatially stacked with the current i-th scene image M_i to serve as the new data source for image classification:
M_con_i = Combine(M_i, P(X)_{i-1})
The combined new dataset M_con_i serves as the input data source of the OCNN model in the current i-th iteration, and the OCNN model is trained with the training samples T as follows:
OCNN_i = OCNN.Train(M_con_i, T);
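Continuing the sketch, one plausible reading of Combine (an assumption, since the patent fixes only its inputs and output) is a band-wise stack of the current scene with the previous iteration's per-object probabilities rasterized onto the pixel grid:

```python
def combine(image_i, prob_prev, segments):
    # Broadcast the per-object probabilities (u, m) back onto the raster
    # through the segment ids, giving an (H, W, m) probability layer,
    # then stack that layer band-wise with the current scene.
    prob_raster = prob_prev[segments]                    # (H, W, m)
    return np.concatenate([image_i, prob_raster], axis=-1)

M_con = combine(M[0], P_ori, segments)   # M_con_1 = Combine(M_1, P(X)_ori)
ocnn_i = OCNN()
ocnn_i.train(M_con, segments, T)         # OCNN_1 = OCNN.Train(M_con_1, T)
```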
In step five, after the model training of step four is completed, the trained OCNN_i is used to predict the classification probability of each segmented object as follows:
P(X)_i = OCNN_i.Predict(M_con_i, O)
A thematic classification map TC_i is generated from the classification probability result P(X)_i as follows:
TC_i = argmax(P(X)_i)
The argmax function assigns each segmented object the class with the maximum classification probability, where i denotes the iteration number; the classification map of the current iteration is thus obtained;
In step six, the TS-OCNN model generates a remote sensing classification map from the classification probabilities output in each iteration; the accuracy of each classification map is calculated, and the most accurate map is selected as the final classification map of the TS-OCNN model, TC_final.
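A hedged sketch of steps five and six under the same assumptions; the accuracy helper simply measures agreement on the labelled objects, whereas in practice the map accuracies would be assessed against an independent validation set:

```python
P_i = ocnn_i.predict(M_con, segments)     # P(X)_i = OCNN_i.Predict(M_con_i, O)
TC_i = np.argmax(P_i, axis=1)[segments]   # TC_i = argmax(P(X)_i), an (H, W) map

def accuracy(tc, segments, samples):
    # Fraction of labelled objects whose mapped class matches their label.
    hits = [tc[segments == obj][0] == label for obj, label in samples]
    return float(np.mean(hits))
```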
Preferably, in step one, the preprocessing of the multi-temporal image dataset includes: projection conversion, atmospheric correction, geometric correction, and cloud removal.
Preferably, in step two, the remote sensing image segmentation algorithm is a multi-scale segmentation algorithm.
Preferably, in step four, Combine denotes the function used to combine M_i with the classification probability of the previous iteration, P(X)_{i-1}.
Preferably, in step five, the spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the dimension of P(X)_i equals the number m of crop types to be classified.
Preferably, in step six, since the multi-temporal image dataset contains n images, the TS-OCNN model iterates n times in total, generating one classification map per iteration, so n classification maps are produced once the iterations are complete.
Compared with the prior art, the invention has the following beneficial effects: relative to existing crop remote sensing classification methods, single-scene image information mining is added on top of traditional joint information mining, so that the unique crop-classification information of every single-scene image in the multi-temporal image dataset is fully mined and utilized and classification accuracy is improved; the method can simultaneously mine and exploit both the classification-relevant joint information of the multi-temporal image dataset and the unique information of the single-scene images within it, thereby achieving full and thorough mining and utilization of multi-temporal remote sensing information.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings; it is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Referring to FIG. 1, an embodiment of the present invention is provided: a crop remote sensing classification method based on a time-series deep convolutional network, comprising the following steps: step one, image acquisition and preprocessing of a multi-temporal remote sensing dataset; step two, parameter definition of the time-series deep convolutional network model; step three, joint information mining of the multi-temporal remote sensing dataset; step four, single-scene image information mining of the multi-temporal remote sensing dataset; step five, model prediction and classification; step six, final classification map output;
In step one, a remote sensing satellite first images the crops in the area to be classified, photographing the study area at different times; the captured multi-temporal image dataset is then preprocessed by projection conversion, atmospheric correction, geometric correction, and cloud removal, and after processing a standard multi-temporal image dataset is obtained for later use;
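As a hedged illustration of this preprocessing chain, the sketch below uses identity stubs in place of real tooling (reprojection, atmospheric-correction, and cloud-masking software); only the order of operations is taken from the text:

```python
def reproject(scene):          return scene   # projection conversion (stub)
def correct_atmosphere(scene): return scene   # atmospheric correction (stub)
def correct_geometry(scene):   return scene   # geometric correction (stub)
def remove_clouds(scene):      return scene   # cloud removal / masking (stub)

def preprocess(raw_scenes):
    # Standardize every raw scene so the dataset can be stacked pixel-wise.
    return [remove_clouds(correct_geometry(correct_atmosphere(reproject(s))))
            for s in raw_scenes]

standard_dataset = preprocess(M)   # M as in the notation sketch above
```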
In step two, suppose n scenes of remote sensing imagery covering the study area are acquired in total and m crop types need to be distinguished. Let M = (M_1, M_2, ..., M_i, ..., M_n) denote the multi-temporal remote sensing image dataset, where i and n denote the i-th scene and the total number of images in the dataset, respectively; the multi-temporal images share the same spatial extent and spatial resolution so that they can be spatially stacked at the pixel level. Let O = (o_1, o_2, ..., o_j, ..., o_u) denote the set of objects segmented from the imagery M, where o_j and u denote the j-th object and the total number of segmented objects, respectively, and let T = (t_1, t_2, ..., t_k, ..., t_v) denote the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network model (OCNN) is selected as the classifier of the remote sensing imagery, that is, classification mapping of the entire remote sensing image is completed by identifying each segmented object of the image; the remote sensing image is segmented with a multi-scale segmentation algorithm;
In step three, the images of the standard multi-temporal dataset obtained in step one are stacked band-wise into M_stack, which is input as the data source to the initial OCNN model OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
The joint information of the multi-temporal image dataset is extracted with this initial model, and the initial classification probability P(X)_ori is calculated with the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
The above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
In step four, after the joint information mining of step three is completed, the crop classification information of the single-scene images in the dataset is further mined on the basis of the joint information of the multi-temporal dataset. In each model iteration, the single-scene images of the multi-temporal dataset are fed into the TS-OCNN model scene by scene in order of imaging time, so that the number of model iterations equals the number of images in the dataset; the classification probability of the previous iteration, P(X)_{i-1} (P(X)_ori when i = 1), is spatially stacked with the current i-th scene image M_i to serve as the new data source for image classification:
M_con_i = Combine(M_i, P(X)_{i-1})
Combine denotes the function used to combine M_i with the classification probability of the previous iteration, P(X)_{i-1}; the combined new dataset M_con_i serves as the input data source of the OCNN model in the current i-th iteration, and the OCNN model is trained with the training samples T as follows:
OCNN_i = OCNN.Train(M_con_i, T);
In step five, after the model training of step four is completed, the trained OCNN_i is used to predict the classification probability of each segmented object as follows:
P(X)_i = OCNN_i.Predict(M_con_i, O)
The spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the dimension of P(X)_i equals the number m of crop types to be classified; a thematic classification map TC_i is generated from the classification probability result P(X)_i as shown in the following formula:
TC_i = argmax(P(X)_i)
The argmax function assigns each segmented object the class with the maximum classification probability, where i denotes the iteration number; the classification map of the current iteration is thus obtained;
In step six, the TS-OCNN model generates a remote sensing classification map from the classification probabilities output in each iteration; since the multi-temporal image dataset contains n images in total, the TS-OCNN model iterates n times and generates one classification map per iteration, so n classification maps are produced once the iterations are complete; the accuracy of each classification map is calculated, and the most accurate map is selected as the final classification map of the TS-OCNN model, TC_final.
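Pulling the embodiment together, the following end-to-end sketch of the TS-OCNN loop reuses the OCNN stub and the combine and accuracy helpers from the earlier sketches; it remains an illustrative assumption, not the patent's reference implementation:

```python
def ts_ocnn(M, segments, T, n):
    # Step three: joint information mining on the band-wise stack of scenes.
    M_stack = np.concatenate(M, axis=-1)
    model = OCNN()
    model.train(M_stack, segments, T)
    prob = model.predict(M_stack, segments)          # P(X)_ori

    # Steps four and five: one iteration per scene, in imaging-time order.
    maps, scores = [], []
    for i in range(n):
        M_con = combine(M[i], prob, segments)        # M_con_i
        model_i = OCNN()
        model_i.train(M_con, segments, T)            # OCNN_i
        prob = model_i.predict(M_con, segments)      # P(X)_i
        tc = np.argmax(prob, axis=1)[segments]       # TC_i
        maps.append(tc)
        scores.append(accuracy(tc, segments, T))     # held-out data in practice
    # Step six: keep the most accurate of the n maps as TC_final.
    return maps[int(np.argmax(scores))]

TC_final = ts_ocnn(M, segments, T, n)
```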
Based on the above, the advantage of the invention in use is that, compared with existing crop remote sensing classification methods, single-scene image information mining is added on top of traditional joint information mining, which helps fully mine and utilize the unique crop-classification information of each single-scene image in the multi-temporal image dataset and improves classification accuracy; the method can simultaneously mine and exploit both the classification-relevant joint information of the multi-temporal image dataset and the unique information of the single-scene images within it, achieving full and thorough mining and utilization of multi-temporal remote sensing information.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (6)

1. A crop remote sensing classification method based on a time-series deep convolutional network, comprising the following steps: step one, image acquisition and preprocessing of a multi-temporal remote sensing dataset; step two, parameter definition of the time-series deep convolutional network model; step three, joint information mining of the multi-temporal remote sensing dataset; step four, single-scene image information mining of the multi-temporal remote sensing dataset; step five, model prediction and classification; step six, final classification map output; characterized in that:
In step one, a remote sensing satellite first images the crops in the area to be classified, photographing the study area at different times; the captured multi-temporal image dataset is then preprocessed, and after processing a standard multi-temporal image dataset is obtained for later use;
In step two, suppose n scenes of remote sensing imagery covering the study area are acquired in total and m crop types need to be distinguished. Let M = (M_1, M_2, ..., M_i, ..., M_n) denote the multi-temporal remote sensing image dataset, where i and n denote the i-th scene and the total number of images in the dataset, respectively; the multi-temporal images share the same spatial extent and spatial resolution so that they can be spatially stacked at the pixel level. Let O = (o_1, o_2, ..., o_j, ..., o_u) denote the set of objects segmented from the imagery M, where o_j and u denote the j-th object and the total number of segmented objects, respectively, and let T = (t_1, t_2, ..., t_k, ..., t_v) denote the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network model (OCNN) is selected as the classifier of the remote sensing imagery, that is, classification mapping of the entire remote sensing image is completed by identifying each segmented object of the image;
In step three, the images of the standard multi-temporal dataset obtained in step one are stacked band-wise into M_stack, which is input as the data source to the initial OCNN model OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
The joint information of the multi-temporal image dataset is extracted with this initial model, and the initial classification probability P(X)_ori is calculated with the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
The above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
In step four, after the joint information mining of step three is completed, the crop classification information of the single-scene images in the dataset is further mined on the basis of the joint information of the multi-temporal dataset. In each model iteration, the single-scene images of the multi-temporal dataset are fed into the TS-OCNN model scene by scene in order of imaging time, so that the number of model iterations equals the number of images in the dataset; the classification probability of the previous iteration, P(X)_{i-1} (P(X)_ori when i = 1), is spatially stacked with the current i-th scene image M_i to serve as the new data source for image classification:
M_con_i = Combine(M_i, P(X)_{i-1})
The combined new dataset M_con_i serves as the input data source of the OCNN model in the current i-th iteration, and the OCNN model is trained with the training samples T as follows:
OCNN_i = OCNN.Train(M_con_i, T);
In step five, after the model training of step four is completed, the trained OCNN_i is used to predict the classification probability of each segmented object as follows:
P(X)_i = OCNN_i.Predict(M_con_i, O)
A thematic classification map TC_i is generated from the classification probability result P(X)_i as follows:
TC_i = argmax(P(X)_i)
The argmax function assigns each segmented object the class with the maximum classification probability, where i denotes the iteration number; the classification map of the current iteration is thus obtained;
In step six, the TS-OCNN model generates a remote sensing classification map from the classification probabilities output in each iteration; the accuracy of each classification map is calculated, and the most accurate map is selected as the final classification map of the TS-OCNN model, TC_final.
2. The crop remote sensing classification method based on a time-series deep convolutional network according to claim 1, wherein: in step one, the preprocessing of the multi-temporal image dataset includes: projection conversion, atmospheric correction, geometric correction, and cloud removal.
3. The crop remote sensing classification method based on a time-series deep convolutional network according to claim 1, wherein: in step two, the remote sensing image segmentation algorithm is a multi-scale segmentation algorithm.
4. The crop remote sensing classification method based on a time-series deep convolutional network according to claim 1, wherein: in step four, Combine denotes the function used to combine M_i with the classification probability of the previous iteration, P(X)_{i-1}.
5. The crop remote sensing classification method based on a time-series deep convolutional network according to claim 1, wherein: in step five, the spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the dimension of P(X)_i equals the number m of crop types to be classified.
6. The crop remote sensing classification method based on a time-series deep convolutional network according to claim 1, wherein: in step six, since the multi-temporal image dataset contains n images, the TS-OCNN model iterates n times in total, generating one classification map per iteration, so n classification maps are produced once the iterations are complete.
CN202210586722.4A 2022-05-26 2022-05-26 Crop remote sensing classification method based on a time-series deep convolutional network Active CN114792116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210586722.4A CN114792116B (en) 2022-05-26 2022-05-26 Crop remote sensing classification method based on a time-series deep convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210586722.4A CN114792116B (en) 2022-05-26 2022-05-26 Crop remote sensing classification method based on a time-series deep convolutional network

Publications (2)

Publication Number Publication Date
CN114792116A CN114792116A (en) 2022-07-26
CN114792116B (en) 2024-05-03

Family

ID: 82463041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210586722.4A Active CN114792116B (en) 2022-05-26 2022-05-26 Crop remote sensing classification method based on a time-series deep convolutional network

Country Status (1)

Country Link
CN (1) CN114792116B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760978A (en) * 2015-07-22 2016-07-13 北京师范大学 Agricultural drought grade monitoring method based on temperature vegetation drought index (TVDI)
CN105809189A (en) * 2016-03-02 2016-07-27 中国科学院遥感与数字地球研究所 Time series image processing method
WO2021068176A1 (en) * 2019-10-11 2021-04-15 安徽中科智能感知产业技术研究院有限责任公司 Crop planting distribution prediction method based on time series remote sensing data and convolutional neural network
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
WO2022032329A1 (en) * 2020-08-14 2022-02-17 Agriculture Victoria Services Pty Ltd System and method for image-based remote sensing of crop plants

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on crop planting structure extraction methods based on geographic national-condition census results; 阳俊, 初启凤, 罗建松, 张学之, 孙畅; Geomatics & Spatial Information Technology; 2020-06-18 (S1); full text *
Research progress in remote sensing image data mining; 周小成, 汪小钦; Remote Sensing Information; 2005-09-30 (03); full text *

Also Published As

Publication number Publication date
CN114792116A (en) 2022-07-26

Similar Documents

Publication Publication Date Title
Chen et al. Learning context flexible attention model for long-term visual place recognition
Mnih et al. Learning to label aerial images from noisy data
US9042648B2 (en) Salient object segmentation
CN111445488B (en) Method for automatically identifying and dividing salt body by weak supervision learning
JP2020038667A (en) Method and device for generating image data set for cnn learning for detection of obstacle in autonomous driving circumstances and test method and test device using the same
CN112613350A (en) High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN109919073B (en) Pedestrian re-identification method with illumination robustness
WO2021169049A1 (en) Method for glass detection in real scene
Payet et al. Scene shape from texture of objects
CN114117614A (en) Method and system for automatically generating building facade texture
CN114820655A (en) Weak supervision building segmentation method taking reliable area as attention mechanism supervision
Zhou et al. Building segmentation from airborne VHR images using Mask R-CNN
CN104463962B (en) Three-dimensional scene reconstruction method based on GPS information video
CN117456136A (en) Digital twin scene intelligent generation method based on multi-mode visual recognition
CN114926738A (en) Deep learning-based landslide identification method and system
CN111242134A (en) Remote sensing image ground object segmentation method based on feature adaptive learning
CN104008374B (en) Miner's detection method based on condition random field in a kind of mine image
Tian et al. Semantic segmentation of remote sensing image based on GAN and FCN network model
CN114792116B (en) Remote sensing classification method for crops in time sequence deep convolution network
CN110728316A (en) Classroom behavior detection method, system, device and storage medium
Peltomäki et al. Evaluation of long-term LiDAR place recognition
CN113095235B (en) Image target detection method, system and device based on weak supervision and discrimination mechanism
Schuegraf et al. Deep Learning for the Automatic Division of Building Constructions into Sections on Remote Sensing Images
Li et al. Few-shot meta-learning on point cloud for semantic segmentation
CN112396126A (en) Target detection method and system based on detection of main stem and local feature optimization

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant