CN114792116A - Time series deep convolution network crop remote sensing classification method - Google Patents

Time series deep convolution network crop remote sensing classification method Download PDF

Info

Publication number
CN114792116A
CN114792116A
Authority
CN
China
Prior art keywords
classification
remote sensing
temporal
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210586722.4A
Other languages
Chinese (zh)
Other versions
CN114792116B (en)
Inventor
李华朋 (Li Huapeng)
刘焕军 (Liu Huanjun)
张树清 (Zhang Shuqing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Institute of Geography and Agroecology of CAS
Original Assignee
Northeast Institute of Geography and Agroecology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Institute of Geography and Agroecology of CAS filed Critical Northeast Institute of Geography and Agroecology of CAS
Priority to CN202210586722.4A priority Critical patent/CN114792116B/en
Publication of CN114792116A publication Critical patent/CN114792116A/en
Application granted granted Critical
Publication of CN114792116B publication Critical patent/CN114792116B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a time-series deep convolutional network crop remote sensing classification method, comprising the following steps: step one, acquiring and preprocessing a multi-temporal remote sensing image data set; step two, defining the parameters of the time-series deep convolutional network model; step three, mining the joint information of the multi-temporal remote sensing data set; step four, mining the single-scene image information of the multi-temporal remote sensing data set; step five, model prediction and classification; step six, outputting the final classification map. Compared with existing multi-temporal crop remote sensing classification methods, the method adds single-scene image information mining on top of conventional joint information mining, so the unique classification-relevant information of each single-scene image in the multi-temporal image data set is fully mined and utilized; moreover, the classification-relevant joint information of the multi-temporal image data set and the unique information of the single-scene images in the data set can be mined and utilized simultaneously, so the multi-temporal remote sensing information is exploited fully and completely.

Description

Time sequence deep convolution network crop remote sensing classification method
Technical Field
The invention relates to the technical field of remote sensing classification, in particular to a time sequence deep convolution network crop remote sensing classification method.
Background
Seasonality is one of the most distinctive characteristics of crops, so the use of multi-temporal remote sensing images is one of the main means of improving crop classification accuracy. The seasonal differences between the growth processes of different crops provide effective discriminating information; that is, the ability to distinguish crops differs between temporal phases, so the information in multi-temporal images can complement itself to improve crop identification accuracy. The most advanced crop remote sensing classification method at present is the deep convolutional network (CNN), which uses convolution windows to automatically learn multi-level features from remote sensing images and fully mines the spectral and spatial context information of the imagery; it therefore has advantages over traditional machine learning methods for crop remote sensing classification and mapping. However, the standard CNN model takes the superposed multi-temporal images as its input data source: it can extract the joint information contained in the multi-temporal image sequence, but it cannot mine the unique crop-classification-relevant information possessed by each single-scene remote sensing image. In other words, the standard CNN cannot sufficiently and thoroughly mine the rich information of multi-temporal remote sensing imagery.
At present, crop remote sensing classification is mostly carried out with machine learning methods such as support vector machines and random forests. These traditional remote sensing classification methods find it difficult to obtain good crop classification results, mainly because they are shallow classifiers with fewer than three layers and therefore struggle with complex nonlinear spatio-temporal classification problems.
Disclosure of Invention
The invention aims to provide a time series deep convolution network crop remote sensing classification method to solve the problems in the background technology.
In order to achieve the above purpose, the invention provides the following technical scheme: a time-series deep convolutional network crop remote sensing classification method comprising the following steps: step one, acquiring and preprocessing a multi-temporal remote sensing image data set; step two, defining the parameters of the time-series deep convolutional network model; step three, mining the joint information of the multi-temporal remote sensing data set; step four, mining the single-scene image information of the multi-temporal remote sensing data set; step five, model prediction and classification; step six, outputting the final classification map;
in the first step, a remote sensing satellite is first used to acquire crop remote sensing images of the area to be classified, imaging the study area at different times; image preprocessing is then carried out on the acquired multi-temporal image data set, and a standard multi-temporal image data set is obtained for later use once preprocessing is completed;
in the second step, it is assumed that n scene images covering the study area are obtained in total and that m crop types need to be distinguished; M = (M_1, M_2, ..., M_i, ..., M_n) denotes the multi-temporal remote sensing image data set, where M_i and n respectively denote the i-th scene image and the total number of images in the data set, and the multi-temporal images share the same spatial extent and spatial resolution so that pixel-level spatial superposition is possible; O = (o_1, o_2, ..., o_j, ..., o_u) denotes the set of objects segmented from the image set M, where o_j and u respectively denote the j-th object and the total number of segmented objects; T = (t_1, t_2, ..., t_k, ..., t_v) denotes the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network (OCNN) model is selected as the classifier of the remote sensing imagery, that is, classification mapping of the whole remote sensing image is completed by identifying each segmented object of the image;
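The notation of step two can be sketched with numpy arrays as follows; the array shapes, the slice-based object representation, and the (object index, label) sample pairs are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Illustrative shapes (assumed): n scenes of H x W pixels with b bands, m crop types.
n, H, W, b, m = 4, 32, 32, 6, 3

# M = (M_1, ..., M_n): the multi-temporal image set, identical extent and resolution.
M = np.random.rand(n, H, W, b)

# O = (o_1, ..., o_u): segmented objects, here represented as pixel slices.
O = [np.s_[0:16, :], np.s_[16:32, :]]

# T = (t_1, ..., t_v): training samples as (object index, class label) pairs.
T = [(0, 1), (1, 2)]
```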
in the third step, the standard multi-temporal image data set obtained in the first step is stacked into M_stack and input as the data source of the initial OCNN model, denoted OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
the joint information of the multi-temporal image data set is extracted with this initial model, and the initial classification probability P(X)_ori is calculated using the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
the above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
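Step three can be sketched as follows. The patent does not specify the OCNN internals, so a deterministic random-probability stub stands in for OCNN_ori; only the band-axis stacking of M into M_stack and the per-object shape of P(X)_ori follow the text.

```python
import numpy as np

def predict_stub(data, objects, m, seed=0):
    """Stand-in for OCNN_ori.Predict: one normalized probability row per object."""
    rng = np.random.default_rng(seed)
    p = rng.random((len(objects), m))
    return p / p.sum(axis=1, keepdims=True)

n, H, W, b, m = 4, 32, 32, 6, 3
M = np.random.rand(n, H, W, b)

# M_stack: the n scenes superposed along the band axis (the joint data source).
M_stack = np.moveaxis(M, 0, 2).reshape(H, W, n * b)

objects = [np.s_[0:16, :], np.s_[16:32, :]]   # u = 2 segmented objects (assumed)
P_ori = predict_stub(M_stack, objects, m)     # P(X)_ori, shape (u, m)
```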
in the fourth step, after the joint information mining of the third step is completed, the crop classification information contained in the single-scene images of the data set is further mined on the basis of the joint information of the multi-temporal data set; at each model iteration the single-scene images of the multi-temporal data set are input into the TS-OCNN model one by one in order of imaging time, so the number of model iterations equals the number of images in the data set; starting from the i-th iteration (i ≥ 1), the classification probability of the previous iteration, P(X)_{i-1} (for i = 1 this is P(X)_ori), and the current i-th scene image M_i are spatially superposed as the new data source for image classification:
M_con^i = Combine(M_i, P(X)_{i-1})
the combined new data set M_con^i is the input data source of the OCNN model at the current i-th iteration, and the OCNN model is trained using the training samples T as follows:
OCNN_i = OCNN.Train(M_con^i, T);
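A minimal sketch of the Combine step, assuming (as stated later for P(X)_i) that the probability raster has the same spatial size as the scene and depth m; channel-axis concatenation stands in for the spatial superposition.

```python
import numpy as np

def combine(M_i, P_prev):
    """Combine(M_i, P(X)_{i-1}): superpose the current scene with the previous
    iteration's class-probability raster along the band axis."""
    return np.concatenate([M_i, P_prev], axis=-1)

H, W, b, m = 32, 32, 6, 3
M_i = np.random.rand(H, W, b)          # current i-th scene image
P_prev = np.full((H, W, m), 1.0 / m)   # P(X)_{i-1} rasterized per pixel (assumed)
M_con_i = combine(M_i, P_prev)         # M_con^i, input to OCNN_i
```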
in the fifth step, after the model training of the fourth step is completed, the trained OCNN is used to predict the classification category of each segmented object; the trained model predicts the classification probabilities as follows:
P(X)_i = OCNN_i.Predict(M_con^i, O)
based on the classification probability result P(X)_i, a thematic classification map TC_i is generated with the following formula:
TC_i = argmax(P(X)_i)
the argmax function assigns to each segmented object the category with the maximum classification probability value, where i denotes the iteration number; a classification map is thus obtained;
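The argmax rule of step five reduces to one line per object; the probability values below are illustrative.

```python
import numpy as np

def thematic_map(P):
    """TC_i = argmax(P(X)_i): assign each segmented object the class with the
    highest predicted probability."""
    return np.argmax(P, axis=1)

# u = 2 objects, m = 3 classes (illustrative values).
P_i = np.array([[0.1, 0.7, 0.2],
                [0.5, 0.3, 0.2]])
TC_i = thematic_map(P_i)   # object 0 -> class 1, object 1 -> class 0
```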
in the sixth step, the TS-OCNN model generates a remote sensing classification map from the classification probability result output at each iteration; the accuracy of each classification map is calculated, and the map with the highest accuracy is selected as the final classification map of the TS-OCNN model, TC_final.
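Step six, selecting the most accurate of the per-iteration maps as TC_final, can be sketched as follows; overall accuracy against assumed reference labels stands in for the precision measure, which the patent does not fix.

```python
import numpy as np

def overall_accuracy(tc, reference):
    """Fraction of objects whose predicted class matches the reference label."""
    return float(np.mean(tc == reference))

reference = np.array([1, 0, 2, 2])            # assumed reference labels
maps = [np.array([1, 0, 0, 2]),               # TC_1
        np.array([1, 0, 2, 2]),               # TC_2
        np.array([0, 0, 2, 1])]               # TC_3
accuracies = [overall_accuracy(tc, reference) for tc in maps]
TC_final = maps[int(np.argmax(accuracies))]   # the most accurate map wins
```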
Preferably, in the first step, the preprocessing method of the multi-temporal image data set includes: projection transformation, atmospheric correction, geometric correction and cloud removal processing.
Preferably, in the second step, the segmentation algorithm of the remote sensing image is a multi-scale segmentation algorithm.
Preferably, in the fourth step, Combine denotes the function used to combine M_i with the previous iteration's classification probability P(X)_{i-1}.
Preferably, in the fifth step, the spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the depth of P(X)_i equals the number of crop types to be classified, m.
Preferably, in the sixth step, if the multi-temporal image data set contains n images in total, the TS-OCNN model iterates n times, generating one classification map per iteration; after the iterations are completed, n classification maps have been output in total.
Compared with the prior art, the invention has the following beneficial effects: relative to existing crop remote sensing classification methods, single-scene image information mining is added on the basis of conventional joint information mining, so the unique classification-relevant information of each single-scene image in the multi-temporal image data set is fully mined and utilized and the classification accuracy is improved; moreover, the classification-relevant joint information of the multi-temporal image data set and the unique information of the single-scene images in the data set can be mined and utilized simultaneously, so the multi-temporal remote sensing information is exploited fully and completely.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention: a time-series deep convolutional network crop remote sensing classification method comprises the following steps: step one, acquiring and preprocessing a multi-temporal remote sensing image data set; step two, defining the parameters of the time-series deep convolutional network model; step three, mining the joint information of the multi-temporal remote sensing data set; step four, mining the single-scene image information of the multi-temporal remote sensing data set; step five, model prediction and classification; step six, outputting the final classification map;
in the first step, a remote sensing satellite is first used to acquire crop remote sensing images of the area to be classified, imaging the study area at different times; image preprocessing is then carried out on the acquired multi-temporal image data set, the preprocessing comprising: projection transformation, atmospheric correction, geometric correction and cloud removal; once preprocessing is finished, a standard multi-temporal image data set is obtained for later use;
in the second step, it is assumed that n scene images covering the study area are obtained in total and that m crop types need to be distinguished; M = (M_1, M_2, ..., M_i, ..., M_n) denotes the multi-temporal remote sensing image data set, where M_i and n respectively denote the i-th scene image and the total number of images in the data set, and the multi-temporal images share the same spatial extent and spatial resolution so that pixel-level spatial superposition is possible; O = (o_1, o_2, ..., o_j, ..., o_u) denotes the set of objects segmented from the image set M, where o_j and u respectively denote the j-th object and the total number of segmented objects; T = (t_1, t_2, ..., t_k, ..., t_v) denotes the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network (OCNN) model is selected as the classifier of the remote sensing imagery, that is, classification mapping of the whole remote sensing image is completed by identifying each segmented object of the image, the segmentation algorithm of the remote sensing image being a multi-scale segmentation algorithm;
in the third step, the standard multi-temporal image data set obtained in the first step is stacked into M_stack and input as the data source of the initial OCNN model, denoted OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
the joint information of the multi-temporal image data set is extracted with this initial model, and the initial classification probability P(X)_ori is calculated using the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
the above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
in the fourth step, after the joint information mining of the third step is completed, the crop classification information contained in the single-scene images of the data set is further mined on the basis of the joint information of the multi-temporal data set; at each model iteration the single-scene images of the multi-temporal data set are input into the TS-OCNN model one by one in order of imaging time, so the number of model iterations equals the number of images in the data set; starting from the i-th iteration (i ≥ 1), the classification probability of the previous iteration, P(X)_{i-1} (for i = 1 this is P(X)_ori), and the current i-th scene image M_i are spatially superposed as the new data source for image classification:
M_con^i = Combine(M_i, P(X)_{i-1})
Combine denotes the function used to combine M_i with the previous iteration's classification probability P(X)_{i-1}; the combined new data set M_con^i is the input data source of the OCNN model at the current i-th iteration, and the OCNN model is trained using the training samples T as follows:
OCNN_i = OCNN.Train(M_con^i, T);
in the fifth step, after the model training of the fourth step is completed, the trained OCNN is used to predict the classification category of each segmented object; the trained model predicts the classification probabilities as follows:
P(X)_i = OCNN_i.Predict(M_con^i, O)
the spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the depth of P(X)_i equals the number of crop types to be classified, m; based on the classification probability result P(X)_i, a thematic classification map TC_i is generated with the following formula:
TC_i = argmax(P(X)_i)
the argmax function assigns to each segmented object the category with the maximum classification probability value, where i denotes the iteration number; a classification map is thus obtained;
in the sixth step, the TS-OCNN model generates a remote sensing classification map from the classification probability result output at each iteration; since the multi-temporal image data set contains n images in total, the TS-OCNN model iterates n times, generating one classification map per iteration, so n classification maps have been output once the iterations are finished; the accuracy of each classification map is calculated, and the map with the highest accuracy is selected as the final classification map of the TS-OCNN model, TC_final.
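Putting the embodiment together, the whole TS-OCNN iteration can be sketched as follows. The classifier is again a stub, and broadcasting the mean object probabilities over the scene is an assumed stand-in for rasterizing P(X)_{i-1}; shapes and labels are illustrative.

```python
import numpy as np

def ts_ocnn(M, objects, reference, m, train_predict):
    """TS-OCNN sketch: joint mining on the stacked set, then one iteration per
    scene, keeping the most accurate classification map as TC_final."""
    n, H, W, b = M.shape
    M_stack = np.moveaxis(M, 0, 2).reshape(H, W, n * b)    # joint data source
    P = train_predict(M_stack, objects, m)                 # P(X)_ori
    best_tc, best_acc = None, -1.0
    for i in range(n):                                     # one scene per iteration
        P_raster = np.broadcast_to(P.mean(axis=0), (H, W, m))
        M_con = np.concatenate([M[i], P_raster], axis=-1)  # Combine(M_i, P(X)_{i-1})
        P = train_predict(M_con, objects, m)               # P(X)_i
        tc = np.argmax(P, axis=1)                          # TC_i
        acc = float(np.mean(tc == reference))              # per-map accuracy
        if acc > best_acc:
            best_tc, best_acc = tc, acc
    return best_tc                                         # TC_final

def stub(data, objects, m):
    """Stand-in classifier: deterministic pseudo-random object probabilities."""
    rng = np.random.default_rng(data.shape[-1])
    p = rng.random((len(objects), m))
    return p / p.sum(axis=1, keepdims=True)

M = np.random.rand(3, 8, 8, 2)        # n=3 scenes, 8x8 pixels, 2 bands (assumed)
objects = list(range(4))              # u=4 opaque object handles for the stub
reference = np.array([0, 1, 2, 0])    # assumed reference labels
TC_final = ts_ocnn(M, objects, reference, m=3, train_predict=stub)
```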
Based on the above, the advantage of the invention in use is that, compared with existing crop remote sensing classification methods, single-scene image information mining is added on the basis of conventional joint information mining, which helps fully mine and utilize the unique classification-relevant information of each single-scene image in the multi-temporal image data set and improves the classification accuracy; the classification-relevant joint information of the multi-temporal image data set and the unique information of the single-scene images in the data set can be mined and utilized simultaneously, so the multi-temporal remote sensing information is exploited fully and completely.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (6)

1. A time-series deep convolutional network crop remote sensing classification method, comprising the following steps: step one, acquiring and preprocessing a multi-temporal remote sensing image data set; step two, defining the parameters of the time-series deep convolutional network model; step three, mining the joint information of the multi-temporal remote sensing data set; step four, mining the single-scene image information of the multi-temporal remote sensing data set; step five, model prediction and classification; step six, outputting the final classification map; characterized in that:
in the first step, a remote sensing satellite is first used to acquire crop remote sensing images of the area to be classified, imaging the study area at different times; image preprocessing is then carried out on the acquired multi-temporal image data set, and a standard multi-temporal image data set is obtained for later use once preprocessing is completed;
in the second step, it is assumed that n scene images covering the study area are obtained in total and that m crop types need to be distinguished; M = (M_1, M_2, ..., M_i, ..., M_n) denotes the multi-temporal remote sensing image data set, where M_i and n respectively denote the i-th scene image and the total number of images in the data set, and the multi-temporal images share the same spatial extent and spatial resolution so that pixel-level spatial superposition is possible; O = (o_1, o_2, ..., o_j, ..., o_u) denotes the set of objects segmented from the image set M, where o_j and u respectively denote the j-th object and the total number of segmented objects; T = (t_1, t_2, ..., t_k, ..., t_v) denotes the training sample set, where t_k denotes the k-th training sample and v the total number of samples; an object-oriented deep convolutional network (OCNN) model is selected as the classifier of the remote sensing imagery, that is, classification mapping of the whole remote sensing image is completed by identifying each segmented object of the image;
in the third step, the standard multi-temporal image data set obtained in the first step is stacked into M_stack and input as the data source of the initial OCNN model, denoted OCNN_ori; the training process of the model is expressed as follows:
OCNN_ori = OCNN.Train(M_stack, T)
the joint information of the multi-temporal image data set is extracted with this initial model, and the initial classification probability P(X)_ori is calculated using the following formula:
P(X)_ori = OCNN_ori.Predict(M_stack, O)
the above formula completes the mining of the classification-relevant joint information of the multi-temporal images;
in the fourth step, after the joint information mining of the third step is completed, the crop classification information contained in the single-scene images of the data set is further mined on the basis of the joint information of the multi-temporal data set; at each model iteration the single-scene images of the multi-temporal data set are input into the TS-OCNN model one by one in order of imaging time, so the number of model iterations equals the number of images in the data set; starting from the i-th iteration (i ≥ 1), the classification probability of the previous iteration, P(X)_{i-1} (for i = 1 this is P(X)_ori), and the current i-th scene image M_i are spatially superposed as the new data source for image classification:
M_con^i = Combine(M_i, P(X)_{i-1})
the combined new data set M_con^i is the input data source of the OCNN model at the current i-th iteration, and the OCNN model is trained using the training samples T as follows:
OCNN_i = OCNN.Train(M_con^i, T);
in the fifth step, after the model training of the fourth step is completed, the trained OCNN is used to predict the classification category of each segmented object; the trained model predicts the classification probabilities as follows:
P(X)_i = OCNN_i.Predict(M_con^i, O)
based on the classification probability result P(X)_i, a thematic classification map TC_i is generated with the following formula:
TC_i = argmax(P(X)_i)
the argmax function assigns to each segmented object the category with the maximum classification probability value, where i denotes the iteration number; a classification map is thus obtained;
in the sixth step, the TS-OCNN model generates a remote sensing classification map from the classification probability result output at each iteration; the accuracy of each classification map is calculated, and the map with the highest accuracy is selected as the final classification map of the TS-OCNN model, TC_final.
2. The remote sensing crop classification method based on the time-series deep convolution network as claimed in claim 1, characterized in that: in the first step, the multi-temporal image dataset preprocessing method includes: projection transformation, atmospheric correction, geometric correction and cloud removal processing.
3. The remote sensing crop classification method based on the time-series deep convolution network as claimed in claim 1, characterized in that: in the second step, the segmentation algorithm of the remote sensing image is a multi-scale segmentation algorithm.
4. The remote sensing crop classification method based on the time-series deep convolution network as claimed in claim 1, characterized in that: in the fourth step, Combine denotes the function used to combine M_i with the previous iteration's classification probability P(X)_{i-1}.
5. The remote sensing crop classification method based on the time-series deep convolution network as claimed in claim 1, characterized in that: in the fifth step, the spatial size of P(X)_i is consistent with that of the multi-temporal image set M, and the depth of P(X)_i equals the number of crop types to be classified, m.
6. The remote sensing crop classification method based on the time-series deep convolution network as claimed in claim 1, characterized in that: in the sixth step, if the multi-temporal image data set contains n images in total, the TS-OCNN model iterates n times, generating one classification map per iteration; after the iterations are completed, n classification maps have been output in total.
CN202210586722.4A 2022-05-26 2022-05-26 Remote sensing classification method for crops in time sequence deep convolution network Active CN114792116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210586722.4A CN114792116B (en) 2022-05-26 2022-05-26 Remote sensing classification method for crops in time sequence deep convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210586722.4A CN114792116B (en) 2022-05-26 2022-05-26 Remote sensing classification method for crops in time sequence deep convolution network

Publications (2)

Publication Number Publication Date
CN114792116A true CN114792116A (en) 2022-07-26
CN114792116B CN114792116B (en) 2024-05-03

Family

ID=82463041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210586722.4A Active CN114792116B (en) 2022-05-26 2022-05-26 Remote sensing classification method for crops in time sequence deep convolution network

Country Status (1)

Country Link
CN (1) CN114792116B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760978A (en) * 2015-07-22 2016-07-13 北京师范大学 Agricultural drought grade monitoring method based on temperature vegetation drought index (TVDI)
CN105809189A (en) * 2016-03-02 2016-07-27 中国科学院遥感与数字地球研究所 Time series image processing method
WO2021068176A1 (en) * 2019-10-11 2021-04-15 安徽中科智能感知产业技术研究院有限责任公司 Crop planting distribution prediction method based on time series remote sensing data and convolutional neural network
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
WO2022032329A1 (en) * 2020-08-14 2022-02-17 Agriculture Victoria Services Pty Ltd System and method for image-based remote sensing of crop plants

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760978A (en) * 2015-07-22 2016-07-13 北京师范大学 Agricultural drought grade monitoring method based on temperature vegetation drought index (TVDI)
CN105809189A (en) * 2016-03-02 2016-07-27 中国科学院遥感与数字地球研究所 Time series image processing method
WO2021068176A1 (en) * 2019-10-11 2021-04-15 安徽中科智能感知产业技术研究院有限责任公司 Crop planting distribution prediction method based on time series remote sensing data and convolutional neural network
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
WO2022032329A1 (en) * 2020-08-14 2022-02-17 Agriculture Victoria Services Pty Ltd System and method for image-based remote sensing of crop plants

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Xiaocheng, Wang Xiaoqin: "Research Progress on Remote Sensing Image Data Mining", Remote Sensing Information (遥感信息), no. 03, 30 September 2005 (2005-09-30) *
Yang Jun; Chu Qifeng; Luo Jiansong; Zhang Xuezhi; Sun Chang: "Research on Crop Planting Structure Extraction Methods Based on Geographic National Conditions Census Results", Surveying and Mapping and Spatial Geographic Information (测绘与空间地理信息), no. 1, 18 June 2020 (2020-06-18) *

Also Published As

Publication number Publication date
CN114792116B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN108564109B (en) Remote sensing image target detection method based on deep learning
US9558268B2 (en) Method for semantically labeling an image of a scene using recursive context propagation
CN111274921B (en) Method for recognizing human body behaviors by using gesture mask
CN111696137B (en) Target tracking method based on multilayer feature mixing and attention mechanism
CN102750385B (en) Correlation-quality sequencing image retrieval method based on tag retrieval
CN109241982A (en) Object detection method based on depth layer convolutional neural networks
CN107463954B (en) A kind of template matching recognition methods obscuring different spectrogram picture
CN110210431B (en) Point cloud semantic labeling and optimization-based point cloud classification method
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN115248876B (en) Remote sensing image overall recommendation method based on content understanding
CN110276753A (en) Objective self-adapting hidden method based on the mapping of feature space statistical information
CN112800982A (en) Target detection method based on remote sensing scene classification
CN111462162A (en) Foreground segmentation algorithm for specific class of pictures
CN114299398B (en) Small sample remote sensing image classification method based on self-supervision contrast learning
CN110415261B (en) Expression animation conversion method and system for regional training
CN109002771A (en) A kind of Classifying Method in Remote Sensing Image based on recurrent neural network
CN113837191B (en) Cross-star remote sensing image semantic segmentation method based on bidirectional unsupervised domain adaptive fusion
CN111507416A (en) Smoking behavior real-time detection method based on deep learning
CN113011438B (en) Bimodal image significance detection method based on node classification and sparse graph learning
CN104008374B (en) Miner's detection method based on condition random field in a kind of mine image
CN110490053B (en) Human face attribute identification method based on trinocular camera depth estimation
CN104537124A (en) Multi-view metric learning method
CN114792116A (en) Time series deep convolution network crop remote sensing classification method
Li et al. Few-shot meta-learning on point cloud for semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant