CN117095299B - Grain crop extraction method, system, equipment and medium for crushed cultivation area

Info

Publication number
CN117095299B
Authority
CN
China
Prior art keywords
sample
remote sensing
sensing image
extraction
extraction model
Prior art date
Legal status
Active
Application number
CN202311345594.5A
Other languages
Chinese (zh)
Other versions
CN117095299A (en)
Inventor
詹远增
王兴坤
冯存均
周伟
李晓天
张艳
刘晓忠
邓小渊
马彦
赵建雪
徐盼
朱校娟
Current Assignee
Zhejiang Institute Of Surveying And Mapping Science And Technology
Original Assignee
Zhejiang Institute Of Surveying And Mapping Science And Technology
Priority date
Filing date
Publication date
Application filed by Zhejiang Institute Of Surveying And Mapping Science And Technology
Priority to CN202311345594.5A
Publication of CN117095299A
Application granted
Publication of CN117095299B
Legal status: Active
Anticipated expiration

Classifications

    • G06V20/188 Scenes; terrestrial scenes; vegetation
    • G06V10/26 Image preprocessing; segmentation of patterns in the image field
    • G06V10/774 Machine learning; generating sets of training patterns
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Image or video recognition or understanding using neural networks
    • Y02A40/10 Adaptation technologies in agriculture

Abstract

The invention provides a grain crop extraction method for a crushed cultivation area, comprising the following steps: performing a first extraction of the target crop on a first remote sensing image of the working area by using a trained first extraction model to obtain a first extraction result of the target crop, the first extraction result being the initial pattern spots judged to be the target crop in the first remote sensing image; acquiring a second remote sensing image corresponding to the first extraction result, the second remote sensing image being formed by fusing the optical image features and texture features of the first remote sensing image; and performing a second extraction of the target crop on the second remote sensing image by using a trained second extraction model to obtain the final extraction result of the target crop. The grain crop extraction method, system, equipment and medium for the crushed cultivation area can accurately and dynamically extract grain crops in scenes with a complex planting structure.

Description

Grain crop extraction method, system, equipment and medium for crushed cultivation area
Technical Field
The application belongs to the technical field of remote sensing image processing and application, and particularly relates to a grain crop extraction method, system, equipment and medium for a crushed cultivation area.
Background
With the development of scientific technology, the deep learning grain crop extraction method based on remote sensing images opens up a new research direction for grain crop extraction work.
The existing deep-learning grain crop extraction methods based on remote sensing images still have the following problems: 1. they neglect the influence of the planting environment and planting structure on grain crop extraction, so in application scenes with a complex planting structure they are easily interfered with by non-target crops, which reduces the extraction precision of the model; 2. the generalization ability of the model is weak: for a new region whose planting environment and/or planting structure differ greatly, training samples need to be reconstructed and the model retrained on them before it is suitable for grain crop extraction in the new region.
Based on the above problems, how to provide a grain crop extraction method which can accurately extract target crops in a scene with a complex planting structure and is suitable for large-scale popularization and use is an important problem to be solved at present.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present application is to provide a method, system, equipment and medium for extracting grain crops in a crushed cultivation area, which are used for solving the problems that existing grain crop extraction methods are not suitable for large-scale popularization and use and have difficulty accurately and dynamically extracting grain crops in scenes with a complex planting structure.
To achieve the above and other related objects, the present invention provides a grain crop extraction method for a crushed cultivated area, comprising the steps of:
performing first extraction of target crops on a first remote sensing image of a working area by using the trained first extraction model to obtain a first extraction result of the target crops; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area;
acquiring a second remote sensing image corresponding to the first extraction result; the second remote sensing image is a remote sensing image formed by fusing optical image features and texture features of the first remote sensing image;
and performing second extraction of the target crop on the second remote sensing image by using the trained second extraction model to obtain a final extraction result of the target crop.
In an embodiment of the invention, the method further comprises: training an original first extraction model based on the first sample set, and training an original second extraction model based on the second sample set to correspondingly obtain a trained first extraction model and a trained second extraction model;
The first sample set comprises target crop pattern spots and other ground pattern spots extracted based on a first sample remote sensing image; the first sample remote sensing image is an optical remote sensing image;
the second sample set comprises the target crop pattern spots and the difficult-to-separate crop pattern spots extracted based on a second sample remote sensing image; the second sample remote sensing image is a remote sensing image formed by fusing optical characteristics and texture characteristics, and the planting structure of a sample area corresponding to the second sample remote sensing image is the same as that of the working area.
In an embodiment of the present invention, the first sample set is constructed in a manner including:
determining a target observation window period of the target crop, and acquiring a first sample remote sensing image of the target crop, wherein the acquisition time is positioned in the target observation window period;
extracting the target crop pattern spots and the other ground pattern spots from the first sample remote sensing image;
a first sample set is constructed based on each of the target crop map spots and the other ground object map spots.
In an embodiment of the present invention, when the first sample remote sensing images corresponding to the same area are multi-temporal, the target crop pattern spots and the other ground object pattern spots are obtained as follows:
segmenting each first sample remote sensing image with a preset pattern spot segmentation template to obtain the target crop pattern spots and the other ground object pattern spots; the pattern spot segmentation template is vector data formed by the vector boundaries corresponding to each pattern spot in the first sample remote sensing image.
In an embodiment of the present invention, the second sample set is constructed in a manner including:
acquiring a first sample result of the target crop in the sample area by using the first extraction model; based on the first sample result, identifying by comparison screening the pattern spots that are misjudged as the target crop in the first sample result, and setting them as initial misjudgment pattern spots;
judging whether the distance between each initial misjudgment pattern spot and the target crop pattern spots exceeds a preset first distance threshold; taking the misjudgment pattern spots whose distance from the target crop pattern spots does not exceed the preset first distance threshold as final misjudgment pattern spots; and constructing a second sample set based on the final misjudgment pattern spots and the target crop pattern spots.
In another embodiment of the present invention, the training the original first extraction model based on the first sample set includes:
Constructing an initial first sample set based on the first sample remote sensing image set;
pre-training an original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset first precision threshold;
training the pre-trained first extraction model by using a new first sample remote sensing image set in an iterative optimization mode to obtain a trained first extraction model;
the training of the pre-trained first extraction model by using the new first sample remote sensing image set and adopting an iterative optimization mode comprises the following steps:
acquiring a new first sample remote sensing image;
extracting the target crop from the new first sample remote sensing image by using a current first extraction model to obtain a new first sample;
updating the current first sample set based on the newly added first sample to obtain a new first sample set;
training the current first extraction model based on the new first sample set to obtain a new first extraction model;
repeating the steps until the model precision of the first extraction model reaches a preset second precision threshold.
In another embodiment of the present invention, the training the original first extraction model based on the first sample set and training the original second extraction model based on the second sample set includes:
constructing an initial first sample set based on the first sample remote sensing image set; constructing an initial second sample set based on the second sample remote sensing image set;
pre-training an original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; pre-training the original second extraction model based on the initial second sample set to obtain a pre-trained second extraction model; the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset third precision threshold; the second extraction model after the pre-training is the second extraction model with the model precision reaching a preset fourth precision threshold;
training the pre-trained first extraction model and second extraction model by using a third sample remote sensing image set in a dual-model iterative optimization mode so as to correspondingly obtain the trained first extraction model and second extraction model;
The training of the pre-trained first extraction model and the pre-trained second extraction model by using the third sample remote sensing image set and adopting a dual-model iterative optimization mode comprises the following steps:
extracting a new third sample remote sensing image in the third sample remote sensing image set;
performing, by using the current first extraction model, first extraction of the target crop on the new third sample remote sensing image to obtain a first sample result;
acquiring a third remote sensing image corresponding to the first sample result; the third remote sensing image is a remote sensing image formed by fusing optical image features and texture features in the third sample remote sensing image;
performing, by using the current second extraction model, second extraction of the target crop on the third remote sensing image to obtain a second sample result;
detecting whether the second sample result is accurate; setting the accurate second sample result as a newly added first sample; and constructing a newly added second sample based on the incorrect second sample result to obtain a newly added first sample and a newly added second sample respectively;
updating the current first sample set based on the newly added first sample to obtain a new first sample set; based on the newly added second sample, updating the current second sample set to obtain a new second sample set;
Training the current first extraction model based on the new first sample set to obtain a new first extraction model; training the current second extraction model based on the new second sample set to obtain a new second extraction model;
repeating the steps until the model precision of the first extraction model reaches a preset fifth precision threshold value and the model precision of the second extraction model reaches a preset sixth precision threshold value;
the third sample remote sensing image is a remote sensing image which is different from the first sample remote sensing image and the second sample remote sensing image in the sample area.
Correspondingly, the invention provides a grain crop extraction system for a crushed cultivation area, which is characterized by comprising:
the first extraction result acquisition module is used for carrying out first extraction on the target crop on the first remote sensing image of the working area based on the trained first extraction model to obtain a first extraction result of the target crop; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area;
The second remote sensing image acquisition module is used for acquiring a second remote sensing image corresponding to the first extraction result; the second remote sensing image is formed by fusing optical image features and texture features;
and the final extraction result acquisition module is used for carrying out second extraction on the target crop on the second remote sensing image based on the trained second extraction model to acquire the final extraction result of the target crop.
Correspondingly, the invention provides grain crop extraction equipment for a crushed cultivation area, which is characterized in that the equipment comprises:
a memory for storing a computer program;
and a processor for executing the computer program stored in the memory, so that the equipment performs the grain crop extraction method described above.
Correspondingly, the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the grain crop extraction method described above as applied to the equipment.
As described above, the grain crop extraction method, system, equipment and medium for the crushed cultivated area have the following beneficial effects:
Only the second remote sensing image corresponding to the first extraction result is input into the second extraction model, which avoids other invalid information being input and interfering with the reasoning process of the second extraction model, and improves the reasoning speed of the second extraction model. Moreover, because the first extraction model does not distinguish the target crop from the difficult-to-separate crops, that is, the first extraction model tends to identify both the target crop and the difficult-to-separate crops as the target crop, the recall of target crop extraction is ensured; the second extraction model then distinguishes the target crop from the difficult-to-separate crops within the high-recall target crop extraction area of the first extraction result, based on the optical image features, texture features and the spatial relationship between crops, so the precision of the target crop extraction result is improved. In addition, since the first extraction model and the second extraction model are two independent models, when the grain crop extraction method is used to extract the same target crop in other scenes with completely different planting structures, only the second extraction model needs to be optimized and the first extraction model does not need to be retrained, so the grain crop extraction method has stronger generalization ability and is suitable for large-scale popularization and use.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area provided in the first aspect of the present application.
Fig. 2 is a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area provided in the second aspect of the present application.
Fig. 3 is a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area provided in the third aspect of the present application.
Fig. 4 is a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area provided in the fourth aspect of the present application.
Fig. 5 is a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area provided in the fifth aspect of the present application.
Fig. 6 is a schematic block diagram of an embodiment of the grain crop extraction system for the crushed cultivation area provided in the first aspect of the present application.
Fig. 7 is a schematic block diagram of an embodiment of the grain crop extraction system for the crushed cultivation area provided in the second aspect of the present application.
Fig. 8 is a schematic block diagram of an embodiment of the grain crop extraction system for the crushed cultivation area provided in the third aspect of the present application.
Fig. 9 is a schematic block diagram of an embodiment of the grain crop extraction system for the crushed cultivation area provided in the fourth aspect of the present application.
Fig. 10 is a schematic block diagram of an embodiment of the grain crop extraction system for the crushed cultivation area provided in the fifth aspect of the present application.
Fig. 11 is a schematic structural view of the grain crop extraction apparatus for the crushed cultivation area according to the present application in an embodiment.
Description of the reference numerals
S100-S500, steps; S100', S100'', S500', steps; 300. grain crop extraction system; 301. first extraction result acquisition module; 302. second remote sensing image acquisition module; 303. final extraction result acquisition module; 304. first sample set construction module; 3041. pattern spot segmentation template construction submodule; 305. second sample set construction module; 306. first extraction model iteration optimization module; 307. dual-model iterative optimization module; 400. grain crop extraction equipment; 401. memory; 402. processor.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the present disclosure, when the following description of the embodiments is taken in conjunction with the accompanying drawings. The present application may be embodied or carried out in other specific embodiments, and the details of the present application may be modified or changed from various points of view and applications without departing from the spirit of the present application. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict.
It should be noted that, the illustrations provided in the following embodiments merely illustrate the basic concepts of the application by way of illustration, and only the components related to the application are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complex.
Interpretation of the terms:
Phenological period: the bioclimatic period corresponding to the regular changes of the target crop, such as germination, branching, leaf expansion, flowering, fruiting and defoliation, as the climate changes;
NDVI: the normalized difference vegetation index, which quantifies vegetation by the difference between near-infrared reflectance (strongly reflected by vegetation) and red reflectance (absorbed by vegetation), and ranges between -1 and +1;
NDVI time-series distribution: the NDVI features of the target crop obtained from multi-temporal remote sensing images are sorted on a time basis; the sorted NDVI sequence containing the time feature is the NDVI time-series distribution.
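For illustration only, a minimal sketch of computing NDVI and the NDVI time-series distribution of a single pattern spot is given below; the band arrays, dates and function names are assumptions introduced for this example and are not part of the original disclosure:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, ranging between -1 and +1."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        index = (nir - red) / (nir + red)
    return np.nan_to_num(index, nan=0.0, posinf=0.0, neginf=0.0)

def ndvi_time_series(acquisitions, spot_mask):
    """NDVI time-series distribution of one pattern spot.
    acquisitions: list of (date, nir_band, red_band) tuples from multi-temporal images;
    spot_mask: boolean array selecting the pixels of the pattern spot."""
    series = [(date, float(ndvi(nir, red)[spot_mask].mean()))
              for date, nir, red in acquisitions]
    return sorted(series, key=lambda item: item[0])
```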
In order to solve the problems that existing grain crop extraction methods, in scenes with a complex planting structure, are easily interfered with by the scene, cannot accurately and dynamically extract target crops and cannot be popularized and used on a large scale, the following embodiments of the application provide a grain crop extraction method, system, equipment and medium for a crushed cultivation area. A first extraction model for identifying the target crop and other ground objects and a second extraction model for identifying the target crop and difficult-to-separate crops are obtained; the target crop is extracted from the remote sensing image of the working area based on the first extraction model to obtain a first extraction result, and the first extraction result is further extracted based on the second extraction model to obtain the final extraction result of the target crop in the working area.
Wherein the target crop is a grain crop to be identified; illustratively, the target crop includes one or more of wheat, rice, canola, corn, or other food crops;
the difficult-to-separate crops are crops that are difficult to distinguish from the target crop in terms of optical image features; illustratively, the difficult-to-separate crops include one or more of vegetables, turf, nursery stock, aquatic crops, or other crops.
Referring to fig. 1, a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area according to the first aspect of the present invention is shown.
As shown in fig. 1, in this embodiment, the grain crop extraction method for the crushed cultivation area includes the following steps:
step S200, acquiring a trained first extraction model; performing first extraction of target crops on a first remote sensing image of a working area by using the first extraction model to obtain a first extraction result of the target crops; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area;
the first extraction model is a deep learning model for identifying the target crops and other ground features based on optical image characteristics;
The working area is an area for extracting the target crops by using the trained model;
optionally, the working area image is a satellite remote sensing image of 2-2.5 m, and the updating period is two months.
Optionally, the other features include artificial features such as houses, roads, bridges, etc., and/or natural features such as woodlands, wetlands, beaches, lakes, etc.
In this embodiment, the first extraction model includes a fully convolutional network (FCN), a convolutional neural network (CNN), or another existing deep learning network model.
In a specific embodiment, the first extraction model uses an FCN model as the base network, and a multi-scale sampling layer is added before the first convolution layer of the FCN model; illustratively, the multi-scale sampling layer is a feature pyramid.
The FCN model comprises a full convolution part and a deconvolution part; the full convolution part is used to extract features, and the deconvolution part is used to obtain the semantic segmentation image at the original size through up-sampling. Illustratively, the FCN model is an FCN-8s model, which includes 5 convolution stages and 3 deconvolution layers: the first 2 convolution stages each perform 2 convolutions and the last 3 stages each perform 3 convolutions, every convolution uses a 3×3 kernel and is followed by an activation layer and a pooling layer, and each deconvolution layer is fused with the pooled feature map of the same resolution.
Optionally, the full convolution part is a VGG or ResNet network.
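For illustration only, a compact PyTorch sketch of an FCN-8s-style network of the kind described above is given below; the input channel count, layer widths and class count are assumptions, the multi-scale sampling layer added before the first convolution layer is omitted, and the full convolution part could equally be a VGG or ResNet backbone as noted above:

```python
import torch
import torch.nn as nn

def conv_stage(in_ch, out_ch, n_convs):
    """n_convs 3x3 convolutions, each followed by an activation layer, then 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

class FCN8sLike(nn.Module):
    """FCN-8s-style sketch: 5 convolution stages (2, 2, 3, 3, 3 convolutions) and 3 upsampling
    steps, each fused with the pooled feature map of matching resolution (4 input bands assumed)."""
    def __init__(self, in_channels=4, num_classes=2):
        super().__init__()
        self.stage1 = conv_stage(in_channels, 64, 2)   # 1/2 resolution
        self.stage2 = conv_stage(64, 128, 2)           # 1/4
        self.stage3 = conv_stage(128, 256, 3)          # 1/8
        self.stage4 = conv_stage(256, 512, 3)          # 1/16
        self.stage5 = conv_stage(512, 512, 3)          # 1/32
        self.score5 = nn.Conv2d(512, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):                  # x: (B, in_channels, H, W), H and W divisible by 32
        p3 = self.stage3(self.stage2(self.stage1(x)))
        p4 = self.stage4(p3)
        p5 = self.stage5(p4)
        out = self.up2a(self.score5(p5)) + self.score4(p4)   # fuse with the 1/16 pooled features
        out = self.up2b(out) + self.score3(p3)                # fuse with the 1/8 pooled features
        return self.up8(out)                                  # back to the input resolution
```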
Specifically, the trained first extraction model is obtained, and the working area image is input into it; the first extraction model performs classification prediction of the target crop and the other ground object categories for each pixel of the working area image based on the optical image features, and, based on the per-pixel classification results, extracts the pixels classified as the target crop as pattern spots, which serve as the first extraction result of the target crop.
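As a hedged illustration of how the per-pixel classification result could be turned into pattern spots, the sketch below vectorizes the pixels predicted as the target crop with rasterio and shapely; the class value, the georeferencing transform and the variable names are assumptions:

```python
import numpy as np
from rasterio import features
from shapely.geometry import shape

def prediction_to_pattern_spots(class_map: np.ndarray, transform, target_value: int = 1):
    """Vectorize contiguous pixels classified as the target crop into pattern spot polygons."""
    mask = (class_map == target_value).astype(np.uint8)
    return [shape(geom)
            for geom, value in features.shapes(mask, mask=mask.astype(bool), transform=transform)
            if value == 1]

# class_map could come from the first extraction model, for example:
#   logits = first_extraction_model(image_tensor)      # (1, num_classes, H, W)
#   class_map = logits.argmax(dim=1)[0].cpu().numpy()
# and transform from the georeferencing of the working area image (e.g. rasterio's src.transform).
```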
Step S300, a second remote sensing image corresponding to the first extraction result is obtained; the second remote sensing image is a remote sensing image formed by fusing optical image features and texture features of the first remote sensing image;
specifically, the step S300 includes:
acquiring the first extraction result, performing mask processing on the first remote sensing image based on the target crop area extracted in the first extraction result, and acquiring the masked image data; extracting texture features of the masked image data, and performing band synthesis on the texture features and the optical image features of the masked image data to obtain the second remote sensing image formed after band synthesis;
Specifically, the mask processing takes the target crop extraction area in the first extraction result as the region of interest, so that the second extraction model performs its model calculation based only on the target crop extraction area.
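A minimal sketch of this masking and band synthesis step is given below; the patent does not name the specific texture features, so the use of GLCM contrast in a small local window (via scikit-image) is only an illustrative choice, the band layout is assumed, and the pixel-by-pixel loop is kept deliberately simple rather than efficient:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def build_second_image(first_image: np.ndarray, target_mask: np.ndarray, window: int = 7) -> np.ndarray:
    """first_image: (bands, H, W) optical image; target_mask: (H, W) boolean first extraction result.
    Returns the band-synthesized image: masked optical bands stacked with one texture band."""
    masked = first_image.astype(np.float32) * target_mask[None, :, :]   # mask processing

    gray = masked.mean(axis=0)                                          # grayscale proxy for texture
    gray8 = np.uint8(255 * (gray - gray.min()) / (gray.max() - gray.min() + 1e-9))
    texture = np.zeros(gray.shape, dtype=np.float32)
    half = window // 2
    for r, c in zip(*np.nonzero(target_mask)):                          # slow, purely illustrative loop
        patch = gray8[max(r - half, 0): r + half + 1, max(c - half, 0): c + half + 1]
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        texture[r, c] = graycoprops(glcm, "contrast")[0, 0]             # GLCM contrast as texture feature

    return np.concatenate([masked, texture[None, :, :]], axis=0)        # band synthesis
```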
Step S400, acquiring a trained second extraction model; and performing second extraction on the target crop by using the second extraction model to obtain a final extraction result of the target crop.
The second extraction model is a deep learning model for identifying the target crop and the difficult-to-separate crops based on the optical image features, texture features and the spatial relationship between crops;
In one embodiment, the second extraction model is a Transformer model.
The Transformer model comprises an encoding component and a decoding component; the encoding component includes 6 encoder layers and the decoding component includes 6 decoder layers. Each encoder layer includes a self-attention layer and a feed-forward network, and each decoder layer includes a self-attention layer, an encoder-decoder attention layer and a feed-forward network.
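For illustration, a minimal PyTorch sketch of a Transformer with 6 encoder layers and 6 decoder layers is shown below; the patch-based token embedding, the model width and head count, and feeding the same token sequence to both the encoder and the decoder are simplifying assumptions made only for this example:

```python
import torch
import torch.nn as nn

class SecondExtractionTransformer(nn.Module):
    """Sketch of a Transformer with 6 encoder layers and 6 decoder layers; each encoder layer has
    self-attention and a feed-forward network, each decoder layer additionally has cross-attention."""
    def __init__(self, in_channels=5, d_model=256, nhead=8, num_classes=2, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(in_channels, d_model, kernel_size=patch, stride=patch)  # patch tokens
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                          num_encoder_layers=6, num_decoder_layers=6,
                                          batch_first=True)
        self.classify = nn.Linear(d_model, num_classes)

    def forward(self, x):                                   # x: (B, in_channels, H, W)
        tokens = self.embed(x)                              # (B, d_model, H/patch, W/patch)
        b, d, h, w = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2)          # (B, h*w, d_model)
        decoded = self.transformer(tokens, tokens)          # same tokens as encoder and decoder input
        logits = self.classify(decoded)                     # (B, h*w, num_classes)
        return logits.transpose(1, 2).reshape(b, -1, h, w)  # coarse per-patch class map
```

In practice the decoder input and the way per-pixel predictions are recovered from the patch tokens would follow the specific design of the second extraction model.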
Specifically, the step S400 includes:
inputting the second remote sensing image into the second extraction model, wherein the second extraction model performs classification prediction for each pixel in the second remote sensing image based on the optical image features, texture features and the spatial relationship between crops, so as to distinguish the target crop from the difficult-to-separate crops; based on the per-pixel classification results, the pixels classified as the target crop are extracted as pattern spots, which serve as the final extraction result of the target crop.
Optionally, before performing step S200, the method further includes:
step S100, training an original first extraction model based on the first sample set, and training an original second extraction model based on the second sample set, so as to obtain a trained first extraction model and second extraction model.
Wherein, the positive sample in the first sample set is the target crop pattern spot, and the negative sample is other ground pattern spots.
The positive samples in the second sample set are target crop pattern spots, the negative samples are difficult-to-separate crop pattern spots, and ground objects other than the target crop and the difficult-to-separate crops are used as the background; the remote sensing images in the second sample set are remote sensing images of the sample area; the sample area has the same planting structure as the working area; the planting structure comprises the crop types and the proportion of each crop type.
That the sample area and the working area have the same planting structure means that the planting structure of the sample area meets preset conditions; illustratively, the preset conditions include: 1. the difference between the ratio of the target crop number to the total crop number in the sample area and the corresponding ratio in the working area is less than a preset first ratio threshold; 2. the ratio of the number of other crop types in the sample area that also occur in the working area to the total number of other crop types in the sample area is greater than a preset second ratio threshold; the other crops are crops that are not the target crop.
In one embodiment, the preset first ratio threshold is 20% and the preset second ratio threshold is 80%.
The target crop number and the total crop number are respectively determined from the planting areas of the corresponding crops, and the planting areas are obtained from the statistical yearbooks, annual bulletins or other historical data of the sample area or the working area.
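The two preset conditions can be written as a small check of the following shape; representing crop quantities as planting areas keyed by crop name and the 20% and 80% defaults (taken from the embodiment above) are illustrative assumptions:

```python
def planting_structure_matches(sample_areas, work_areas,
                               first_ratio_threshold=0.20, second_ratio_threshold=0.80):
    """sample_areas / work_areas: dicts mapping crop name to planting area (e.g. from yearbooks),
    with the key "target" marking the target crop; every other key is another crop type."""
    sample_total = sum(sample_areas.values())
    work_total = sum(work_areas.values())

    # Condition 1: the target crop share differs by less than the first ratio threshold.
    cond1 = abs(sample_areas["target"] / sample_total
                - work_areas["target"] / work_total) < first_ratio_threshold

    # Condition 2: the share of other crop types in the sample area that also occur in the
    # working area is greater than the second ratio threshold.
    sample_others = {crop for crop in sample_areas if crop != "target"}
    work_others = {crop for crop in work_areas if crop != "target"}
    cond2 = len(sample_others & work_others) / max(len(sample_others), 1) > second_ratio_threshold

    return cond1 and cond2
```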
Specifically, an original first extraction model is trained based on the first sample set to obtain the trained first extraction model; texture features are extracted from the optical images of the second samples and band-synthesized with the corresponding optical image features to obtain synthesized sample data, and an original second extraction model is trained based on the synthesized sample data to obtain the trained second extraction model.
According to the grain crop extraction method, only the second remote sensing image corresponding to the first extraction result is input into the second extraction model, which avoids other invalid information being input and interfering with the reasoning process of the second extraction model, and improves the reasoning speed of the second extraction model. Moreover, because the first extraction model does not distinguish the target crop from the difficult-to-separate crops, that is, it tends to identify both as the target crop, the recall of target crop extraction is ensured; the second extraction model then distinguishes the target crop from the difficult-to-separate crops within the high-recall target crop extraction area of the first extraction result, based on the optical image features, texture features and the spatial relationship between crops, so the precision of the target crop extraction result is improved. In addition, since the first extraction model and the second extraction model are two independent models, when the grain crop extraction method is used to extract the same target crop in other scenes with completely different planting structures, only the second extraction model needs to be optimized and the first extraction model does not need to be retrained, so the method has stronger generalization ability and is suitable for large-scale popularization and use.
Referring to fig. 2, a schematic flow chart of an embodiment of the method for extracting grain crops in a crushed cultivation area according to the second aspect of the present invention is shown;
as shown in fig. 2, in this embodiment, the grain crop extraction method of the crushed cultivation area is different from the method shown in fig. 1 in that the method further includes, before executing step S100:
s500, constructing a first sample set;
specifically, the construction method of the first sample set includes:
step S510, determining a target observation window period of the target crop, and acquiring a first sample remote sensing image of the target crop, wherein the acquisition time is positioned in the target observation window period;
specifically, determining a climatic period of the target crop; obtaining the NDVI change trend of the target crop in the climatic period; and determining a target observation window period of the target crop based on the NDVI change trend.
The NDVI change trend refers to the trend of the NDVI features exhibited by the target crop during its phenological period, and comprises an NDVI start value, maximum value and end value. The NDVI start value is the NDVI value of the target crop observed earliest in the phenological period of the target crop; the NDVI maximum value is the maximum NDVI value of the target crop obtained in the phenological period; and the NDVI end value is the NDVI value of the target crop observed latest in the phenological period. Correspondingly, the earliest time at which an NDVI value of the target crop is observed is the earliest observation time point, the acquisition time of the NDVI maximum value is the target observation time point, and the latest time at which an NDVI value of the target crop is observed is the latest observation time point;
the target observation window period refers to the period from the earliest observation time point to the latest observation time point;
the first sample remote sensing image is an optical remote sensing image with a resolution of 2-2.5 meters.
Step S520, after extracting the target crop pattern spots and other ground pattern spots from the first sample remote sensing image, constructing a first sample set based on each target crop pattern spot and other ground pattern spots.
Specifically, an object-oriented segmentation method is adopted to segment the first sample remote sensing image into pattern spots; the NDVI time-series distribution corresponding to each pattern spot is extracted; by feature matching, the pattern spots whose NDVI time-series distribution conforms to the NDVI change trend of the target crop are taken as candidate target crop pattern spots; contrast screening is then applied to these candidates to obtain the final target crop pattern spots, and the pattern spots of other non-target-crop ground objects are taken as other ground object pattern spots; and the first sample set is constructed based on the target crop pattern spots and the other ground object pattern spots.
Optionally, the feature matching method includes, but is not limited to: decision trees, random forests, and/or support vector machines.
Optionally, the object-oriented segmentation method includes, but is not limited to: multi-scale segmentation algorithms.
Optionally, the means of contrast screening includes, but is not limited to: rapidly identifying non-crop pattern spots in pattern spots conforming to the NDVI change trend of the target crop by taking remote sensing images with resolution of better than 0.8 m, images acquired by cameras and/or field investigation results as references, and screening out the non-crop pattern spots; the updating period of the remote sensing image with the resolution of better than 0.8 meter is six months.
In one embodiment, whether an NDVI time-series distribution conforms to the NDVI change trend of the target crop is determined as follows:
judging whether the differences between the acquisition times of the start value, maximum value and end value of the NDVI in the NDVI time-series distribution curve and, respectively, the earliest observation time point, the target observation time point and the latest observation time point in the NDVI change trend of the target crop are all smaller than a first time threshold; if so, the NDVI time-series distribution is judged to conform to the NDVI change trend of the target crop. Illustratively, the first time threshold is 15 days.
The acquisition time of the NDVI start value is the time at which the NDVI of the target crop is acquired earliest in the NDVI time-series distribution curve; the acquisition time of the NDVI maximum value is the acquisition time of the maximum NDVI value in the curve; and the acquisition time of the NDVI end value is the time at which the NDVI of the target crop is acquired latest in the curve.
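As a hedged sketch of this matching rule, the example below compares the acquisition dates of the start, maximum and end NDVI values of one pattern spot with the reference observation time points of the target crop, using the 15-day first time threshold mentioned above; the date handling and the way the spot series is summarized are assumptions:

```python
from datetime import date

def matches_ndvi_change_trend(spot_series, earliest_obs: date, target_obs: date,
                              latest_obs: date, first_time_threshold_days: int = 15):
    """spot_series: list of (acquisition_date, ndvi_value) for one pattern spot, sorted by date.
    earliest_obs / target_obs / latest_obs: reference time points of the NDVI change trend."""
    start_date = spot_series[0][0]                              # acquisition time of the start value
    end_date = spot_series[-1][0]                               # acquisition time of the end value
    peak_date = max(spot_series, key=lambda item: item[1])[0]   # acquisition time of the maximum value

    return (abs((start_date - earliest_obs).days) < first_time_threshold_days
            and abs((peak_date - target_obs).days) < first_time_threshold_days
            and abs((end_date - latest_obs).days) < first_time_threshold_days)

# Example with purely illustrative dates:
# matches_ndvi_change_trend(series, date(2023, 3, 1), date(2023, 4, 20), date(2023, 5, 30))
```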
When the first sample remote sensing images corresponding to the same area are multi-temporal, in order to ensure that the pattern spot division is identical in every period's first sample remote sensing image, and thereby ensure the accuracy of the NDVI time-series distribution corresponding to each pattern spot and the uniformity of sample labeling, in a specific embodiment the pattern spots in the first sample remote sensing images are obtained by segmenting the first sample remote sensing image of each period with a preset pattern spot segmentation template.
The pattern spot segmentation template is vector data consisting of the vector boundaries corresponding to each pattern spot in the remote sensing image.
Preferably, the pattern spot segmentation template is obtained as follows:
step S521, acquiring a target first sample remote sensing image based on a target observation time point;
and step S522, segmenting the target first sample remote sensing image by an object-oriented segmentation method, and acquiring the pattern spot segmentation template based on the vector boundaries of the segmented pattern spots.
According to the grain crop extraction method, the first sample remote sensing image of each period is segmented with the same segmentation template, and the target crop pattern spots are obtained from the segmented pattern spots by feature matching to construct the first samples; in this way a large number of samples can be obtained quickly in a short time, and the accuracy of the NDVI time-series distribution corresponding to each pattern spot and the uniformity of sample labeling are ensured. Moreover, screening out non-crop pattern spots is faster than manually drawing and selecting target pattern spots, which improves the acquisition efficiency of the first samples and thus allows the training of the first extraction model to be carried out quickly.
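For illustration, applying such a vector pattern spot segmentation template to each period's first sample remote sensing image could look like the sketch below, based on geopandas and rasterio; the file paths, the per-spot clipping strategy and the returned structure are assumptions:

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

def split_image_by_template(image_path: str, template_path: str):
    """Clip one period's first sample remote sensing image with the pattern spot segmentation
    template, so that every period shares identical pattern spot boundaries."""
    template = gpd.read_file(template_path)                # vector boundaries of each pattern spot
    spots = []
    with rasterio.open(image_path) as src:
        template = template.to_crs(src.crs)
        for spot_id, geom in zip(template.index, template.geometry):
            data, transform = mask(src, [geom], crop=True, filled=True)
            spots.append((spot_id, data, transform))       # per-spot pixel block, e.g. for NDVI statistics
    return spots
```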
Referring to fig. 3, a schematic flow chart of an embodiment of the grain crop extraction method for the crushed cultivation area according to the third aspect of the present invention is shown;
as shown in fig. 3, in this embodiment, the grain crop extraction method of the crushed cultivation area is different from the method shown in fig. 1 in that the method further includes, before executing step S100:
step S500', constructing a second sample set;
specifically, the construction method of the second sample set includes:
step S510', obtaining a first sample result of the target crop in the sample area by using the first extraction model; based on the first sample result, adopting a comparison screening mode to misjudge the first sample result as the pattern spot of the target crop, and setting the pattern spot as an initial misjudgment pattern spot;
the first sample result comprises the target crop pattern spots and other ground pattern spots extracted based on the remote sensing image of the sample area;
the modes of contrast screening include, but are not limited to: and checking a first sample result of the target crop by taking a remote sensing image with resolution better than 0.8 m, an image acquired by a camera and/or an field investigation result as references, misjudging the first sample result as a pattern spot of the target crop, marking the pattern spot as an initial misjudging pattern spot, and acquiring an initial misjudging pattern spot set.
Step S520', judging whether the distance between each initial misjudgment pattern spot and the target crop pattern spots is smaller than a preset first distance threshold; taking the misjudgment pattern spots whose distance from the target crop pattern spots is smaller than the preset first distance threshold as final misjudgment pattern spots; taking the target crop pattern spots remaining after the final misjudgment pattern spots are screened out as accurately judged target crop pattern spots; and constructing a second sample set based on the final misjudgment pattern spots and the accurately judged target crop pattern spots;
the positive samples in the second sample set are the accurately judged target crop pattern spots, the negative samples are the final misjudgment pattern spots, namely the difficult-to-separate crop pattern spots, and the other ground objects are the background; the background is the ground objects other than the target crop and the difficult-to-separate crops, and it does not participate in training during the training of the second extraction model.
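The distance screening of step S520' could be sketched with geopandas as follows; the GeoDataFrame inputs and the 100-metre default used for the preset first distance threshold are illustrative assumptions:

```python
import geopandas as gpd

def screen_misjudged_spots(initial_misjudged: gpd.GeoDataFrame,
                           target_spots: gpd.GeoDataFrame,
                           first_distance_threshold: float = 100.0) -> gpd.GeoDataFrame:
    """Keep as final misjudgment pattern spots only those initial misjudgment spots whose distance
    to the nearest target crop pattern spot is smaller than the preset first distance threshold."""
    target_geometry = target_spots.geometry.unary_union
    distances = initial_misjudged.geometry.distance(target_geometry)
    return initial_misjudged[distances < first_distance_threshold]
```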
According to the grain crop extraction method, the influence of the planting structure on grain crop extraction by the deep learning model is taken into account (according to the third law of geography, the law of geographic similarity: the more similar the geographic environments, the more similar the characteristics of the geographic targets), so only the erroneous results of the first extraction model, namely the difficult-to-separate crops, are used to construct the second sample set. Training the second extraction model on the second sample set constructed in this way, first, prevents the second extraction model from being difficult to fit as a result of training on a massive number of samples; second, it increases the contribution of the scene features in the samples during the training of the second extraction model.
Referring to fig. 4, a schematic flow chart of an embodiment of the method for extracting grain crop in the crushed cultivation area according to the fourth aspect of the invention is shown.
As shown in fig. 4, in this embodiment, the grain crop extraction method of the crushed cultivated area is different from the method shown in fig. 1 in that, when executing step S100, the method further includes:
step S100', constructing an initial first sample set based on the first sample remote sensing image set; pre-training the original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; and training the pre-trained first extraction model by using a new first sample remote sensing image set in an iterative optimization mode to obtain the trained first extraction model.
Wherein the new first sample remote sensing image set comprises each new first sample remote sensing image; the new first sample remote sensing image is an image different from the first sample remote sensing image, including different acquisition time and/or different acquisition area; the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset first precision threshold.
Specifically, a single round of training the pre-trained first extraction model in the iterative optimization mode includes the following sub-steps:
step S110', obtaining the current first extraction model;
wherein the current first extraction model is the latest first extraction model which can be obtained before the current training is executed.
Step S120', extracting the target crop from the new first sample remote sensing image by using the current first extraction model to obtain the target crop pattern spots and other ground pattern spots in the new first sample remote sensing image;
step S130', judging the target crop pattern spots, screening non-crop pattern spots in the target crop pattern spots, and obtaining new target crop pattern spots and other ground pattern spots; constructing a new first sample based on the new target crop pattern spots and other ground pattern spots;
the construction method of the newly added first sample is the same as that of the first sample, and will not be described herein.
Step S140', judging whether the number of samples of the newly added first sample reaches a preset first number threshold, if yes, updating the first sample set based on the newly added first sample, and performing optimization training on the first extraction model based on the updated first sample set to obtain a first extraction model after current training; if not, updating the new first sample remote sensing image, and re-executing steps S110'-S140' based on the updated new first sample remote sensing image;
Step S150', obtaining the current model precision of the first extraction model after optimization training, judging whether the current model precision reaches a preset second precision threshold, if so, exiting the iterative optimization process of the first extraction model; if not, repeating steps S110' to S150'.
It should be noted that constructing a complete set of training samples from scratch takes a long time; moreover, because the pre-trained model has already learned some of the features, it converges more easily during training and trains faster than an untrained model.
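In outline, steps S110' to S150' form a loop of the shape sketched below; the callables passed in stand for the operations described above and are placeholders rather than a real API, and the loop is only a simplified reading of the iterative optimization mode:

```python
def iterative_optimization(model, sample_set, new_images,
                           extract, screen_non_crop, train, evaluate,
                           first_number_threshold, second_precision_threshold):
    """Sketch of steps S110'-S150'. extract / screen_non_crop / train / evaluate are placeholder
    callables for the operations described in the text; they are not a real API."""
    newly_added = []
    for image in new_images:                                # S110'-S120': extract with the current model
        candidate_spots = extract(model, image)
        newly_added += screen_non_crop(candidate_spots)     # S130': build newly added first samples
        if len(newly_added) < first_number_threshold:       # S140': not enough new samples yet
            continue
        sample_set = sample_set + newly_added               # update the first sample set
        model = train(model, sample_set)                    # optimization training
        newly_added = []
        if evaluate(model) >= second_precision_threshold:   # S150': exit once precision is reached
            break
    return model
```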
According to the grain crop extraction method provided by this embodiment, considering that the features of the same target crop differ little between different areas, newly added first samples are obtained quickly in a cyclic iteration mode, and the first extraction model is then iteratively optimized based on the updated first sample set; in this way the model only needs preliminary training at the start, and during the iterative optimization its precision is improved quickly based on the new training samples and the pre-trained model.
Referring to fig. 5, a schematic flow chart of an embodiment of a method for extracting grain crops in a crushed cultivation area according to the present invention is shown;
As shown in fig. 5, in this embodiment, when executing step S100, the method further includes:
step S100'', constructing an initial first sample set based on the first sample remote sensing image set; constructing an initial second sample set based on the second sample remote sensing image set; pre-training an original first extraction model based on an initial first sample set to obtain a pre-trained first extraction model; pre-training the original second extraction model based on the initial second sample set to obtain a pre-trained second extraction model; training the pre-trained first extraction model and second extraction model by using a third sample remote sensing image set in a dual-model iterative optimization mode so as to correspondingly obtain the trained first extraction model and second extraction model;
the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset third precision threshold; the second extraction model after the pre-training is the second extraction model with the model precision reaching a preset fourth precision threshold; the third sample remote sensing image set comprises each third sample remote sensing image; the third sample remote sensing image is an image which is in the sample area and is different from the first sample remote sensing image and the second sample remote sensing image, and comprises different acquisition time and/or different acquisition areas.
Specifically, the training the pre-trained first extraction model and the pre-trained second extraction model by adopting a dual-model iterative optimization method includes, when performing single dual-model optimization:
step S110'', acquiring the current first extraction model and second extraction model; performing first extraction of the target crop on a third sample remote sensing image by using the current first extraction model so as to obtain a first sample result of the target crop; acquiring a third remote sensing image corresponding to the first sample result; performing second extraction on the third remote sensing image by using the current second extraction model to obtain a second sample result;
the current first extraction model and the second extraction model are respectively the latest first extraction model and the latest second extraction model which can be obtained before the current training is executed;
the third remote sensing image is a remote sensing image formed by fusing optical image features and texture features in the third sample remote sensing image;
the first sample result comprises the target crop pattern spots extracted based on the third sample remote sensing image; the second sample result comprises the target crop pattern spots and the difficult-to-separate crop pattern spots extracted based on the third remote sensing image.
Step S120'', detecting whether the second sample result is accurate; setting the accurate second sample results as newly added first samples; for the inaccurate second sample results, judging whether the ratio of the number of correctly extracted pattern spots to the number of incorrectly extracted pattern spots in the second sample result reaches a preset third ratio threshold, and if so, constructing newly added second samples in the construction mode of the second sample;
the specific implementation manner of constructing the newly added second sample by adopting the construction manner of the second sample is the same as the construction manner of the second sample in the embodiment shown in fig. 3, and will not be described herein.
Step S130'', detecting whether the number of samples of the newly added first sample is greater than a preset second number threshold and whether the number of samples of the newly added second sample reaches a preset third number threshold; if yes, the current dual-model optimization is performed, and the model precision of the first extraction model and of the second extraction model after the current optimization is obtained; if not, updating the third sample remote sensing image, and re-executing steps S110'' to S130'' based on the new third sample remote sensing image;
Wherein the implementation of the dual-model optimization comprises:
updating the first sample set based on the newly added first sample, and performing optimization training on the first extraction model based on the updated first sample set; updating the second sample set based on the newly added second sample, and performing optimization training on the second extraction model based on the updated second sample set;
Step S140'', detecting whether the model precision of the first extraction model after the current optimization reaches a preset fifth precision threshold and whether the model precision of the second extraction model after the current optimization reaches a preset sixth precision threshold; if yes, exiting the dual-model optimization process; if not, repeating steps S110'' to S140''.
According to the grain crop extraction method, because the precision of the extraction results obtained with the second extraction model is high, new samples can be obtained quickly from the second sample results, so a large number of training samples can be acquired rapidly; and by training the first extraction model and the second extraction model in the iterative optimization mode based on the newly added first samples and second samples, the precision of the first extraction model and of the second extraction model can be improved quickly.
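In outline, steps S110'' to S140'' form a dual-model loop of the shape sketched below; every callable passed in is a placeholder for an operation described above, not a real API, and the sketch is only a simplified reading of the dual-model iterative optimization mode:

```python
def dual_model_optimization(model1, model2, sample_set1, sample_set2, third_images,
                            extract1, fuse, extract2, split_results, build_second_samples,
                            train, evaluate,
                            second_number_threshold, third_number_threshold,
                            fifth_precision_threshold, sixth_precision_threshold):
    """Sketch of steps S110''-S140''. Every callable is a placeholder for an operation described
    in the text; none of them is a real API."""
    new1, new2 = [], []
    for image in third_images:
        first_result = extract1(model1, image)           # S110'': first extraction
        fused = fuse(image, first_result)                # third remote sensing image (optical + texture)
        second_result = extract2(model2, fused)          # second extraction
        correct, wrong = split_results(second_result)    # S120'': check which results are accurate
        new1 += correct                                  # accurate results become newly added first samples
        new2 += build_second_samples(wrong)              # inaccurate results yield newly added second samples
        if len(new1) > second_number_threshold and len(new2) >= third_number_threshold:   # S130''
            sample_set1, sample_set2 = sample_set1 + new1, sample_set2 + new2
            model1, model2 = train(model1, sample_set1), train(model2, sample_set2)
            new1, new2 = [], []
            if (evaluate(model1) >= fifth_precision_threshold                             # S140''
                    and evaluate(model2) >= sixth_precision_threshold):
                break
    return model1, model2
```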
It should be noted that the remote sensing images used in the above embodiments of the present application are all remote sensing images of the target crop within its phenological period.
In order to solve the above-mentioned problems in the prior art, an embodiment of the present application further provides a grain crop extraction system for a crushed cultivation area, as shown in fig. 6, in this embodiment, the grain crop extraction system for a crushed cultivation area provided by the present invention includes:
a first extraction result obtaining module 301, configured to perform, based on the trained first extraction model, first extraction of the target crop on the first remote sensing image of the working area, to obtain a first extraction result of the target crop; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area;
a second remote sensing image obtaining module 302, configured to obtain a second remote sensing image corresponding to the first extraction result; the second remote sensing image is a remote sensing image formed by fusing optical image features and texture features of the first remote sensing image;
and a final extraction result obtaining module 303, configured to perform second extraction of the target crop on the second remote sensing image based on the trained second extraction model, to obtain a final extraction result of the target crop.
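As a concrete illustration of how the three modules above might cooperate, the sketch below fuses a simple local-variance texture band with the optical bands before the second extraction and restricts the fused image to the first extraction result. The texture operator, the band layout, the masking step and the predict() interface of the two models are assumptions made only for this example; the embodiments do not mandate a specific texture measure or deep learning framework.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(band: np.ndarray, size: int = 7) -> np.ndarray:
    """Crude texture proxy: variance inside a size x size moving window."""
    band = band.astype(np.float32)
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band ** 2, size)
    return np.clip(mean_sq - mean ** 2, 0.0, None)

def extract_target_crop(optical: np.ndarray, first_model, second_model) -> np.ndarray:
    """optical: (H, W, B) image; both models expose predict((H, W, C)) -> (H, W) mask (assumed interface)."""
    # Module 301: coarse extraction of candidate target crop pattern spots from optical features only.
    initial_mask = first_model.predict(optical)
    # Module 302: second remote sensing image, i.e. optical bands fused with texture bands.
    texture = np.stack([local_variance(optical[..., b]) for b in range(optical.shape[-1])], axis=-1)
    fused = np.concatenate([optical, texture], axis=-1)
    # Keep only the area indicated by the first extraction result (one plausible reading of "corresponding to").
    fused = fused * initial_mask[..., None]
    # Module 303: refined extraction separating the target crop from refractory (hard-to-distinguish) crops.
    return second_model.predict(fused)
```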
As shown in fig. 7, in this embodiment, the grain crop extraction system for a crushed cultivation area provided by the present invention further includes:
the first sample set construction module 304 is configured to obtain a first sample remote sensing image, extract the target crop pattern spots and other ground object pattern spots from the first sample remote sensing image, and construct a first sample set based on each of the target crop pattern spots and other ground object pattern spots;
in this embodiment, the first sample set building module further includes the following submodules:
the pattern spot segmentation template construction submodule 3041 is configured to segment the target first sample remote sensing image and obtain the pattern spot segmentation template based on the vector boundary of each segmented pattern spot;
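A minimal sketch of what submodule 3041 might do is shown below, assuming the segmented pattern spots are available as vector polygons: their boundaries are burned into a raster label template that can then be applied to every temporal phase of the same area. The use of rasterio and shapely, and the function name, are illustrative choices rather than part of the described design.

```python
import rasterio
from rasterio.features import rasterize
from shapely.geometry import shape

def build_pattern_spot_template(polygons, reference_image_path: str, out_path: str):
    """polygons: iterable of GeoJSON-like geometries, one vector boundary per segmented pattern spot."""
    with rasterio.open(reference_image_path) as src:
        meta = src.meta.copy()
        # Assign each pattern spot a unique positive label; 0 remains background.
        shapes = [(shape(geom), idx + 1) for idx, geom in enumerate(polygons)]
        template = rasterize(shapes, out_shape=(src.height, src.width),
                             transform=src.transform, fill=0, dtype="int32")
    meta.update(count=1, dtype="int32")
    with rasterio.open(out_path, "w", **meta) as dst:
        dst.write(template, 1)
    return template  # the same template can segment every temporal phase of the first sample image
```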
as shown in fig. 8, in this embodiment, the grain crop extraction system for a crushed cultivation area provided by the present invention further includes:
the second sample set construction module 305 is configured to obtain a first sample result of the target crop in the sample area based on the first extraction model, set the pattern spots misjudged as the target crop in the first sample result as initial misjudgment pattern spots, and determine whether the distance between each initial misjudgment pattern spot and the target crop pattern spot exceeds a preset first distance threshold; take the misjudgment pattern spots whose distance from the target crop pattern spot does not exceed the preset first distance threshold as final misjudgment pattern spots; and construct a second sample set based on the final misjudgment pattern spots and the target crop pattern spots.
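The distance screening performed by module 305 can be illustrated with shapely geometries: a pattern spot misjudged as the target crop is retained as a final misjudgment sample only if it lies within the preset first distance threshold of a real target crop pattern spot. The 50-unit default threshold and the polygon representation are assumptions made for this example.

```python
from typing import List
from shapely.geometry import Polygon

def screen_misjudged_spots(misjudged: List[Polygon],
                           target_spots: List[Polygon],
                           first_distance_threshold: float = 50.0) -> List[Polygon]:
    """Keep only misjudgment pattern spots that are close to true target crop pattern spots."""
    final_misjudged = []
    for spot in misjudged:
        nearest = min(spot.distance(t) for t in target_spots)
        if nearest <= first_distance_threshold:   # distance does not exceed the first distance threshold
            final_misjudged.append(spot)
    return final_misjudged

# The second sample set is then built from final_misjudged together with the target crop pattern spots.
```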
As shown in fig. 9, in this embodiment, the grain crop extraction system for a crushed cultivation area provided by the present invention further includes:
the first extraction model iteration optimization module 306 is configured to construct an initial first sample set based on the first sample remote sensing image set; pre-training the original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; training the pre-trained first extraction model by using a new first sample remote sensing image set in an iterative optimization mode to obtain a trained first extraction model;
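Module 306 amounts to a self-training style loop, sketched below: the current first extraction model labels each newly acquired first sample remote sensing image set, the resulting samples are merged into the first sample set, and the model is retrained until the preset second precision threshold is reached. The function name, model interface and the 0.90 threshold are illustrative assumptions.

```python
def iterative_optimize_first_model(model, sample_set, new_image_sets,
                                   second_precision_threshold: float = 0.90):  # assumed value
    """model exposes extract/train/evaluate (assumed interface); new_image_sets yields lists of images."""
    for images in new_image_sets:
        # Extract target crop / other ground object pattern spots with the current model as new samples.
        new_samples = [spot for img in images for spot in model.extract(img)]
        sample_set = sample_set + new_samples        # update the current first sample set
        model.train(sample_set)                      # obtain a new first extraction model
        if model.evaluate() >= second_precision_threshold:
            break                                    # stop once the precision threshold is reached
    return model, sample_set
```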
as shown in fig. 10, in this embodiment, the grain crop extraction system for a crushed cultivation area provided by the present invention further includes:
the dual-model iterative optimization module 307 is configured to construct an initial first sample set based on the first sample remote sensing image set; constructing an initial second sample set based on the second sample remote sensing image set; pre-training an original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; pre-training the original second extraction model based on the initial second sample set to obtain a pre-trained second extraction model; and training the pre-trained first extraction model and second extraction model by using a third sample remote sensing image set in a dual-model iterative optimization mode so as to correspondingly obtain the trained first extraction model and second extraction model.
The first extraction model after pre-training is a first extraction model with model precision reaching a preset third precision threshold; the second extraction model after the pre-training is the second extraction model with the model precision reaching a preset fourth precision threshold.
As shown in fig. 11, in the present embodiment, the present invention provides a grain crop extraction apparatus for a crushed cultivation area. The apparatus 400 includes a memory 401 and a processor 402; the memory 401 is used for storing a computer program, and the processor 402 is configured to execute the computer program stored in the memory 401, so that the apparatus 400 performs the grain crop extraction method for a crushed cultivation area according to any embodiment of the present application. Since the specific implementation of the steps of this method has been described in detail in the above embodiments, it is not repeated here.
The memory 401 includes: a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, a USB flash disk, a memory card, an optical disk, or the like.
The processor 402 is connected to the memory 401 for executing the computer program stored in the memory 401, so that the apparatus 400 performs the grain crop extraction method described above.
Preferably, the processor 402 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field programmable gate array (Field Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Embodiments of the present application also provide a computer-readable storage medium. Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing a processor, where the program may be stored in a computer-readable storage medium, and the storage medium is a non-transitory medium, such as a random access memory, a read-only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape, a floppy disk, an optical disk, or any combination thereof. The storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center containing an integration of one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
Embodiments of the present application may also provide a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on a computing device, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, or data center to another website, computer, or data center in a wired manner (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless manner (e.g., infrared, radio, microwave).
When the computer program product is executed by a computer, the computer performs the method of the foregoing method embodiments. The computer program product may be a software installation package, which may be downloaded and executed on a computer whenever the foregoing method needs to be used.
The description of each process or structure corresponding to the drawings has its own emphasis; for parts of a certain process or structure that are not described in detail, reference may be made to the related descriptions of other processes or structures.
The foregoing embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Modifications and variations may be made to the above embodiments by those of ordinary skill in the art without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations accomplished by persons skilled in the art without departing from the spirit and technical ideas disclosed in the present application shall be covered by the claims of this application.

Claims (9)

1. A method for extracting grain crops in a crushed cultivation area, which is characterized by comprising the following steps:
training an original first extraction model based on the first sample set, and training an original second extraction model based on the second sample set to correspondingly obtain a trained first extraction model and a trained second extraction model;
performing first extraction of target crops on a first remote sensing image of a working area by using the trained first extraction model to obtain a first extraction result of the target crops; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area; the first extraction model is a deep learning model for identifying the target crops and other ground objects based on optical image characteristics;
Acquiring a second remote sensing image corresponding to the first extraction result; the second remote sensing image is a remote sensing image formed by fusing optical image features and texture features of the first remote sensing image;
performing second extraction of the target crop on the second remote sensing image by using the trained second extraction model to obtain a final extraction result of the target crop; the second extraction model is a deep learning model for identifying the target crops and refractory crops based on the optical image characteristics, the texture characteristics and the spatial relations among the crops;
the second sample set comprises target crop pattern spots and refractory crop pattern spots extracted based on a second sample remote sensing image; the second sample remote sensing image is a remote sensing image formed by fusing optical characteristics and texture characteristics, and the planting structure of a sample area corresponding to the second sample remote sensing image is the same as that of the working area; the construction mode of the second sample set comprises the following steps:
acquiring a first sample result of the target crop in the sample area by using the first extraction model; determining, in a comparison screening manner based on the first sample result, the pattern spots that are misjudged as the target crop, and setting them as initial misjudgment pattern spots;
Judging whether the distance between each initial misjudgment pattern spot and the target crop pattern spot exceeds a preset first distance threshold value; taking the misjudgment pattern spot with the distance from the target crop pattern spot not exceeding a preset first distance threshold as a final misjudgment pattern spot; and constructing a second sample set based on the final misjudgment pattern spot and the target crop pattern spot.
2. The method as recited in claim 1, further comprising:
the first sample set comprises the target crop pattern spots and other ground pattern spots extracted based on a first sample remote sensing image; the first sample remote sensing image is an optical remote sensing image.
3. The method of claim 1, wherein the first sample set is constructed in a manner that includes:
determining a target observation window period of the target crop, and acquiring a first sample remote sensing image of the target crop, wherein the acquisition time is positioned in the target observation window period;
extracting the target crop pattern spots and the other ground object pattern spots from the first sample remote sensing image;
a first sample set is constructed based on each of the target crop map spots and the other ground object map spots.
4. The method of claim 3, wherein, when there are multiple temporal phases of the first sample remote sensing image corresponding to the same area, the manner of acquiring the target crop pattern spots and the other ground object pattern spots comprises:
dividing each first sample remote sensing image by using a preset pattern spot segmentation template to obtain each target crop pattern spot and each other ground object pattern spot; the pattern spot segmentation template is vector data formed by the vector boundaries corresponding to each pattern spot in the first sample remote sensing image.
5. The method of claim 1, wherein training the original first extraction model based on the first set of samples comprises:
constructing an initial first sample set based on the first sample remote sensing image set;
pre-training an original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset first precision threshold;
training the pre-trained first extraction model by using a new first sample remote sensing image set in an iterative optimization mode to obtain a trained first extraction model;
the training of the pre-trained first extraction model by using the new first sample remote sensing image set and adopting an iterative optimization mode comprises the following steps:
Acquiring a new first sample remote sensing image;
extracting the target crop from the new first sample remote sensing image by using a current first extraction model to obtain a new first sample;
updating the current first sample set based on the newly added first sample to obtain a new first sample set;
training the current first extraction model based on the new first sample set to obtain a new first extraction model;
and repeatedly executing the iterative optimization mode, and training the first extraction model until the model precision of the first extraction model reaches a preset second precision threshold.
6. The method of claim 1, wherein training the original first extraction model based on the first sample set and training the original second extraction model based on the second sample set comprises:
constructing an initial first sample set based on the first sample remote sensing image set; constructing an initial second sample set based on the second sample remote sensing image set;
pre-training an original first extraction model based on the initial first sample set to obtain a pre-trained first extraction model; pre-training the original second extraction model based on the initial second sample set to obtain a pre-trained second extraction model; the first extraction model after the pre-training is the first extraction model with the model precision reaching a preset third precision threshold; the second extraction model after the pre-training is the second extraction model with the model precision reaching a preset fourth precision threshold;
Training the pre-trained first extraction model and second extraction model by using a third sample remote sensing image set in a dual-model iterative optimization mode so as to correspondingly obtain the trained first extraction model and second extraction model;
the training of the pre-trained first extraction model and the pre-trained second extraction model by using the third sample remote sensing image set and adopting a dual-model iterative optimization mode comprises the following steps:
acquiring a new third sample remote sensing image in the third sample remote sensing image set;
performing, by using the current first extraction model, first extraction of the target crop on the new third sample remote sensing image to obtain a first sample result;
acquiring a third remote sensing image corresponding to the first sample result; the third remote sensing image is a remote sensing image formed by fusing optical image features and texture features in the third sample remote sensing image;
performing, by using the current second extraction model, second extraction of the target crop on the third remote sensing image to obtain a second sample result;
detecting whether the second sample result is accurate; setting the accurate second sample result as a newly added first sample; and constructing a newly added second sample based on the incorrect second sample result to obtain a newly added first sample and a newly added second sample respectively;
Updating the current first sample set based on the newly added first sample to obtain a new first sample set; based on the newly added second sample, updating the current second sample set to obtain a new second sample set;
training the current first extraction model based on the new first sample set to obtain a new first extraction model; training the current second extraction model based on the new second sample set to obtain a new second extraction model;
training a first extraction model and a second extraction model by repeatedly executing the mode of the dual-model iterative optimization until the model precision of the first extraction model reaches a preset fifth precision threshold value and the model precision of the second extraction model reaches a preset sixth precision threshold value;
the third sample remote sensing image is a remote sensing image which is different from the first sample remote sensing image and the second sample remote sensing image in the sample area.
7. A grain crop extraction system for a crushed cultivation area, the system comprising:
the first extraction result acquisition module is used for carrying out first extraction on the target crop on the first remote sensing image of the working area based on the trained first extraction model to obtain a first extraction result of the target crop; the first extraction result is each initial pattern spot of the target crop judged in the first remote sensing image; the first remote sensing image is an optical remote sensing image of the working area; the first extraction model is a deep learning model for identifying the target crops and other ground objects based on optical image characteristics;
The second remote sensing image acquisition module is used for acquiring a second remote sensing image corresponding to the first extraction result; the second remote sensing image is formed by fusing optical image features and texture features;
the final extraction result acquisition module is used for carrying out second extraction on the target crop on the second remote sensing image based on the trained second extraction model to acquire a final extraction result of the target crop; the second extraction model is a deep learning model for identifying the target crops and refractory crops based on the optical image characteristics, the texture characteristics and the spatial relations among the crops;
the obtaining modes of the trained first extraction model and the trained second extraction model comprise:
training an original first extraction model based on the first sample set, and training an original second extraction model based on the second sample set to correspondingly obtain a trained first extraction model and a trained second extraction model; the second sample set comprises target crop pattern spots and refractory crop pattern spots extracted based on a second sample remote sensing image; the second sample remote sensing image is a remote sensing image formed by fusing optical characteristics and texture characteristics, and the planting structure of a sample area corresponding to the second sample remote sensing image is the same as that of the working area; the construction mode of the second sample set comprises the following steps:
acquiring a first sample result of the target crop in the sample area by using the first extraction model; determining, in a comparison screening manner based on the first sample result, the pattern spots that are misjudged as the target crop, and setting them as initial misjudgment pattern spots;
judging whether the distance between each initial misjudgment pattern spot and the target crop pattern spot exceeds a preset first distance threshold value; taking the misjudgment pattern spot with the distance from the target crop pattern spot not exceeding a preset first distance threshold as a final misjudgment pattern spot; and constructing a second sample set based on the final misjudgment pattern spot and the target crop pattern spot.
8. A grain crop extraction apparatus for a crushed cultivation area, the apparatus comprising:
a memory for storing a computer program;
a processor for executing the computer program stored by the memory to cause the apparatus to perform the grain crop extraction method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a grain crop extraction apparatus for a crushed cultivation area, implements the method of any one of claims 1 to 6.
CN202311345594.5A 2023-10-18 2023-10-18 Grain crop extraction method, system, equipment and medium for crushing cultivation area Active CN117095299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311345594.5A CN117095299B (en) 2023-10-18 2023-10-18 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311345594.5A CN117095299B (en) 2023-10-18 2023-10-18 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Publications (2)

Publication Number Publication Date
CN117095299A CN117095299A (en) 2023-11-21
CN117095299B true CN117095299B (en) 2024-01-26

Family

ID=88775415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311345594.5A Active CN117095299B (en) 2023-10-18 2023-10-18 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Country Status (1)

Country Link
CN (1) CN117095299B (en)

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507211A (en) * 2017-07-24 2017-12-22 中国科学院合肥物质科学研究院 Remote sensing image segmentation method based on multi-Agent and MRF
CN107578447A (en) * 2017-09-26 2018-01-12 北京师范大学 A kind of crop ridge location determining method and system based on unmanned plane image
CN110163868A (en) * 2019-04-17 2019-08-23 安阳师范学院 A kind of remote sensing image segmentation method
CN110728658A (en) * 2019-09-16 2020-01-24 武汉大学 High-resolution remote sensing image weak target detection method based on deep learning
WO2020143323A1 (en) * 2019-01-08 2020-07-16 平安科技(深圳)有限公司 Remote sensing image segmentation method and device, and storage medium and server
WO2020232905A1 (en) * 2019-05-20 2020-11-26 平安科技(深圳)有限公司 Superobject information-based remote sensing image target extraction method, device, electronic apparatus, and medium
CN112287871A (en) * 2020-11-12 2021-01-29 广东海洋大学 Near-shore aquaculture area remote sensing image extraction method based on multi-feature and spectrum fusion
CN112906455A (en) * 2020-12-28 2021-06-04 国家海洋信息中心 Coastal zone ecological system remote sensing identification method
CN113033453A (en) * 2021-04-06 2021-06-25 北京艾尔思时代科技有限公司 Method and system suitable for remote sensing identification of crop types in landscape crushing area
CN113255452A (en) * 2021-04-26 2021-08-13 中国自然资源航空物探遥感中心 Extraction method and extraction system of target water body
CN113673358A (en) * 2021-07-28 2021-11-19 青海省地质调查院(青海省地质矿产研究院、青海省地质遥感中心) Plateau salt lake range extraction method and system based on satellite remote sensing image
CN113705523A (en) * 2021-09-06 2021-11-26 青岛星科瑞升信息科技有限公司 Layered city impervious surface extraction method based on optical and dual-polarization SAR fusion
CN113920420A (en) * 2020-07-07 2022-01-11 香港理工大学深圳研究院 Building extraction method and device, terminal equipment and readable storage medium
CN114972191A (en) * 2022-04-25 2022-08-30 航天宏图信息技术股份有限公司 Method and device for detecting farmland change
CN115223054A (en) * 2022-07-15 2022-10-21 国家林业和草原局西南调查规划院 Remote sensing image change detection method based on partition clustering and convolution
WO2023000159A1 (en) * 2021-07-20 2023-01-26 海南长光卫星信息技术有限公司 Semi-supervised classification method, apparatus and device for high-resolution remote sensing image, and medium
CN115690130A (en) * 2022-12-30 2023-02-03 杭州咏柳科技有限公司 Image processing method and device
CN115953612A (en) * 2022-10-14 2023-04-11 航天宏图信息技术股份有限公司 ConvNeXt-based remote sensing image vegetation classification method and device
CN115995005A (en) * 2023-03-22 2023-04-21 航天宏图信息技术股份有限公司 Crop extraction method and device based on single-period high-resolution remote sensing image
CN116246161A (en) * 2022-12-06 2023-06-09 中国科学院空天信息创新研究院 Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN116309136A (en) * 2023-02-27 2023-06-23 武汉大学 Remote sensing image cloud zone reconstruction method based on SAR priori knowledge guidance
CN116363522A (en) * 2023-02-14 2023-06-30 国家海洋信息中心 Coastal zone reclamation sea change remote sensing monitoring method based on deep learning
CN116503733A (en) * 2023-04-25 2023-07-28 北京卫星信息工程研究所 Remote sensing image target detection method, device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dual Attention Based Multi-scale Feature Fusion Network for Indoor RGBD Semantic Segmentation;Zhongwei Hua等;2022 26th International Conference on Pattern Recognition (ICPR);3639-3644 *
A remote sensing image segmentation method based on an adaptive M-S model; 赵明衍 et al.; Journal of Geomatics Science and Technology (测绘科学技术学报); Vol. 36, No. 2; 155-160 *
Research progress on remote sensing extraction of crop planting structure; 胡琼; 吴文斌; 宋茜; 余强毅; 杨鹏; 唐华俊; Scientia Agricultura Sinica (中国农业科学), (10); 1900-1914 *

Also Published As

Publication number Publication date
CN117095299A (en) 2023-11-21

Similar Documents

Publication Publication Date Title
Albattah et al. A novel deep learning method for detection and classification of plant diseases
CN110321963B (en) Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional space spectrum features
US11521380B2 (en) Shadow and cloud masking for remote sensing images in agriculture applications using a multilayer perceptron
Saralioglu et al. Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network
Hong et al. Globenet: Convolutional neural networks for typhoon eye tracking from remote sensing imagery
Bhatt et al. Detection of diseases and pests on images captured in uncontrolled conditions from tea plantations
CN112308152B (en) Hyperspectral image ground object classification method based on spectrum segmentation and homogeneous region detection
CN113095409B (en) Hyperspectral image classification method based on attention mechanism and weight sharing
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
CN110991284B (en) Optical remote sensing image statement description generation method based on scene pre-classification
CN113033453A (en) Method and system suitable for remote sensing identification of crop types in landscape crushing area
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN114324336B (en) Nondestructive measurement method for biomass of soybean in whole growth period
CN116863341B (en) Crop classification and identification method and system based on time sequence satellite remote sensing image
Jia et al. YOLOF-Snake: An efficient segmentation model for green object fruit
CN117095299B (en) Grain crop extraction method, system, equipment and medium for crushing cultivation area
CN109344837B (en) SAR image semantic segmentation method based on deep convolutional network and weak supervised learning
Bajpai et al. Deep learning model for plant-leaf disease detection in precision agriculture
Byun et al. Deep Learning-Based Rainfall Prediction Using Cloud Image Analysis
Liu et al. Segmentation of wheat farmland with improved U-Net on drone images
Correa Martins et al. Identifying plant species in kettle holes using UAV images and deep learning techniques
Zhao et al. Improving object-oriented land use/cover classification from high resolution imagery by spectral similarity-based post-classification
Chaudhari et al. Drought classification and prediction with satellite image-based indices using variants of deep learning models
CN114610938A (en) Remote sensing image retrieval method and device, electronic equipment and computer readable medium
CN112949726A (en) ISCP cloud classification method, system, medium and terminal based on FY-4A satellite

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant