CN111597861A - System and method for automatically interpreting ground object of remote sensing image - Google Patents
- Publication number
- CN111597861A (application CN201910128490.6A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- ground
- data
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention provides a system and method for automatic interpretation of ground features in remote sensing images. The system comprises: an automatic sample acquisition service, which randomly acquires sample label data based on the ground feature type attributes of map data, extracts the image tile corresponding to each sample label from the remote sensing image according to the requirements of the deep learning platform, and generates sample data from the image tiles; a model training service, which receives the sample data and performs model training on the deep learning platform to generate an optimal prediction model; a ground feature classification service, which obtains ground feature classification results from the remote sensing image using the optimal prediction model; and a post-classification processing service, which performs subsequent processing on the classification results. The invention adopts a containerized service architecture, which reduces system coupling, simplifies upgrading, migration and deployment, and makes the engineering application of automatic ground feature interpretation of remote sensing images easy to realize.
Description
Technical Field
The invention relates to the technical field of remote sensing, in particular to a remote sensing image ground object automatic interpretation system and method based on deep learning.
Background
Ground feature information can be obtained from remote sensing images by two means: manual interpretation and automatic computer classification. Manual interpretation is slow but highly accurate and is the primary means in engineering applications, while automatic computer classification is fast but less accurate and serves as an auxiliary means. With the continuous development of Earth observation technology, the volume of remote sensing data is growing rapidly, and automatic classification of ground features from remote sensing images is gradually moving from theoretical research into engineering application.
With theoretical breakthroughs in deep learning, artificial intelligence has achieved remarkable results, and ground feature classification of remote sensing images using deep learning has become a research hotspot. Compared with traditional automatic classification techniques, deep learning has the following characteristics: first, classification accuracy is high, reaching up to 95% for a single ground feature class; second, jagged artifacts along ground feature boundaries in the classification result are less pronounced, and local results are closer to the actual distribution of ground features; third, deep learning performs feature learning automatically, without manually screening feature bands as a classification data source.
However, in engineering applications of automatic ground feature classification of remote sensing images, deep learning still faces the following problems: first, model training requires a large number of samples, and sample collection is time-consuming and labor-intensive; second, model training requires high-performance GPUs, depends heavily on the software environment, and is difficult to deploy and migrate; third, the model trainer must have substantial deep learning knowledge and be able to adjust training parameters promptly according to system feedback; fourth, the processing pipeline from model prediction to platform application is complex and demands a high level of professional skill.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art or the related art.
To this end, according to a first aspect of the present invention, there is provided an automatic interpretation system for a ground feature in a remote sensing image, comprising:
the automatic sample acquisition service is used for randomly acquiring sample label data in combination with the ground feature type attributes of the map data, acquiring image tiles corresponding to sample labels from the remote sensing images according to the requirements of the deep learning platform, and generating sample data based on the image tiles;
the model training service is used for receiving the sample data, performing model training by using a deep learning platform and generating an optimal prediction model;
the ground object classification service is used for acquiring a ground object classification result from the remote sensing image according to the optimal prediction model;
and the post-classification processing service is used for carrying out subsequent processing on the ground feature classification result.
Further, the automatic sample collection service collects sample label data based on an open source map and map tile technology, and collects image tiles corresponding to sample labels from remote sensing images.
Further, the sample data format required by the deep learning platform comprises true color images, ground feature class description files, remote sensing image tiles and training verification statistical files.
Further, a fully convolutional neural network (FCN) algorithm is integrated into the deep learning platform in the model training service.
Further, the subsequent processing comprises the vectorization conversion of the ground feature classification result and the integration and display with the GIS platform.
According to a second aspect of the invention, an automatic interpretation method for the ground features of the remote sensing images is provided, which is characterized by being executed by the system according to the first aspect.
Further, the sample automated acquisition step comprises:
clipping a vector map tile of an interested area from an open source map;
selecting M map vector tiles in the region of interest according to a randomness algorithm, and generating a tile description file;
generating a true color image and a gray image corresponding to each tile based on the ground feature attribute information of the map vector tiles;
extracting an image tile corresponding to each vector tile from the remote sensing image by referring to the tile description file;
deleting, among same-named pairs of remote sensing image tiles and true color images, those whose ground features are inconsistent;
distributing all the sample data sets to training, verifying and testing stages according to a preset proportion, and storing the sample data sets in a training and verifying statistical file.
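As an illustrative sketch only (not taken from the patent), the label-generation step above can be thought of as mapping each ground feature class to both a true-color display value and a gray class id; the palette and class names below are hypothetical:

```python
# Hypothetical sketch of generating paired sample labels: a true-color
# image for visual inspection and a gray (class-id) image as network input.
# The palette and class names are assumptions, not taken from the patent.

PALETTE = {
    "water":      ((0, 0, 255), 1),   # (RGB for true-color label, gray class id)
    "vegetation": ((0, 255, 0), 2),
    "building":   ((255, 0, 0), 3),
    "background": ((0, 0, 0), 0),
}

def render_labels(class_grid):
    """Turn a 2-D grid of class names into (true_color, gray) label rasters."""
    true_color = [[PALETTE[c][0] for c in row] for row in class_grid]
    gray = [[PALETTE[c][1] for c in row] for row in class_grid]
    return true_color, gray

grid = [["water", "building"], ["vegetation", "background"]]
rgb, gray = render_labels(grid)
```

The true-color raster lets an operator verify ground feature distribution at a glance, while the gray raster carries the per-pixel class ids the network actually trains on.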
Further, the model training step includes:
receiving the selection of a pre-training model and the input of sample data, iteration count and step size parameters, and executing model training on the deep learning platform, wherein the deep learning platform integrates a fully convolutional neural network (FCN) algorithm;
and continuously adjusting parameters according to a feedback result in the training process, finishing the training when the classification precision exceeds a threshold value, and generating an optimal prediction model.
Further, the land feature classification step includes:
tiling a remote sensing image to generate remote sensing image tiles;
deriving and acquiring a plurality of ground object classification subdata by utilizing the optimal prediction model based on the remote sensing image tile;
and splicing the plurality of ground feature classification subdata to generate a complete picture as a ground feature classification result.
Further, the post-classification processing step includes:
vectorizing the feature classification result to generate vectorized feature classification data;
removing fragmented polygons in the vectorized ground object classification data to generate merged vector data;
and superposing the merged vector data with the latest remote sensing image map, and modifying the surface feature data which is not consistent with the latest remote sensing image by using a vector editing tool.
And publishing the modified vector data into a standard tile, and publishing the standard tile to a GIS platform for visual display.
The invention adopts a containerized service architecture, which reduces system coupling and simplifies upgrading, migration and deployment. Combining an open source map with tile technology automates sample collection and provides a data basis for large-scale deep learning training; containerized packaging of services solves the difficulty of software migration and deployment; and encapsulating ground feature classification, post-classification processing and other steps as services streamlines the pipeline from classification prediction to platform application, making the engineering application of automatic ground feature interpretation of remote sensing images easy to realize.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a block diagram of an automatic interpretation system for remote sensing image ground features according to the invention;
FIG. 2 is a flow chart of a method for automatically interpreting a ground object in a remote sensing image according to the present invention;
fig. 3 is a schematic diagram of an automatic interpretation process of a remote sensing image ground object according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Referring to fig. 1, there is shown an automatic interpretation system for remote sensing image ground features according to the invention. The invention adopts a service-oriented technical architecture to carry out containerization packaging on the service, and generates a sample automatic acquisition service 11, a model training service 12, a surface feature classification service 13 and a classification post-processing service 14.
The sample automatic acquisition service 11 randomly acquires sample label data by combining with the ground feature type attribute of the map data, acquires an image tile corresponding to a sample label from a remote sensing image according to the requirement of a deep learning platform, and generates sample data based on the image tile;
further, the sample automatic acquisition service 11 generates training sample data based on the open source map and map tile technology, and the sample data content includes true color images, surface feature type description files, remote sensing image tiles, and training verification statistical files.
The model training service 12 is used for receiving sample data, performing model training by using a deep learning platform and generating an optimal prediction model;
further, the full convolution neural network fcnn algorithm is integrated with a deep learning platform, such as the Caffe platform, in the model training service 12.
The ground object classification service 13 is used for obtaining a ground object classification result from the remote sensing image according to the optimal prediction model;
the post-classification processing service 14 is used for performing subsequent processing on the surface feature classification result.
Further, the subsequent processing comprises the vectorization conversion of the ground feature classification result and the integration and display with the GIS platform.
Referring to fig. 2, there is shown an automatic interpretation method of remote sensing image ground features according to the invention, which is executed by the system described with reference to fig. 1, and the method includes:
s21, automatic sample collection: randomly selecting a plurality of areas on a map, and generating sample data according to a sample data format required by a deep learning platform;
specifically, the step S21 includes:
clipping a vector map tile of an interested area from an open source map;
selecting M map vector tiles in the region of interest according to a random selection algorithm, and generating a tile description file; M may be, for example, 50000.
Generating a true color image and a gray image corresponding to each tile based on the ground feature attribute information of the map vector tiles;
extracting an image tile corresponding to each vector tile from the remote sensing image by referring to the tile description file;
deleting, among same-named pairs of remote sensing image tiles and true color images, those whose ground features are inconsistent;
distributing all the sample data sets to training, verifying and testing stages according to a preset proportion, and storing the sample data sets in a training and verifying statistical file.
S22, model training: receiving sample data, and performing model training by using a deep learning platform to generate an optimal prediction model;
specifically, the step S22 includes:
receiving the selection of a pre-training model and the input of sample data, iteration count and step size parameters, and executing model training on the deep learning platform, wherein the deep learning platform integrates a fully convolutional neural network (FCN) algorithm;
and continuously adjusting parameters according to a feedback result in the training process, finishing the training when the classification precision exceeds a threshold value, and generating an optimal prediction model.
S23, land feature classification step: obtaining a ground feature classification result from the remote sensing image according to the optimal prediction model;
specifically, the step S23 includes:
tiling a remote sensing image to generate remote sensing image tiles;
deriving and acquiring a plurality of ground object classification subdata by utilizing the optimal prediction model based on the remote sensing image tile;
and splicing the plurality of ground feature classification subdata to generate a complete picture as a ground feature classification result.
S24, classification post-processing step: and carrying out subsequent processing on the ground feature classification result.
Specifically, the step S24 includes:
vectorizing the feature classification result to generate vectorized feature classification data;
removing fragmented polygons in the vectorized ground object classification data to generate merged vector data;
and superimposing the merged vector data on the latest remote sensing image map, modifying the ground feature data inconsistent with the latest image using a vector editing tool, and then displaying the result.
Referring to fig. 3, a remote sensing image ground feature automatic interpretation process according to one embodiment of the invention is shown. The deep-learning-based automatic interpretation system adopts a containerized service architecture, which reduces system coupling and simplifies upgrading, migration and deployment. The processing pipeline comprises four parts, automatic sample collection, model training, ground feature classification and post-classification processing, each packaged as a containerized service.
The automatic sample collection is packaged as the bridge-data service. Based on open source map and map tile technology, a number of areas are randomly selected on the map; each area is a standard map tile of 512 by 512 pixels, and the map tiles are stored as rasterized images and description files. Model training integrates the fully convolutional neural network (FCN) algorithm on the Caffe deep learning platform and is packaged as the bridge-train service: the user inputs sample data, fills in training parameters to perform model training, and continuously adjusts the parameters according to feedback to generate an optimal prediction model. Ground feature classification is packaged as the mail-run service: the user inputs the original remote sensing image and the optimal prediction model, and the service tiles the original image data and classifies it to obtain the ground feature classification result. Post-classification processing is packaged as the bridge-mapping service, which is mainly responsible for vectorization of the ground feature classification result and its integration and display with the GIS platform.
The method for realizing automatic sample collection by the bridge-data service comprises the following steps:
downloading global vector data of an open source map, and inputting an area range as an interested area for sample collection;
setting the acquisition rules of the map tiles and determining the ground feature types to be acquired, wherein the tile size is 512 × 512 pixels or 256 × 256 pixels and each acquired tile should, as far as possible, contain one to three ground feature classes; randomly selecting 50000 map vector tiles in the region of interest, automatically completing acquisition according to the user configuration, and generating a tile description file describing the tile information of the acquired samples;
generating a true color image and a gray image corresponding to each tile based on the ground feature attribute information of the map vector tiles, which serve as labels of the training samples: the ground feature distribution can be clearly seen in the true color image, while the sample input of the deep convolutional neural network must be a gray image;
extracting an image tile corresponding to each vector tile from an original remote sensing image or a remote sensing image obtained by tiling the remote sensing image by referring to a tile description file;
receiving operation instructions and, for true color images and remote sensing image tiles with the same file name, deleting the files whose ground features are inconsistent, so as to prevent wrongly labeled training samples from affecting model training; the user compares the true color images and remote sensing image tiles one by one and issues the operation instructions manually;
all training sample data are distributed to the training, validation and testing stages in the ratio 2:2:1, and the assignment is stored in a description file.
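The random selection and 2:2:1 distribution described above can be sketched as follows; the function name and the fixed seed are illustrative assumptions:

```python
import random

def split_samples(tile_ids, ratios=(2, 2, 1), seed=42):
    """Randomly assign sample tiles to train/validation/test stages in the
    given ratio (2:2:1 in the patent's embodiment) and return the split."""
    rng = random.Random(seed)
    ids = list(tile_ids)
    rng.shuffle(ids)
    total = sum(ratios)
    n = len(ids)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

# 50000 tiles, matching the embodiment's sample count
split = split_samples(range(50000))
```

The resulting assignment dictionary is what would be serialized into the training and verification statistical file.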
The bridge-train service performs model training, which comprises the following steps:
receiving the selection of a pre-training model (pre-trained on the COCO data set) and the input of parameters such as sample data, iteration count and step size, and executing model training;
receiving adjustments to the parameters, which the user makes by observing the training curves and continuously refines according to the feedback results; when the classification accuracy exceeds a threshold, i.e. the accuracy the user is prepared to accept for automatic classification, the training is finished and the optimal prediction model is generated.
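The feedback-driven stopping rule, training until the classification accuracy exceeds the user's acceptance threshold, might be sketched as below; the simulated accuracy curve merely stands in for real platform feedback:

```python
def train_until_threshold(eval_fn, max_iters, threshold):
    """Run training iterations, stopping early once the validation
    classification accuracy reported by eval_fn meets the user's
    acceptance threshold; returns (best_iter, best_acc)."""
    best_iter, best_acc = 0, 0.0
    for it in range(1, max_iters + 1):
        acc = eval_fn(it)          # accuracy feedback after this iteration
        if acc > best_acc:
            best_iter, best_acc = it, acc
        if acc >= threshold:       # user's acceptance threshold met
            break
    return best_iter, best_acc

# Simulated, monotonically improving feedback curve (hypothetical).
best_it, best_acc = train_until_threshold(
    eval_fn=lambda it: min(50 + it, 95) / 100,
    max_iters=100, threshold=0.95)
```

In the real service the accuracy values would come from the deep learning platform's validation pass, and the user would also adjust step size and other parameters between runs.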
The mail-run service realizes ground object classification, and comprises the following steps:
inputting a remote sensing image to be classified, performing tiling processing, and generating a remote sensing image tile with the standard tile size of 512 × 512 pixels;
performing ground object classification prediction on the generated remote sensing image tiles by using the optimal prediction model, and outputting ground object classification subdata which is a rasterized picture;
and splicing the generated pictures of the plurality of ground feature classification subdata to generate a complete picture as a ground feature classification result.
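The tiling and mosaicking performed by the mail-run service can be sketched as follows, assuming (for simplicity) that the raster dimensions are exact multiples of the 512-pixel tile size; the prediction step is replaced by a placeholder:

```python
import numpy as np

TILE = 512  # standard tile size from the embodiment

def tile_image(img, tile=TILE):
    """Cut a raster (H, W) into tile-by-tile blocks, keeping each block's
    top-left offset so it can be placed back later."""
    h, w = img.shape
    return [(r, c, img[r:r + tile, c:c + tile])
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

def stitch(tiles, shape):
    """Mosaic per-tile classification rasters back into one full picture."""
    out = np.zeros(shape, dtype=tiles[0][2].dtype)
    for r, c, t in tiles:
        out[r:r + t.shape[0], c:c + t.shape[1]] = t
    return out

img = np.arange(1024 * 1024).reshape(1024, 1024)
pieces = tile_image(img)
# placeholder "prediction" that just echoes each tile back
preds = [(r, c, t) for r, c, t in pieces]
mosaic = stitch(preds, img.shape)
```

In the actual service each tile would pass through the optimal prediction model before stitching, and edge tiles of non-multiple dimensions would need padding or cropping.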
The method for realizing classified post-processing by the bridge-mapping service comprises the following steps:
vectorizing the complete picture of the ground feature classification result to generate vectorized ground feature classification data;
setting a minimum polygon area threshold, merging polygons whose area falls below the threshold into the surrounding polygons to remove fragmented polygons, and generating merged vector data;
after overlaying the vector data on the latest remote sensing image map, the user modifies and adjusts, with a vector editing tool, the regions where the ground feature classification result does not match the remote sensing image, and the result is displayed;
in addition, the bridge-mapping service can randomly sample a number of automatically classified vector data and compare them with open source vector data to obtain the final accuracy of the automatic classification.
And publishing the modified vector data into a standard tile, and publishing the standard tile to a GIS platform for visual display.
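The fragment-removal step of the bridge-mapping service can be approximated with a simple area filter; a production implementation would merge small polygons into their neighbours rather than drop them, and the shoelace-area helper below is only illustrative:

```python
def polygon_area(pts):
    """Shoelace-formula area of a polygon given as [(x, y), ...]."""
    area = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def drop_fragments(polygons, min_area):
    """Keep only polygons at or above the minimum-area threshold; the real
    service instead merges sub-threshold polygons into surrounding ones."""
    return [p for p in polygons if polygon_area(p) >= min_area]

square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # area 100
sliver = [(0, 0), (1, 0), (1, 1), (0, 1)]       # area 1, a fragment
kept = drop_fragments([square, sliver], min_area=50)
```

A GIS library such as Shapely or GDAL/OGR would normally supply the area computation and neighbour-merging operations.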
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware executing program instructions, and the program may be stored in a computer readable storage medium.

The above is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in its scope of protection.
Claims (10)
1. An automatic interpretation system for ground features of remote sensing images is characterized by comprising:
the automatic sample acquisition service is used for randomly acquiring sample label data in combination with the ground feature type attributes of the map data, acquiring image tiles corresponding to sample labels from the remote sensing images according to the requirements of the deep learning platform, and generating sample data based on the image tiles;
the model training service is used for receiving sample data, performing model training by using a deep learning platform and generating an optimal prediction model;
the ground object classification service is used for acquiring a ground object classification result from the remote sensing image according to the optimal prediction model;
and the post-classification processing service is used for carrying out subsequent processing on the ground feature classification result.
2. The system of claim 1, wherein the sample automated collection service obtains the sample label based on open source map and map tile technology, and collects an image tile corresponding to the sample label from the remote sensing image.
3. The system according to claim 1 or 2, wherein the sample data format required by the deep learning platform comprises true color images, ground feature class description files, remote sensing image tiles and training verification statistics files.
4. The system according to claim 1, characterized in that a fully convolutional neural network (FCN) algorithm is integrated into the deep learning platform in the model training service.
5. The system according to claim 1, wherein the subsequent processing comprises terrain classification result vectorization conversion and integration and presentation with a GIS platform.
6. A method for automatically interpreting a ground object in a remote sensing image, wherein the method is executed by the system according to any one of claims 1-5, and the method comprises the following steps:
the automatic sample collection step: randomly acquiring sample label data by combining with the ground object type attribute of the map data, acquiring an image tile corresponding to a sample label from a remote sensing image according to the requirement of a deep learning platform, and generating sample data based on the image tile;
model training: receiving sample data, and performing model training by using a deep learning platform to generate an optimal prediction model;
and (3) land feature classification step: obtaining a ground feature classification result from the remote sensing image according to the optimal prediction model;
and (3) classification post-treatment step: and carrying out subsequent processing on the ground feature classification result.
7. The method of claim 6, wherein the sample automated acquisition step comprises:
clipping a vector map tile of an interested area from an open source map;
selecting M map vector tiles in the region of interest according to a randomness algorithm, and generating a tile description file;
generating a true color image and a gray image corresponding to each tile based on the ground feature attribute information of the map vector tiles;
extracting an image tile corresponding to each vector tile from the remote sensing image by referring to the tile description file;
deleting files with inconsistent ground objects in the same-name files of the remote sensing image and the true color image;
distributing all sample data sets to training, verifying and testing stages according to the ratio of 2:1:1, and storing distribution rules in a training and verifying statistical file.
8. The method of claim 7, wherein the model training step comprises:
receiving the selection of a pre-training model and the input of sample data, iteration count and step size parameters, and executing model training on the deep learning platform, wherein the deep learning platform integrates a fully convolutional neural network (FCN) algorithm;
and continuously adjusting parameters according to a feedback result in the training process, finishing the training when the classification precision exceeds a threshold value, and generating an optimal prediction model.
9. The method of claim 8, wherein the terrain classification step comprises:
tiling a remote sensing image to generate remote sensing image tiles;
deriving and acquiring a plurality of ground object classification subdata by utilizing the optimal prediction model based on the remote sensing image tile;
and splicing the plurality of ground feature classification subdata to generate a complete picture as a ground feature classification result.
10. The method of claim 9, wherein the post-classification processing step comprises:
vectorizing the feature classification result to generate vectorized feature classification data;
removing fragmented polygons in the vectorized ground object classification data to generate merged vector data;
superposing the merged vector data with the latest remote sensing image map, and modifying the surface feature data which are not consistent with the latest remote sensing image by using a vector editing tool;
and publishing the modified vector data into a standard tile, and publishing the standard tile to a GIS platform for visual display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910128490.6A CN111597861A (en) | 2019-02-21 | 2019-02-21 | System and method for automatically interpreting ground object of remote sensing image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111597861A true CN111597861A (en) | 2020-08-28 |
Family
ID=72184816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910128490.6A Pending CN111597861A (en) | 2019-02-21 | 2019-02-21 | System and method for automatically interpreting ground object of remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111597861A (en) |
History
- 2019-02-21: CN application CN201910128490.6A filed; published as CN111597861A; legal status: active, Pending
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112101464A (en) * | 2020-09-17 | 2020-12-18 | 西安泽塔云科技股份有限公司 | Method and device for acquiring image sample data based on deep learning |
CN112101464B (en) * | 2020-09-17 | 2024-03-15 | 西安锐思数智科技股份有限公司 | Deep learning-based image sample data acquisition method and device |
CN112329751A (en) * | 2021-01-06 | 2021-02-05 | 北京道达天际科技有限公司 | Deep learning-based multi-scale remote sensing image target identification system and method |
CN112906537A (en) * | 2021-02-08 | 2021-06-04 | 北京艾尔思时代科技有限公司 | Crop identification method and system based on convolutional neural network |
CN112906537B (en) * | 2021-02-08 | 2023-12-01 | 北京艾尔思时代科技有限公司 | Crop identification method and system based on convolutional neural network |
CN113034025A (en) * | 2021-04-08 | 2021-06-25 | 成都国星宇航科技有限公司 | Remote sensing image annotation system and method |
CN113158855A (en) * | 2021-04-08 | 2021-07-23 | 成都国星宇航科技有限公司 | Remote sensing image auxiliary processing method and device based on online learning |
CN113034025B (en) * | 2021-04-08 | 2023-12-01 | 成都国星宇航科技股份有限公司 | Remote sensing image labeling system and method |
CN113496220A (en) * | 2021-09-07 | 2021-10-12 | 阿里巴巴达摩院(杭州)科技有限公司 | Image processing method, system and computer readable storage medium |
CN115690593A (en) * | 2022-03-31 | 2023-02-03 | 中国科学院空天信息创新研究院 | Land classification method and device and cloud server |
CN115965622A (en) * | 2023-02-15 | 2023-04-14 | 航天宏图信息技术股份有限公司 | Method and device for detecting change of remote sensing tile data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111597861A (en) | System and method for automatically interpreting ground object of remote sensing image | |
JP7011146B2 (en) | Image processing device, image processing method, image processing program, and teacher data generation method | |
US20240104382A1 (en) | System and method for instance-level lane detection for autonomous vehicle control | |
CN111986099A (en) | Tillage monitoring method and system based on convolutional neural network with residual error correction fused | |
CN105930841A (en) | Method and device for automatic semantic annotation of image, and computer equipment | |
CN110910343A (en) | Method and device for detecting pavement cracks and computer equipment | |
WO2011023247A1 (en) | Generating raster image representing road existence probability based on probe measurements | |
WO2011023245A1 (en) | Realigning road networks in a digital map based on reliable road existence probability data | |
CN109190631A (en) | The target object mask method and device of picture | |
CN111062441A (en) | Scene classification method and device based on self-supervision mechanism and regional suggestion network | |
CN112233129B (en) | Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device | |
Marangoz et al. | Analysis of land use land cover classification results derived from sentinel-2 image | |
CN112329751A (en) | Deep learning-based multi-scale remote sensing image target identification system and method | |
CN111597932A (en) | Road crack image identification method, device and system based on convolutional neural network | |
CN111612891A (en) | Model generation method, point cloud data processing device, point cloud data processing equipment and medium | |
CN108229515A (en) | Object classification method and device, the electronic equipment of high spectrum image | |
KR20150108241A (en) | Apparatus and method of fast and natural terrain generation | |
CN113673369A (en) | Remote sensing image scene planning method and device, electronic equipment and storage medium | |
CN113282781B (en) | Image retrieval method and device | |
CN112884074B (en) | Image design method, equipment, storage medium and device based on decision tree | |
CN104778468A (en) | Image processing device, image processing method and monitoring equipment | |
CN114863274A (en) | Surface green net thatch cover extraction method based on deep learning | |
CN115019044A (en) | Individual plant segmentation method and device, terminal device and readable storage medium | |
CN107045727A (en) | A kind of texture synthesis method and its device | |
CN114419057A (en) | Image-based road surface segmentation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||