CN106910202B - Image segmentation method and system for ground object of remote sensing image - Google Patents

Image segmentation method and system for ground object of remote sensing image

Info

Publication number
CN106910202B
CN106910202B (application number CN201710081136.3A)
Authority
CN
China
Prior art keywords
image
coordinate point
remote sensing
probability map
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710081136.3A
Other languages
Chinese (zh)
Other versions
CN106910202A (en)
Inventor
涂刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Gem Zhuo Technology LLC
Original Assignee
Wuhan Gem Zhuo Technology LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Gem Zhuo Technology LLC filed Critical Wuhan Gem Zhuo Technology LLC
Priority to CN201710081136.3A priority Critical patent/CN106910202B/en
Publication of CN106910202A publication Critical patent/CN106910202A/en
Application granted granted Critical
Publication of CN106910202B publication Critical patent/CN106910202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an image segmentation method and system for ground objects of remote sensing images, wherein the method comprises the following steps: S1: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged; S2: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths; S3: classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features. The invention has the beneficial effects that: according to the technical scheme, the color and the depth of the remote sensing image are incorporated into image recognition and segmentation, the color information and the depth information are analyzed jointly, and fine segmentation of the image is achieved through the CRF model layer.

Description

Image segmentation method and system for ground object of remote sensing image
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to an image segmentation method and system for ground objects of a remote sensing image.
Background
Segmenting the edges of ground objects in remote sensing images is a key technology of geographic information systems, and plays a very important role in the fields of land planning, disaster prevention and control, unmanned aerial vehicles, satellites, unmanned ships and resource monitoring. Traditional methods consider only two-dimensional data: during segmentation, only the relation between the coordinate point color and the coordinate point position of an image is taken into account, so a three-dimensional remote sensing image cannot be segmented with effective use of all of its information.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: traditional methods consider only two-dimensional data, taking into account only the relation between the coordinate point color and the coordinate point position of an image during segmentation, so a three-dimensional remote sensing image cannot be segmented with effective use of all of its information.
The technical scheme for solving the technical problems is as follows:
An image segmentation method for ground objects of remote sensing images comprises the following steps:
S1: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
S2: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
S3: classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features.
The invention has the beneficial effects that: according to the technical scheme, the color and the depth of the remote sensing image are incorporated into image recognition and segmentation, the color information and the depth information are analyzed jointly, the CRF model layer serves as the upsampling layer of the deep learning neural network, and fine segmentation of the image is achieved on the basis of the coarse segmentation output by the network.
On the basis of the technical scheme, the invention can be further improved as follows.
Preferably, the step S2 includes:
S21: fusing, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S22: fusing, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
The beneficial effect of adopting the further scheme is that: the full convolution network replaces the full connection of the traditional network with convolution, adds deconvolution layers, and fuses the results of the earlier layers of the network with the final result of the network, thereby obtaining more image information.
Preferably, the step S3 includes:
S31: inputting the coordinate point color into an energy function of the CRF model layer to calculate first energy values of all coordinate points in the ground feature classification probability map;
S32: inputting the coordinate point depth into the energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
S33: calculating final energy values of all coordinate points according to the first energy values and the second energy values;
S34: classifying all coordinate points in the ground feature classification probability map according to the final energy values to obtain segmentation images of different ground features.
The beneficial effect of adopting the further scheme is that: the CRF algorithm and the Gibbs energy function are improved; the color and the depth of each coordinate point are used as the judgment basis and placed into the energy function, the coordinate points are correctly classified through iteration, the value of the energy function decreases, and image segmentation is achieved.
An image segmentation system for remote sensing image ground objects, comprising:
an input module, used for inputting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF (conditional random field) model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
the marking module is used for marking coordinate points on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
and the classification module is used for classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features.
Preferably, the marking module comprises:
the first fusion submodule, used for fusing, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
and the second fusion submodule, used for fusing, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
Preferably, the classification module comprises:
the first calculation submodule is used for inputting the coordinate point color into an energy function of the CRF model layer to calculate and obtain first energy values of all coordinate points in the ground feature classification probability map;
the second calculation submodule is used for inputting the depth of the coordinate point into an energy function of the CRF model layer to calculate and obtain second energy values of all coordinate points in the ground feature classification probability map;
the third calculation submodule is used for calculating to obtain the final energy values of all the coordinate points according to the first energy value and the second energy value;
and the classification submodule is used for classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain the segmentation images of different ground features.
Drawings
Fig. 1 is a schematic flow chart of an image segmentation method for a ground object of a remote sensing image according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image segmentation method for a ground object in a remote sensing image according to another embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for segmenting an image of a ground object in a remote sensing image according to another embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image segmentation system for a ground object in a remote sensing image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image segmentation system for a ground object in a remote sensing image according to another embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, in an embodiment, an image segmentation method for a ground object in a remote sensing image is provided, which includes:
S1: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
S2: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
S3: classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features.
It should be understood that, in this embodiment, the color and the depth of the remote sensing image are incorporated into image recognition and segmentation, the color information and the depth information are analyzed jointly, the CRF model layer serves as the upsampling layer of the deep learning neural network, and fine segmentation of the image is realized on the basis of the coarse segmentation output by the network. The CRF (conditional random field) combines the characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graph model, and in recent years has achieved good results in sequence labeling tasks such as word segmentation, part-of-speech tagging and named entity recognition. The CRF is a typical discriminative model.
Specifically, in this embodiment, first, a conventional full convolution network is improved: convolution layers are used instead of the fully connected layers, and the image is upsampled by the deconvolution layers and the CRF model layer placed after the convolution layers. Then, the image to be segmented is put into the improved full convolution network, and coordinate point marking is carried out on the remote sensing image through the seven convolution layers and the three deconvolution layers, so that the coordinate points are marked with different colors and depths. Finally, all coordinate points in the marked image are iteratively classified by the CRF model layer according to the colors and depths of the coordinate points, and finely segmented to obtain segmentation images of different ground objects.
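The resolution bookkeeping behind this convolution/deconvolution pipeline can be sketched as follows. The kernel sizes, strides, paddings and the 512-pixel input tile below are illustrative assumptions, not parameters taken from the patent: the point is only that strided convolutions shrink the map, the dilated convolution preserves it, and the deconvolution layers restore the input resolution for per-coordinate-point labeling.

```python
# Hypothetical shape walk-through of an improved full convolution network
# (layer counts and hyperparameters are illustrative assumptions).

def conv_out(n, k=3, s=1, p=1, d=1):
    """Output size of a (possibly dilated) convolution along one axis."""
    return (n + 2 * p - d * (k - 1) - 1) // s + 1

def deconv_out(n, k=4, s=2, p=1):
    """Output size of a transposed (de)convolution along one axis."""
    return (n - 1) * s - 2 * p + k

size = 512                                 # assumed remote-sensing tile width
trace = [size]
for _ in range(3):                         # strided convs halve the resolution
    size = conv_out(size, k=3, s=2, p=1)
    trace.append(size)
size = conv_out(size, k=3, s=1, p=2, d=2)  # dilated conv keeps the resolution
trace.append(size)
for _ in range(3):                         # each deconv doubles the resolution
    size = deconv_out(size)
    trace.append(size)

print(trace)  # → [512, 256, 128, 64, 64, 128, 256, 512]
```

The trace shows the resolution returning to the input size, which is what allows every coordinate point of the original image to receive a class probability.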
As shown in fig. 2, in another embodiment, step S2 in fig. 1 includes:
S21: fusing, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S22: fusing, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
It should be understood that in this embodiment, the full convolution network replaces the fully connected layers of the traditional network with convolution layers, adds deconvolution layers, and fuses the results of the earlier layers of the network with the final result of the network, so as to obtain more image information.
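A minimal sketch of this skip-connection-style fusion is given below. It is an assumption-laden stand-in, not the patent's implementation: nearest-neighbour upsampling and element-wise addition replace the deconvolution-based fusion, and the shapes and the `upsample` helper are purely illustrative.

```python
# Illustrative fusion of an early, high-resolution feature map with a late,
# coarse one, in the spirit of fusing earlier-layer results with the final
# network result (all shapes and the upsampling scheme are assumptions).
import numpy as np

rng = np.random.default_rng(0)
early = rng.random((1, 16, 16))    # feature map from an early conv group
late = rng.random((1, 4, 4))       # coarse map after all conv groups

def upsample(x, factor):
    """Nearest-neighbour upsampling along the two spatial axes."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

fused = early + upsample(late, 4)  # fine detail plus coarse context
print(fused.shape)                 # → (1, 16, 16)
```

The fused map keeps the early layer's spatial detail while injecting the deeper layer's semantic context, which is the "more image information" the passage refers to.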
As shown in fig. 3, in another embodiment, step S3 in fig. 1 includes:
s31: inputting the color of the coordinate point into an energy function of a CRF model layer to calculate to obtain first energy values of all the coordinate points in the ground feature classification probability map;
s32: inputting the depth of the coordinate points into an energy function of a CRF model layer to calculate to obtain second energy values of all the coordinate points in the ground feature classification probability map;
s33: calculating to obtain final energy values of all coordinate points according to the first energy value and the second energy value;
s34: and classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
It should be understood that, in this embodiment, the color and the depth of each coordinate point are used as the judgment basis, the coordinate points are put into the improved energy function and correctly classified through iteration, the value of the energy function decreases, and image segmentation is realized.
Specifically, in this embodiment, according to the coordinate point colors and depths, a first energy value corresponding to color and a second energy value corresponding to depth are calculated for all coordinate points in the ground feature classification probability map through the energy function of the CRF model layer; the first energy value and the second energy value are added to obtain the total energy of each coordinate point, and the ground feature classification probability map is accurately segmented according to the total energy of each coordinate point to obtain the ground feature segmentation images. The modified Gibbs energy function is E(P) = E(z) + E(d), where E(P) is the total energy of a coordinate point, E(z) is the energy for segmentation according to coordinate point color, and E(d) is the energy for segmentation according to coordinate point depth; E(d) is implemented in the same way as E(z), except that coordinate point depth replaces coordinate point color. E(z) takes the standard unary-plus-pairwise form:

E(z) = Σ_i ψ_u(z_i) + Σ_{i<j} ψ_p(z_i, z_j)

where z_i is the value of the ith coordinate point. The function consists of two parts: the first part, before the addition sign, is the initial energy function of a single coordinate point; the second part is the similarity energy between a coordinate point and its surrounding coordinate points. Through iteration, the CRF model layer assigns the coordinate points to the correct classes while the value of the energy function continuously decreases, thereby realizing correct segmentation.
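The behaviour of such an energy can be illustrated with a toy example. Everything below is an assumption for illustration (the squared-distance unary term, the Potts-style pairwise penalty, the 2x2 values and class centers), not the patent's actual potentials; it only demonstrates that the total energy E(P) = E(z) + E(d) is lower for the correct labeling, which is what the iterative minimization exploits.

```python
# Toy Gibbs energy: unary term (distance to class center) plus pairwise term
# (penalty for differing 4-neighbour labels), computed for color and depth.
# All numeric values and potentials are illustrative assumptions.
import numpy as np

def gibbs_energy(values, labels, centers, beta=1.0):
    """Unary: squared distance of each point's value to its label's center.
    Pairwise: beta per pair of 4-neighbours with different labels."""
    unary = np.sum((values - centers[labels]) ** 2)
    pairwise = beta * (
        np.sum(labels[1:, :] != labels[:-1, :])    # vertical neighbours
        + np.sum(labels[:, 1:] != labels[:, :-1])  # horizontal neighbours
    )
    return unary + pairwise

color = np.array([[0.1, 0.9], [0.1, 0.9]])  # toy 2x2 colour channel
depth = np.array([[0.2, 0.8], [0.2, 0.8]])  # toy 2x2 depth channel
c_centers = np.array([0.1, 0.9])            # per-class colour references
d_centers = np.array([0.2, 0.8])            # per-class depth references
good = np.array([[0, 1], [0, 1]])           # correct labelling
bad = np.array([[1, 0], [1, 0]])            # swapped labelling

E_good = gibbs_energy(color, good, c_centers) + gibbs_energy(depth, good, d_centers)
E_bad = gibbs_energy(color, bad, c_centers) + gibbs_energy(depth, bad, d_centers)
print(E_good < E_bad)  # → True: the correct labelling has lower total energy
```

An iterative scheme (as the passage describes) would repeatedly relabel points to reduce this total, converging toward the correct segmentation.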
As shown in fig. 4, in an embodiment, there is provided an image segmentation system for a ground feature of a remote sensing image, including:
the input module 1, used for inputting a remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF (conditional random field) model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
the marking module 2 is used for marking coordinate points on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
and the classification module 3 is used for classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features.
As shown in fig. 5, in another embodiment, the marking module 2 in fig. 4 includes:
the first fusion submodule 21, configured to fuse, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
and the second fusion submodule 22, configured to fuse, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
As shown in fig. 5, in another embodiment, the classification module 3 in fig. 4 includes:
the first calculating submodule 31 is configured to input the coordinate point color into an energy function of the CRF model layer to calculate first energy values of all coordinate points in the ground feature classification probability map;
the second calculating submodule 32 is used for inputting the depth of the coordinate point into an energy function of the CRF model layer to calculate and obtain second energy values of all coordinate points in the ground feature classification probability map;
a third calculating submodule 33, configured to calculate final energy values of all coordinate points according to the first energy value and the second energy value;
and the classification submodule 34 is configured to classify all coordinate points in the ground feature classification probability map according to the final energy value, so as to obtain segmented images of different ground features.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. An image segmentation method for ground features of remote sensing images, characterized by comprising the following steps:
S1: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
S2: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
S3: classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features;
the step S3 includes:
s31: inputting the coordinate point color into an energy function of the CRF model layer to calculate to obtain first energy values of all coordinate points in the ground feature classification probability map;
s32: inputting the depth of the coordinate points into an energy function of the CRF model layer to calculate to obtain second energy values of all the coordinate points in the terrain classification probability map;
s33: calculating to obtain final energy values of all coordinate points according to the first energy value and the second energy value;
s34: and classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
2. The image segmentation method according to claim 1, wherein the step S2 includes:
S21: fusing, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S22: fusing, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
3. An image segmentation system for remote sensing image ground objects, comprising:
an input module (1), used for inputting a remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF (conditional random field) model layer which are sequentially arranged, and the convolution layer groups comprise convolution layers and dilated convolution layers which are alternately arranged;
a marking module (2), used for marking coordinate points on the remote sensing image through the convolution layer groups and the deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
a classification module (3), used for classifying all coordinate points in the ground feature classification probability map according to the coordinate point colors and the coordinate point depths to obtain segmentation images of different ground features;
the classification module (3) comprises:
a first calculation submodule (31), used for inputting the coordinate point color into an energy function of the CRF model layer to calculate first energy values of all coordinate points in the ground feature classification probability map;
a second calculation submodule (32), used for inputting the coordinate point depth into the energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
a third calculation submodule (33), used for calculating final energy values of all coordinate points according to the first energy values and the second energy values;
and a classification submodule (34), used for classifying all coordinate points in the ground feature classification probability map according to the final energy values to obtain segmentation images of different ground features.
4. The image segmentation system according to claim 3, characterized in that the marking module (2) comprises:
a first fusion submodule (21), used for fusing, multiple times, the image obtained after the remote sensing image is marked with coordinate points by at least one convolution layer group with the image obtained after coordinate point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
and a second fusion submodule (22), used for fusing, multiple times, the remote sensing image with the image obtained after the fused image is marked with coordinate points by at least one deconvolution layer, to obtain a ground feature classification probability map.
CN201710081136.3A 2017-02-15 2017-02-15 Image segmentation method and system for ground object of remote sensing image Active CN106910202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710081136.3A CN106910202B (en) 2017-02-15 2017-02-15 Image segmentation method and system for ground object of remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710081136.3A CN106910202B (en) 2017-02-15 2017-02-15 Image segmentation method and system for ground object of remote sensing image

Publications (2)

Publication Number Publication Date
CN106910202A CN106910202A (en) 2017-06-30
CN106910202B true CN106910202B (en) 2020-03-24

Family

ID=59207635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710081136.3A Active CN106910202B (en) 2017-02-15 2017-02-15 Image segmentation method and system for ground object of remote sensing image

Country Status (1)

Country Link
CN (1) CN106910202B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527352B (en) * 2017-08-09 2020-07-07 中国电子科技集团公司第五十四研究所 Remote sensing ship target contour segmentation and detection method based on deep learning FCN network
CN108537824B (en) * 2018-03-15 2021-07-16 上海交通大学 Feature map enhanced network structure optimization method based on alternating deconvolution and convolution
CN108710863A (en) * 2018-05-24 2018-10-26 东北大学 Unmanned plane Scene Semantics dividing method based on deep learning and system
CN111582004A (en) * 2019-02-15 2020-08-25 阿里巴巴集团控股有限公司 Target area segmentation method and device in ground image
CN109918449B (en) * 2019-03-16 2021-04-06 中国农业科学院农业资源与农业区划研究所 Internet of things-based agricultural disaster information remote sensing extraction method and system
CN111242132A (en) * 2020-01-07 2020-06-05 广州赛特智能科技有限公司 Outdoor road scene semantic segmentation method and device, electronic equipment and storage medium
CN112419266B (en) * 2020-11-23 2022-09-30 山东建筑大学 Remote sensing image change detection method based on ground surface coverage category constraint

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156987A (en) * 2011-04-25 2011-08-17 深圳超多维光电子有限公司 Method and device for acquiring depth information of scene
CN102903110A (en) * 2012-09-29 2013-01-30 宁波大学 Segmentation method for image with deep image information
CN104504709A (en) * 2014-12-28 2015-04-08 大连理工大学 Feature ball based classifying method of three-dimensional point-cloud data of outdoor scene
CN105740894A (en) * 2016-01-28 2016-07-06 北京航空航天大学 Semantic annotation method for hyperspectral remote sensing image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Full convolutional network combined with improved conditional random field-recurrent neural network for SAR image scene classification; Tang Hao, He Chu; Journal of Computer Applications (计算机应用); 2016-12-21; Vol. 36, No. 12; pp. 3436-3441 *

Also Published As

Publication number Publication date
CN106910202A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
CN106910202B (en) Image segmentation method and system for ground object of remote sensing image
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN106897681B (en) Remote sensing image contrast analysis method and system
CN104915676B (en) SAR image sorting technique based on further feature study and watershed
CN107862261A (en) Image people counting method based on multiple dimensioned convolutional neural networks
CN111612807A (en) Small target image segmentation method based on scale and edge information
CN111915592A (en) Remote sensing image cloud detection method based on deep learning
CN105005760B (en) A kind of recognition methods again of the pedestrian based on Finite mixture model
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
Choung et al. Comparison between a machine-learning-based method and a water-index-based method for shoreline mapping using a high-resolution satellite image acquired in Hwado Island, South Korea
CN110807485B (en) Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image
CN113256649B (en) Remote sensing image station selection and line selection semantic segmentation method based on deep learning
CN108932455B (en) Remote sensing image scene recognition method and device
CN115471467A (en) High-resolution optical remote sensing image building change detection method
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN113610097A (en) SAR ship target segmentation method based on multi-scale similarity guide network
Gao et al. Road extraction using a dual attention dilated-linknet based on satellite images and floating vehicle trajectory data
CN113657414B (en) Object identification method
CN106897683B (en) Ground object detection method and system of remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant