CN106897968B - Image splicing method and system for ground object of remote sensing image - Google Patents
- Publication number: CN106897968B (application CN201710081140.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- remote sensing
- ground
- coordinate point
- probability map
- Prior art date
- Legal status (assumed by Google Patents; not a legal conclusion)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an image splicing method and system for ground objects of a remote sensing image. The method comprises the following steps: S1: identifying and segmenting all ground features in the remote sensing image through a full convolution network to obtain segmented images of all ground features in the remote sensing image; S2: judging whether the image of a high-rise building exists in the splicing area of the segmented images; S3: when a high-rise building exists, moving the splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or replacing the image of the high-rise building with an identified image area other than a building. The invention has the beneficial effects that: after the ground objects in the remote sensing image are identified and segmented, high-rise buildings are avoided during splicing, preventing a single building from turning into several buildings leaning in different directions.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to an image splicing method and system for a ground object of a remote sensing image.
Background
In the technical field of remote sensing, image splicing technology combines captured ground images into a more accurate complete image that serves as a basis for further processing, and it plays a very important role in land planning, disaster prevention and control, unmanned aerial vehicles, satellites, unmanned ships and resource monitoring. Remote sensing images are captured at high altitude and contain many high-rise buildings. Because the shooting angles differ, the same high-rise building appears in different shapes in different images, so during splicing one building may turn into several buildings leaning in different directions, a result that obviously cannot meet users' requirements.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: remote sensing images are shot from different angles, so the shapes of the captured high-rise buildings differ, and during splicing one building may turn into several buildings leaning in different directions.
The technical scheme for solving the technical problems is as follows:
an image splicing method for ground objects of remote sensing images comprises the following steps:
s1: identifying and segmenting all ground features in the remote sensing image through a full convolution network to obtain segmented images of all ground features in the remote sensing image;
s2: judging whether the splicing area of the segmented images has the image of a high-rise building or not;
s3: when the high-rise building is determined to exist, moving the splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or replacing the image of the high-rise building with an identified image area other than a building.
The invention has the beneficial effects that: after the ground objects in the remote sensing image are identified and segmented, high-rise buildings are avoided during splicing, which prevents a single building from turning into several buildings leaning in different directions.
On the basis of the technical scheme, the invention can be further improved as follows.
Preferably, the step S1 includes:
s11: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
s12: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
s13: and inputting the coordinate point color and the coordinate point depth into the CRF model layer to classify all the coordinate points in the ground feature classification probability map to obtain segmentation images of different ground features.
The beneficial effect of adopting the above further scheme is: the color and depth of the remote sensing image are incorporated into image recognition and segmentation, color information and depth information are analyzed together, the CRF model layer serves as an upsampling layer of the deep learning neural network, and fine segmentation of the image is achieved on top of the coarse segmentation output by the network.
Preferably, the step S12 includes:
s121: fusing, over multiple rounds, the image of the remote sensing image marked with coordinate points by at least one convolution layer group with the image marked with coordinate points by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
s122: fusing, over multiple rounds, the remote sensing image with the fused image after the fused image is marked with coordinate points by at least one further deconvolution layer, to obtain a ground feature classification probability map.
The beneficial effect of adopting the above further scheme is: the full convolution network replaces the fully connected layers of a traditional network with convolutional layers, adds deconvolution layers, and merges the outputs of the network's earlier layers into its final result, thereby retaining more image information.
Preferably, the step S13 includes:
s131: inputting the coordinate point color into an energy function of the CRF model layer to calculate to obtain first energy values of all coordinate points in the ground feature classification probability map;
s132: inputting the coordinate point depth into an energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
s133: calculating to obtain final energy values of all coordinate points according to the first energy value and the second energy value;
s134: and classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
The beneficial effect of adopting the above further scheme is: the CRF algorithm and the Gibbs energy function are improved, the color and depth of each coordinate point are used as judgment bases and placed into the energy function, the coordinate points are correctly classified through iteration, the value of the energy function is reduced, and image segmentation is realized.
Preferably, the identified image area other than the building includes: flat ground, road or river areas.
An image stitching system for remote sensing image ground objects comprises:
the segmentation module is used for identifying and segmenting all ground objects in the remote sensing image through a full convolution network to obtain segmented images of all ground objects in the remote sensing image;
the judging module is used for judging whether the image of the high-rise building exists in the splicing area of the segmented images;
and the splicing module is used for, when the high-rise building is determined to exist, moving the splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or replacing the image of the high-rise building with an identified image area other than a building.
Preferably, the segmentation module comprises:
the input unit is used for inputting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF (conditional random field) model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
the marking unit is used for marking coordinate points of the remote sensing image through the convolution layer groups and the deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
and the classification unit is used for inputting the coordinate point color and the coordinate point depth into the CRF model layer to classify all coordinate points in the ground feature classification probability map so as to obtain segmentation images of different ground features.
Preferably, the marking unit includes:
the first fusion component is used for fusing the image of the remote sensing image marked by the coordinate points of at least one convolution layer group with the image marked by all the convolution layer groups and at least one deconvolution layer coordinate point for multiple times to obtain a fused image;
and the second fusion component is used for fusing the remote sensing image and the image marked by the at least one deconvolution layer coordinate point of the fusion image for multiple times to obtain a ground feature classification probability map.
Preferably, the classification unit includes:
the first calculation component is used for inputting the coordinate point color into an energy function of the CRF model layer to calculate first energy values of all coordinate points in the ground feature classification probability map;
the second calculation component is used for inputting the coordinate point depth into an energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
the third calculating component is used for calculating to obtain a final energy value of all coordinate points according to the first energy value and the second energy value;
and the classification component is used for classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain the segmentation images of different ground features.
Preferably, the identified image area other than the building includes: flat ground, road or river areas.
Drawings
Fig. 1 is a schematic flow chart of an image stitching method for a ground object of a remote sensing image according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image stitching method for a ground object of a remote sensing image according to another embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for image stitching of a ground object in a remote sensing image according to another embodiment of the present invention;
FIG. 4 is a schematic flow chart of a method for stitching images of a ground object in a remote sensing image according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image stitching system for a remote sensing image ground object according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image stitching system for remote sensing image features according to another embodiment of the present invention;
fig. 7 is a schematic structural diagram of an image stitching system for a remote sensing image ground object according to another embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth to illustrate, but are not to be construed to limit the scope of the invention.
As shown in fig. 1, in an embodiment, an image stitching method for a ground object of a remote sensing image is provided, which includes:
s1: identifying and segmenting all ground objects in the remote sensing image through a full convolution network to obtain segmented images of all ground objects in the remote sensing image;
s2: judging whether the splicing area of the segmented images has the image of a high-rise building or not;
s3: when the high-rise building is determined to exist, moving the splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or selecting an image area other than a building to replace the image of the high-rise building.
It should be understood that in this embodiment, after the ground objects in the remote sensing image are identified and segmented, high-rise buildings are avoided during splicing, which prevents a single building from turning into several buildings leaning in different directions.
Specifically, in this embodiment, all the ground features in the remote sensing image are first identified and segmented by a deep learning network such as a full convolution network to obtain segmented images of all the ground features, and the images are then spliced. During splicing, it is first determined whether a high-rise building exists in the splicing area; if so, the splicing line is moved to avoid the high-rise building, or an identified area other than a building, such as flat ground, a road or a river area, is selected to replace the high-rise building.
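For illustration only, the seam-line decision described above can be sketched as follows. This is a minimal sketch under stated assumptions: the label names, helper functions (`seam_crosses_building`, `choose_seam`) and toy data are illustrative inventions, not part of the patent, and a per-pixel label map from the segmentation step is assumed to be available.

```python
# Hypothetical sketch of steps S2/S3: check candidate splicing lines against
# the segmentation labels and pick one that avoids high-rise buildings.
BUILDING = "building"
REPLACEABLE = {"flat_ground", "road", "river"}  # replacement areas named in the patent

def seam_crosses_building(label_map, seam_cols, band=1):
    """True if any pixel within `band` of the candidate seam is a building."""
    for row, col in enumerate(seam_cols):
        lo = max(0, col - band)
        hi = min(len(label_map[row]), col + band + 1)
        if any(label_map[row][c] == BUILDING for c in range(lo, hi)):
            return True
    return False

def choose_seam(label_map, candidate_seams):
    """Return the first candidate seam not crossing a building (S3, option 1:
    move the splicing line). Returns None if every candidate crosses one."""
    for seam in candidate_seams:
        if not seam_crosses_building(label_map, seam):
            return seam
    return None

# Toy 4x4 label map: a building occupies column 1.
labels = [
    ["road", "building", "road", "road"],
    ["road", "building", "road", "road"],
    ["road", "building", "road", "road"],
    ["road", "road",     "road", "road"],
]
bad_seam = [1, 1, 1, 1]    # runs through the building
good_seam = [3, 3, 3, 3]   # stays on the road
chosen = choose_seam(labels, [bad_seam, good_seam])
```

If no seam avoids the building, the second option of S3 applies: replacing the building pixels with an identified non-building area (flat ground, road or river).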
As shown in fig. 2, in another embodiment, step S1 in fig. 1 includes:
s11: putting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
s12: marking coordinate points of the remote sensing image through the convolution layer groups and the deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
s13: and inputting the color and the depth of the coordinate points into the CRF model layer to classify all the coordinate points in the ground object classification probability map to obtain segmentation images of different ground objects.
It should be understood that, in this embodiment, the color and the depth of the remote sensing image are added into the image recognition and segmentation, the color information and the depth information are comprehensively analyzed, the CRF model layer is used as an upsampling layer of the deep learning neural network, and the fine cutting of the image is realized on the basis of the coarse segmentation of the network output.
Specifically, in this embodiment, a conventional full convolution network is first improved: convolutional layers are used instead of fully connected layers, and the image is upsampled by deconvolution layers and a CRF model layer placed after the convolutional layers. The image to be segmented is then placed into the improved full convolution network, and coordinate point marking is performed on the remote sensing image through the seven convolution layers and three deconvolution layers, marking the coordinate points with different colors and depths. Finally, the CRF model layer iteratively classifies all coordinate points in the marked image according to their colors and depths, and fine segmentation yields the segmented images of the different ground objects. The CRF (conditional random field) combines characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graph model that has performed well in recent years on sequence labeling tasks such as word segmentation, part-of-speech tagging and named entity recognition. The CRF is a typical discriminative model.
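The dilated convolution layers that the convolution layer groups alternate with ordinary convolutions can be illustrated with a minimal sketch. This is an assumption-laden toy (dummy kernel, single channel, no padding), not the patent's network: it only shows how a dilation factor spreads the 3x3 kernel taps apart, enlarging the receptive field without adding parameters.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """Single-channel 'valid' convolution; with dilation d the kernel samples
    input pixels d apart (dilation=1 is an ordinary convolution)."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel extent
    eff_w = (kw - 1) * dilation + 1
    out = np.zeros((img.shape[0] - eff_h + 1, img.shape[1] - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))                              # dummy kernel values
plain = dilated_conv2d(img, k, dilation=1)       # 5x5 output
dilated = dilated_conv2d(img, k, dilation=2)     # 3x3 output, wider receptive field
```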
As shown in fig. 3, in another embodiment, step S12 in fig. 2 includes:
s121: fusing, over multiple rounds, the image of the remote sensing image marked with coordinate points by at least one convolution layer group with the image marked with coordinate points by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
s122: fusing, over multiple rounds, the remote sensing image with the fused image after the fused image is marked with coordinate points by at least one further deconvolution layer, to obtain a ground feature classification probability map.
It should be understood that in this embodiment, the full convolution network replaces the fully connected layers of a conventional network with convolutional layers, adds deconvolution layers, and merges the results of the network's earlier layers with its final result, thereby obtaining more image information.
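A minimal sketch of this fusion idea, under stated assumptions: nearest-neighbour upsampling stands in for the deconvolution layer, element-wise addition stands in for the fusion, and the shapes are illustrative. The helper names are not from the patent.

```python
import numpy as np

def upsample_nearest(m, factor):
    """Toy stand-in for a deconvolution layer: nearest-neighbour upsampling."""
    return np.repeat(np.repeat(m, factor, axis=0), factor, axis=1)

def fuse(early_map, coarse_map):
    """FCN-skip-style fusion: upsample the deep, coarse score map to the
    resolution of an earlier feature map and add the two element-wise."""
    factor = early_map.shape[0] // coarse_map.shape[0]
    return early_map + upsample_nearest(coarse_map, factor)

early = np.ones((8, 8))          # fine-resolution map from an early layer group
coarse = np.full((2, 2), 0.5)    # deep, low-resolution score map
fused = fuse(early, coarse)      # 8x8 map carrying both levels of detail
```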
As shown in fig. 4, in another embodiment, step S13 in fig. 2 includes:
s131: inputting the coordinate point color into an energy function of a CRF model layer to calculate to obtain first energy values of all coordinate points in the ground feature classification probability map;
s132: inputting the depth of the coordinate points into an energy function of a CRF model layer to calculate to obtain second energy values of all the coordinate points in the ground feature classification probability map;
s133: calculating to obtain final energy values of all coordinate points according to the first energy value and the second energy value;
s134: and classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
It should be understood that, in this embodiment, the color and depth of each coordinate point are used as judgment bases and placed into the energy function; the coordinate points are correctly classified through iteration, the value of the energy function decreases, and image segmentation is realized.
Specifically, in this embodiment, the color and the depth of the coordinate point are input into an energy function of the CRF model layer to calculate a first energy value corresponding to the color of the coordinate point and a second energy value corresponding to the depth of the coordinate point for all coordinate points in the feature classification probability map, the first energy value and the second energy value are added to obtain the total energy of each coordinate point, and the feature classification probability map is accurately segmented according to the total energy of each coordinate point to obtain a feature segmentation image.
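The energy combination of S131-S133 can be sketched as follows. This is a hedged illustration, not the patent's actual Gibbs energy: the unary terms are simple negative log-probabilities over randomly generated dummy class scores, and the pairwise terms of a real CRF are omitted. All variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 6, 3

# Dummy per-pixel class probabilities derived from colour and from depth.
color_prob = rng.dirichlet(np.ones(n_classes), size=n_pixels)
depth_prob = rng.dirichlet(np.ones(n_classes), size=n_pixels)

e_color = -np.log(color_prob)        # first energy value (colour term, S131)
e_depth = -np.log(depth_prob)        # second energy value (depth term, S132)
e_total = e_color + e_depth          # final energy per pixel and class (S133)

# S134: each coordinate point takes the class with the lowest total energy.
labels = np.argmin(e_total, axis=1)
```

Because the energies are negative log-probabilities, minimizing their sum is equivalent to picking the class that maximizes the product of the colour and depth probabilities.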
It should be understood that, in this embodiment, the identified image areas other than buildings include: flat ground, road or river areas.
As shown in fig. 5, in an embodiment, an image stitching system for remote sensing image features is provided, which includes:
the segmentation module 1 is used for identifying and segmenting all ground objects in the remote sensing image through a full convolution network to obtain segmented images of all ground objects in the remote sensing image;
the judging module 2 is used for judging whether the image of the high-rise building exists in the splicing area of the segmented images;
and the splicing module 3 is used for moving the splicing line in the splicing area not to coincide with the image of the high-rise building or selecting the identified image area except the building to replace the image of the high-rise building when the high-rise building is determined to exist.
As shown in fig. 6, in another embodiment, the segmentation module 1 in fig. 5 includes:
the input unit 11 is used for inputting the remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
the marking unit 12 is used for marking coordinate points of the remote sensing image through the convolution layer groups and the deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
and the classification unit 13 is configured to input the color of the coordinate point and the depth of the coordinate point into a CRF model layer to classify all coordinate points in the feature classification probability map, so as to obtain segmented images of different features.
As shown in fig. 7, in another embodiment, the marking unit 12 in fig. 6 includes:
the first fusion component 121 is used for fusing, over multiple rounds, the image of the remote sensing image marked with coordinate points by at least one convolution layer group with the image marked with coordinate points by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
and the second fusion component 122 is used for fusing, over multiple rounds, the remote sensing image with the fused image after the fused image is marked with coordinate points by at least one further deconvolution layer, to obtain a ground feature classification probability map.
As shown in fig. 7, in another embodiment, the classification unit 13 in fig. 6 includes:
the first calculating component 131 is configured to calculate a first energy value of all coordinate points in the ground feature classification probability map by inputting the coordinate point color into an energy function of the CRF model layer;
the second calculating component 132 is used for inputting the coordinate point depth into an energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
a third calculating component 133, configured to calculate final energy values of all coordinate points according to the first energy value and the second energy value;
and the classification component 134 is configured to classify all coordinate points in the ground object classification probability map according to the final energy value to obtain segmented images of different ground objects.
It should be understood that, in this embodiment, the identified image areas other than buildings include: flat ground, road or river areas.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (6)
1. An image splicing method for ground objects of remote sensing images is characterized by comprising the following steps:
s1: identifying and segmenting all ground features in the remote sensing image through a full convolution network to obtain segmented images of all ground features in the remote sensing image, wherein the step S1 comprises the following steps:
s11: putting a remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
s12: carrying out coordinate point marking on the remote sensing image through the plurality of convolution layer groups and the plurality of deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
s13: inputting the coordinate point color and the coordinate point depth into the CRF model layer to classify all coordinate points in the ground feature classification probability map to obtain segmentation images of different ground features;
s2: judging whether the splicing area of the segmented images has the image of a high-rise building or not;
s3: when the high-rise building is determined to exist, moving the splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or selecting an identified image area other than a building to replace the image of the high-rise building;
the step S12 includes:
s121: fusing, over multiple rounds, the image of the remote sensing image marked with coordinate points by at least one convolution layer group with the image marked with coordinate points by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
s122: fusing, over multiple rounds, the remote sensing image with the fused image after the fused image is marked with coordinate points by at least one further deconvolution layer, to obtain a ground feature classification probability map.
2. The image stitching method according to claim 1, wherein the step S13 comprises:
s131: inputting the coordinate point color into an energy function of the CRF model layer to calculate to obtain first energy values of all coordinate points in the ground feature classification probability map;
s132: inputting the depth of the coordinate points into an energy function of the CRF model layer to calculate to obtain second energy values of all the coordinate points in the ground feature classification probability map;
s133: calculating to obtain final energy values of all coordinate points according to the first energy value and the second energy value;
s134: and classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
3. The image stitching method according to claim 1 or 2, wherein the identified image areas other than buildings include: flat ground, road or river areas.
4. An image stitching system for remote sensing image ground features is characterized by comprising the following components:
the segmentation module (1) is used for identifying and segmenting all ground features in the remote sensing image through a full convolution network to obtain segmented images of all ground features in the remote sensing image, and comprises:
an input unit (11) for inputting a remote sensing image into a full convolution network, wherein the full convolution network comprises a plurality of convolution layer groups, a plurality of deconvolution layers and a CRF model layer which are sequentially arranged, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
the marking unit (12) is used for marking coordinate points on the remote sensing image through the convolution layer groups and the deconvolution layers to obtain a ground feature classification probability map, wherein different ground features in the ground feature classification probability map have different coordinate point colors and coordinate point depths;
the classification unit (13) is used for inputting the coordinate point color and the coordinate point depth into the CRF model layer to classify all coordinate points in the ground feature classification probability map so as to obtain segmentation images of different ground features;
the judging module (2) is used for judging whether the image of the high-rise building exists in the splicing area of the segmented images;
a splicing module (3) for, when the presence of the high-rise building is determined, moving a splicing line within the splicing area so that it does not coincide with the image of the high-rise building, or replacing the image of the high-rise building with an identified image area other than a building;
wherein the marking unit (12) comprises:
the first fusion component (121) is used for fusing the image of the remote sensing image marked by at least one convolution layer group coordinate point with the image marked by all the convolution layer groups and at least one deconvolution layer coordinate point for multiple times to obtain a fused image;
and the second fusion component (122) is used for fusing the remote sensing image and the image marked by the at least one deconvolution layer coordinate point of the fused image for multiple times to obtain a ground feature classification probability map.
5. The image stitching system according to claim 4, characterized in that the classification unit (13) comprises:
the first calculation component (131) is used for inputting the coordinate point color into an energy function of the CRF model layer to calculate first energy values of all coordinate points in the ground feature classification probability map;
the second calculation component (132) is used for inputting the coordinate point depth into an energy function of the CRF model layer to calculate second energy values of all coordinate points in the ground feature classification probability map;
a third calculating component (133) for calculating final energy values of all coordinate points according to the first energy value and the second energy value;
and the classification component (134) is used for classifying all coordinate points in the ground feature classification probability map according to the final energy value to obtain segmentation images of different ground features.
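The two-energy classification in claim 5 can be sketched as combining a colour energy and a depth energy per coordinate point into a final energy, then assigning each point to its lowest-energy class. This is a simplified unary-term sketch under assumed per-class prototype colours and depths; the prototypes, weights, and function names are illustrative assumptions, not the patent's actual CRF energy functions.

```python
import numpy as np

def color_energy(colors, class_colors):
    # First energy value: squared colour distance of each point (N, 3)
    # to each class prototype colour (K, 3) -> (N, K).
    diff = colors[:, None, :] - class_colors[None, :, :]
    return (diff ** 2).sum(axis=2)

def depth_energy(depths, class_depths):
    # Second energy value: squared depth distance (N,) vs (K,) -> (N, K).
    return (depths[:, None] - class_depths[None, :]) ** 2

def classify(colors, depths, class_colors, class_depths, w_color=1.0, w_depth=1.0):
    e1 = color_energy(colors, class_colors)   # first energy values
    e2 = depth_energy(depths, class_depths)   # second energy values
    final = w_color * e1 + w_depth * e2       # final energy values
    return final.argmin(axis=1)               # lowest-energy class per point

# Two toy coordinate points: a bright, tall one and a green, flat one.
colors = np.array([[0.9, 0.9, 0.9], [0.1, 0.5, 0.1]])
depths = np.array([30.0, 0.5])
class_colors = np.array([[1.0, 1.0, 1.0], [0.0, 0.5, 0.0]])  # building, vegetation
class_depths = np.array([25.0, 0.0])

labels = classify(colors, depths, class_colors, class_depths)  # -> [0, 1]
```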
6. The image stitching system according to claim 4 or 5, wherein the identified image areas other than buildings comprise: flat ground, road or river areas.
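The splicing module's seam-avoidance behaviour can be illustrated with a toy vertical-seam search over the overlap region: given a binary mask of high-rise building pixels, pick a seam column that crosses no building, falling back to region replacement when no clear column exists. The mask layout and the centre-preference heuristic below are assumptions for illustration, not the patent's method.

```python
import numpy as np

def choose_seam_column(building_mask):
    # Pick the column nearest the overlap centre whose pixels contain no
    # high-rise building, so the splicing line does not coincide with it.
    h, w = building_mask.shape
    centre = w // 2
    candidates = [c for c in range(w) if not building_mask[:, c].any()]
    if not candidates:
        return None  # fall back: replace the building image region instead
    return min(candidates, key=lambda c: abs(c - centre))

# Toy 4x6 overlap region with a building occupying columns 2-3.
mask = np.zeros((4, 6), dtype=bool)
mask[:, 2:4] = True
seam = choose_seam_column(mask)  # -> 4, the clear column nearest the centre
```

When `choose_seam_column` returns `None`, the module would instead substitute an identified non-building area (flat ground, road, or river) for the building image, as claim 6 enumerates.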
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710081140.XA CN106897968B (en) | 2017-02-15 | 2017-02-15 | Image splicing method and system for ground object of remote sensing image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710081140.XA CN106897968B (en) | 2017-02-15 | 2017-02-15 | Image splicing method and system for ground object of remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106897968A CN106897968A (en) | 2017-06-27 |
CN106897968B (en) | 2022-10-14
Family
ID=59198705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710081140.XA Active CN106897968B (en) | 2017-02-15 | 2017-02-15 | Image splicing method and system for ground object of remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106897968B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704840A (en) * | 2017-10-24 | 2018-02-16 | Shantou University | Road detection method for remote sensing images based on deep learning
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295714A (en) * | 2016-08-22 | 2017-01-04 | Institute of Electronics, Chinese Academy of Sciences | Multi-source remote sensing image fusion method based on deep learning
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104182985B (en) * | 2014-09-01 | 2017-02-01 | Xidian University | Remote sensing image change detection method
CN104217414B (en) * | 2014-09-10 | 2017-09-26 | Zhongke Jiudu (Beijing) Spatial Information Technology Co., Ltd. | Splicing line extraction method and device for image stitching
US10074041B2 (en) * | 2015-04-17 | 2018-09-11 | Nec Corporation | Fine-grained image classification by exploring bipartite-graph labels |
CN105957018B (en) * | 2016-07-15 | 2018-12-14 | Wuhan University | Frequency-division filtering stitching method for unmanned aerial vehicle images
- 2017-02-15: Application CN201710081140.XA filed in CN; granted as patent CN106897968B (legal status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295714A (en) * | 2016-08-22 | 2017-01-04 | Institute of Electronics, Chinese Academy of Sciences | Multi-source remote sensing image fusion method based on deep learning
Also Published As
Publication number | Publication date |
---|---|
CN106897968A (en) | 2017-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109255334B (en) | Remote sensing image ground feature classification method based on deep learning semantic segmentation network | |
CN107527352B (en) | Remote sensing ship target contour segmentation and detection method based on deep learning FCN network | |
US10636169B2 (en) | Synthesizing training data for broad area geospatial object detection | |
CN109086668B (en) | Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generative adversarial networks | |
CN106897681B (en) | Remote sensing image contrast analysis method and system | |
CN106910202B (en) | Image segmentation method and system for ground object of remote sensing image | |
CN106127204B (en) | Multi-direction meter reading region detection algorithm based on fully convolutional neural networks | |
CN108776772B (en) | Cross-time building change detection modeling method, detection device, method and storage medium | |
CN108596108B (en) | Aerial remote sensing image change detection method based on triple semantic relation learning | |
Pi et al. | Detection and semantic segmentation of disaster damage in UAV footage | |
JP6397379B2 (en) | CHANGE AREA DETECTION DEVICE, METHOD, AND PROGRAM | |
CN106570874B (en) | Image marking method combining image local constraint and object global constraint | |
CN106408030A (en) | SAR image classification method based on mid-level semantic attributes and convolutional neural networks | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN111611861B (en) | Image change detection method based on multi-scale feature association | |
CN110222767B (en) | Three-dimensional point cloud classification method based on nested neural network and grid map | |
CN113989662A (en) | Remote sensing image fine-grained target identification method based on self-supervision mechanism | |
CN112287983B (en) | Remote sensing image target extraction system and method based on deep learning | |
Hamida et al. | Deep learning for semantic segmentation of remote sensing images with rich spectral content | |
CN113920436A (en) | Remote sensing image marine vessel recognition system and method based on improved YOLOv4 algorithm | |
CN112560675A (en) | Bird visual target detection method combining YOLO and rotation-fusion strategy | |
CN113610070A (en) | Landslide disaster identification method based on multi-source data fusion | |
Wang et al. | Feature extraction and segmentation of pavement distress using an improved hybrid task cascade network | |
CN106897683B (en) | Ground object detection method and system of remote sensing image | |
CN117475236B (en) | Data processing system and method for mineral resource exploration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||