CN106897968A - Image stitching method and system for ground objects in remote sensing images - Google Patents
Image stitching method and system for ground objects in remote sensing images Download PDFInfo
- Publication number
- CN106897968A CN106897968A CN201710081140.XA CN201710081140A CN106897968A CN 106897968 A CN106897968 A CN 106897968A CN 201710081140 A CN201710081140 A CN 201710081140A CN 106897968 A CN106897968 A CN 106897968A
- Authority
- CN
- China
- Prior art keywords
- coordinate points
- image
- remote sensing
- sensing images
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 16
- 230000011218 segmentation Effects 0.000 claims abstract description 39
- 238000003475 lamination Methods 0.000 claims description 29
- 230000004927 fusion Effects 0.000 claims description 10
- 238000004804 winding Methods 0.000 claims description 2
- 230000009286 beneficial effect Effects 0.000 abstract description 5
- 230000006870 function Effects 0.000 description 13
- 239000003086 colorant Substances 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 238000004040 coloring Methods 0.000 description 2
- 230000007935 neutral effect Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to an image stitching method and system for ground objects in remote sensing images. The method comprises: S1: identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image; S2: judging whether a high-rise building exists in the stitching region of the segmentation image; S3: when it is determined that a high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings. The beneficial effect of the invention is that, after the ground objects in the remote sensing image are identified and segmented, tall buildings are avoided during stitching, which prevents a single building from appearing as multiple buildings leaning in different directions.
Description
Technical field
The present invention relates to the technical field of remote sensing image processing, and in particular to an image stitching method and system for ground objects in remote sensing images.
Background technology
In the field of remote sensing, image stitching technology can splice captured ground images into a more accurate and complete image as a basis for further processing, and plays a very important role in fields such as land redistribution, disaster prevention and control, unmanned aerial vehicles, satellites, unmanned boats, and resource monitoring. Remote sensing images are captured at high altitude; an image may contain many tall buildings, and because the shooting angles differ, the captured high-rise buildings appear in different forms. During stitching, a single building may therefore appear as multiple buildings leaning in different directions, a result that clearly cannot meet users' needs.
Summary of the invention
The technical problem to be solved by the present invention is that remote sensing images are taken at different shooting angles, so the captured high-rise buildings appear in different forms, and during stitching a single building may appear as multiple buildings leaning in different directions.
The technical solution by which the present invention solves the above technical problem is as follows:
An image stitching method for ground objects in remote sensing images, comprising:
S1: identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
S2: judging whether a high-rise building exists in the stitching region of the segmentation image;
S3: when it is determined that the high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
The beneficial effect of the invention is that, after the ground objects in the remote sensing image are identified and segmented, tall buildings are avoided during stitching, which prevents a single building from appearing as multiple buildings leaning in different directions.
On the basis of the above technical solution, the present invention can be further improved as follows.
Preferably, step S1 comprises:
S11: feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
S12: performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
S13: inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
The beneficial effect of this further scheme is that the color and depth of the remote sensing image are brought into image recognition and segmentation, color information and depth information are analyzed jointly, and the CRF model layer serves as the upsampling layer of the deep-learning neural network, refining the coarse segmentation output by the network into a fine segmentation of the image.
Preferably, step S12 comprises:
S121: repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
S122: repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
The beneficial effect of this further scheme is that the fully convolutional network replaces the fully connected layers of a conventional network with convolutions and adds deconvolution layers, and fusing the outputs of the earlier layers with the final output of the network recovers more image information.
Preferably, step S13 comprises:
S131: inputting the coordinate-point colors into the energy function of the CRF model layer to calculate a first energy value for all coordinate points in the ground-object classification probability map;
S132: inputting the coordinate-point depths into the energy function of the CRF model layer to calculate a second energy value for all coordinate points in the ground-object classification probability map;
S133: calculating a final energy value for all coordinate points from the first energy value and the second energy value;
S134: classifying all coordinate points in the ground-object classification probability map according to the final energy value, obtaining segmentation images of the different ground objects.
The beneficial effect of this further scheme is that the CRF algorithm and the Gibbs energy function are improved: the coordinate-point color and depth are fed into the energy function as the basis for judgment, and iteration classifies the coordinate points correctly while reducing the value of the energy function, realizing fine image segmentation.
Preferably, the recognized image regions other than buildings include: flat land, road, or river regions.
An image stitching system for ground objects in remote sensing images, comprising:
a segmentation module for identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
a judgment module for judging whether a high-rise building exists in the stitching region of the segmentation image;
a stitching module for, when it is determined that the high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
Preferably, the segmentation module comprises:
an input unit for feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
a labeling unit for performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
a classification unit for inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
Preferably, the labeling unit comprises:
a first fusion component for repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
a second fusion component for repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
Preferably, the classification unit comprises:
a first calculation component for inputting the coordinate-point colors into the energy function of the CRF model layer to calculate a first energy value for all coordinate points in the ground-object classification probability map;
a second calculation component for inputting the coordinate-point depths into the energy function of the CRF model layer to calculate a second energy value for all coordinate points in the ground-object classification probability map;
a third calculation component for calculating a final energy value for all coordinate points from the first energy value and the second energy value;
a classification component for classifying all coordinate points in the ground-object classification probability map according to the final energy value, obtaining segmentation images of the different ground objects.
Preferably, the recognized image regions other than buildings include: flat land, road, or river regions.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an image stitching method for ground objects in remote sensing images provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image stitching method for ground objects in remote sensing images provided by another embodiment of the present invention;
Fig. 3 is a schematic flowchart of an image stitching method for ground objects in remote sensing images provided by another embodiment of the present invention;
Fig. 4 is a schematic flowchart of an image stitching method for ground objects in remote sensing images provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image stitching system for ground objects in remote sensing images provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an image stitching system for ground objects in remote sensing images provided by another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an image stitching system for ground objects in remote sensing images provided by another embodiment of the present invention.
Detailed description of the embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples serve only to explain the present invention and are not intended to limit its scope.
As shown in Fig. 1, in an embodiment, an image stitching method for ground objects in remote sensing images is provided, comprising:
S1: identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
S2: judging whether a high-rise building exists in the stitching region of the segmentation image;
S3: when it is determined that a high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
It should be understood that in this embodiment, after the ground objects in the remote sensing image are identified and segmented, tall buildings are avoided during stitching, which prevents a single building from appearing as multiple buildings leaning in different directions.
Specifically, in this embodiment, all ground objects in the remote sensing image are identified and segmented by the fully convolutional network, a deep-learning network, to obtain segmentation images of all ground objects, and these ground objects are then stitched. During stitching, it is first judged whether a high-rise building exists in the stitching region; if so, the seam line is moved to avoid the high-rise building, or a recognized region other than buildings, such as a flat-land, road, or river region, is selected to replace the high-rise building.
As shown in Fig. 2, in another embodiment, step S1 in Fig. 1 comprises:
S11: feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
S12: performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
S13: inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
It should be understood that in this embodiment, the color and depth of the remote sensing image are brought into image recognition and segmentation, color information and depth information are analyzed jointly, and the CRF model layer serves as the upsampling layer of the deep-learning neural network, refining the coarse segmentation output by the network into a fine segmentation of the image.
Specifically, in this embodiment, the conventional fully convolutional network is first improved: the fully connected layers are replaced with convolution layers, and after the convolution layers the image is upsampled by deconvolution layers and the CRF model layer. The image to be segmented is then fed into this improved fully convolutional network, and coordinate-point labeling is performed on the remote sensing image through seven convolution layers and three deconvolution layers, giving the coordinate points different colors and depths. Finally, the CRF model layer iteratively classifies all coordinate points in the labeled image according to coordinate-point color and depth, performing fine segmentation to obtain segmentation images of the different ground objects. CRF (conditional random field) combines the characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graphical model that has in recent years achieved good results in sequence labeling tasks such as word segmentation, part-of-speech tagging, and named entity recognition. CRF is a typical discriminative model.
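The dilated convolution layers alternated with ordinary convolutions in each convolutional layer group can be sketched in plain numpy. This is a single-channel, valid-mode, correlation-style convolution for illustration only; the actual kernels, channel counts, and dilation rates of the patent's network are not specified here.

```python
import numpy as np

def dilated_conv2d(x, k, dilation=1):
    """Valid-mode 2-D (correlation-style) convolution with a dilated
    kernel: kernel taps are spaced `dilation` pixels apart, enlarging
    the receptive field without adding parameters."""
    kh, kw = k.shape
    # Effective kernel footprint after dilation.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = (patch * k).sum()
    return out
```

With dilation 2, a 3x3 kernel covers a 5x5 neighbourhood, which is why such layers let the network gather wider context before the deconvolution layers upsample the result.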
As shown in Fig. 3, in another embodiment, step S12 in Fig. 2 comprises:
S121: repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
S122: repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
It should be understood that in this embodiment, the fully convolutional network replaces the fully connected layers of a conventional network with convolution layers and adds deconvolution layers, and fusing the outputs of the earlier layers with the final output of the network recovers more image information.
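The fusion of early-layer outputs with upsampled later-layer outputs described above resembles the skip connections of FCN-style networks. A minimal sketch, assuming element-wise summation and using nearest-neighbour upsampling as a stand-in for a learned deconvolution layer:

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x upsampling, standing in for a
    deconvolution (transposed convolution) layer."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

def fuse(shallow, deep):
    """Element-wise sum of an early conv-group feature map with the
    upsampled output of the later layers (FCN-style skip fusion)."""
    return shallow + upsample2x(deep)
```

In the patent's scheme this fusion is applied repeatedly, so fine spatial detail from shallow layers and semantic information from deep layers both reach the classification probability map.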
As shown in Fig. 4, in another embodiment, step S13 in Fig. 2 comprises:
S131: inputting the coordinate-point colors into the energy function of the CRF model layer to calculate a first energy value for all coordinate points in the ground-object classification probability map;
S132: inputting the coordinate-point depths into the energy function of the CRF model layer to calculate a second energy value for all coordinate points in the ground-object classification probability map;
S133: calculating a final energy value for all coordinate points from the first energy value and the second energy value;
S134: classifying all coordinate points in the ground-object classification probability map according to the final energy value, obtaining segmentation images of the different ground objects.
It should be understood that in this embodiment, the coordinate-point color and depth are fed into the energy function as the basis for judgment, and iteration classifies the coordinate points correctly while reducing the value of the energy function, realizing fine image segmentation.
Specifically, in this embodiment, the coordinate-point colors and the coordinate-point depths are separately input into the energy function of the CRF model layer, yielding for all coordinate points in the ground-object classification probability map a first energy value corresponding to color and a second energy value corresponding to depth. The first and second energy values are added to obtain the total energy of each coordinate point, and the probability map is finely segmented according to this total energy, obtaining the ground-object segmentation images.
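The two-energy combination above can be sketched as follows. The squared-distance unary energies and the per-label prototype values are illustrative assumptions; the patent does not give the exact form of its Gibbs energy function, and a full CRF would also iterate over pairwise terms between neighbouring points.

```python
import numpy as np

def unary_energy(feature, prototypes):
    """Energy of assigning each pixel's scalar feature to each label,
    taken here as squared distance to a per-label prototype value.
    feature: (H, W); prototypes: (L,) -> energy: (H, W, L)."""
    return (feature[..., None] - prototypes[None, None, :]) ** 2

def classify(color, depth, color_protos, depth_protos):
    e1 = unary_energy(color, color_protos)   # first energy value (color)
    e2 = unary_energy(depth, depth_protos)   # second energy value (depth)
    total = e1 + e2                          # final energy per pixel/label
    return total.argmin(axis=-1)             # label minimizing the energy
```

Choosing, for each coordinate point, the label with the lowest total energy is what "reducing the value of the energy function" amounts to for these unary terms.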
It should be understood that in this embodiment, the recognized image regions other than buildings include: flat land, road, or river regions.
As shown in Fig. 5, in an embodiment, an image stitching system for ground objects in remote sensing images is provided, comprising:
a segmentation module 1 for identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
a judgment module 2 for judging whether a high-rise building exists in the stitching region of the segmentation image;
a stitching module 3 for, when it is determined that a high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
As shown in Fig. 6, in another embodiment, the segmentation module 1 in Fig. 5 comprises:
an input unit 11 for feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
a labeling unit 12 for performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
a classification unit 13 for inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
As shown in Fig. 7, in another embodiment, the labeling unit 12 in Fig. 6 comprises:
a first fusion component 121 for repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
a second fusion component 122 for repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
As shown in Fig. 7, in another embodiment, the classification unit 13 in Fig. 6 comprises:
a first calculation component 131 for inputting the coordinate-point colors into the energy function of the CRF model layer to calculate a first energy value for all coordinate points in the ground-object classification probability map;
a second calculation component 132 for inputting the coordinate-point depths into the energy function of the CRF model layer to calculate a second energy value for all coordinate points in the ground-object classification probability map;
a third calculation component 133 for calculating a final energy value for all coordinate points from the first energy value and the second energy value;
a classification component 134 for classifying all coordinate points in the ground-object classification probability map according to the final energy value, obtaining segmentation images of the different ground objects.
It should be understood that in this embodiment, the recognized image regions other than buildings include: flat land, road, or river regions.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. An image stitching method for ground objects in remote sensing images, characterized by comprising:
S1: identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
S2: judging whether a high-rise building exists in the stitching region of the segmentation image;
S3: when it is determined that the high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
2. The image stitching method according to claim 1, characterized in that step S1 comprises:
S11: feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
S12: performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
S13: inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
3. The image stitching method according to claim 2, characterized in that step S12 comprises:
S121: repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
S122: repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
4. The image stitching method according to claim 3, characterized in that step S13 comprises:
S131: inputting the coordinate-point colors into the energy function of the CRF model layer to calculate a first energy value for all coordinate points in the ground-object classification probability map;
S132: inputting the coordinate-point depths into the energy function of the CRF model layer to calculate a second energy value for all coordinate points in the ground-object classification probability map;
S133: calculating a final energy value for all coordinate points from the first energy value and the second energy value;
S134: classifying all coordinate points in the ground-object classification probability map according to the final energy value, obtaining segmentation images of the different ground objects.
5. The image stitching method according to any one of claims 1-4, characterized in that the recognized image regions other than buildings include: flat land, road, or river regions.
6. An image stitching system for ground objects in remote sensing images, characterized by comprising:
a segmentation module (1) for identifying and segmenting all ground objects in a remote sensing image through a fully convolutional network to obtain a segmentation image of all ground objects in the remote sensing image;
a judgment module (2) for judging whether a high-rise building exists in the stitching region of the segmentation image;
a stitching module (3) for, when it is determined that the high-rise building exists, moving the seam line in the stitching region so that it does not overlap the image of the high-rise building, or replacing the image of the high-rise building with a recognized image region other than buildings.
7. The image stitching system according to claim 6, characterized in that the segmentation module comprises:
an input unit (11) for feeding the remote sensing image into the fully convolutional network, the fully convolutional network comprising, arranged in sequence, multiple convolutional layer groups, multiple deconvolution layers, and a CRF model layer, wherein each convolutional layer group comprises alternately arranged convolution layers and dilated convolution layers;
a labeling unit (12) for performing coordinate-point labeling on the remote sensing image through the multiple convolutional layer groups and the multiple deconvolution layers to obtain a ground-object classification probability map, wherein different ground objects in the probability map have different coordinate-point colors and coordinate-point depths;
a classification unit (13) for inputting the coordinate-point colors and coordinate-point depths into the CRF model layer to classify all coordinate points in the ground-object classification probability map, obtaining segmentation images of the different ground objects.
8. The image stitching system according to claim 7, characterized in that the labeling unit (12) comprises:
a first fusion component (121) for repeatedly fusing the image obtained from the remote sensing image after coordinate-point labeling by at least one convolutional layer group with the image obtained after coordinate-point labeling by all the convolutional layer groups and at least one deconvolution layer, obtaining a fused image;
a second fusion component (122) for repeatedly fusing the remote sensing image and the fused image with the image obtained after coordinate-point labeling by at least one deconvolution layer, obtaining the ground-object classification probability map.
9. The image splicing system according to claim 8, characterized in that the classification unit (13) comprises:
a first computation component (131), configured to input the coordinate point colors into an energy function of the CRF model layer to compute a first energy value of all coordinate points in the ground object classification probability map;
a second computation component (132), configured to input the coordinate point depths into the energy function of the CRF model layer to compute a second energy value of all coordinate points in the ground object classification probability map;
a third computation component (133), configured to compute a final energy value of all coordinate points from the first energy value and the second energy value;
a classification component (134), configured to classify all coordinate points in the ground object classification probability map according to the final energy value, so as to obtain the segmentation images of the different ground objects.
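Claim 9 combines two CRF energy terms, one derived from coordinate point colors and one from coordinate point depths, into a final energy that drives the classification. The patent does not give the concrete energy function, so the sketch below assumes simple weighted unary terms of the common -log(p) form; the weights `w_color` and `w_depth` and the random probability map are illustrative placeholders, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, k = 4, 4, 3                          # map size, 3 ground-object classes
probs = rng.random((h, w, k))
probs /= probs.sum(axis=2, keepdims=True)  # classification probability map

# Hypothetical unary energies: -log(p) weighted separately for the
# colour- and depth-derived terms (assumed form, not from the patent).
w_color, w_depth = 1.0, 0.5
e_color = w_color * -np.log(probs)         # first energy value (131)
e_depth = w_depth * -np.log(probs)         # second energy value (132)
e_final = e_color + e_depth                # final energy value (133)

labels = e_final.argmin(axis=2)            # classification component (134)
```

Because both terms here rescale the same probabilities, minimizing the final energy reduces to picking the most probable class per coordinate point; a full CRF would add pairwise terms that couple neighbouring points so that color- and depth-similar neighbours prefer the same label.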
10. The image splicing system according to any one of claims 6-9, characterized in that the identified image regions other than buildings comprise: flat ground, road, or river regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710081140.XA CN106897968B (en) | 2017-02-15 | 2017-02-15 | Image splicing method and system for ground object of remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106897968A true CN106897968A (en) | 2017-06-27 |
CN106897968B CN106897968B (en) | 2022-10-14 |
Family
ID=59198705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710081140.XA Active CN106897968B (en) | 2017-02-15 | 2017-02-15 | Image splicing method and system for ground object of remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106897968B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704840A (en) * | 2017-10-24 | 2018-02-16 | Shantou University | Road detection method for remote sensing images based on deep learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104182985A (en) * | 2014-09-01 | 2014-12-03 | Xidian University | Remote sensing image change detection method |
CN104217414A (en) * | 2014-09-10 | 2014-12-17 | Zhongke Jiudu (Beijing) Space Information Technology Co., Ltd. | Method and device for extracting a mosaic line for image mosaicking |
CN105957018A (en) * | 2016-07-15 | 2016-09-21 | Wuhan University | Unmanned aerial vehicle image filtering and frequency-division stitching method |
US20160307072A1 (en) * | 2015-04-17 | 2016-10-20 | Nec Laboratories America, Inc. | Fine-grained Image Classification by Exploring Bipartite-Graph Labels |
CN106295714A (en) * | 2016-08-22 | 2017-01-04 | Institute of Electronics, Chinese Academy of Sciences | Multi-source remote sensing image fusion method based on deep learning |
2017-02-15: Application CN201710081140.XA filed in China; granted as CN106897968B (status: Active)
Non-Patent Citations (4)
Title |
---|
EMMANUEL MAGGIORI et al.: "Fully convolutional neural networks for remote sensing image classification", 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) * |
EVAN SHELHAMER: "Fully Convolutional Networks for Semantic Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
HAO ZHOU et al.: "Image semantic segmentation based on FCN-CRF model", 2016 International Conference on Image, Vision and Computing (ICIVC) * |
Docin.com: "Principles and practice of remote sensing: taking the third round of aerial remote sensing survey of Shanghai as an example", https://www.docin.com/p-921897789.html * |
Also Published As
Publication number | Publication date |
---|---|
CN106897968B (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104809187B (en) | Indoor scene semantic annotation method based on RGB-D data | |
CN107247938A (en) | Method for functional classification of urban buildings in high-resolution remote sensing images | |
CN108549893A (en) | End-to-end recognition method for scene text of arbitrary shapes | |
CN107862261A (en) | Image crowd counting method based on multi-scale convolutional neural networks | |
CN110414387A (en) | Lane line multi-task learning detection method based on lane segmentation | |
CN109508710A (en) | Night environment perception method for unmanned vehicles based on an improved YOLOv3 network | |
CN106910202A (en) | Image segmentation method and system for ground objects of remote sensing images | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN110322499A (en) | Monocular image depth estimation method based on multilayer features | |
CN109086668A (en) | Road information extraction method for UAV remote sensing images based on multi-scale generative adversarial networks | |
CN112149594B (en) | Urban construction assessment method based on deep learning and high-resolution satellite images | |
CN113408594B (en) | Remote sensing scene classification method based on attention-network multi-scale feature fusion | |
CN109034268B (en) | Optimization method for a red turpentine beetle detector oriented to pheromone traps | |
CN109409240A (en) | SegNet semantic segmentation method for remote sensing images combined with random walk | |
CN105550687A (en) | Multichannel fusion feature extraction method for RGB-D images based on the ISA model | |
CN108090911A (en) | Offshore ship segmentation method for remote sensing images | |
CN112084869A (en) | Building target detection method based on compact quadrilateral representation | |
CN110807485B (en) | Method for fusing binary-classification semantic segmentation maps into a multi-classification semantic map based on high-resolution remote sensing images | |
CN109858450A (en) | Method and system for extracting towns from ten-meter spatial resolution remote sensing images | |
CN107944437B (en) | Face detection method based on neural networks and integral images | |
CN102902956A (en) | Ground-based visible-light cloud image recognition processing method | |
CN110110682A (en) | Semantic stereo reconstruction method for remote sensing images | |
CN110047139A (en) | Three-dimensional reconstruction method and system for a specified target | |
CN106611421A (en) | SAR image segmentation method based on feature learning and sketch line constraints | |
CN110472628A (en) | Improved Faster R-CNN network method for detecting floating objects based on video features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||