CN109086668A - Method for extracting road information from unmanned aerial vehicle (UAV) remote sensing images based on a multi-scale generative adversarial network - Google Patents
- Publication number
- CN109086668A CN109086668A CN201810707890.8A CN201810707890A CN109086668A CN 109086668 A CN109086668 A CN 109086668A CN 201810707890 A CN201810707890 A CN 201810707890A CN 109086668 A CN109086668 A CN 109086668A
- Authority
- CN
- China
- Prior art keywords
- network
- image
- remote sensing
- size
- sensing images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
Abstract
The invention discloses a road extraction method for UAV remote sensing images based on a multi-scale generative adversarial network. A remote sensing image is passed through a convolution operation and a deconvolution operation to obtain two further images, one with its length and width halved and one with them doubled. Each of the three images is fed through an end-to-end trained image segmentation network, yielding output feature maps at the three corresponding scales; convolution and deconvolution operations then bring all three back to the original scale, where they are fused by pixel-wise addition. The fused map is input to a discriminator network and compared with the label image to obtain an error, and the parameters of the generator and discriminator networks are updated accordingly. After training on a sufficient amount of training data, the output of the generator of the adversarial network is taken, in application, as the segmentation result, i.e., the extracted road-area image. For cases in which the road area occupies an excessively large or small proportion of a single UAV image, the present invention extracts the road area well and improves the accuracy of road-area segmentation in UAV remote sensing images.
Description
Technical field
The present invention relates to the technical field of automatic processing of UAV remote sensing images, and specifically to a method for extracting road information from high-resolution UAV remote sensing images based on a generative adversarial network fused with multi-scale image processing.
Background art
As one of the development trends of remote sensing, UAV remote sensing offers strong timeliness, targeted acquisition, and high flexibility in data collection, and is an important means of obtaining remote sensing data. Roads are among the most common ground-object information in remote sensing imagery, and the extraction of road information is of great significance in fields concerning national welfare and people's livelihood, such as military strategy, space mapping, urban construction, traffic administration, and navigation.
In recent years, with the rapid development of deep learning, various fields of machine learning, including computer vision, have been rapidly taken over by deep learning, including image classification, object detection, and semantic image segmentation. Compared with traditional algorithms, deep learning often brings improvements of 20%-30%; the key factor is the powerful ability of convolutional neural networks to learn image features, with which traditional pixel-based and boundary-recognition algorithms cannot compete.
Although many existing convolutional neural network models already perform very well in semantic image segmentation, they sometimes fail to learn certain features of the training data. In semantic segmentation in particular, a segmentation model usually classifies each pixel individually: its pixel-level prediction accuracy may be very high, but the correlation between neighboring pixels is easily ignored, so objects in the segmentation result may be incomplete, or the size and shape of certain objects may differ considerably from those in the label. Moreover, the complexity and variability of real scenes always expose shortcomings in the generality of convolutional neural networks: occlusion and overlap of objects, large variations of target objects across scenes, the lack of highly discriminative features, and illumination changes are all factors that deprive segmentation models of generalization ability.
The generative adversarial network (GAN) is a method proposed precisely to solve the above problems. The generative model in a GAN learns, through a convolutional neural network, to map random noise to an image whose distribution resembles that of the input data; a discriminator network then controls the difference between the generated fake image and the original input image, driving the fake image as close to the original as possible until the two can no longer be distinguished.
In the semantic image segmentation task, the generative model of a GAN learns from the features of the input RGB image to generate a pixel-level probability map of label-class predictions, while the discriminator network distinguishes the probability map produced by the generative model from the ground-truth label. Compared with a traditional convolutional neural network, a GAN model not only improves the completeness of individual objects in the segmentation result but also preserves the mutual independence between objects, improving segmentation accuracy.
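The adversarial balance described above is usually written as the standard GAN minimax objective (this is the textbook formulation, not reproduced from the patent text):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

In the segmentation setting of this patent, the role of the noise input is played by the RGB remote sensing image: the generator maps the image to a road-probability map, and the discriminator judges whether a probability map paired with its image is the ground-truth label (desired output 1) or a generated map (desired output 0).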
Owing to the characteristics of the UAV remote sensing platform, the flight altitude often differs between sorties, so the same ground object is imaged at inconsistent sizes. For road areas, when the UAV flies low, the road area may occupy 90% or more of a single image, even up to 100%; when the UAV flies high, the road area may account for only 10% of a single image, or even less. In existing convolutional neural network architectures, once the network model is designed, extracting features with a larger convolution kernel tends to ignore smaller targets, while extracting features with a smaller convolution kernel easily produces discontinuities in the segmentation of larger targets, degrading image segmentation accuracy.
Summary of the invention
The object of the present invention is to address the problem that inconsistent UAV flight altitudes make the road area occupy an excessively large or small proportion of a single remote sensing image at imaging time, which impairs road-extraction accuracy. To this end, a multi-scale UAV road extraction method based on a generative adversarial network is proposed, which exploits the advantages of GANs in semantic image segmentation and combines them with multi-scale image processing to improve the accuracy of road-area extraction from UAV remote sensing images.
To achieve the above object, the UAV image road information extraction method of the present invention, based on a multi-scale generative adversarial network, is characterized by comprising the following steps:
(1) Obtain training data: crop the original UAV remote sensing images into a series of n × n remote sensing images, then produce label images marking the road areas; each remote sensing image together with its corresponding label image forms the training data;
(2) Build the generator network:
2.1) In the generator, pass the RGB three-channel image of each n × n remote sensing image through a convolution operation and a deconvolution operation respectively, obtaining RGB three-channel images of sizes 0.5n × 0.5n and 2n × 2n;
2.2) In the generator, pass the 2n × 2n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network to obtain a 2n × 2n class-probability feature map, then apply a convolution operation to obtain an n × n probability feature map;
2.3) In the generator, pass the 0.5n × 0.5n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network with the same structure as in step 2.2) to obtain a 0.5n × 0.5n class-probability feature map, then apply a deconvolution operation to obtain an n × n probability feature map;
2.4) In the generator, pass the RGB image of the n × n remote sensing image through an image segmentation network with the same structure as in step 2.2) to obtain an n × n class-probability feature map, i.e., an n × n probability feature map;
2.5) Finally, in the generator, fuse the three n × n probability feature maps obtained in steps 2.2), 2.3), and 2.4) by pixel-wise addition, merging the image features of the three scales to obtain the output feature map of the generator;
(3) Input the n × n remote sensing images of the training data into the generator built in step (2) to obtain output feature maps; pass the output feature map and the n × n remote sensing image each through one convolution operation, concatenate the resulting feature maps as the input of the discriminator network, and obtain from the discriminator an output between 0 and 1; the discriminator treats this input as a fake image, so its desired output is 0, and subtracting this desired output from the actual discriminator output gives the error;
(4) Pass each n × n remote sensing image of the training data and its corresponding label image through one convolution operation, concatenate the resulting feature maps as the input of the discriminator network, and obtain from the discriminator an output between 0 and 1; the discriminator treats this input as a real image, so its desired output is 1, and subtracting this desired output from the actual discriminator output gives the error;
(5) Back-propagate the errors obtained in steps (3) and (4), and update the parameters of the generator and the discriminator; the end-to-end trained image segmentation networks of steps 2.2), 2.3), and 2.4) share their weights;
(6) Train the generator with all remote sensing images and corresponding label images of the training data obtained in step (1) through steps (3), (4), and (5), until the generator and the discriminator of the adversarial network reach an equilibrium state: the output feature map (fake image) produced by the generator differs from the label image by very little, so that the discriminator can no longer tell whether its input comes from a label image or from an output feature map, i.e., a fake image, produced by the generator;
(7) Take the generator of the adversarial network that has reached the equilibrium state out on its own for application: crop the remote sensing images actually captured by the UAV into a series of n × n remote sensing images, feed them in as input, and take the output feature maps of the generator as the segmentation result, i.e., the extracted road-area images.
The object of the present invention is achieved as follows.
In the multi-scale GAN-based UAV road extraction method of the present invention, the remote sensing image is first passed through a convolution operation and a deconvolution operation to obtain images with the length and width halved and doubled, respectively. Second, the images of the three scales are passed through an end-to-end trained image segmentation network to obtain pixel-level prediction probability maps, i.e., output feature maps, at the three corresponding scales. Third, convolution and deconvolution operations bring the pixel-level output feature maps of the three scales back to the size of the original training images, and the features of the three scales are fused by pixel-wise addition. Finally, the output feature map fusing the features of the three scales is input to the discriminator network, compared with the ground-truth label to obtain an error, and the error is back-propagated to update the parameters of the generator and the discriminator. After training on a sufficient amount of training data, the generator and the discriminator of the adversarial network reach a balance: the fake image produced by the generator differs from the true label image by very little, so the discriminator can no longer tell whether its input comes from a label image or from a fake image produced by the generator. The output of the generator of the adversarial network is then taken, in application, as the segmentation result, i.e., the extracted road-area image.
The present invention learns the features of UAV remote sensing images with convolutional neural networks, combines the advantages of generative adversarial networks, and at the same time fuses multi-scale image processing. For cases in which the road area occupies an excessively large or small proportion of a single image, the present invention extracts the road area well and improves the accuracy of road-area segmentation in UAV remote sensing images.
Brief description of the drawings
Fig. 1 is the overall structure of the generative adversarial network;
Fig. 2 is a flow chart of a specific embodiment of the multi-scale GAN-based UAV road extraction method of the present invention;
Fig. 3 is the structure of the multi-scale fusion generator network of the present invention;
Fig. 4 is the structure of the discriminator network;
Fig. 5 is one set of comparison images between the road-area images output by the present invention and by a generator without fused multi-scale features;
Fig. 6 is another set of comparison images between the road-area images output by the present invention and by a generator without fused multi-scale features.
Specific embodiment
A specific embodiment of the invention is described with reference to the accompanying drawing, preferably so as to those skilled in the art
Understand the present invention.Requiring particular attention is that in the following description, when known function and the detailed description of design perhaps
When can desalinate main contents of the invention, these descriptions will be ignored herein.
Fig. 1 shows the overall structure of the generative adversarial network.
As shown in Fig. 1, a remote sensing image is input to the generator of the adversarial network to obtain an output feature map, i.e., a fake image. Either the label image or the fake image, together with the remote sensing image, is input to the discriminator network to obtain a real/fake probability, which is subtracted from the desired output 1/0 to obtain an error. The error is back-propagated, and the parameters of the generator and the discriminator are updated. Remote sensing images and their corresponding label images are fed in continually as training data until the generator and the discriminator reach an equilibrium state: the output feature map (fake image) produced by the generator differs from the label image by very little, so the discriminator can no longer tell whether its input comes from a label image or from an output feature map, i.e., a fake image, produced by the generator. The generator thus obtained can then be applied to the segmentation of remote sensing images actually captured by a UAV.
Fig. 2 is a flow chart of a specific embodiment of the multi-scale GAN-based UAV road extraction method of the present invention.
In this embodiment, as shown in Fig. 1, the UAV image road information extraction method of the present invention, based on a multi-scale generative adversarial network, comprises the following steps:
Step S1: obtain training data.
The original UAV remote sensing images are cropped into a series of n × n remote sensing images, and label images marking the road areas are produced; each remote sensing image together with its corresponding label image forms the training data.
In this embodiment, the original UAV remote sensing images are cropped into a series of 500 × 500 remote sensing images, and the label images marking the road areas are made by hand. To verify the object segmentation ability of the invention, 90% of the remote sensing images and their corresponding label images are used as training data, and the remaining 10% of the remote sensing images and their corresponding label images are used as test data.
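As a concrete illustration, the tiling and the 90/10 split of step S1 can be sketched as follows. This is a minimal sketch assuming non-overlapping tiles and a random split; the patent specifies neither detail, and the function names are hypothetical:

```python
import numpy as np

def crop_tiles(image, n=500):
    """Crop an H x W x 3 remote-sensing image into non-overlapping n x n tiles.

    Tiles that would extend past the border are discarded here; padding or
    overlapping strategies are equally valid choices not fixed by the patent.
    """
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            tiles.append(image[y:y + n, x:x + n])
    return tiles

def split_train_test(pairs, train_frac=0.9, seed=0):
    """Randomly split (image, label) tile pairs 90/10 as in the embodiment."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(train_frac * len(pairs))
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]
```

With a 1200 × 1700 source image and n = 500, this yields 2 × 3 = 6 full tiles; each tile is paired with the matching crop of the hand-made label image before splitting.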
Step S2: build the generator network.
As shown in Fig. 3, in the generator, the RGB three-channel image I of each n × n remote sensing image is passed through a convolution operation and a deconvolution operation respectively, obtaining an RGB three-channel image I1 of size 2n × 2n and an RGB three-channel image I2 of size 0.5n × 0.5n; in this embodiment, I1 and I2 are 1000 × 1000 and 250 × 250 RGB three-channel images, respectively.
The 2n × 2n image I1 is passed through an end-to-end trained image segmentation network to obtain a 2n × 2n class-probability feature map I3, and a convolution operation then yields an n × n probability feature map I4.
The 0.5n × 0.5n image I2 is passed through an end-to-end trained image segmentation network of the same structure to obtain a 0.5n × 0.5n class-probability feature map I5, and a deconvolution operation then yields an n × n probability feature map I6.
The RGB image I of the n × n remote sensing image is passed through an image segmentation network of the same structure to obtain an n × n class-probability feature map, i.e., the n × n probability feature map I7.
Finally, the three n × n probability feature maps I4, I6, and I7 are fused by pixel-wise addition, merging the image features of the three scales to obtain the output feature map I8 of the generator.
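A minimal PyTorch sketch of this multi-scale generator follows. The segmentation sub-network is a deliberately tiny stand-in with hypothetical layer widths (the patent leaves its internal architecture open), but the scale handling mirrors step S2: a shared segmentation network applied at three scales, convolution/deconvolution for resizing, and pixel-wise addition for fusion:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the end-to-end segmentation network shared across scales.

    The patent does not fix its architecture; any size-preserving
    encoder-decoder (FCN/U-Net style) fits here.
    """
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # road-probability map
        )

    def forward(self, x):
        return self.body(x)

class MultiScaleGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(3, 3, 3, stride=2, padding=1)              # I -> I2 (0.5n)
        self.up = nn.ConvTranspose2d(3, 3, 4, stride=2, padding=1)       # I -> I1 (2n)
        self.seg = TinySegNet()                                          # shared weights (step S5)
        self.down_prob = nn.Conv2d(1, 1, 3, stride=2, padding=1)         # I3 (2n) -> I4 (n)
        self.up_prob = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)  # I5 (0.5n) -> I6 (n)

    def forward(self, x):
        p_big = self.down_prob(self.seg(self.up(x)))    # 2n branch, back to n
        p_small = self.up_prob(self.seg(self.down(x)))  # 0.5n branch, back to n
        p_orig = self.seg(x)                            # native-scale branch: I7
        return p_big + p_small + p_orig                 # pixel-wise fusion: I8
```

Because `self.seg` is one module invoked on all three branches, its weights are shared exactly as step S5 requires.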
Step S3: the n × n remote sensing images of the training data are input into the generator built in step S2 to obtain output feature maps. As shown in Fig. 4, the output feature map and the n × n remote sensing image are each passed through one convolution operation, and the resulting feature maps are concatenated as the input of the discriminator network, which then produces an output between 0 and 1. The discriminator treats this input as a fake image, so its desired output is 0; subtracting this desired output from the actual discriminator output gives the error.
Step S4: each n × n remote sensing image of the training data and its corresponding label image are passed through one convolution operation, and the resulting feature maps are concatenated as the input of the discriminator network, which then produces an output between 0 and 1. The discriminator treats this input as a real image, so its desired output is 1; subtracting this desired output from the actual discriminator output gives the error.
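A sketch of the discriminator of Fig. 4, as used in steps S3 and S4, follows. The patent only specifies one convolution per input branch, concatenation of the resulting feature maps, and a scalar output between 0 and 1; the channel widths, depth, and pooling below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Two-branch discriminator: the probability map (fake output or label)
    and the RGB remote-sensing image each pass through one convolution, the
    feature maps are concatenated, and a small conv stack maps them to a
    scalar in (0, 1): 1 means "real label pair", 0 means "generated pair"."""
    def __init__(self, ch=16):
        super().__init__()
        self.conv_prob = nn.Conv2d(1, ch, 3, padding=1)  # branch for the probability map
        self.conv_rgb = nn.Conv2d(3, ch, 3, padding=1)   # branch for the remote-sensing image
        self.judge = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(ch, 1),
            nn.Sigmoid(),  # output between 0 and 1, as in steps S3/S4
        )

    def forward(self, prob_map, rgb):
        feats = torch.cat([self.conv_prob(prob_map), self.conv_rgb(rgb)], dim=1)
        return self.judge(feats)
```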
Step S5: the errors obtained in steps S3 and S4 are back-propagated, and the parameters of the generator and the discriminator are updated; the three end-to-end trained image segmentation networks of step S2 share their weights.
Step S6: the generator is trained with all remote sensing images and corresponding label images of the training data obtained in step S1 through steps S3, S4, and S5, until the generator and the discriminator of the adversarial network reach an equilibrium state: the output feature map (fake image) produced by the generator differs from the label image by very little, so that the discriminator can no longer tell whether its input comes from a label image or from an output feature map, i.e., a fake image, produced by the generator.
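The alternating updates of steps S3 to S6 can be sketched as one adversarial training step. This is a hedged sketch: the patent states the error as the discriminator output minus the desired output (0 for fakes, 1 for real pairs), while the sketch uses the standard binary cross-entropy GAN loss, which drives the outputs toward the same 0/1 targets; `gen` and `disc` stand for any generator/discriminator pair with the interfaces shown:

```python
import torch
import torch.nn as nn

def train_step(gen, disc, opt_g, opt_d, rgb, label):
    """One adversarial update over a batch of (image, label) pairs.

    gen(rgb) returns the fused output feature map (the "fake" image);
    disc(prob_map, rgb) returns a per-sample score in (0, 1).
    """
    bce = nn.BCELoss()
    real_t = torch.ones(rgb.size(0), 1)   # desired output 1 for label pairs (step S4)
    fake_t = torch.zeros(rgb.size(0), 1)  # desired output 0 for generated pairs (step S3)

    # Discriminator update: push label pairs toward 1, generated pairs toward 0.
    fake = gen(rgb)
    d_loss = bce(disc(label, rgb), real_t) + bce(disc(fake.detach(), rgb), fake_t)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make the discriminator score the fakes as real (step S5).
    g_loss = bce(disc(fake, rgb), real_t)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Iterating this step over the whole training set until both losses stabilize corresponds to the equilibrium state of step S6.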
Step S7: the generator of the adversarial network that has reached the equilibrium state is taken out on its own for application. The remote sensing images actually captured by the UAV are cut into a series of n × n remote sensing images and fed in as input, and the output feature maps of the generator are taken as the segmentation result, i.e., the extracted road-area images.
When the test data are input into the generator and the obtained road-area images are compared with the label images, the extraction results are consistently good.
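The tile-by-tile application of the trained generator in step S7 can be sketched as follows. Thresholding the probability map at 0.5 to get a binary road mask is an illustrative post-processing assumption, not specified by the patent:

```python
import torch

def extract_roads(generator, image_tiles):
    """Apply the trained generator to each n x n tile (step S7).

    image_tiles: list of 3 x n x n float tensors (cropped UAV tiles).
    Returns a list of 1 x n x n binary road masks, obtained by
    thresholding the generator's probability maps at 0.5.
    """
    generator.eval()
    masks = []
    with torch.no_grad():  # inference only: no gradients needed
        for tile in image_tiles:
            prob = generator(tile.unsqueeze(0)).squeeze(0)  # add/remove batch dim
            masks.append((prob > 0.5).float())
    return masks
```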
Fig. 5 is one set of comparison images between the road-area images output by the present invention and by a generator without fused multi-scale features.
In Fig. 5, the first column shows the original input UAV remote sensing images, the second column the corresponding hand-labeled label images, the third column the road-area images output by the generator with fused multi-scale features, and the fourth column the road-area images output by the generator without fused multi-scale features.
The comparison shows that when the image background is relatively simple and the contour of the road region is distinct, both the network with the fused multi-scale structure and the network without it extract the road-region information well.
Fig. 6 is another set of comparison images between the road-area images output by the present invention and by a generator without fused multi-scale features.
In Fig. 6, the first column shows the original input UAV remote sensing images, the second column the corresponding hand-labeled label images, the third column the road-area images output by the generator with fused multi-scale features, and the fourth column the road-area images output by the generator without fused multi-scale features.
The comparison shows that when the road area in the image is partly occluded by shadows, or other objects with features similar to the road are present, the road information extracted by the network with the multi-scale feature structure is considerably more accurate than that extracted by the network without it.
Although illustrative specific embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the present invention is not limited to the scope of the specific embodiments. To those of ordinary skill in the art, as long as various changes fall within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are obvious, and all innovations and creations using the concept of the present invention fall under protection.
Claims (2)
1. A UAV road extraction method based on a multi-scale generative adversarial network, characterized by comprising the following steps:
(1) Obtain training data: crop the original UAV remote sensing images into a series of n × n remote sensing images, then produce label images marking the road areas; each remote sensing image together with its corresponding label image forms the training data;
(2) Build the generator network:
2.1) In the generator, pass the RGB three-channel image of each n × n remote sensing image through a convolution operation and a deconvolution operation respectively, obtaining RGB three-channel images of sizes 0.5n × 0.5n and 2n × 2n;
2.2) In the generator, pass the 2n × 2n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network to obtain a 2n × 2n class-probability feature map, then apply a convolution operation to obtain an n × n probability feature map;
2.3) In the generator, pass the 0.5n × 0.5n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network with the same structure as in step 2.2) to obtain a 0.5n × 0.5n class-probability feature map, then apply a deconvolution operation to obtain an n × n probability feature map;
2.4) In the generator, pass the RGB image of the n × n remote sensing image through an image segmentation network with the same structure as in step 2.2) to obtain an n × n class-probability feature map, i.e., an n × n probability feature map;
2.5) Finally, in the generator, fuse the three n × n probability feature maps obtained in steps 2.2), 2.3), and 2.4) by pixel-wise addition, merging the image features of the three scales to obtain the output feature map of the generator;
(3) Input the n × n remote sensing images of the training data into the generator built in step (2) to obtain output feature maps; pass the output feature map and the n × n remote sensing image each through one convolution operation, concatenate the resulting feature maps as the input of the discriminator network, and obtain from the discriminator an output between 0 and 1; the discriminator treats this input as a fake image, so its desired output is 0, and subtracting this desired output from the actual discriminator output gives the error;
(4) Pass each n × n remote sensing image of the training data and its corresponding label image through one convolution operation, concatenate the resulting feature maps as the input of the discriminator network, and obtain from the discriminator an output between 0 and 1; the discriminator treats this input as a real image, so its desired output is 1, and subtracting this desired output from the actual discriminator output gives the error;
(5) Back-propagate the errors obtained in steps (3) and (4), and update the parameters of the generator and the discriminator; the end-to-end trained image segmentation networks of steps 2.2), 2.3), and 2.4) share their weights;
(6) Train the generator with all remote sensing images and corresponding label images of the training data obtained in step (1) through steps (3), (4), and (5), until the generator and the discriminator of the adversarial network reach an equilibrium state: the output feature map (fake image) produced by the generator differs from the label image by very little, so that the discriminator can no longer tell whether its input comes from a label image or from an output feature map (fake image) produced by the generator;
(7) Take the generator of the adversarial network that has reached the equilibrium state out on its own for application: crop the remote sensing images actually captured by the UAV into a series of n × n remote sensing images, feed them in as input, and take the output feature maps of the generator as the segmentation result, i.e., the extracted road-area images.
2. The UAV road extraction method based on a multi-scale generative adversarial network according to claim 1, characterized in that n = 500.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810707890.8A CN109086668B (en) | 2018-07-02 | 2018-07-02 | Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109086668A true CN109086668A (en) | 2018-12-25 |
CN109086668B CN109086668B (en) | 2021-05-14 |
Family
ID=64836907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810707890.8A Active CN109086668B (en) | 2018-07-02 | 2018-07-02 | Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109086668B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109978897A (en) * | 2019-04-09 | 2019-07-05 | 中国矿业大学 | Registration method and device for heterogeneous remote sensing images based on a multi-scale generative adversarial network |
CN109993820A (en) * | 2019-03-29 | 2019-07-09 | 合肥工业大学 | Automatic animation video generation method and device |
CN110598673A (en) * | 2019-09-24 | 2019-12-20 | 电子科技大学 | Remote sensing image road extraction method based on a residual network |
CN111339950A (en) * | 2020-02-27 | 2020-06-26 | 北京交通大学 | Remote sensing image target detection method |
CN111376910A (en) * | 2018-12-29 | 2020-07-07 | 北京嘀嘀无限科技发展有限公司 | User behavior identification method and system and computer equipment |
CN111428678A (en) * | 2020-04-02 | 2020-07-17 | 山东卓元数据技术有限公司 | Generative-adversarial-network remote sensing image sample expansion method under spatial constraints for ground-object change detection |
CN111582104A (en) * | 2020-04-28 | 2020-08-25 | 中国科学院空天信息创新研究院 | Semantic segmentation method and device for remote sensing images |
CN111582175A (en) * | 2020-05-09 | 2020-08-25 | 中南大学 | High-resolution remote sensing image semantic segmentation method sharing multi-scale adversarial features |
CN111985464A (en) * | 2020-08-13 | 2020-11-24 | 山东大学 | Multi-scale learning character recognition method and system for court judgment documents |
CN113033608A (en) * | 2021-02-08 | 2021-06-25 | 北京工业大学 | Remote sensing image road extraction method and device |
CN113361508A (en) * | 2021-08-11 | 2021-09-07 | 四川省人工智能研究院(宜宾) | UAV-satellite cross-view geo-localization method |
CN113538615A (en) * | 2021-06-29 | 2021-10-22 | 中国海洋大学 | Remote sensing image colorization method based on a dual-stream-generator deep convolutional generative adversarial network |
CN113688873A (en) * | 2021-07-28 | 2021-11-23 | 华东师范大学 | Vector road network generation method with intuitive interaction capability |
CN115641512A (en) * | 2022-12-26 | 2023-01-24 | 成都国星宇航科技股份有限公司 | Satellite remote sensing image road identification method, device, equipment and medium |
WO2023277793A3 (en) * | 2021-06-30 | 2023-02-09 | Grabtaxi Holdings Pte. Ltd | Segmenting method for extracting a road network for use in vehicle routing, method of training the map segmenter, and method of controlling a vehicle |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180137389A1 (en) * | 2016-11-16 | 2018-05-17 | Facebook, Inc. | Deep Multi-Scale Video Prediction |
CN108090902A (en) * | 2017-12-30 | 2018-05-29 | 中国传媒大学 | No-reference image quality assessment method based on a multi-scale generative adversarial network |
CN108230264A (en) * | 2017-12-11 | 2018-06-29 | 华南农业大学 | Single-image dehazing method based on a ResNet neural network |
2018-07-02: CN application CN201810707890.8A granted as patent CN109086668B/en (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180137389A1 (en) * | 2016-11-16 | 2018-05-17 | Facebook, Inc. | Deep Multi-Scale Video Prediction |
CN108230264A (en) * | 2017-12-11 | 2018-06-29 | 华南农业大学 | Single-image dehazing method based on a ResNet neural network |
CN108090902A (en) * | 2017-12-30 | 2018-05-29 | 中国传媒大学 | No-reference image quality assessment method based on a multi-scale generative adversarial network |
Non-Patent Citations (1)
Title |
---|
PEIZHI WEN et al.: "Improved image automatic segmentation method based on", Application Research of Computers *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111376910B (en) * | 2018-12-29 | 2022-04-15 | 北京嘀嘀无限科技发展有限公司 | User behavior identification method and system and computer equipment |
CN111376910A (en) * | 2018-12-29 | 2020-07-07 | 北京嘀嘀无限科技发展有限公司 | User behavior identification method and system and computer equipment |
CN109993820A (en) * | 2019-03-29 | 2019-07-09 | 合肥工业大学 | Automatic animation video generation method and device |
CN109993820B (en) * | 2019-03-29 | 2022-09-13 | 合肥工业大学 | Automatic animation video generation method and device |
CN109978897B (en) * | 2019-04-09 | 2020-05-08 | 中国矿业大学 | Registration method and device for heterogeneous remote sensing images based on a multi-scale generative adversarial network |
CN109978897A (en) * | 2019-04-09 | 2019-07-05 | 中国矿业大学 | Registration method and device for heterogeneous remote sensing images based on a multi-scale generative adversarial network |
CN110598673A (en) * | 2019-09-24 | 2019-12-20 | 电子科技大学 | Remote sensing image road extraction method based on a residual network |
CN111339950B (en) * | 2020-02-27 | 2024-01-23 | 北京交通大学 | Remote sensing image target detection method |
CN111339950A (en) * | 2020-02-27 | 2020-06-26 | 北京交通大学 | Remote sensing image target detection method |
CN111428678B (en) * | 2020-04-02 | 2023-06-23 | 山东卓智软件股份有限公司 | Generative-adversarial-network remote sensing image sample expansion method under spatial constraints |
CN111428678A (en) * | 2020-04-02 | 2020-07-17 | 山东卓元数据技术有限公司 | Generative-adversarial-network remote sensing image sample expansion method under spatial constraints for ground-object change detection |
CN111582104B (en) * | 2020-04-28 | 2021-08-06 | 中国科学院空天信息创新研究院 | Remote sensing image semantic segmentation method and device based on self-attention feature aggregation network |
CN111582104A (en) * | 2020-04-28 | 2020-08-25 | 中国科学院空天信息创新研究院 | Semantic segmentation method and device for remote sensing images |
CN111582175A (en) * | 2020-05-09 | 2020-08-25 | 中南大学 | High-resolution remote sensing image semantic segmentation method sharing multi-scale adversarial features |
CN111985464A (en) * | 2020-08-13 | 2020-11-24 | 山东大学 | Multi-scale learning character recognition method and system for court judgment documents |
CN111985464B (en) * | 2020-08-13 | 2023-08-22 | 山东大学 | Court judgment document-oriented multi-scale learning text recognition method and system |
CN113033608A (en) * | 2021-02-08 | 2021-06-25 | 北京工业大学 | Remote sensing image road extraction method and device |
CN113538615A (en) * | 2021-06-29 | 2021-10-22 | 中国海洋大学 | Remote sensing image colorization method based on a dual-stream-generator deep convolutional generative adversarial network |
CN113538615B (en) * | 2021-06-29 | 2024-01-09 | 中国海洋大学 | Remote sensing image colorization method based on a dual-stream-generator deep convolutional generative adversarial network |
WO2023277793A3 (en) * | 2021-06-30 | 2023-02-09 | Grabtaxi Holdings Pte. Ltd | Segmenting method for extracting a road network for use in vehicle routing, method of training the map segmenter, and method of controlling a vehicle |
CN113688873B (en) * | 2021-07-28 | 2023-08-22 | 华东师范大学 | Vector road network generation method with visual interaction capability |
CN113688873A (en) * | 2021-07-28 | 2021-11-23 | 华东师范大学 | Vector road network generation method with intuitive interaction capability |
CN113361508A (en) * | 2021-08-11 | 2021-09-07 | 四川省人工智能研究院(宜宾) | UAV-satellite cross-view geo-localization method |
CN115641512A (en) * | 2022-12-26 | 2023-01-24 | 成都国星宇航科技股份有限公司 | Satellite remote sensing image road identification method, device, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN109086668B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109086668A (en) | Road information extraction method for UAV remote sensing images based on a multi-scale generative adversarial network | |
CN107871124B (en) | Remote sensing image target detection method based on a deep neural network | |
CN106709568A (en) | RGB-D image object detection and semantic segmentation method based on a deep convolutional network | |
CN108549893A (en) | End-to-end recognition method for scene text of arbitrary shape | |
CN105320965B (en) | Spatial-spectral joint hyperspectral image classification method based on deep convolutional neural networks | |
CN109584248A (en) | Infrared target instance segmentation method based on feature fusion and a densely connected network | |
CN110348445A (en) | Instance segmentation method fusing dilated convolution and edge information | |
CN108875595A (en) | Driving-scene object detection method based on deep learning and multi-layer feature fusion | |
CN109146831A (en) | Remote sensing image fusion method and system based on a dual-branch deep learning network | |
CN109711413A (en) | Image semantic segmentation method based on deep learning | |
CN111612807A (en) | Small-target image segmentation method based on scale and edge information | |
CN107220657A (en) | High-resolution remote sensing image scene classification method for small datasets | |
CN110175576A (en) | Visual detection method for moving vehicles incorporating laser point cloud data | |
CN108009509A (en) | Vehicle target detection method | |
CN109191369A (en) | Method, storage medium and device for converting 2D pictures into 3D models | |
CN107871119A (en) | Target detection method based on object spatial knowledge and two-stage prediction learning | |
CN109284670A (en) | Pedestrian detection method and device based on a multi-scale attention mechanism | |
JP2019096006A (en) | Information processing device and information processing method | |
CN108021889A (en) | Two-channel infrared action recognition method based on pose shape and motion information | |
CN109711288A (en) | Remote sensing ship detection method based on a feature pyramid and distance-constrained FCN | |
CN110414387A (en) | Multi-task-learning lane line detection method based on lane segmentation | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN109558902A (en) | Fast target detection method | |
CN108388882A (en) | Gesture recognition method based on global-local multimodal RGB-D features | |
CN112434745A (en) | Occluded-target detection and recognition method based on multi-source cognitive fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||