CN110059658B - Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network - Google Patents

Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network

Info

Publication number
CN110059658B
CN110059658B CN201910342178.7A
Authority
CN
China
Prior art keywords
image
dimensional
time
change
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910342178.7A
Other languages
Chinese (zh)
Other versions
CN110059658A (en)
Inventor
高昆
韩璐
倪崇
张庆君
王俊伟
张宇桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201910342178.7A priority Critical patent/CN110059658B/en
Publication of CN110059658A publication Critical patent/CN110059658A/en
Application granted granted Critical
Publication of CN110059658B publication Critical patent/CN110059658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network, and proposes a three-dimensional U-net model. The model's input has four dimensions: image length, width, channel number, and time. Three-dimensional convolution operates on length, width, and time simultaneously, and three-dimensional pooling and up-sampling are used in the same way. The correlation between images is controlled by choosing a reasonable convolution-kernel size in the time dimension, and enlarging this size brings more images into consideration. To address the heavy data-labeling burden of earlier approaches, the model can be trained directly from a small amount of supervised data by setting the loss-function weight of unsupervised data to zero during training, which greatly reduces the labeling workload.

Description

Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network
Technical Field
The invention relates to the technical field of image processing, in particular to a remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network.
Background
The conventional U-net model processes two-dimensional images in the spatial dimensions. A normal image input can be represented as C x W x H, where W and H are the width and height of the image and C is the number of channels (3 for an RGB image). The two-dimensional convolution kernels in the model have size (KW, KH), and a convolution operation is performed over the spatial dimensions of each channel. In the multi-temporal problem, however, the data to be predicted consist of several images of the same size observed at different times.
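To make the dimensional setup concrete, the 2-D case above can be sketched in a few lines (a NumPy illustration; the array sizes and the naive loop are ours, not from the patent):

```python
import numpy as np

# A "normal" image as described: C channels, width W, height H.
C, W, H = 3, 8, 8
image = np.random.rand(C, W, H)

# One 2-D convolution kernel of spatial size (KW, KH), applied per
# channel and summed over channels: the standard 2-D convolution.
KW, KH = 3, 3
kernel = np.random.rand(C, KW, KH)

out_w, out_h = W - KW + 1, H - KH + 1
output = np.zeros((out_w, out_h))
for i in range(out_w):
    for j in range(out_h):
        output[i, j] = np.sum(image[:, i:i + KW, j:j + KH] * kernel)

print(output.shape)  # (6, 6): the spatial size shrinks without padding
```

This is exactly the operation the patent extends by one dimension: time.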
The traditional U-net takes two images (before and after the change) and concatenates them into a single dual-channel input that is treated as one image. Because the input here is multi-temporal image data, extending this scheme to more time steps multiplies the number of channels step by step, which increases the workload; a method is therefore needed to handle the data-processing problem caused by the extra information that additional time steps introduce.
Therefore, the invention provides a remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network to solve these problems.
Disclosure of Invention
The invention aims to overcome the following defect of the prior art: the traditional U-net takes two images (before and after the change) and concatenates them into a single dual-channel input treated as one image; because the input is multi-temporal image data, extending this scheme to more time steps multiplies the number of channels step by step, increasing the workload, so the data-processing problem caused by the extra information needs to be solved.
In order to achieve the purpose, the invention adopts the following technical scheme:
a remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network comprises the following steps:
the method comprises the following steps: the time dimension is added in the U-net, image information at different times can be effectively brought into calculation of change detection, and due to the introduction of convolution of the time dimension, the time receptive field is increased layer by layer in the U-net, and the general relation of the convolution neural network is satisfied:
Rout=Rin+(KT-1)D (1)
where Rout and Rin are the output and input receptive fields of one layer of the convolutional neural network, respectively, and D is the distance between the features.
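Relation (1) can be checked by iterating it over the layers; a minimal sketch (the layer counts and kernel sizes below are illustrative, not taken from the patent):

```python
def temporal_receptive_field(num_layers, KT, D=1, R_in=1):
    """Iterate Rout = Rin + (KT - 1) * D over num_layers layers."""
    R = R_in
    for _ in range(num_layers):
        R = R + (KT - 1) * D
    return R

# With KT = 2 the temporal receptive field grows by 1 per layer:
print(temporal_receptive_field(num_layers=4, KT=2))  # 5
# With KT = 3 it grows by 2 per layer:
print(temporal_receptive_field(num_layers=4, KT=3))  # 9
```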
Step two: the input and output data keep the same time dimension. The spatial dimensions of the data before and after each convolution operation are kept unchanged by padding the image edges inside the U-net, which simplifies the network design, and the outputs of the three-dimensional U-net, Change(i-1, i), correspond one to one to the input images Image i with the same index. Change(i-1, i) represents the change between image i-1 and image i; since the first image, Image 1, has no predecessor, the corresponding Change(0, 1) is null, and no loss function or optimization is computed on this value during training.
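The rule that Change(0, 1) is excluded from training can be expressed as a per-time-step loss weight (a NumPy sketch with stand-in loss values; the weighting vector is our reading of the text):

```python
import numpy as np

# Per-step losses for Change(i-1, i), i = 1..T (stand-in values).
per_step_loss = np.array([0.7, 0.2, 0.1, 0.4, 0.5])

# Change(0, 1) is null, so its loss weight is set to zero.
weights = np.ones_like(per_step_loss)
weights[0] = 0.0

total_loss = float(np.sum(weights * per_step_loss))
print(round(total_loss, 6))  # 1.2 -- the first step never drives the gradient
```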
Step three: the temporal receptive field of the three-dimensional U-net is controlled by the size KT of the convolution kernel in the time dimension. When KT is 1, each convolution computes over only one frame in the time dimension, so differences between adjacent images cannot be compared for change detection; the minimum value of KT is therefore 2.
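Why KT = 1 cannot compare adjacent frames is visible in a naive "valid" 3-D convolution over a (T, W, H) stack (a NumPy sketch; the frame sizes are illustrative): with KT = 1 every output frame depends on exactly one input frame, while KT = 2 mixes two consecutive frames.

```python
import numpy as np

def conv3d_valid(x, k):
    """Naive valid 3-D convolution over a (T, W, H) stack."""
    KT, KW, KH = k.shape
    T, W, H = x.shape
    out = np.zeros((T - KT + 1, W - KW + 1, H - KH + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(x[t:t + KT, i:i + KW, j:j + KH] * k)
    return out

frames = np.random.rand(4, 8, 8)   # 4 time steps
k1 = np.random.rand(1, 3, 3)       # KT = 1: each output sees one frame only
k2 = np.random.rand(2, 3, 3)       # KT = 2: each output mixes two frames

print(conv3d_valid(frames, k1).shape)  # (4, 6, 6)
print(conv3d_valid(frames, k2).shape)  # (3, 6, 6)
```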
Preferably, in step two, all convolution kernels in the three-dimensional U-net use the same spatio-temporal size: the spatial size is fixed at KW = 3 and KH = 3, KT is adjusted as required, and the moving step (Stride) of the convolution kernels is 1.
Preferably, in step two, the padding method is adjusted for different KT to keep the output time dimension unchanged. For a convolutional network with dilation rate 1 and Stride 1, the output size is TOUT = TIN + Padding - KT + 1; therefore, to keep TOUT = TIN when KT is 2 and Stride is 1, the Padding value is 1, i.e. the number of padded frames needed in this dimension.
Preferably, in step two, padding is added before Image 1 prior to each convolution; the padding method is to copy Image 1 into the Image 0 position. When KT is 3, Padding is 2, and the padding can be placed at the beginning or the end of the data in this dimension, e.g. by inserting two copies of Image 1 before it. This method relates multiple temporal images by considering several images before the change detection, so as to strengthen the detection result.
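The head-filling rule above (copy Image 1 into the Image 0 position; KT - 1 padded frames keep TOUT = TIN at Stride 1) can be sketched as follows (NumPy; the helper name `pad_time_head` is ours):

```python
import numpy as np

def pad_time_head(frames, KT):
    """Replicate the first frame so that a stride-1 'valid' convolution
    keeps TOUT == TIN: total padding = KT - 1 copies of Image 1,
    inserted before it (the patent's head filling)."""
    pad = KT - 1
    head = np.repeat(frames[:1], pad, axis=0)  # copies of Image 1
    return np.concatenate([head, frames], axis=0)

frames = np.random.rand(5, 8, 8)            # 5 images over time
print(pad_time_head(frames, 2).shape[0])    # 6 -> KT=2 conv yields 5
print(pad_time_head(frames, 3).shape[0])    # 7 -> KT=3 conv yields 5
```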
Preferably, in the second step, when detecting Image i-1 and Image i, the feature of Image i-2 is also added to the detection calculation.
Preferably, in step two, when detecting the changes between Image i-1 and Image i, an image from a future time, Image i+1, is added to the detection calculation. The two methods are compared in the subsequent result analysis; for a larger KT there are more combinations for allocating the Padding, and the analysis method is similar.
Compared with the prior art, the invention has the beneficial effects that:
In order to solve the data-processing problem caused by the added information, the invention provides a three-dimensional U-net model for the remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network. The model takes a four-dimensional input (image length, width, channel number and time); three-dimensional convolution operates on length, width and time simultaneously, and three-dimensional pooling and up-sampling are used in the same way. It can process multi-temporal satellite images of any time length. The correlation between images is controlled by choosing a reasonable convolution-kernel size in the time dimension, and enlarging this size brings more images into consideration. For the heavy data-labeling problem of earlier approaches, the model can be trained directly from a small amount of supervised data by setting the loss-function weight of unsupervised data to zero during training, which greatly reduces the labeling workload.
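The semi-supervised training described here (zero loss weight for unsupervised data) can be sketched as follows (NumPy; the sample losses and label mask are stand-ins, not values from the patent):

```python
import numpy as np

# Per-sample losses and a mask: 1 where a ground-truth label exists.
losses = np.array([0.9, 0.3, 0.6, 0.2, 0.8])
labeled = np.array([1.0, 0.0, 1.0, 0.0, 0.0])  # only 2 of 5 labeled

# Unlabeled samples get loss weight zero, so only the small
# supervised subset drives the optimization.
supervised_loss = float(np.sum(labeled * losses) / np.sum(labeled))
print(round(supervised_loss, 6))  # 0.75
```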
The first embodiment is as follows:
a remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network comprises the following steps:
the method comprises the following steps: the time dimension is added in the U-net, image information at different times can be effectively brought into calculation of change detection, and due to the introduction of convolution of the time dimension, the time receptive field is increased layer by layer in the U-net, and the general relation of the convolution neural network is satisfied:
Rout=Rin+(KT-1)D (1)
where Rout and Rin are the output and input receptive fields of one layer of the convolutional neural network, respectively, and D is the distance between the features.
Step two: the input and output data keep the same time dimension. The spatial dimensions of the data before and after each convolution operation are kept unchanged by padding the image edges inside the U-net, which simplifies the network design, and the outputs of the three-dimensional U-net, Change(i-1, i), correspond one to one to the input images Image i with the same index. Change(i-1, i) represents the change between image i-1 and image i; since the first image, Image 1, has no predecessor, the corresponding Change(0, 1) is null, and no loss function or optimization is computed on this value during training.
Step three: the temporal receptive field of the three-dimensional U-net is controlled by the size KT of the convolution kernel in the time dimension. When KT is 1, each convolution computes over only one frame in the time dimension, so differences between adjacent images cannot be compared for change detection; the minimum value of KT is therefore 2.
In step two, all convolution kernels in the three-dimensional U-net use the same spatio-temporal size: the spatial size is fixed at KW = 3 and KH = 3, KT is adjusted as required, and the moving step (Stride) of the convolution kernels is 1.
In step two, the padding method is adjusted for different KT to keep the output time dimension unchanged. For a convolutional network with dilation rate 1 and Stride 1, the output size is TOUT = TIN + Padding - KT + 1; therefore, to keep TOUT = TIN when KT is 2 and Stride is 1, the Padding value is 1, i.e. the number of padded frames needed in this dimension.
Padding is added before Image 1 prior to each convolution; the padding method is to copy Image 1 into the Image 0 position. When KT is 3, Padding is 2, and the padding can be placed at the beginning or the end of the data in this dimension, e.g. by inserting two copies of Image 1 before it. This method relates multiple temporal images by considering several images before the change detection, so as to strengthen the detection result.
In the second step, when detecting Image i-1 and Image i, the characteristics of Image i-2 are also added to the detection calculation.
In step two, when detecting the changes between Image i-1 and Image i, an image from a future time, Image i+1, is added to the detection calculation. The two methods are compared in the subsequent result analysis; for a larger KT there are more combinations for allocating the Padding, and the analysis method is similar.
In the invention, in order to solve the data-processing problem caused by the added information, the patent innovatively provides a three-dimensional U-net model for the remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network. The model takes a four-dimensional input (image length, width, channel number and time), operates on length, width and time simultaneously through three-dimensional convolution, with three-dimensional pooling and up-sampling used in the same way, and can process multi-temporal satellite images of any time length. The correlation between images is controlled by choosing a reasonable convolution-kernel size in the time dimension, and enlarging this size brings more images into consideration. For the heavy data-labeling problem of earlier approaches, the model can also be trained directly from a small amount of supervised data by setting the loss-function weight of unsupervised data to zero during training, greatly reducing the labeling workload. The concrete implementation follows steps one to three described above: the temporal convolution grows the receptive field according to relation (1); image-edge padding keeps the spatial dimensions unchanged, while Change(0, 1) is null and excluded from the loss; all kernels use KW = 3 and KH = 3 with Stride 1, KT is chosen as required (minimum 2), and the temporal Padding of KT - 1 frames is placed before Image 1 by copying it, or partly at the end of the sequence so that a future image, Image i+1, joins the detection of Image i-1 and Image i; the two padding schemes are compared in the result analysis.
Experimental part:
In order to construct a multi-temporal change detection data set for this research, the flood disaster that occurred in the middle and lower reaches of the Yangtze River after continuous rainstorms from 30 June to 5 July 2017 was selected. Sentinel-2 satellite image data of the river changes in Hubei Province during this period were collected, and the three-dimensional time-series U-net was tested on this Yangtze River basin change detection data set and compared with the standard U-net. In the result maps, yellow marks areas whose true value is changed and that were successfully detected, red marks changed areas that were not detected, and blue marks unchanged areas that were detected as changed.
First, the performance of the three-dimensional U-net with time convolution kernel KT = 2 was tested and compared with the two-dimensional U-net; then, with KT = 3, the different filling modes of the three-dimensional network were compared.
The results are compared using the following counts:
(1) the true value is changed and the model also predicts a change: counted as TP;
(2) the true value is changed but the model predicts no change: counted as FN;
(3) the true value is unchanged but the model predicts a change: counted as FP;
(4) the true value is unchanged and the model also predicts no change: counted as TN.
Then the change detection evaluation indices are as follows:

False alarm rate (FA, False Alarm): FA = FP / (FP + TN)

Missed alarm rate (MA, Missed Alarm): MA = FN / (TP + FN)

Accuracy (ACC): ACC = (TP + TN) / (TP + FN + FP + TN)

Precision (P): P = TP / (TP + FP)

Recall (R): R = TP / (TP + FN)

F1 value (F1-Score), the harmonic mean of precision and recall, giving an overall accuracy assessment: F1 = 2PR / (P + R)
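The six indices follow directly from the counts TP, FN, FP and TN; a short sketch using the standard definitions (the example counts are ours, not experimental values from the patent):

```python
def change_metrics(TP, FN, FP, TN):
    """Standard change-detection indices from a confusion matrix."""
    FA  = FP / (FP + TN)                    # false alarm rate
    MA  = FN / (TP + FN)                    # missed alarm rate
    ACC = (TP + TN) / (TP + FN + FP + TN)   # accuracy
    P   = TP / (TP + FP)                    # precision
    R   = TP / (TP + FN)                    # recall
    F1  = 2 * P * R / (P + R)               # harmonic mean of P and R
    return FA, MA, ACC, P, R, F1

# Illustrative counts only:
FA, MA, ACC, P, R, F1 = change_metrics(TP=80, FN=20, FP=10, TN=90)
print(round(ACC, 4))  # 0.85
print(round(F1, 4))   # 0.8421
```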
table result comparison
Figure BDA0002041094060000095
As can be seen from Table 1, after adding temporal information, the detection accuracy of the proposed three-dimensional network is higher than that of the traditional method. U-net 3D with KT = 2 and U-net 3D with KT = 3 head filling both perform well overall; in terms of precision, U-net 3D KT = 3 head filling is the best, but its recall is lower than that of U-net 3D KT = 2.
The above description covers only a preferred embodiment of the invention, and the scope of the invention is not limited to it. Any equivalent substitution or modification of the technical solution and its inventive concept that a person skilled in the art could conceive shall fall within the scope of the invention.

Claims (5)

1. A remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network, characterized by comprising the following steps:
the method comprises the following steps: the time dimension is added in the U-net, image information at different times can be effectively brought into calculation of change detection, and due to the introduction of convolution of the time dimension, the time receptive field is increased layer by layer in the U-net, and the general relation of the convolution neural network is satisfied:
Rout=Rin+(KT-1)D (1)
wherein Rout and Rin are respectively the output and input receptive fields of one layer of the convolutional neural network, and D is the distance between the characteristics;
step two: the input and output data keep the same time dimension; the spatial dimensions of the data before and after each convolution operation are kept unchanged by padding the image edges inside the U-net, which simplifies the network design, and the outputs of the three-dimensional U-net, Change(i-1, i), correspond one to one to the input images Image i with the same index; Change(i-1, i) represents the change between image i-1 and image i; since the first image, Image 1, has no corresponding change, Change(0, 1) is null, and no loss function or optimization is calculated on this value during training; in step two, padding is added before Image 1 prior to each convolution, the padding method being to copy Image 1 into the Image 0 position; when KT is 3, Padding is 2, and two copies of the same image can be inserted before Image 1 by filling at the beginning or the end of the data in this dimension; this method relates multiple temporal images by considering several images before the change detection, so as to strengthen the detection result;
step three: size K in time dimension by convolution kernelTTo control the time receptive field of the three-dimensional U-net, when KTAt 1, the network only computes one frame of image per convolution in the time dimension, so that the difference between adjacent pictures cannot be compared for change detection, so KTIs 2.
2. The remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network as claimed in claim 1, wherein in step two the same spatio-temporal size is used for all convolution kernels in the three-dimensional U-net; the spatial size is fixed at KW = 3 and KH = 3, KT is adjusted as required, and the moving step (Stride) of the convolution kernels is 1.
3. The remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network as claimed in claim 1, wherein in step two the padding method is adjusted for different KT to keep the output time dimension unchanged; for a convolutional network with dilation rate 1 and Stride 1, the output size is TOUT = TIN + Padding - KT + 1; to keep TOUT = TIN when KT is 2 and Stride is 1, the Padding value is 1, i.e. the number of padded frames needed in this dimension.
4. The remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network as claimed in claim 1, wherein in step two, when detecting Image i-1 and Image i, the features of Image i-2 are also added to the detection calculation.
5. The remote sensing satellite image multi-temporal change detection method based on a three-dimensional convolutional neural network as claimed in claim 1, wherein in step two, when detecting the changes between Image i-1 and Image i, an image from a future time, Image i+1, is added to the detection calculation; the two methods are compared in the subsequent result analysis; for a larger KT there are more combinations for allocating the Padding, and the analysis method is similar.
CN201910342178.7A 2019-04-26 2019-04-26 Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network Active CN110059658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910342178.7A CN110059658B (en) 2019-04-26 2019-04-26 Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910342178.7A CN110059658B (en) 2019-04-26 2019-04-26 Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network

Publications (2)

Publication Number Publication Date
CN110059658A CN110059658A (en) 2019-07-26
CN110059658B true CN110059658B (en) 2020-11-24

Family

ID=67320971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910342178.7A Active CN110059658B (en) 2019-04-26 2019-04-26 Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network

Country Status (1)

Country Link
CN (1) CN110059658B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339890A (en) * 2020-02-20 2020-06-26 中国测绘科学研究院 Method for extracting newly-added construction land information based on high-resolution remote sensing image
CN112508936B (en) * 2020-12-22 2023-05-19 中国科学院空天信息创新研究院 Remote sensing image change detection method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682734A (en) * 2016-12-30 2017-05-17 中国科学院深圳先进技术研究院 Method and apparatus for increasing generalization capability of convolutional neural network
CN108564002A (en) * 2018-03-22 2018-09-21 中国科学院遥感与数字地球研究所 A kind of remote sensing image time series variation detection method and system
CN108596108A (en) * 2018-04-26 2018-09-28 中国科学院电子学研究所 Method for detecting change of remote sensing image of taking photo by plane based on the study of triple semantic relation
CN108846835A (en) * 2018-05-31 2018-11-20 西安电子科技大学 The image change detection method of convolutional network is separated based on depth
CN108921198A (en) * 2018-06-08 2018-11-30 山东师范大学 commodity image classification method, server and system based on deep learning
CN109409263A (en) * 2018-10-12 2019-03-01 武汉大学 A kind of remote sensing image city feature variation detection method based on Siamese convolutional network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8144937B2 (en) * 2008-10-15 2012-03-27 The Boeing Company System and method for airport mapping database automatic change detection
CN103745087B (en) * 2013-12-18 2018-02-09 广西生态工程职业技术学院 A kind of dynamic changes of forest resources Forecasting Methodology based on remote sensing technology
US10262205B2 (en) * 2015-07-28 2019-04-16 Chiman KWAN Method and system for collaborative multi-satellite remote sensing
CN105608698B (en) * 2015-12-25 2018-12-25 西北工业大学 A kind of method for detecting change of remote sensing image based on SAE
CN106203283A (en) * 2016-06-30 2016-12-07 重庆理工大学 Based on Three dimensional convolution deep neural network and the action identification method of deep video
CN109272534B (en) * 2018-05-16 2022-03-04 西安电子科技大学 SAR image change detection method based on multi-granularity cascade forest model
CN108805083B (en) * 2018-06-13 2022-03-01 中国科学技术大学 Single-stage video behavior detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Learning Spatiotemporal Features with 3D Convolutional Networks; Du Tran et al.; 2015 IEEE International Conference on Computer Vision; 2015-12-31; p. 4489 (Introduction), pp. 4490-4491 (Section 3.1) *
Status and prospects of multi-temporal remote sensing image change detection; Zhang Liangpei et al.; Acta Geodaetica et Cartographica Sinica; 2017-10; Vol. 46, No. 10; pp. 1447-1459 *

Also Published As

Publication number Publication date
CN110059658A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110889343B (en) Crowd density estimation method and device based on attention type deep neural network
CN110570363A (en) Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN109389156B (en) Training method and device of image positioning model and image positioning method
CN104992403B (en) Hybrid operator image redirection method based on visual similarity measurement
CN112597985A (en) Crowd counting method based on multi-scale feature fusion
CN113011329A (en) Pyramid network based on multi-scale features and dense crowd counting method
CN111723693A (en) Crowd counting method based on small sample learning
CN112818969A (en) Knowledge distillation-based face pose estimation method and system
CN109145836A (en) Ship target video detection method based on deep learning network and Kalman filtering
CN110059658B (en) Remote sensing satellite image multi-temporal change detection method based on three-dimensional convolutional neural network
CN110827320B (en) Target tracking method and device based on time sequence prediction
CN110570402B (en) Binocular salient object detection method based on boundary perception neural network
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN112907573B (en) Depth completion method based on 3D convolution
CN115457464B (en) Crowd counting method based on transformer and CNN
CN115424209A (en) Crowd counting method based on spatial pyramid attention network
CN112668532A (en) Crowd counting method based on multi-stage mixed attention network
CN116402851A (en) Infrared dim target tracking method under complex background
CN112101113B (en) Lightweight unmanned aerial vehicle image small target detection method
CN116519106B (en) Method, device, storage medium and equipment for determining weight of live pigs
WO2022120996A1 (en) Visual position recognition method and apparatus, and computer device and readable storage medium
CN113724293A (en) Vision-based intelligent internet public transport scene target tracking method and system
CN113066074A (en) Visual saliency prediction method based on binocular parallax offset fusion
CN117292324A (en) Crowd density estimation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant