CN110378224A - Ground feature change detection method, detection system and terminal - Google Patents

Ground feature change detection method, detection system and terminal

Info

Publication number
CN110378224A
Authority
CN
China
Prior art keywords
image
convolutional neural network
ground feature
change intensity
Legal status: Granted
Application number
CN201910515105.3A
Other languages
Chinese (zh)
Other versions
CN110378224B (en)
Inventor
史文中 (Shi Wenzhong)
张敏 (Zhang Min)
Current Assignee
Shenzhen Research Institute HKUST
Shenzhen Research Institute HKPU
Original Assignee
Shenzhen Research Institute HKUST
Application filed by Shenzhen Research Institute HKUST
Priority to CN201910515105.3A
Publication of CN110378224A
Application granted
Publication of CN110378224B
Legal status: Active (anticipated expiration date not shown)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Abstract

The present application is applicable to the technical field of image processing and provides a ground feature change detection method, detection system and terminal. The method includes: obtaining a first image of a target area from before its ground features changed and a second image from after they changed; inputting the first image and the second image into a feature difference convolutional neural network model to obtain an output first change intensity map; and generating a binary image of the ground feature changes based on the first change intensity map. The method improves the robustness and stability of remote sensing image change detection and raises change detection accuracy.

Description

Ground feature change detection method, detection system and terminal
Technical field
The present application belongs to the technical field of image processing, and in particular relates to a ground feature change detection method, detection system and terminal.
Background
Remote sensing imagery directly or indirectly reflects land cover and land use, and is an important means of obtaining information about changes on the Earth's surface. Change detection in remote sensing imagery is the process of identifying changes between remote sensing images of the same geographic location acquired at different times. Change detection based on remote sensing imagery has important applications in geological disaster monitoring, urban change analysis, environmental monitoring, agriculture, forestry and other fields.
In recent years, the emergence of multi-sensor remote sensing imagery with high spatial and high temporal resolution has placed new demands on change detection algorithms: effective change information must be obtained quickly from these massive volumes of remote sensing data.
However, both the inherent complexity of remote sensing imagery and the uncertainty of the change process affect change detection accuracy, so existing remote sensing image change detection lacks robustness and stability, and its results often differ greatly from actual conditions.
Summary of the invention
In view of this, embodiments of the present application provide a ground feature change detection method, detection system and terminal to solve the problem that existing remote sensing image change detection lacks robustness and stability and produces results that differ greatly from actual conditions.
A first aspect of the embodiments of the present application provides a ground feature change detection method, the detection method comprising:
obtaining a first image of a target area from before its ground features changed and a second image from after the ground features changed;
inputting the first image and the second image into a feature difference convolutional neural network model to obtain an output first change intensity map;
generating a binary image of the ground feature changes based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a twin (Siamese) deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network.
A second aspect of the embodiments of the present application provides a ground feature change detection system, the detection system comprising:
an obtaining module, configured to obtain a first image of a target area from before its ground features changed and a second image from after the ground features changed;
a first obtaining module, configured to input the first image and the second image into a feature difference convolutional neural network model to obtain an output first change intensity map;
a first generation module, configured to generate a binary image of the ground feature changes based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a twin deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product comprising a computer program, wherein the computer program, when executed by one or more processors, implements the steps of the method of the first aspect.
Therefore, in the embodiments of the present application, a first image of a target area from before its ground features changed and a second image from after they changed are obtained; the first image and the second image are input into a feature difference convolutional neural network model to obtain an output change intensity map; and a binary image of the ground feature changes is generated based on the change intensity map. Because the feature difference convolutional neural network model contains a twin deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network and a feature fusion network, the robustness and stability of remote sensing image change detection are improved and change detection accuracy is raised.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a first flowchart of a ground feature change detection method provided by an embodiment of the present application;
Fig. 2 is a second flowchart of a ground feature change detection method provided by an embodiment of the present application;
Fig. 3 is a structural diagram of a ground feature change detection system provided by an embodiment of the present application;
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that, in certain embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a video player application.
The various applications that can be executed on the terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within the corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the numbering of the steps in the embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In order to explain the technical solutions described in this application, specific embodiments are described below.
Referring to Fig. 1, Fig. 1 is a first flowchart of a ground feature change detection method provided by an embodiment of the present application. As shown in Fig. 1, the ground feature change detection method includes the following steps:
Step 101: obtain a first image of a target area from before its ground features changed and a second image from after the ground features changed.
The first image and the second image are specifically remote sensing images.
This is the preparation for the change detection process: remote sensing images of the target area from two periods must be obtained. The two-period remote sensing images are the image at time T1, before the ground features changed, and the image at time T2, after the ground features changed, where T1 ≠ T2.
Step 102: input the first image and the second image into the feature difference convolutional neural network model to obtain an output first change intensity map.
The feature difference convolutional neural network model comprises: a twin deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network.
A twin (Siamese) neural network has two inputs, which are fed into two neural networks (Network1 and Network2). In this feature difference convolutional neural network model, the two-period images are the inputs of the twin deep convolutional neural network, the output of the twin deep convolutional neural network serves as the input of the feature difference network, and the output of the feature difference network serves as the input of the feature fusion network.
The twin deep convolutional neural network is preferably a deep convolutional neural network in which the two temporal inputs share weights, which reduces the complexity of the early stage of network training.
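For illustration only, the following is a minimal sketch of such a weight-shared twin network; the class and variable names are hypothetical, and PyTorch/torchvision are assumed implementation choices rather than anything named in the patent.

```python
import torch
import torchvision

class TwinBranch(torch.nn.Module):
    """Weight-shared twin feature extractor: one backbone, two image dates."""
    def __init__(self):
        super().__init__()
        # The convolutional part of VGG16 serves as the shared extractor.
        self.features = torchvision.models.vgg16(weights=None).features

    def forward(self, img_t1, img_t2):
        # The same module processes both dates, so the weights are shared.
        return self.features(img_t1), self.features(img_t2)
```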
Specifically, in one implementation, before the first image and the second image are input into the feature difference convolutional neural network model to obtain the output first change intensity map, the method further includes:
training an initial deep convolutional neural network on a constructed remote sensing scene classification training dataset, where each sample in the remote sensing scene classification training dataset includes a remote sensing image of a different remote sensing ground feature scene and a corresponding scene label; and
generating the twin deep convolutional neural network based on the trained initial deep convolutional neural network.
This is the implementation of the scene classification task. The constructed remote sensing scene classification training dataset is used to pre-train the initial deep convolutional neural network, and the twin deep convolutional neural network is then generated from the trained initial deep convolutional neural network, enabling the subsequent composition of the feature difference convolutional neural network model.
A remote sensing scene classification training dataset D is constructed and divided into a training set and a test set at a certain ratio; by default, 70% of the data serves as the training set and 30% as the test set. Each item in the dataset consists of a remote sensing image containing a specific ground feature scene and the corresponding scene label, where the remote sensing images cover multiple sensor types, multiple resolutions and multiple ground feature scenes.
A remote sensing scene classification CNN model (i.e., the initial deep convolutional neural network model) is constructed based on the VGG16 structure. VGG16 is a deep convolutional neural network whose role is to extract ground feature characteristics from low level to high level so that ground feature scenes can be correctly identified; the purpose of the deep learning step is precisely to learn these deep features. VGG16 consists of convolutional layers, max-pooling layers and fully connected layers, with the convolutional layers combined with ReLU activations: 13 convolutional layers, 5 max-pooling layers and 3 fully connected layers in total. conv(i) denotes the i-th convolutional layer; for example, conv3 denotes the 3rd convolutional layer. The model's output indicates the probability that the input image belongs to each remote sensing scene category.
The initial deep convolutional neural network model can be trained with dataset D and a softmax loss function, so that the model learns the deep features of remote sensing imagery. Specifically, training uses the softmax loss function and a stochastic gradient descent strategy, and stops when the loss value no longer decreases.
Specifically, deep feature learning is performed on dataset D: by classifying different remote sensing ground feature scenes, the model learns the features of each ground feature type. When the initial deep convolutional neural network model can correctly distinguish each remote sensing ground feature scene, it has learned the features of the ground feature scenes in all training samples; at that point the loss function reaches its minimum, training stops, and model training is complete.
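As a hedged illustration of this pre-training, a sketch follows; the class count, epoch budget and placeholder data are assumptions, and the stopping rule is the simple "loss no longer decreases" criterion stated above.

```python
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

num_scene_classes = 21   # assumption: depends on dataset D
max_epochs = 100         # assumption
model = torchvision.models.vgg16(weights=None, num_classes=num_scene_classes)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()  # softmax loss

# Placeholder data for illustration; in practice, batches come from dataset D.
dummy = TensorDataset(torch.randn(8, 3, 224, 224),
                      torch.randint(0, num_scene_classes, (8,)))
train_loader = DataLoader(dummy, batch_size=4)

prev_loss = float("inf")
for epoch in range(max_epochs):
    epoch_loss = 0.0
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    if epoch_loss >= prev_loss:  # loss no longer decreasing: stop training
        break
    prev_loss = epoch_loss
```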
In the above process, training the initial deep convolutional neural network on the constructed remote sensing scene classification training dataset specifically includes:
performing sample augmentation on the constructed remote sensing scene classification training dataset to obtain a target remote sensing scene classification training dataset, where each sample in the target dataset includes a three-band remote sensing image block of a set pixel size and a corresponding scene ground-truth label; and training the initial deep convolutional neural network on the target remote sensing scene classification training dataset.
On the basis of dataset D, samples are augmented by strategies such as random cropping, image mirroring and color jittering. Each final sample consists of a three-band remote sensing image block of 224 × 224 pixels and a corresponding scene ground-truth label, where the ground-truth label refers to the main ground feature type in the remote sensing scene.
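A minimal sketch of such an augmentation pipeline using torchvision transforms; the jitter strengths are assumptions, not values from the patent.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomCrop(224),         # 224 x 224 pixel image blocks
    transforms.RandomHorizontalFlip(),  # image mirroring
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),              # three-band block -> tensor
])
```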
In this process, the convolutional neural network model is trained on the remote sensing scene classification task, and the network thereby learns the deep features of different remote sensing ground feature scenes, which are later used to generate the feature difference maps and to perform change detection.
Further, after the initial deep convolutional neural network has been trained and has learned the deep information of ground features, the twin deep convolutional neural network needs to be generated from the trained initial deep convolutional neural network.
The twin deep convolutional neural network is preferably a deep convolutional neural network in which the two temporal inputs share weights.
In a specific implementation, following the idea of transfer learning, the multi-level deep features learned from the two-period images are shared by weight sharing, forming weight-shared target deep convolutional neural networks (Sub-VGG16 Net). There are specifically two target deep convolutional neural networks, and the two temporal inputs share weights between them.
When the twin deep convolutional neural network is generated from the trained initial deep convolutional neural network, the trained initial deep convolutional neural network model is truncated to form the VGG16 sub-network Sub-VGG16, and the twin deep convolutional neural network is obtained from this truncated sub-network. The twin deep convolutional neural network may be either pseudo-twin or a two-input weight-shared deep convolutional neural network. As shown in Table 1, it is a component of the feature difference convolutional neural network model; the two-channel convolutional network formed by the VGG16 sub-network (Sub-VGG16) is the Sub-VGG16 Net part of Table 1.
Table 1. Parameter information of the feature difference convolutional neural network model (FDCNN)
Here, through transfer learning, the deep features learned in the scene classification task are used directly in the change detection process. This part is based on the idea of transfer learning: two Sub-VGG16 networks generate deep features of different depths and scales for the T1 and T2 period images respectively. To reduce the complexity and size of the model, the two Sub-VGG16 networks share weights. In this embodiment, the features output by conv2 (conv2_p), conv4 (conv4_p) and conv7 (conv7_p), at three different scales and depths, are selected. The input image size of the Sub-VGG16 network is 224 × 224 pixels, so the corresponding feature scales are 224 × 224 × 64, 112 × 112 × 128 and 56 × 56 × 256 pixels respectively.
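To make the layer selection concrete, here is a hedged sketch that taps the 2nd, 4th and 7th convolutional layers of a torchvision VGG16; the index-to-name mapping is an assumption about how the patent's conv2/conv4/conv7 naming lines up with `vgg16().features`.

```python
import torch
import torchvision

vgg = torchvision.models.vgg16(weights=None).features
taps = {3: "conv2", 8: "conv4", 15: "conv7"}  # ReLU outputs of conv layers 2, 4, 7

def extract_multiscale(x):
    feats = {}
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in taps:
            feats[taps[idx]] = x
    return feats

feats = extract_multiscale(torch.randn(1, 3, 224, 224))
# Shapes: conv2 -> (1, 64, 224, 224), conv4 -> (1, 128, 112, 112),
#         conv7 -> (1, 256, 56, 56), matching the scales listed above.
```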
The feature difference convolutional neural network model also includes a constructed feature difference network (FD-Net), which generates multiple difference images of different scales and depths that represent different features.
As shown in Table 1, a feature fusion network (FF-Net) also needs to be constructed. Specifically, based on the feature difference images generated by FD-Net and according to the sample size of the change detection task, the feature fusion network (FF-Net) is constructed as a convolutional neural network with very few parameters; this FF-Net generates the final change intensity image.
On the basis of a comprehensive analysis of existing change detection algorithms and deep learning applications, the foregoing process proposes a feature-difference-based change detection scheme using a change detection convolutional neural network: the scheme uses a deep convolutional neural network to learn the deep features of remote sensing imagery through a scene classification task and, after obtaining those deep features, constructs feature difference and feature fusion network modules to obtain a change detection network that can output a change intensity map.
Further, before change detection is performed, the feature difference convolutional neural network model must be trained. Specifically, since the features of Sub-VGG16 Net were already learned through the remote sensing scene classification training process, its learning rate can be set to 0, reducing the number of trainable parameters and hence the demand for training samples. FD-Net generates the feature difference images without introducing new trainable parameters, so no backpropagation needs to be implemented for that network. The change detection problem is thus converted into the problem of how to perform change detection using these feature difference images. Training FF-Net does not require learning new deep features; like a random forest or a support vector machine, its effect is similar to feature selection and dimensionality reduction. The number of samples required is therefore very small, which avoids the difficulty of needing large amounts of pixel-level labeled data for training. Training FDCNN is thus in essence training FF-Net, so analyzing the parameter quantity and the training strategy is an important issue. If the parameter count is too small, feature fusion cannot be completed well, the deep difference features cannot be fully exploited, the model's classification ability is weak, and the change detection result is poor; if the parameter count is too large, the small sample size leads to overfitting and weak generalization. The structure designed here according to the training sample quantity is as follows: only one 3 × 3 convolutional layer conv_f is used for feature fusion, and one convolutional layer conv_cm outputs the change intensity map.
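Under the two-layer structure just described, a minimal FF-Net sketch might look as follows; the intermediate channel count, the activation choices and the 451 input channels (3 + 64 + 128 + 256 stacked difference images) are assumptions for illustration, not values from the patent.

```python
import torch

class FFNet(torch.nn.Module):
    """Tiny fusion head: conv_f fuses the stacked feature-difference images,
    conv_cm outputs the change intensity map."""
    def __init__(self, in_channels=451, mid_channels=16):
        super().__init__()
        self.conv_f = torch.nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.conv_cm = torch.nn.Conv2d(mid_channels, 1, 1)

    def forward(self, stacked_diffs):
        x = torch.relu(self.conv_f(stacked_diffs))
        return torch.sigmoid(self.conv_cm(x))  # change intensity in [0, 1]
```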
Correspondingly, as an optional embodiment, before the first image and the second image are input into the feature difference convolutional neural network model to obtain the output first change intensity map, the method further includes:
constructing a remote sensing image change detection training dataset, which includes: a third image of a random area from before its ground features changed, a fourth image of the random area from after the ground features changed, and the corresponding change ground truth;
generating a second change intensity map based on the third image and the fourth image;
generating a change intensity weight matrix according to the second change intensity map;
computing, based on the change ground truth, a first ratio of changed pixels to the total number of pixels in the image and a second ratio of unchanged pixels to the total number of pixels;
obtaining a cross-entropy loss function based on the change intensity weight matrix, the first ratio and the second ratio:
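The printed formula did not survive text extraction. A weighted cross-entropy of the following form is one plausible reconstruction, consistent with the variable definitions below, in which W weights each pixel by change intensity and the class ratios rebalance the changed and unchanged terms; it is an assumption, not the patent's verbatim equation.

$$
L = -\frac{1}{N} \sum_{n=1}^{N} \sum_{p} W_{p} \left[ \beta_{-}\, y_{n,p} \log \hat{y}_{n,p} + \beta_{+} \left( 1 - y_{n,p} \right) \log \left( 1 - \hat{y}_{n,p} \right) \right]
$$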
where N is the number of samples, a group of samples consisting of one third image and one fourth image; W is the change intensity weight matrix, in which, when the change intensity value of a pixel in the normalized second change intensity map is less than the mean change intensity, the corresponding element is set to the mean change intensity, and when the change intensity value of a pixel in the normalized second change intensity map is greater than or equal to the mean change intensity, the corresponding element is set to that change intensity value; β+ is the first ratio and β− is the second ratio; yn is the change ground truth; and ŷ is the predicted change intensity;
and training the feature difference convolutional neural network model based on the cross-entropy loss function and the remote sensing image change detection training dataset.
When the feature difference convolutional neural network model is trained, the two-period images in the remote sensing image change detection training dataset serve as input, and model training is carried out with the change ground truth combined with the generated change-intensity-guided weighted cross-entropy loss function.
There are many ways to generate the second change intensity map from the third and fourth images, including but not limited to change vector analysis (CVA), multivariate alteration detection (MAD) and iteratively reweighted multivariate alteration detection (IR-MAD). In this embodiment, CVA is selected as the change intensity generation method, with the following formula:
where DN1j denotes the pixel value of the j-th band of the T1 period image, DN2j denotes the pixel value of the j-th band of the T2 period image, and n is the total number of bands, equal to 3 by default.
W is the change intensity weight matrix, and its generation uses the normalized second change intensity map CM' = CM / max(CM), where max(·) denotes the maximum change intensity value over the pixels of the change intensity map and CM is the second change intensity map.
In the generation of W, it is considered that unchanged pixels are in the majority in the image and their change intensity can be very low, which makes the learning rate too small and slows training; pixels with a change intensity of 0 would contribute no learning at all. To avoid this problem, the mean change intensity of the pixels in the second change intensity map is used as the cutoff point when generating the change intensity weight matrix. The elements of W take values in [0, 1], and the elements of the change ground truth yn take values of 0 or 1.
Further, it is considered that β+ and β− are global quantities, independent of the spatial distribution and characteristics of the samples, so the change intensity of each pixel can be exploited as prior knowledge. The boundary of a change is fuzzy and uncertain; even when acquiring ground truth it is difficult to delimit changed from unchanged regions. This embodiment therefore proposes a change-intensity-based cross-entropy function that weights the loss by change intensity, taking into account the spatial correspondence of each pixel together with its change intensity information.
The above process trains the FF-Net network with the change-intensity-guided weighted cross-entropy loss function, yielding a feature-difference-based change detection convolutional neural network that outputs a change intensity map.
In summary: first, the deep features of remote sensing scenes are learned with the remote sensing scene classification CNN model; then, based on the idea of transfer learning, the multi-level deep features learned from the two-period images are shared by weight sharing to construct the feature difference network (FD-Net), which generates multiple difference images of different scales and depths representing different features; then, based on the feature difference images generated by FD-Net and the sample size of the change detection task, the feature fusion network (FF-Net) is constructed, which generates the final change intensity image; then the network is trained with the change-intensity-guided weighted cross-entropy loss function, yielding the feature-difference-based change detection convolutional neural network (FDCNN), which takes the two-period images as input and outputs a change intensity map; finally, a thresholding or classification algorithm is applied to the final change intensity image to obtain the binary change detection image.
Specifically, to train FDCNN, a change detection sample dataset C can be constructed and divided into a training set and a test set at a certain ratio; by default, 50% of the data serves as the training set and 50% as the test set. Each sample in dataset C includes two-period images (an image from before the ground features changed and an image from after) and the corresponding change ground-truth image. Both period images undergo preprocessing such as geometric correction and radiometric correction and include the R, G and B bands; the change ground-truth image is a binary image in which 1 indicates change and 0 indicates no change, and the ground truth covers multiple ground feature change types. On the basis of dataset C, samples are augmented by strategies such as random cropping, image mirroring and color jittering. Each final sample consists of a three-band remote sensing image block of 224 × 224 pixels and a corresponding 224 × 224 pixel change ground-truth label. FDCNN is trained with the change-intensity-guided weighted cross-entropy loss function and a stochastic gradient descent strategy, and training stops when the loss value no longer decreases.
Here, the feature fusion network (FF-Net) learns from dataset C. Through the prior knowledge of the multiple ground feature change types in the training samples, FF-Net can effectively select and combine the feature difference images generated by FD-Net to produce the change intensity image. When the network can correctly distinguish the various ground feature change types, FDCNN has learned the ground feature change patterns in all training samples; the loss function reaches its minimum and training stops.
In the above process, considering that ground feature change is a fuzzy and uncertain process, a change-intensity-guided weighted cross-entropy loss function is designed for training the change detection network; this function is intended to reduce false alarms in the change detection results and to speed up network training, so as to obtain better change detection results.
Step 103: generate a binary image of the ground feature changes based on the first change intensity map.
When generating the binary image of the ground feature changes, a thresholding algorithm or a classification algorithm can be used to obtain the binary change detection image.
Here, the pixels in the change intensity map are divided into two classes, changed and unchanged, using a thresholding or classification algorithm, giving the final binary change detection result and thus detecting the ground feature changes. The K-Means classification algorithm is selected by default.
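As a hedged illustration of the default K-Means choice, a sketch using scikit-learn follows; scikit-learn is an assumed implementation, not named in the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def binarize_change_map(change_map: np.ndarray) -> np.ndarray:
    """Cluster change-intensity values into two classes (changed/unchanged)."""
    values = change_map.reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(values)
    # Make label 1 the cluster with the higher mean intensity ("changed").
    if values[labels == 1].mean() < values[labels == 0].mean():
        labels = 1 - labels
    return labels.reshape(change_map.shape)
```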
In the above embodiment, based on the idea of transfer learning, the multi-level deep features of remote sensing imagery are learned with the remote sensing scene classification CNN model, and a feature-difference-based convolutional neural network structure and a change-intensity-guided weighted cross-entropy loss function suited to the change detection task are designed. Only a small number of pixel-level samples are needed for training to obtain a convolutional neural network for the change detection task, which alleviates the difficulty of obtaining pixel-level training samples in remote sensing applications. This embodiment uses deep learning to guarantee the diversity of the extracted features and uses prior knowledge to remove some spurious changes, so higher accuracy can be obtained in the change detection task, with good robustness and practicality.
In the embodiments of the present application, a first image of a target area from before its ground features changed and a second image from after they changed are obtained; the first image and the second image are input into a feature difference convolutional neural network model to obtain an output change intensity map; and a binary image of the ground feature changes is generated based on the change intensity map. Because the feature difference convolutional neural network model contains a twin deep convolutional neural network that has learned the deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network, the robustness and stability of remote sensing image change detection are improved and change detection accuracy is raised.
The embodiments of the present application also provide further embodiments of the ground feature change detection method.
Referring to Fig. 2, Fig. 2 is a second flowchart of a ground feature change detection method provided by an embodiment of the present application. As shown in Fig. 2, the ground feature change detection method includes the following steps:
Step 201: obtain a first image of a target area from before its ground features changed and a second image from after the ground features changed.
The implementation of this step is identical to that of step 101 in the foregoing embodiment and is not repeated here.
Further, the first image and the second image need to be input into the feature difference convolutional neural network model to obtain the output first change intensity map, which specifically includes:
Step 202: input the first image and the second image into the feature difference convolutional neural network model.
The feature difference convolutional neural network model here is a trained network model; it takes the two-period images as input so that the subsequent ground feature change detection can be performed.
Step 203: based on the first image and the second image, generate, through the twin deep convolutional neural network, deep feature maps from before and after the ground features changed, respectively.
The twin deep convolutional neural network processes the two input period images and generates the corresponding deep feature maps from before the ground features changed and from after they changed.
Step 204: based on the deep feature maps from before and after the ground features changed, obtain feature difference images through the feature difference network.
The feature difference network (FD-Net) can generate multiple feature difference images of different scales and depths representing different features. It is a two-channel convolutional network formed by weight sharing from the sub-network (Sub-VGG16 Net) of the remote sensing scene classification CNN model, with its learning rate set to 0. The two-channel convolutional network performs feature extraction, generating feature images of the two period images at different scales and depths, and then carries out feature differencing and normalization to obtain the feature difference images. The difference images of different scales are resampled to a unified scale consistent with that of the input images, so that the subsequent feature fusion can be carried out.
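Setting the learning rate to 0 amounts to freezing the transferred weights; a minimal sketch follows, with PyTorch as an assumed framework.

```python
import torchvision

sub_vgg16 = torchvision.models.vgg16(weights=None).features  # transferred backbone
for p in sub_vgg16.parameters():
    p.requires_grad = False  # learning rate of 0: only the fusion head trains
```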
As an optional embodiment, obtaining the feature difference images through the feature difference network based on the deep feature maps from before and after the ground feature change includes:
performing differencing and normalization on the deep feature maps from before and after the ground feature change to obtain the feature difference image fd_i:
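The formula image is missing here; a reconstruction consistent with the definitions below, with the exact normalization being an assumption, is:

$$
fd_{i} = \frac{\left| f_{i}^{T1} - f_{i}^{T2} \right|}{\max \left( \left| f_{i}^{T1} - f_{i}^{T2} \right| \right)}, \qquad i = 1, \ldots, N
$$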
where f_i^T1 denotes the i-th deep feature obtained when the input image is the first image, f_i^T2 denotes the i-th deep feature obtained when the input image is the second image, and N denotes the total number of deep features.
In conjunction with Table 1 above, the FD-Net part of Table 1 applies the above formula to the deep features generated from the two period images T1 and T2, performing differencing and normalization to obtain three feature difference images of different sizes (fd2, fd3, fd4) with 64, 128 and 256 features respectively. In addition, to retain some of the original boundary information for change detection, a 3-band difference image (fd1) is obtained by differencing the two period images directly. Since the outputs of conv4 (conv4_p) and conv7 (conv7_p) have passed through pooling, their sizes are 112 and 56 respectively; therefore, to allow superposition with the 224-size features, upsampling operations with ratios of 2 and 4 (up1, up2) are required.
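A sketch of this scale-unification step; bilinear interpolation is an assumption, and the shapes follow the text above.

```python
import torch
import torch.nn.functional as F

fd1 = torch.rand(1, 3, 224, 224)    # band-wise difference of the two images
fd2 = torch.rand(1, 64, 224, 224)   # conv2 feature difference
fd3 = torch.rand(1, 128, 112, 112)  # conv4 feature difference
fd4 = torch.rand(1, 256, 56, 56)    # conv7 feature difference

fd3 = F.interpolate(fd3, scale_factor=2, mode="bilinear", align_corners=False)  # up1
fd4 = F.interpolate(fd4, scale_factor=4, mode="bilinear", align_corners=False)  # up2
stacked = torch.cat([fd1, fd2, fd3, fd4], dim=1)  # (1, 451, 224, 224), into FF-Net
```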
Step 205: based on the feature difference images, generate the first change intensity map through the feature fusion network.
The feature fusion process specifically fuses the scale-unified feature difference images and outputs a change intensity image that reflects these difference images.
Step 206: generate a binary image of the ground feature changes based on the first change intensity map.
The implementation of this step is identical to that of step 103 in the foregoing embodiment and is not repeated here.
In the embodiments of the present application, a first image of a target area from before its ground features changed and a second image from after they changed are obtained; the first image and the second image are input into a feature difference convolutional neural network model to obtain an output change intensity map; and a binary image of the ground feature changes is generated based on the change intensity map. Because the feature difference convolutional neural network model contains a twin deep convolutional neural network that has learned the deep features of different remote sensing ground feature scenes, a feature difference network and a feature fusion network, the robustness and stability of remote sensing image change detection are improved and change detection accuracy is raised.
Referring to Fig. 3, Fig. 3 is a structural diagram of a ground feature change detection system provided by an embodiment of the present application; for ease of explanation, only the parts relevant to the embodiment of the present application are shown.
The ground feature change detection system 300 includes:
an obtaining module 301, configured to obtain a first image of a target area from before its ground features changed and a second image from after the ground features changed;
a first obtaining module 302, configured to input the first image and the second image into the feature difference convolutional neural network model to obtain an output first change intensity map;
a first generation module 303, configured to generate a binary image of the ground feature changes based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a twin deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network.
The first obtaining module 302 is specifically configured to:
input the first image and the second image into the feature difference convolutional neural network model;
based on the first image and the second image, generate, through the twin deep convolutional neural network, deep feature maps from before and after the ground features changed, respectively;
based on the deep feature maps from before and after the ground features changed, obtain feature difference images through the feature difference network;
based on the feature difference images, generate the first change intensity map through the feature fusion network.
Further, the first obtaining module 302 is specifically configured to:
perform differencing and normalization on the deep feature maps from before and after the ground features changed to obtain the feature difference image fd_i (see the formula above),
where f_i^T1 denotes the i-th deep feature obtained when the input image is the first image, f_i^T2 denotes the i-th deep feature obtained when the input image is the second image, and N denotes the total number of deep features.
The system further includes:
a training module, configured to train an initial deep convolutional neural network on a constructed remote sensing scene classification training dataset, where each sample in the remote sensing scene classification training dataset includes a remote sensing image of a different remote sensing ground feature scene and a corresponding scene label;
a second generation module, configured to generate the twin deep convolutional neural network based on the trained initial deep convolutional neural network.
The training module is specifically configured to:
perform sample augmentation on the constructed remote sensing scene classification training dataset to obtain a target remote sensing scene classification training dataset, where each sample in the target dataset includes a three-band remote sensing image block of a set pixel size and a corresponding scene ground-truth label;
train the initial deep convolutional neural network on the target remote sensing scene classification training dataset.
The system further includes:
a data construction module, configured to construct a remote sensing image change detection training dataset, which includes: a third image of a random area from before its ground features changed, a fourth image of the random area from after the ground features changed, and the corresponding change ground truth;
a third generation module, configured to generate a second change intensity map based on the third image and the fourth image;
a fourth generation module, configured to generate a change intensity weight matrix according to the second change intensity map;
a computing module, configured to compute, based on the change ground truth, a first ratio of changed pixels to the total number of pixels in the image and a second ratio of unchanged pixels to the total number of pixels;
a second obtaining module, configured to obtain a cross-entropy loss function (see the formula above) based on the change intensity weight matrix, the first ratio and the second ratio,
where N is the number of samples, a group of samples consisting of one third image and one fourth image; W is the change intensity weight matrix, in which an element is set to the mean change intensity when the corresponding pixel's change intensity value in the normalized second change intensity map is below that mean, and to the pixel's change intensity value when it is greater than or equal to the mean; β+ is the first ratio and β− is the second ratio; yn is the change ground truth; and ŷ is the predicted change intensity;
and to train the feature difference convolutional neural network model based on the cross-entropy loss function and the remote sensing image change detection training dataset.
In the embodiments of the present application, a first image of a target area from before its ground features changed and a second image from after they changed are obtained; the first image and the second image are input into a feature difference convolutional neural network model to obtain an output change intensity map; and a binary image of the ground feature changes is generated based on the change intensity map. Because the feature difference convolutional neural network model contains a twin deep convolutional neural network that has learned the deep features of different remote sensing ground feature scenes, a feature difference network and a feature fusion network, the robustness and stability of remote sensing image change detection are improved and change detection accuracy is raised.
The ground feature change detection system provided by the embodiments of the present application can implement each process of the above embodiments of the ground feature change detection method and achieve the same technical effects; to avoid repetition, details are not repeated here.
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application. As shown in Fig. 4, the terminal 4 of this embodiment includes: a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and runnable on the processor 40.
Illustratively, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 42 in the terminal 4. For example, the computer program 42 may be divided into an obtaining module, a first obtaining module and a first generation module, with the specific functions of each module as follows:
an obtaining module, configured to obtain a first image of a target area from before its ground features changed and a second image from after the ground features changed;
a first obtaining module, configured to input the first image and the second image into the feature difference convolutional neural network model to obtain an output first change intensity map;
a first generation module, configured to generate a binary image of the ground feature changes based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a twin deep convolutional neural network that has learned deep features of different remote sensing ground feature scenes, a feature difference network coupled to the twin deep convolutional neural network, and a feature fusion network connected to the feature difference network.
The terminal 4 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is only an example of the terminal 4 and does not constitute a limitation on the terminal 4, which may include more or fewer components than shown, or a combination of certain components, or different components; for example, the terminal may also include input and output devices, network access devices, buses, and so on.
The processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal 4. Further, the memory 41 may include both an internal storage unit of the terminal 4 and an external storage device. The memory 41 is used to store the computer program and other programs and data needed by the terminal. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It is apparent to those skilled in the art that for convenience of description and succinctly, only with above-mentioned each function Can unit, module division progress for example, in practical application, can according to need and by above-mentioned function distribution by different Functional unit, module are completed, i.e., the internal structure of described device is divided into different functional unit or module, more than completing The all or part of function of description.Each functional unit in embodiment, module can integrate in one processing unit, can also To be that each unit physically exists alone, can also be integrated in one unit with two or more units, it is above-mentioned integrated Unit both can take the form of hardware realization, can also realize in the form of software functional units.In addition, each function list Member, the specific name of module are also only for convenience of distinguishing each other, the protection scope being not intended to limit this application.Above system The specific work process of middle unit, module, can refer to corresponding processes in the foregoing method embodiment, and details are not described herein.
In the above-described embodiments, it all emphasizes particularly on different fields to the description of each embodiment, is not described in detail or remembers in some embodiment The part of load may refer to the associated description of other embodiments.
Those of ordinary skill in the art may be aware that list described in conjunction with the examples disclosed in the embodiments of the present disclosure Member and algorithm steps can be realized with the combination of electronic hardware or computer software and electronic hardware.These functions are actually It is implemented in hardware or software, the specific application and design constraint depending on technical solution.Professional technician Each specific application can be used different methods to achieve the described function, but this realization is it is not considered that exceed Scope of the present application.
In embodiment provided herein, it should be understood that disclosed terminal and method can pass through others Mode is realized.For example, terminal embodiment described above is only schematical, for example, the division of the module or unit, Only a kind of logical function partition, there may be another division manner in actual implementation, such as multiple units or components can be with In conjunction with or be desirably integrated into another system, or some features can be ignored or not executed.Another point, it is shown or discussed Mutual coupling or direct-coupling or communication connection can be through some interfaces, the INDIRECT COUPLING of device or unit or Communication connection can be electrical property, mechanical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, this application may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and should all be included within the scope of protection of this application.

Claims (10)

1. A method for detecting ground feature changes, characterized in that the detection method comprises:
acquiring a first image of a target area from before a ground feature change and a second image from after the ground feature change;
inputting the first image and the second image into a feature difference convolutional neural network model, and obtaining an output first change intensity map;
generating a binary image of the ground feature change based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a Siamese deep convolutional neural network that has learned deep features under different remote sensing ground feature scenes, a feature difference network coupled to the Siamese deep convolutional neural network, and a feature fusion network connected to the feature difference network.
2. The detection method according to claim 1, characterized in that inputting the first image and the second image into the feature difference convolutional neural network model and obtaining the output first change intensity map comprises:
inputting the first image and the second image into the feature difference convolutional neural network model;
generating, from the first image and the second image, deep feature maps from before and after the ground feature change, respectively, through the Siamese deep convolutional neural network;
obtaining a feature difference image through the feature difference network, based on the deep feature maps from before and after the ground feature change;
generating the first change intensity map through the feature fusion network, based on the feature difference image.
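As a reading aid (not part of the claims): a minimal sketch of how the pipeline of claims 1 and 2 could be composed, assuming PyTorch. All layer sizes, the absolute-difference/normalization choice, and the final 0.5 threshold are illustrative assumptions, not the patent's specification.

import torch
import torch.nn as nn

class FeatureDifferenceCDNet(nn.Module):
    """Sketch: shared-weight (Siamese) deep CNN -> feature differencing
    -> feature fusion network -> change intensity map."""

    def __init__(self):
        super().__init__()
        # Siamese deep CNN: one encoder, applied to both images (shared weights).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Feature fusion network: maps the feature difference image to a
        # one-channel change intensity map in [0, 1].
        self.fusion = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, img_t1, img_t2):
        f1 = self.encoder(img_t1)  # deep features before the change
        f2 = self.encoder(img_t2)  # deep features after the change
        # Feature difference step: absolute difference, normalized per
        # sample (one plausible reading of "difference and normalization").
        diff = torch.abs(f1 - f2)
        diff = diff / (diff.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)
        return self.fusion(diff)   # first change intensity map

# Usage sketch: threshold the intensity map to get the binary change image.
model = FeatureDifferenceCDNet()
before = torch.rand(1, 3, 256, 256)  # first image (pre-change)
after = torch.rand(1, 3, 256, 256)   # second image (post-change)
intensity = model(before, after)
binary = (intensity > 0.5).float()   # threshold value is an assumption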
3. The detection method according to claim 2, characterized in that
obtaining the feature difference image through the feature difference network, based on the deep feature maps from before and after the ground feature change, comprises:
differencing and normalizing the deep feature maps from before and after the ground feature change to obtain the feature difference image;
wherein, for i = 1, ..., N, the i-th deep feature obtained when the input image is the first image is differenced against the i-th deep feature obtained when the input image is the second image, and N denotes the total number of deep features.
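The difference-and-normalization formula referenced in claim 3 was an inline image in the source and did not survive extraction. The following LaTeX is a hedged reconstruction from the surviving definitions; the symbols f_i^{t_1}, f_i^{t_2} and the max-normalization are introduced here for illustration only:

% Assumed reconstruction: element-wise absolute difference of paired
% deep features, normalized to lie in [0, 1].
d_i = \frac{\lvert f_i^{t_1} - f_i^{t_2} \rvert}{\max_{1 \le j \le N} \lvert f_j^{t_1} - f_j^{t_2} \rvert}, \qquad i = 1, \dots, N

Here f_i^{t_1} is the i-th deep feature obtained when the input image is the first image, f_i^{t_2} is the i-th deep feature obtained when the input image is the second image, and N is the total number of deep features.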
4. The detection method according to claim 1, characterized in that, before inputting the first image and the second image into the feature difference convolutional neural network model and obtaining the output first change intensity map, the detection method further comprises:
training an initial deep convolutional neural network based on a constructed remote sensing scene classification training dataset, wherein the sample data in the remote sensing scene classification training dataset comprises: remote sensing images of different remote sensing ground feature scenes and corresponding scene labels;
generating the Siamese deep convolutional neural network based on the trained initial deep convolutional neural network.
5. The detection method according to claim 4, characterized in that training the initial deep convolutional neural network based on the constructed remote sensing scene classification training dataset comprises:
performing sample augmentation based on the constructed remote sensing scene classification training dataset to obtain a target remote sensing scene classification training dataset, wherein the samples in the target remote sensing scene classification training dataset comprise: three-band remote sensing image blocks of a set pixel size and corresponding scene ground-truth labels;
training the initial deep convolutional neural network on the target remote sensing scene classification training dataset.
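A minimal sketch of the sample-augmentation step in claim 5, assuming NumPy. The specific expansions (tiling into fixed-size blocks plus flips and rotations) are assumptions, since the recoverable text names only "sample augmentation" and fixed-size three-band image blocks with scene ground-truth labels.

import numpy as np

def augment_scene_samples(image, scene_label, block_size=224):
    """Tile a three-band remote sensing image into blocks of a set pixel
    size, then expand each block with flipped/rotated copies (assumed
    augmentations). Returns (block, scene_label) training samples."""
    samples = []
    height, width, _ = image.shape
    for y in range(0, height - block_size + 1, block_size):
        for x in range(0, width - block_size + 1, block_size):
            block = image[y:y + block_size, x:x + block_size, :3]
            variants = [block, np.fliplr(block), np.flipud(block),
                        np.rot90(block), np.rot90(block, 2)]
            samples.extend((v.copy(), scene_label) for v in variants)
    return samples

# Usage sketch: one three-band image, one scene ground-truth label.
img = np.random.rand(512, 512, 3).astype(np.float32)
training_samples = augment_scene_samples(img, scene_label=4)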
6. The detection method according to claim 1, characterized in that, before inputting the first image and the second image into the feature difference convolutional neural network model and obtaining the output first change intensity map, the detection method further comprises:
constructing a remote sensing image change detection training dataset, the remote sensing image change detection training dataset comprising: a third image of a random area from before a ground feature change, a fourth image of the random area from after the ground feature change, and corresponding change ground truth;
generating a second change intensity map based on the third image and the fourth image;
generating a change intensity weight matrix according to the second change intensity map;
calculating, based on the change ground truth, a first ratio of the number of changed pixels to the total number of pixels in the image, and a second ratio of the number of unchanged pixels to the total number of pixels;
obtaining a cross-entropy loss function based on the change intensity weight matrix, the first ratio, and the second ratio,
wherein N is the number of samples, one group of samples comprising one third image and one fourth image; W is the change intensity weight matrix, in which, when the change intensity value of a pixel in the normalized second change intensity map is less than the change intensity mean, the corresponding element of the change intensity weight matrix is set to the change intensity mean, and when the change intensity value of a pixel in the normalized second change intensity map is greater than or equal to the change intensity mean, the corresponding element of the change intensity weight matrix is set to that change intensity value; β+ is the first ratio and β− is the second ratio; y_n is the change ground truth; and ŷ_n denotes the predicted change intensity;
training the feature difference convolutional neural network model based on the cross-entropy loss function and the remote sensing image change detection training dataset.
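The loss expression in claim 6 was likewise an image in the source. Below is a plausible LaTeX reconstruction consistent with the surviving definitions; the pairing of β+ and β− with the changed and unchanged terms follows common class-balanced cross-entropy practice and is an assumption:

% Assumed reconstruction of the change-intensity-weighted, class-balanced
% binary cross-entropy; \odot denotes the element-wise product.
\mathcal{L} = -\frac{1}{N} \sum_{n=1}^{N} W \odot \left[ \beta_{-}\, y_n \log \hat{y}_n + \beta_{+}\, (1 - y_n) \log\left(1 - \hat{y}_n\right) \right]

Under this reading, the changed-pixel term is weighted by the unchanged-pixel ratio β− (and vice versa) to counteract class imbalance, while W gives extra weight to pixels whose value in the normalized second change intensity map is at or above the mean.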
7. A system for detecting ground feature changes, characterized in that the detection system comprises:
an acquisition module, configured to acquire a first image of a target area from before a ground feature change and a second image from after the ground feature change;
a first obtaining module, configured to input the first image and the second image into a feature difference convolutional neural network model and obtain an output first change intensity map;
a first generation module, configured to generate a binary image of the ground feature change based on the first change intensity map;
wherein the feature difference convolutional neural network model comprises: a Siamese deep convolutional neural network that has learned deep features under different remote sensing ground feature scenes, a feature difference network coupled to the Siamese deep convolutional neural network, and a feature fusion network connected to the feature difference network.
8. The detection system according to claim 7, characterized in that the first obtaining module is specifically configured to:
input the first image and the second image into the feature difference convolutional neural network model;
generate, from the first image and the second image, deep feature maps from before and after the ground feature change, respectively, through the Siamese deep convolutional neural network;
obtain a feature difference image through the feature difference network, based on the deep feature maps from before and after the ground feature change;
generate the first change intensity map through the feature fusion network, based on the feature difference image.
9. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201910515105.3A 2019-06-14 2019-06-14 Detection method and detection system for ground feature change and terminal Active CN110378224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910515105.3A CN110378224B (en) 2019-06-14 2019-06-14 Detection method and detection system for ground feature change and terminal


Publications (2)

Publication Number Publication Date
CN110378224A true CN110378224A (en) 2019-10-25
CN110378224B CN110378224B (en) 2021-01-05

Family

ID=68248776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910515105.3A Active CN110378224B (en) 2019-06-14 2019-06-14 Detection method and detection system for ground feature change and terminal

Country Status (1)

Country Link
CN (1) CN110378224B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017040691A1 (en) * 2015-08-31 2017-03-09 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
CN107248172A (en) * 2016-09-27 2017-10-13 中国交通通信信息中心 A kind of remote sensing image variation detection method based on CVA and samples selection
CN109033998A (en) * 2018-07-04 2018-12-18 北京航空航天大学 Remote sensing image atural object mask method based on attention mechanism convolutional neural networks
CN109409263A (en) * 2018-10-12 2019-03-01 武汉大学 A kind of remote sensing image city feature variation detection method based on Siamese convolutional network
CN109558806A (en) * 2018-11-07 2019-04-02 北京科技大学 The detection method and system of high score Remote Sensing Imagery Change

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
塞班智能 (Saiban Zhineng): "Remote Sensing Image Change Detection Based on a Deep Dual-Path Convolutional Neural Network", HTTPS://KUAIBAO.QQ.COM/S/20180720G0Q1JN00?REFER=SPIDER *
李卫华 (Li Weihua) et al.: "Change Detection in High-Resolution Remote Sensing Images via Multi-Scale and Multi-Feature Fusion", Chinese Science and Technology Journal Database *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969088A (en) * 2019-11-01 2020-04-07 华东师范大学 Remote sensing image change detection method based on significance detection and depth twin neural network
CN110826632A (en) * 2019-11-11 2020-02-21 深圳前海微众银行股份有限公司 Image change detection method, device, equipment and computer readable storage medium
CN110827269A (en) * 2019-11-11 2020-02-21 深圳前海微众银行股份有限公司 Crop growth change condition detection method, device, equipment and medium
CN110826632B (en) * 2019-11-11 2024-02-13 深圳前海微众银行股份有限公司 Image change detection method, device, equipment and computer readable storage medium
CN110827269B (en) * 2019-11-11 2024-03-05 深圳前海微众银行股份有限公司 Crop growth change condition detection method, device, equipment and medium
CN110991751A (en) * 2019-12-06 2020-04-10 讯飞智元信息科技有限公司 User life pattern prediction method and device, electronic equipment and storage medium
CN112016400A (en) * 2020-08-04 2020-12-01 香港理工大学深圳研究院 Single-class target detection method and device based on deep learning and storage medium
CN111986193A (en) * 2020-08-31 2020-11-24 香港中文大学(深圳) Remote sensing image change detection method, electronic equipment and storage medium
CN111986193B (en) * 2020-08-31 2024-03-19 香港中文大学(深圳) Remote sensing image change detection method, electronic equipment and storage medium
CN112233062A (en) * 2020-09-10 2021-01-15 浙江大华技术股份有限公司 Surface feature change detection method, electronic device, and storage medium
CN112396594A (en) * 2020-11-27 2021-02-23 广东电网有限责任公司肇庆供电局 Change detection model acquisition method and device, change detection method, computer device and readable storage medium
CN112396594B (en) * 2020-11-27 2024-03-29 广东电网有限责任公司肇庆供电局 Method and device for acquiring change detection model, change detection method, computer equipment and readable storage medium
CN112529897A (en) * 2020-12-24 2021-03-19 上海商汤智能科技有限公司 Image detection method and device, computer equipment and storage medium
CN112990045A (en) * 2021-03-25 2021-06-18 北京百度网讯科技有限公司 Method and apparatus for generating image change detection model and image change detection
CN114049568A (en) * 2021-11-29 2022-02-15 中国平安财产保险股份有限公司 Object shape change detection method, device, equipment and medium based on image comparison
CN114612694A (en) * 2022-05-11 2022-06-10 合肥高维数据技术有限公司 Picture invisible watermark detection method based on two-channel differential convolutional network
CN114612694B (en) * 2022-05-11 2022-07-29 合肥高维数据技术有限公司 Picture invisible watermark detection method based on two-channel differential convolutional network
CN114708260A (en) * 2022-05-30 2022-07-05 阿里巴巴(中国)有限公司 Image detection method
CN115170979B (en) * 2022-06-30 2023-02-24 国家能源投资集团有限责任公司 Mining area fine land classification method based on multi-source data fusion
CN115170979A (en) * 2022-06-30 2022-10-11 国家能源投资集团有限责任公司 Mining area fine land classification method based on multi-source data fusion
CN115240081B (en) * 2022-09-19 2023-01-17 航天宏图信息技术股份有限公司 Method and device for detecting full element change of remote sensing image
CN115240081A (en) * 2022-09-19 2022-10-25 航天宏图信息技术股份有限公司 Method and device for detecting full element change of remote sensing image
CN115393966A (en) * 2022-10-27 2022-11-25 中鑫融信(北京)科技有限公司 Dispute mediation data processing method and system based on credit supervision
CN115641509A (en) * 2022-11-16 2023-01-24 自然资源部第三地理信息制图院 Method and system for detecting changes of ground objects in remote sensing image, electronic device and storage medium
CN115641509B (en) * 2022-11-16 2023-03-21 自然资源部第三地理信息制图院 Method and system for detecting changes of ground objects in remote sensing image, electronic device and storage medium

Also Published As

Publication number Publication date
CN110378224B (en) 2021-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant