CN108038851A - Radar image difference detection method based on feedback and iteration - Google Patents
Radar image difference detection method based on feedback and iteration
- Publication number
- CN108038851A (application CN201711303801.5A)
- Authority
- CN
- China
- Prior art keywords
- value
- iteration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10032—Satellite or aerial image; remote sensing; G06T2207/10044—Radar image
Abstract
The present invention relates to a radar image difference detection method based on feedback and iteration. Existing methods carry out two main steps separately: difference image generation and binary calibration according to the difference image. The proposed method instead uses the result of the binary calibration as the basis for feedback, adjusting the parameters of the difference-image generating function and the binary calibration thresholds. Iteration drives the result toward accuracy, while taking the local information of the image into account lets the difference map generation and binary calibration differ across locations in the image, which increases the robustness of the algorithm and can improve the detection accuracy to a certain extent. The detection method uses an MRF energy function as the iteration termination condition, which also guarantees the accuracy of the method to a certain extent.
Description
Technical field
The present invention relates to data and image processing, and more particularly to a radar image difference detection method based on feedback and iteration.
Background technology
In recent years, with the rapid development of computer technology, image-based information acquisition and analysis have become increasingly dependent on computers, and image difference detection based on computer vision has become one of the research hotspots. This work mainly addresses the problem of unsupervised difference detection in SAR images. As the name suggests, image difference detection means detecting and marking out the differences between two largely consistent images. One important application is the analysis of radar images: change detection can be used for detecting environmental change, surveying landforms, monitoring forest cover change, and so on. Such difference detection mainly requires analyzing two radar images of the same area taken at different times.
Current methods for this problem fall broadly into supervised and unsupervised approaches [1,2]. The former are mainly based on supervised classifiers, so they need a large number of labeled difference images to train the classifier. The latter directly compare the two images containing differences without extra information. Although supervised methods generally detect better than unsupervised algorithms, producing labeled difference images is costly and difficult. Unsupervised image difference detection is therefore of great significance in practical applications.
Solving the problem involves two main steps: first, the two images containing differences are processed to generate a difference image; then the difference and non-difference regions are calibrated from it. Several relatively mature methods exist for obtaining the difference image. Some studies [5,6,7] process the images with the log-ratio (LR) operator and then generate the difference map. Gong et al. [3] generate the difference image by comparing, for each point of the two images under test, the differences of corresponding positions within a neighborhood of a certain size around it. Zheng et al. [4] fuse the difference of the pixel values of corresponding points in the two images with the difference of their logarithms to generate the difference map. When calibrating difference and non-difference regions according to the generated difference map, part of the research focuses on setting a suitable threshold for binarization [8]; another popular approach uses k-means clustering [4] to distinguish difference from non-difference. Lorenzo Bruzzone uses a similar method when generating the difference image, then sets the difference/non-difference threshold with the EM algorithm [8], and adjusts the difference calibration according to the threshold combined with the information around each point.
Analysis shows that existing algorithms all treat difference-map generation and binary calibration-map generation as two independent problems, solved one after the other. This work, however, considers the two inseparably linked and therefore treats the two stages as one whole, iteratively optimizing the result with the idea of feedback. This is the main novelty of the algorithm.
[1] Bruzzone L, Serpico S B. An iterative technique for the detection of land-cover transitions in multitemporal remote-sensing images[J]. IEEE Transactions on Geoscience & Remote Sensing, 1997, 35(4): 858-867.
[2] Singh A. Review article: Digital change detection techniques using remotely-sensed data[J]. International Journal of Remote Sensing, 1989, 10(6): 989-1003.
[3] Gong M, Cao Y, Wu Q. A Neighborhood-Based Ratio Approach for Change Detection in SAR Images[J]. IEEE Geoscience & Remote Sensing Letters, 2012, 9(2): 307-311.
[4] Zheng Y, Zhang X, Hou B, et al. Using Combined Difference Image and k-Means Clustering for SAR Image Change Detection[J]. IEEE Geoscience and Remote Sensing Letters, 2014, 11(3): 691-695.
[5] Celik T. Change Detection in Satellite Images Using a Genetic Algorithm Approach[J]. IEEE Geoscience & Remote Sensing Letters, 2010, 7(2): 386-390.
[6] Bazi Y, Bruzzone L, Melgani F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images[J]. IEEE Transactions on Geoscience & Remote Sensing, 2005, 43(4): 874-887.
[7] Bovolo F, Bruzzone L. A detail-preserving scale-driven approach to change detection in multitemporal SAR images[J]. IEEE Transactions on Geoscience & Remote Sensing, 2005, 43(12): 2963-2972.
[8] Bruzzone L, Prieto D F. Automatic analysis of the difference image for unsupervised change detection[J]. IEEE Transactions on Geoscience & Remote Sensing, 2000, 38(3): 1171-1182.
Summary of the invention
The technical problem solved by the present invention: to overcome the deficiencies of the prior art, a radar (SAR) image difference detection algorithm based on feedback and iteration is provided. For the two main steps of existing methods, difference image generation and binary calibration according to the difference image, the result of the binary calibration is used as the basis for feedback, adjusting the parameters of the difference-image generating function and the binary calibration thresholds. Iteration drives the result toward accuracy, while considering the local information of the image makes the difference map generation and binary calibration differ across image locations, increasing the robustness of the algorithm and improving the detection accuracy to a certain extent. The algorithm uses an MRF energy function as the iteration termination condition, which also guarantees its accuracy to a certain extent.
The radar (SAR) image difference detection algorithm based on feedback and iteration provided by the invention is implemented as follows:
Step 1: For the given SAR images of the same area taken at different times, first generate the difference map (DI) with a difference-map generating function; the parameters of this generating function include the filtering weights.
Step 2: Use two threshold matrices of the same size as the original image, whose elements give the high threshold and low threshold of every point, and generate the corresponding binary calibration maps from these two threshold matrices.
Step 3: Using the two binary calibration maps obtained in Step 2, divide the difference map into credible difference regions, credible non-difference regions, and untrustworthy regions, and modify, per region, the elements of the high- and low-threshold matrices of Step 2 and the filtering weights of the difference-map generating function of Step 1.
Step 4: Compute the MRF energy function value U of the binary calibration maps of Step 2. If U reaches a local minimum, stop iterating, take the binary calibration map as the final difference calibration result and output it; otherwise go to Step 1 for the next iteration, while adjusting the filtering weights of Step 1 and the elements of the high- and low-threshold matrices of Step 2.
Compared with the prior art, the advantages of the present invention are:
(1) A new difference-map generating function is used to generate the difference image. When computing the difference image it considers, for every point, a certain neighborhood around it; the smoothing weight at each point is determined by the nature of the region the point lies in, and is adjusted iteratively through the feedback mechanism.
(2) The single binarization threshold of the difference image is expanded into two threshold matrices, so the threshold at every point of the image can differ and is determined by the nature of the region the point lies in.
(3) According to the binary calibration result, the difference image is divided into credible difference regions, credible non-difference regions, and untrustworthy regions, and the difference-image generation and binary calibration processes are adjusted according to the region a point belongs to. The introduction of the iterative mechanism makes the result increasingly accurate.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the SAR image difference detection method based on feedback and iteration of the present invention.
Detailed description of the embodiments
The specific implementation of the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the method of the present invention comprises Steps 1-4:
Step 1. Difference map generation: for the given SAR images of the same area taken at different times, first generate the difference map (DI) with a difference-map generating function whose parameters include the filtering weights.
Given two SAR images I_1 and I_2, first apply PPB filtering, then generate the difference map from each point's neighborhood information:
DI(m) = \sum_{i \in \omega_m} \beta_{m,i}^t \, |\log(I_1(i) + 1) - \log(I_2(i) + 1)|
where m is the pixel index, DI the difference map, DI(m) the difference value at point m, \omega_m the neighborhood around m, \beta_{m,i}^t the filtering weight at iteration step t, and t the iteration count.
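The generation of DI can be sketched in Python/NumPy as below. This is an illustrative sketch only: the function and variable names are ours, PPB filtering is assumed to have been applied to the inputs already, and `weights` stands for the β weights of the current iteration.

```python
import numpy as np

def disparity_map(i1, i2, weights, radius=1):
    """Weighted log-ratio difference map over a (2*radius+1)^2 neighborhood.

    i1, i2  : 2-D arrays, the two co-registered (PPB-filtered) SAR images.
    weights : 4-D array weights[x, y, dx, dy] holding the beta weights of
              the current iteration for each neighbor offset; each pixel's
              weight slice is assumed to sum to 1.
    """
    # Per-pixel absolute log-ratio term |log(I1+1) - log(I2+1)|.
    log_diff = np.abs(np.log(i1 + 1.0) - np.log(i2 + 1.0))
    padded = np.pad(log_diff, radius, mode="edge")
    h, w = log_diff.shape
    di = np.zeros_like(log_diff)
    # Accumulate the weighted neighborhood sum, one offset at a time.
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            shifted = padded[radius + dx : radius + dx + h,
                             radius + dy : radius + dy + w]
            di += weights[:, :, dx + radius, dy + radius] * shifted
    return di
```

With identical inputs the map is zero everywhere, and with constant inputs it equals the constant log-ratio, since the weights are normalized.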
Step 2. Binary calibration map generation:
Use two threshold matrices of the same size as the original image; their elements give the high threshold T_{high}^t(m) and low threshold T_{low}^t(m) of every point. The corresponding binary calibration maps CM_{low} and CM_{high} are generated from these two threshold matrices by the per-pixel judgment
CM_x^t(m) = 1 if DI(m) > T_x^t(m), and 0 otherwise, for x \in \{low, high\},
where 0 denotes no difference and 1 denotes a difference, m is the pixel index, and t the iteration count.
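The dual-threshold calibration can be sketched as below. The exact judgment formula is not reproduced in the text, so a plain per-pixel greater-than comparison is assumed, and all names are ours.

```python
import numpy as np

def binary_calibration(di, t_low, t_high):
    """Per-pixel dual thresholding of the difference map.

    di, t_low, t_high : 2-D arrays of identical shape.
    Returns (cm_low, cm_high), where 1 marks a suspected difference and
    0 no difference.  Since t_low <= t_high element-wise, every pixel
    flagged by the high threshold is also flagged by the low one.
    """
    cm_low = (di > t_low).astype(np.uint8)
    cm_high = (di > t_high).astype(np.uint8)
    return cm_low, cm_high
```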
Step 3. Feedback parameter adjustment:
Using the two binary calibration maps obtained in Step 2, divide the difference map into credible difference regions, credible non-difference regions, and untrustworthy regions, and modify, per region, the elements of the high- and low-threshold matrices of Step 2 and the filtering weights of the difference-map generating function of Step 1.
Step (1): classify the difference map using the two binary calibration maps obtained in Step 2.
Initialize a count matrix N of the same size as image I_1 with all elements 0; at the beginning of each iteration, update N according to Step 2:
if CM_{high}(m) = 1: N(i) = N(i) + 1, \forall i \in \omega_m;
if CM_{low}(m) = 1: N(i) = N(i) - 1, \forall i \in \omega_m.
According to N and a positive threshold \Delta, the difference image is divided into three different types of region: credible difference, credible non-difference, and untrustworthy.
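Step (1) can be sketched as follows. The patent's three-region formula is not reproduced in the text; comparing N(m) against ±Δ (here `delta`) is an assumption consistent with the threshold-update exponents, and all names are ours.

```python
import numpy as np

def classify_regions(cm_low, cm_high, n, radius=1, delta=3):
    """Update the count matrix `n` from the two calibration maps, then
    split pixels into three region labels:
      +1 credible difference, -1 credible non-difference, 0 untrustworthy.
    """
    h, w = n.shape
    for x in range(h):
        for y in range(w):
            # Neighborhood window, clipped at the image border.
            x0, x1 = max(0, x - radius), min(h, x + radius + 1)
            y0, y1 = max(0, y - radius), min(w, y + radius + 1)
            if cm_high[x, y] == 1:
                n[x0:x1, y0:y1] += 1
            if cm_low[x, y] == 1:
                n[x0:x1, y0:y1] -= 1
    regions = np.zeros_like(n)
    regions[n > delta] = 1       # credible difference
    regions[n < -delta] = -1     # credible non-difference
    return n, regions
```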
Step (2): adjust the Step 2 threshold matrices T_{high}, T_{low} by feedback.
If any pixel m belongs to a credible difference region,
T_{high}^{t+1}(m) = T_{high}^t(m) / V^{N(m) - \Delta + 1};
if any pixel m belongs to a credible non-difference region,
T_{low}^{t+1}(m) = T_{low}^t(m) / V^{-N(m) - \Delta + 1},
where V is a constant. The initial values are set as
T_{low}^1 = M_D(1 - \alpha),  T_{high}^1 = M_D(1 + \alpha),
with M_D = [\max(DI) - \min(DI)]/2, \alpha \in (0, 1) a constant, and \max(DI), \min(DI) the maximum and minimum values in DI.
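Step (2) can be sketched as below. The values of `v` and `delta` are illustrative only, since the patent fixes the form of the update but not these constants; names are ours.

```python
import numpy as np

def update_thresholds(t_low, t_high, n, regions, v=2.0, delta=3):
    """Feedback update of the per-pixel threshold matrices.

    Credible-difference pixels (regions == 1) have their high threshold
    divided by v**(N(m) - delta + 1); credible-non-difference pixels
    (regions == -1) have their low threshold divided by
    v**(-N(m) - delta + 1).  Untrustworthy pixels are left unchanged.
    """
    t_low, t_high = t_low.copy(), t_high.copy()
    diff = regions == 1
    nodiff = regions == -1
    t_high[diff] = t_high[diff] / v ** (n[diff] - delta + 1)
    t_low[nodiff] = t_low[nodiff] / v ** (-n[nodiff] - delta + 1)
    return t_low, t_high
```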
Step (3): adjust the Step 1 filtering weights \beta_{m,i}^t.
At iteration step t the value of \beta_{m,i}^t is computed as
\beta_{m,i}^t = \dfrac{1 / \left(1 + \gamma_m^t \sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}\right)}{\sum_{i \in \omega_m} 1 / \left(1 + \gamma_m^t \sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}\right)}
where (x_m, y_m) are the coordinates of pixel m, \gamma_m^1 = 1, and Z is a constant greater than 1.
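The normalized inverse-distance weights of step (3) can be sketched as follows. Names are ours; the iterative update of γ via the constant Z is not reproduced in the text and is therefore left out of the sketch.

```python
import numpy as np

def beta_weights(gamma, radius=1):
    """Normalized inverse-distance neighborhood weights.

    gamma : 2-D array of per-pixel gamma values for this iteration.
    Returns weights[x, y, dx, dy]; each pixel's (2*radius+1)^2 weight
    slice sums to 1, with the center (distance 0) weighted most.
    """
    h, w = gamma.shape
    size = 2 * radius + 1
    weights = np.zeros((h, w, size, size))
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            dist = np.hypot(dx, dy)  # Euclidean offset distance
            weights[:, :, dx + radius, dy + radius] = 1.0 / (1.0 + gamma * dist)
    # Normalize so the weights of each neighborhood sum to 1.
    weights /= weights.sum(axis=(2, 3), keepdims=True)
    return weights
```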
Step 4. Judge the iteration stop condition:
Compute the MRF energy function value U of the binary calibration maps of Step 2. If U reaches a local minimum, stop iterating, take the binary calibration map as the final difference calibration result and output it; otherwise go to Step 1 for the next iteration, while adjusting the filtering weights of Step 1 and the high- and low-threshold matrices of Step 2.
The MRF energy of the binary calibration map CM_{low} of step (2) and the difference map DI is computed as
U(DI, CM) = U_{data}(DI, CM) + \varepsilon U_{context}(DI, CM)
where \varepsilon is the weight controlling the two influences;
U_{data}(DI, CM) = \sum_m \log\left(2\pi\delta_{CM(m)}^2\right) + \frac{1}{\delta_{CM(m)}^2}\left(DI(m) - \mu_{CM(m)}\right)^2
with \mu, \delta^2 the mean and variance of DI over the classes G_0 = \{m \mid CM(m) = 0\} and G_1 = \{m \mid CM(m) = 1\};
U_{context}(DI, CM) = \sum_m \sum_{i \in \omega_m} c_{m,i}, where c_{m,i} = -1 if CM(m) = CM(i) and 0 if CM(m) \neq CM(i).
Iteration ends when U(DI, CM) reaches a local minimum, with CM_{low} taken as the final calibration result; otherwise return to Step 1 and continue.
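The MRF energy can be sketched as below. This is an illustrative sketch with our names: an 8-neighborhood is assumed for ω_m, and a small guard is added against zero variance.

```python
import numpy as np

def mrf_energy(di, cm, eps=1.0):
    """MRF energy U = U_data + eps * U_context of a binary calibration map.

    U_data sums, per pixel, log(2*pi*var_c) + (DI(m) - mu_c)^2 / var_c,
    with mu_c, var_c the mean/variance of DI over the pixel's class
    c = CM(m).  U_context adds -1 for every ordered pair of 8-neighbors
    carrying equal labels, so spatially coherent labelings score lower.
    """
    u_data = 0.0
    for label in (0, 1):
        vals = di[cm == label]
        if vals.size:
            mu = vals.mean()
            var = vals.var() + 1e-12  # guard against zero variance
            u_data += np.sum(np.log(2 * np.pi * var) + (vals - mu) ** 2 / var)

    u_context = 0.0
    h, w = cm.shape
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            # Overlapping slices compare each pixel with its (dx, dy) neighbor.
            a = cm[max(0, dx):h + min(0, dx), max(0, dy):w + min(0, dy)]
            b = cm[max(0, -dx):h + min(0, -dx), max(0, -dy):w + min(0, -dy)]
            u_context -= np.count_nonzero(a == b)
    return u_data + eps * u_context
```

In the iteration of Step 4, U is recomputed each round and the loop stops once it no longer decreases.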
Claims (5)
1. A radar image difference detection method based on feedback and iteration, characterized in that:
Step 1: for the given SAR images of the same area taken at different times, first generate the difference map with a difference-map generating function whose parameters include the filtering weights;
Step 2: use two threshold matrices of the same size as the original image, whose elements give the high and low thresholds of every point, and generate the corresponding binary calibration maps from these two threshold matrices;
Step 3: using the two binary calibration maps obtained in Step 2, divide the difference map into credible difference regions, credible non-difference regions and untrustworthy regions, and modify, per region, the elements of the high- and low-threshold matrices of Step 2 and the filtering weights of the difference-map generating function of Step 1;
Step 4: compute the MRF energy function value U of the binary calibration maps of Step 2; if U reaches a local minimum, stop iterating, take the binary calibration map as the final difference calibration result and output it; otherwise go to Step 1 for the next iteration, while adjusting the filtering weights of Step 1 and the elements of the high- and low-threshold matrices of Step 2.
2. The radar image difference detection method based on feedback and iteration according to claim 1, characterized in that the difference-map generation rule of Step 1 is as follows: given two SAR images I_1 and I_2, first apply PPB filtering, then generate the difference map from each point's neighborhood information:
DI(m) = \sum_{i \in \omega_m} \beta_{m,i}^t \, |\log(I_1(i) + 1) - \log(I_2(i) + 1)|
where m is the pixel index, DI the difference map, DI(m) the difference value at point m, \omega_m the neighborhood around m, \beta_{m,i}^t the filtering weight at iteration step t, and t the iteration count.
3. The radar image difference detection method based on feedback and iteration according to claim 2, characterized in that the binary calibration maps of Step 2 are generated as follows: use two threshold matrices of the same size as the original image, whose elements give the high threshold T_{high}^t(m) and low threshold T_{low}^t(m) of every point; the corresponding binary calibration maps CM_{low} and CM_{high} are generated from these two threshold matrices by the per-pixel judgment CM_x^t(m) = 1 if DI(m) > T_x^t(m) and 0 otherwise, for x \in \{low, high\}, where 0 denotes no difference and 1 denotes a difference, m is the pixel index and t the iteration count.
4. The radar image difference detection method based on feedback and iteration according to claim 3, characterized in that Step 3 modifies the elements of the high-threshold matrix T_{high} and low-threshold matrix T_{low} of Step 2 and the corresponding parameters of the difference-map generating function of Step 1 as follows:
Step (1): classify the difference map using the two binary calibration maps obtained in Step 2. Initialize a count matrix N of the same size as image I_1 with all elements 0; at the beginning of each iteration, update N according to Step 2:
if CM_{high}(m) = 1: N(i) = N(i) + 1, \forall i \in \omega_m;
if CM_{low}(m) = 1: N(i) = N(i) - 1, \forall i \in \omega_m.
According to N and a positive threshold \Delta, the difference image is divided into three different types of region: credible difference, credible non-difference, and untrustworthy.
Step (2): adjust the Step 2 thresholds by feedback. If any pixel m belongs to a credible difference region,
T_{high}^{t+1}(m) = T_{high}^t(m) / V^{N(m) - \Delta + 1};
if any pixel m belongs to a credible non-difference region,
T_{low}^{t+1}(m) = T_{low}^t(m) / V^{-N(m) - \Delta + 1}.
The initial values are set as T_{low}^1 = M_D(1 - \alpha), T_{high}^1 = M_D(1 + \alpha), with M_D = [\max(DI) - \min(DI)]/2, \alpha \in (0, 1) a constant, and \max(DI), \min(DI) the maximum and minimum values in DI.
Step (3): adjust the Step 1 weights \beta_{m,i}^t. At iteration step t,
\beta_{m,i}^t = \dfrac{1 / \left(1 + \gamma_m^t \sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}\right)}{\sum_{i \in \omega_m} 1 / \left(1 + \gamma_m^t \sqrt{(x_m - x_i)^2 + (y_m - y_i)^2}\right)}
where (x_m, y_m) are the coordinates of pixel m, \gamma_m^1 = 1, and Z is a constant greater than 1.
5. The radar image difference detection method based on feedback and iteration according to claim 4, characterized in that Step 4 computes the MRF energy of the binary calibration map CM_{low} of Step 2 and the difference map DI as follows:
U(DI, CM) = U_{data}(DI, CM) + \varepsilon U_{context}(DI, CM)
where \varepsilon is the weight controlling the two influences;
U_{data}(DI, CM) = \sum_m \log\left(2\pi\delta_{CM(m)}^2\right) + \frac{1}{\delta_{CM(m)}^2}\left(DI(m) - \mu_{CM(m)}\right)^2
with \mu, \delta^2 the mean and variance of DI over G_0 = \{m \mid CM(m) = 0\} and G_1 = \{m \mid CM(m) = 1\};
U_{context}(DI, CM) = \sum_m \sum_{i \in \omega_m} c_{m,i}, where c_{m,i} = -1 if CM(m) = CM(i) and 0 if CM(m) \neq CM(i).
Iteration ends when U(DI, CM) reaches a local minimum, with CM_{low} as the final calibration result; otherwise return to Step 1 and continue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711303801.5A CN108038851B (en) | 2017-12-11 | 2017-12-11 | Radar image difference detection method based on feedback and iteration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108038851A true CN108038851A (en) | 2018-05-15 |
CN108038851B CN108038851B (en) | 2020-02-07 |
Family
ID=62102201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711303801.5A Active CN108038851B (en) | 2017-12-11 | 2017-12-11 | Radar image difference detection method based on feedback and iteration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038851B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5654772A (en) * | 1993-04-10 | 1997-08-05 | Robert Bosch Gmbh | Method for change detection in moving images |
CN1760888A (en) * | 2005-11-03 | 2006-04-19 | 复旦大学 | Method for recognizing change of earth's surface by using satellite SAR carried images at multiple time phases |
US7336803B2 (en) * | 2002-10-17 | 2008-02-26 | Siemens Corporate Research, Inc. | Method for scene modeling and change detection |
CN103020978A (en) * | 2012-12-14 | 2013-04-03 | 西安电子科技大学 | SAR (synthetic aperture radar) image change detection method combining multi-threshold segmentation with fuzzy clustering |
CN105118065A (en) * | 2015-09-14 | 2015-12-02 | 中国民航大学 | Polari SAR (metric synthetic aperture radar) image variation detection method of wavelet domain polarization distance transformation |
US20160137202A1 (en) * | 2014-11-19 | 2016-05-19 | Denso Corporation | Travel lane marking recognition apparatus |
CN105844637A (en) * | 2016-03-23 | 2016-08-10 | 西安电子科技大学 | Method for detecting SAR image changes based on non-local CV model |
CN106650571A (en) * | 2016-09-09 | 2017-05-10 | 河海大学 | Multi-temporal remote sensing image change detection method based on adaptive chi-squared transform (CST) |
Non-Patent Citations (2)
Title |
---|
Wang Wang: "Remote Sensing Image Change Detection Based on Domestic Resource Satellites", China Excellent Master's Theses Full-text Database, Information Science and Technology |
Hu Zhaoling: "Unsupervised Change Detection in SAR Images Based on the Generalized Gaussian Model and the KI Double-Threshold Method", Acta Geodaetica et Cartographica Sinica |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108918932A (en) * | 2018-09-11 | 2018-11-30 | 广东石油化工学院 | Power signal adaptive filter method in load decomposition |
CN108918932B (en) * | 2018-09-11 | 2021-01-15 | 广东石油化工学院 | Adaptive filtering method for power signal in load decomposition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder | ||
Address after: No. 443 Huangshan Road, Shushan District, Hefei City, Anhui Province, 230022; Patentee after: University of Science and Technology of China
Address before: No. 96 Jinzhai Road, Baohe District, Hefei, Anhui Province, 230026; Patentee before: University of Science and Technology of China