CN107705295B - Image difference detection method based on robust principal component analysis method - Google Patents


Info

Publication number
CN107705295B
CN107705295B (application CN201710828732.3A)
Authority
CN
China
Prior art keywords
difference
image
matrix
length
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710828732.3A
Other languages
Chinese (zh)
Other versions
CN107705295A (en)
Inventor
杨曦
杨东
高新波
宋彬
王楠楠
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710828732.3A priority Critical patent/CN107705295B/en
Publication of CN107705295A publication Critical patent/CN107705295A/en
Application granted granted Critical
Publication of CN107705295B publication Critical patent/CN107705295B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image difference detection method based on the robust principal component analysis (RPCA) method, which mainly solves the problem of detecting difference changes in image or video data. The method comprises the following implementation steps: 1. acquire images of the same scene at different times and from different viewing angles; 2. perform geometric registration on the images; 3. column-vectorize the registered image data of each image and combine all the column vectors into a matrix X; 4. decompose the matrix X with RPCA to obtain the corresponding sparse matrix S_0 containing the difference-point information; 5. obtain the filled difference-point regions of each image from the sparse matrix S_0 and filter out noise points; 6. obtain the center coordinates and the length and width of the difference regions of each image from the filled regions, and label the difference regions in the registered images. Compared with the prior art, the method is more robust to various non-ideal disturbances such as viewing angle, illumination and noise, and can be used for detecting difference regions under a multi-temporal unmanned aerial vehicle platform.

Description

Image difference detection method based on robust principal component analysis method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image difference detection method based on the robust principal component analysis method.
Background
Image difference detection systems based on the unmanned aerial vehicle (UAV) platform have important practical value and broad application prospects in both military and civilian use, and are a popular field in the research of next-generation unmanned monitoring technology. By performing difference detection on the picture or video information acquired by a multi-view, multi-temporal UAV platform, changes in potential regions of interest and the movement of targets can be discovered, the features of those regions can be extracted and described, and a solid basis for further decision-making is provided.
Existing difference detection methods mainly describe image differences through changes in the gray-scale map or color of the images. They achieve high detection performance on images acquired from a fixed viewing angle and can discover and identify difference regions well. In practical applications, however, the development and popularization of a new generation of unmanned monitoring platforms, represented by UAVs, require monitoring different areas at different times, from different viewing angles, and even with different monitoring equipment. If a detection method based on the image gray-scale map is still adopted, non-ideal factors such as viewing-angle differences and illumination conditions seriously degrade detection performance, so that the difference detection result contains a great number of false alarms and cannot be used to judge the real difference regions.
To better meet the monitoring requirements of new-generation unmanned platforms, and in particular the application requirement of detecting many images simultaneously, the traditional method of differencing two images is no longer suitable. An adaptive background-construction and information-extraction technique is required: by analyzing and optimally solving for the background characteristics of the multiple images, background images satisfying certain correlation characteristics are uniformly identified and extracted, and adaptive detection of the difference regions is realized in combination with image processing methods.
Disclosure of Invention
The invention aims to provide an image difference detection method based on the robust principal component analysis method. The invention exploits the background correlation among multiple images to reduce color disturbances caused by non-ideal factors such as time, viewing angle and illumination, adaptively extracts the features of the difference regions, improves identification robustness, reduces false-alarm points, and realizes adaptive difference detection on unmanned platforms under multiple time phases and multiple viewing angles.
In order to achieve the technical purpose, the invention is realized by adopting the following technical scheme.
An image difference detection method based on a robust principal component analysis method comprises the following steps:
s1: acquiring a plurality of images of the same scene at different time and different visual angles by using an optical camera, wherein the number of the acquired images is M;
s2: performing geometric registration on the M images to obtain registered image data, expressed as matrices X_1 to X_M, or X_i with i taking values from 1 to M, where X_i denotes the registered image data of the i-th image;
s3: column-vectorizing the matrices X_1 to X_M respectively to obtain column vectors η_1 to η_M, then combining the column vectors η_1 to η_M into a matrix X, X = [η_1, ..., η_M];
S4: decomposing the matrix X by the robust principal component analysis method to obtain the corresponding sparse matrix S_0 containing the difference-point information;
S5: reshaping each column of the sparse matrix S_0 into a matrix of the same dimensions as X_i; the matrix into which the i-th column of S_0 is reshaped is denoted X̂_i, with i taking values from 1 to M; the reshaped matrices carry the difference-point information of each image;
S6: filtering noise points out of X̂_i, merging connected difference points of the same region into a difference region, and labeling each retained difference region in sequence as X̂_i^1 to X̂_i^{L_i}, or X̂_i^l, with l taking values from 1 to L_i, where L_i is the number of difference regions obtained in the i-th image and X̂_i^l denotes the l-th difference region in the i-th image;
S7: computing, for each difference region X̂_i^l (i = 1, ..., M, l = 1, ..., L_i), the corresponding center coordinates (m_{i_l}, n_{i_l}), length length_{i_l} and height height_{i_l}, and recording the difference-region information of the region as x_{i_out_l} = [m_{i_l}, n_{i_l}, length_{i_l}, height_{i_l}] (i_out_l = i_out_1, ..., i_out_{L_i}, i = 1, ..., M);
S8: using x_{i_out_l} (i_out_l = i_out_1, ..., i_out_{L_i}, i = 1, ..., M) to obtain the difference-region information matrix X_i^out of the i-th image (i = 1, ..., M), and labeling the difference regions corresponding to the difference-region information matrix on the registered image X_i.
In some embodiments, in step S2, performing geometric registration on the M images to obtain registered image data, expressed as matrices X_1 to X_M, or X_i with i taking values from 1 to M, where X_i denotes the registered image data of the i-th image, comprises the following steps: taking the first image as reference, sequentially registering the other images geometrically using the SIFT operator or an improved variant thereof, transforming them to a viewing angle and scene size consistent with the first image, and recording the registered image data as matrices X_1 to X_M, or X_i, i taking values from 1 to M, where X_i denotes the registered image data of the i-th image.
In some embodiments, in the step S4, decomposing the matrix X by the robust principal component analysis method to obtain the corresponding sparse matrix S_0 containing the difference-point information specifically comprises:
the decomposition model for the matrix X is X = L_0 + S_0 + N_0, where L_0, S_0 and N_0 are the three sub-matrices after decomposition, L_0 is a low-rank matrix, N_0 represents the residual noise, and S_0 is a sparse matrix of the same dimensions as the matrix X; L_0, S_0 and N_0 are extracted from the optimization model
min ||L_0||_* + μ||S_0||_1
s.t. ||X − L_0 − S_0||_F < δ
where ||·||_1 denotes the 1-norm, ||·||_F the F-norm, and ||·||_* the nuclear norm; δ is a set constant, μ is a weight factor with μ > 0, min denotes minimization, and s.t. abbreviates "subject to" ("constrained to"); the equation as a whole means: subject to the constraint ||X − L_0 − S_0||_F < δ, minimize the value of the objective function ||L_0||_* + μ||S_0||_1.
In some embodiments, in the step S5, reshaping each column of the sparse matrix S_0 into a matrix of the same dimensions as X_i, with the matrix into which the i-th column of S_0 is reshaped denoted X̂_i, i taking values from 1 to M, where the reshaped matrices carry the difference-point information of each image, specifically comprises the following steps: for the i-th column S_0(i) of the sparse matrix S_0, dividing S_0(i) into b column vectors in top-to-bottom order, b being the horizontal-axis length of image X_i; each of the b column vectors has a elements, a being the vertical-axis length of image X_i; the b column vectors are combined in dividing order into the corresponding matrix X̂_i, with i taking values from 1 to M.
In some embodiments, in the step S6, filtering noise points out of X̂_i, merging connected difference points of the same region into a difference region, and labeling each retained difference region in sequence as X̂_i^1 to X̂_i^{L_i}, or X̂_i^l, specifically comprises the following steps:
setting an image prior threshold T, with the following setting criterion: T is the minimum number of resolution cells corresponding to the size of the target type of interest to be detected in the M registered images;
for the i-th (i = 1, ..., M) reshaped matrix X̂_i, executing the following three steps in order:
s1: performing Canny-operator edge detection on X̂_i, and identifying the closed curves among the detected edges;
s2: filling the edges belonging to closed curves and counting their pixel points; when the number of pixel points inside a filled closed curve is greater than or equal to the threshold T, all difference points inside the closed curve are defined as one difference region;
s3: for non-closed curves, and for closed curves whose pixel count is less than the threshold T, setting the edge information or difference-region information to zero, i.e. filtering them out; the difference regions retained after filtering are recorded in sequence as X̂_i^1 to X̂_i^{L_i}, or X̂_i^l, which denotes the l-th difference region in the i-th image, with l taking values from 1 to L_i, where L_i is the number of difference regions obtained in the i-th image.
In some embodiments, in the step S7, computing each difference region to obtain the center coordinates (m_{i_l}, n_{i_l}), length length_{i_l} and height height_{i_l} corresponding to the difference region X̂_i^l, and recording the difference-region information of the region as x_{i_out_l} = [m_{i_l}, n_{i_l}, length_{i_l}, height_{i_l}], specifically comprises the following steps:
computing each difference region X̂_i^l (i = 1, ..., M, l = 1, ..., L_i) to obtain the maximum and minimum of the horizontal axis of the difference region, b_{i_l_max} and b_{i_l_min}, and the maximum and minimum of the vertical axis of the difference region, a_{i_l_max} and a_{i_l_min};
computing the center coordinates (m_{i_l}, n_{i_l}), length length_{i_l} and height height_{i_l} of the difference region X̂_i^l as defined below:
m_{i_l} = (b_{i_l_max} + b_{i_l_min})/2
n_{i_l} = (a_{i_l_max} + a_{i_l_min})/2
length_{i_l} = b_{i_l_max} − b_{i_l_min}
height_{i_l} = a_{i_l_max} − a_{i_l_min}
recording the difference-region information of the difference region X̂_i^l as x_{i_out_l}:
x_{i_out_l} = [m_{i_l}, n_{i_l}, length_{i_l}, height_{i_l}]
The invention has the following beneficial effects:
1) The prior art mainly relies on image pixel subtraction, which achieves good difference-region detection under ideal conditions. In practical applications, however, differences in acquisition viewing angle over the same region, non-ideal conditions such as illumination, and differences in the choice of reference among multiple images rapidly degrade detection performance and produce a large number of clutter false alarms, making the result unusable for judging the difference-region information of subsequent images. The invention exploits the strong correlation of the background clutter, in structure and in color-information distribution, among the images to adaptively extract the clutter background of multiple images simultaneously, thereby reducing the false alarms caused by non-ideal errors.
2) Traditional methods are mainly based on differencing two images; if multiple images must be compared, traversal is required, and different choices of reference image yield widely differing results. The method, computing power permitting, exploits the low-rank matrix property to process a large number of images simultaneously; a single overall solving process avoids problems such as data traversal, the common part of all images is obtained as the background image without selecting a reference image, and the obtained difference regions are valid relative to all the other images.
3) The invention adds no hardware constraints in its implementation, has a wide practical range, and can be applied to changing platforms, multi-view and multi-temporal observation conditions, unmanned intelligent systems, and the like.
Drawings
FIG. 1 is a flow chart of an image difference detection method based on robust principal component analysis of the present invention;
FIG. 2a is a scene image A acquired by the UAV platform;
FIG. 2b is an image B of the same scene acquired by the UAV platform several days after image A;
FIG. 3 shows the result of feature point registration of image A and image B;
FIG. 4a is an image after registration of image A;
FIG. 4B is a registered image of image B relative to image A;
FIG. 5 is a graph of the image difference detection results obtained by the robust principal component analysis employed in the present invention;
FIG. 6 shows the edge detection result of the image difference points;
FIG. 7 shows the filling result of the closed region after edge detection;
FIG. 8a shows the result of the non-occlusion region removal and noise point filtering for the filling result;
FIG. 8b is a diagram illustrating the difference region result of FIG. 8a labeled in the registered image A;
FIG. 9a is a diagram illustrating a difference region detection result obtained by a conventional method;
fig. 9b is a diagram illustrating the difference region result of fig. 9a in the registered image a.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, it is a flowchart of an image difference detection method based on robust principal component analysis of the present invention. The image difference detection method based on the robust principal component analysis method comprises the following steps:
step 101, acquiring a plurality of images of the same scene at different time and different viewing angles by using an optical camera, wherein the number of the acquired images is M.
An optical camera acquires a plurality of images of the same scene at different times and from different viewing angles; the number of acquired images is M, and the image information of each image is expressed as matrices X_org_1 to X_org_M, or X_org_i, i taking values from 1 to M, where X_org_i denotes the image information of the i-th image acquired by the optical camera.
The data format of the image information X_org_i (i = 1, ..., M) of each image is a three-dimensional array a × b × c, where the first dimension a is the length of the geometric vertical axis of the image, the second dimension b is the length of the geometric horizontal axis of the image, and the third dimension c is the image color information.
Step 102, performing geometric registration on the M images to obtain registered image data, expressed as matrices X_i.
To register the M images X_org_1 to X_org_M, feature points of the images are first selected; taking the first image X_org_1 as reference, the projection matrix from each other image to the first image is computed using the SIFT operator or an improved variant thereof, and the corresponding geometric transformation is applied; the registered data are expressed as matrices X_1 to X_M, or X_i, where X_i denotes the registered image data of the i-th image and i takes values from 1 to M.
The matrix X_i of registered image data has the same dimensions as the matrix X_org_i of pre-registration image data; its data format is still a three-dimensional array a × b × c, and the geometric relationships and scale information of the objects reflected at corresponding coordinates of the images are consistent, that is, the viewing angle and scene size agree. However, when part of the original image data is lost during the geometric rotation of projecting an image onto the first image X_org_1, the corresponding part of the registered image data is zero.
In some optional implementations of this embodiment, any one of the M images may be selected as the reference for the geometric registration; the other images are sequentially registered geometrically using the SIFT operator or an improved variant thereof and transformed to a viewing angle and scene size consistent with the selected reference image, and the registered image data are recorded as matrices X_1 to X_M.
Step 103, column-vectorizing the matrices X_1 to X_M respectively to obtain column vectors η_1 to η_M, then combining the column vectors η_1 to η_M into a matrix X, X = [η_1, ..., η_M].
The matrix X_i (i = 1, ..., M) has length a in the vertical-axis dimension and length b in the horizontal-axis dimension; that is, the matrix X_i has a rows and b columns. The column vectorization of the matrix X_i comprises: extracting each column of X_i (i = 1, ..., M) and stacking the columns, in their order within X_i, into a single column to form the column vector η_i (i = 1, ..., M). Clearly, the a × b-dimensional matrix X_i becomes, after column vectorization, the ab × 1-dimensional column vector η_i, so that the dimension of the matrix X = [η_1, ..., η_M] is ab × M.
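The column vectorization of step 103 can be sketched in NumPy as follows (a sketch, not the patented implementation; grayscale a × b images are assumed for simplicity, and the function name is illustrative):

```python
import numpy as np

def images_to_matrix(images):
    """Stack M registered a-by-b images as columns of an (a*b) x M matrix X.

    Each image is column-vectorized: the image's columns are read
    top-to-bottom, left-to-right (Fortran order), so column i of the
    result is eta_i = vec(X_i).
    """
    # Fortran-order ravel concatenates the image's columns in sequence.
    return np.stack([img.ravel(order="F") for img in images], axis=1)

# Example: M = 3 small 4x5 "images" of the same scene.
imgs = [np.arange(20, dtype=float).reshape(4, 5) + i for i in range(3)]
X = images_to_matrix(imgs)
print(X.shape)  # (20, 3), i.e. ab x M
```

With nearly identical backgrounds, the columns of X are nearly equal, which is what makes the background part of X approximately rank-1.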
From the composition of the matrix X, X contains all the image information. If no object in the original scene changes across the different observed images, i.e. X_1 ≈ X_2 ≈ … ≈ X_M, where the approximation accounts for image-information deviations caused by platform jitter, noise, illumination, wind, viewing angle and other differences of the unmanned aerial vehicle platform, then the matrix X is a low-rank matrix of approximate rank 1. If the original scene changes between images, X may be considered to consist of two parts: a low-rank matrix arising from the static scene, and a sparse matrix containing the change regions (the change regions are sparse relative to the scene).
Step 104, decomposing the matrix X by the robust principal component analysis method to obtain the corresponding sparse matrix S_0 containing the difference-point information.
Robust principal component analysis (RPCA) is used to separate the changing and unchanged regions of the multiple images. In the robust principal component analysis method, the decomposition model of the matrix X is:
X = L_0 + S_0 + N_0
where L_0, S_0 and N_0 are the three sub-matrices after decomposition, S_0 is a sparse matrix, L_0 is a low-rank matrix, and N_0 represents the remaining noise.
Then L_0, S_0 and N_0 are extracted from the optimization model:
min ||L_0||_* + μ||S_0||_1
s.t. ||X − L_0 − S_0||_F < δ
where ||·||_1 denotes the 1-norm, ||·||_F the F-norm, and ||·||_* the nuclear norm; specifically, ||L_0||_* is the sum of all singular values of L_0; δ is a set constant, and μ is a weight factor with μ > 0. In the above formula, min denotes minimization and s.t. abbreviates "subject to" ("constrained to"); the equation as a whole means: subject to the constraint ||X − L_0 − S_0||_F < δ, minimize the value of the objective function ||L_0||_* + μ||S_0||_1.
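Models of this form are commonly solved with the inexact augmented-Lagrange-multiplier (IALM) method for RPCA, which alternates singular-value thresholding for the low-rank part with soft-thresholding for the sparse part. A minimal NumPy sketch follows (an assumed generic solver, not the patent's exact algorithm; default parameter choices follow common IALM practice):

```python
import numpy as np

def shrink(A, tau):
    """Elementwise soft-thresholding (proximal operator of the 1-norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svd_shrink(A, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(X, lam=None, tol=1e-7, max_iter=500):
    """Split X into a low-rank part L0 and a sparse part S0 via inexact ALM."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common default weight
    norm_two = np.linalg.norm(X, 2)             # spectral norm
    Y = X / max(norm_two, np.abs(X).max() / lam)  # dual variable init
    mu = 1.25 / norm_two                        # penalty parameter
    rho = 1.5                                   # penalty growth factor
    S = np.zeros_like(X)
    norm_X = np.linalg.norm(X)
    for _ in range(max_iter):
        L = svd_shrink(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        R = X - L - S                           # residual (plays the role of N0)
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R) / norm_X < tol:
            break
    return L, S

# Synthetic check: rank-1 background plus a few large sparse "changes".
rng = np.random.default_rng(0)
L_true = np.outer(rng.standard_normal(60), rng.standard_normal(25))
S_true = np.zeros((60, 25))
S_true.flat[rng.choice(60 * 25, size=30, replace=False)] = 10.0
L_hat, S_hat = rpca(L_true + S_true)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))  # small
```

Here the nonzero entries of the recovered S_hat play the role of the difference points extracted from the image stack.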
Step 105, reshaping each column of the sparse matrix S_0 into a matrix of the same dimensions as X_i; the matrix into which the i-th column of S_0 is reshaped is denoted X̂_i, i taking values from 1 to M, and the reshaped matrices carry the difference-point information of each image.
As the analysis shows, the S_0 obtained in step 104 contains the difference-point information, each of its columns corresponding to the distribution of change points of one image. By the inverse of the transformation of step 103, i.e. reshaping each column of S_0 into a matrix of the same dimensions as X_i, the difference-point information of each image is obtained. Specifically, for the i-th column S_0(i) of the sparse matrix S_0, S_0(i) is divided into b column vectors in top-to-bottom order, b being the horizontal-axis length of image X_i; each of the b column vectors has a elements, a being the vertical-axis length of image X_i; the b column vectors are combined in dividing order into the corresponding matrix X̂_i, with i taking values from 1 to M.
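The column-to-image reshaping of step 105 is the exact inverse of the vectorization of step 103; a NumPy sketch (assuming grayscale a × b images, with an illustrative function name):

```python
import numpy as np

def column_to_image(s_col, a, b):
    """Reshape one ab-length column of S0 back into an a-by-b image.

    Fortran order restores the b top-to-bottom segments of the column
    as the image's columns, inverting eta_i = vec(X_i).
    """
    return np.asarray(s_col).reshape((a, b), order="F")

# Round trip: vectorizing then reshaping recovers the original image.
img = np.arange(12.0).reshape(3, 4)
col = img.ravel(order="F")                              # eta_i
print(np.array_equal(column_to_image(col, 3, 4), img))  # True
```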
Step 106, filtering noise points out of X̂_i, merging connected difference points of the same region into difference regions, and labeling each retained difference region in sequence as X̂_i^l.
An image prior threshold T is set, with the following setting criterion: T is the minimum number of resolution cells corresponding to the size of the target type of interest to be detected in the M registered images.
For the i-th (i = 1, ..., M) reshaped matrix X̂_i, the following three steps are executed in order:
s1: Canny-operator edge detection is performed on X̂_i to obtain the edge information of the difference points, and the closed curves among the edges are identified from this edge information. Because difference points caused by ground-feature changes usually have a certain geometric structure and can be clustered into closed areas, the edge information can be extracted and the non-closed curves filtered out.
s2: the edges belonging to closed curves are filled and their number of pixel points (area) is counted. Since a difference region caused by a ground-feature change is larger than the image difference points caused by clutter disturbances, when the number of pixel points (area) inside a filled closed curve is greater than or equal to the threshold T, all difference points inside the closed curve are defined as belonging to one difference region, i.e. the area enclosed by the closed curve is a difference region.
s3: for non-closed curves, and for closed curves whose pixel count (area) is less than the threshold T, the edge information or difference-region information is set to zero, i.e. filtered out; the difference regions retained after filtering are recorded in sequence as X̂_i^1 to X̂_i^{L_i}, or X̂_i^l, which denotes the l-th difference region in the i-th image, with l taking values from 1 to L_i, where L_i is the number of difference regions obtained in the i-th image and i takes values from 1 to M.
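The effect of step 106, keeping only clustered difference points whose region reaches the prior threshold T, can be approximated with plain connected-component labeling on the thresholded difference map. The sketch below is a simplified stand-in using only NumPy and the standard library, not the patent's Canny-plus-fill procedure; the amplitude threshold `amp_thresh` is an assumed parameter for deciding which entries of X̂_i count as difference points:

```python
import numpy as np
from collections import deque

def difference_regions(S_img, amp_thresh, T):
    """Group significant difference points into 4-connected regions and
    discard regions with fewer than T pixels (the prior threshold)."""
    mask = np.abs(S_img) > amp_thresh
    labels = np.zeros(mask.shape, dtype=int)
    regions, next_label = [], 1
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        # BFS flood fill over the 4-neighbourhood.
        queue, pixels = deque([start]), []
        labels[start] = next_label
        while queue:
            r, c = queue.popleft()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = next_label
                    queue.append((rr, cc))
        if len(pixels) >= T:   # keep only regions of at least T cells
            regions.append(pixels)
        next_label += 1
    return regions

# A 6x6 sparse map with one 4-pixel block and one isolated noise point.
S_img = np.zeros((6, 6))
S_img[1:3, 1:3] = 5.0   # genuine difference region
S_img[5, 5] = 5.0       # noise point, filtered by T
print(len(difference_regions(S_img, amp_thresh=1.0, T=2)))  # 1
```

The returned pixel lists correspond to the retained regions X̂_i^1, ..., X̂_i^{L_i}.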
Step 107, computing each difference region to obtain the center coordinates (m_{i_l}, n_{i_l}), length length_{i_l} and height height_{i_l} corresponding to the difference region X̂_i^l, and recording the difference-region information of the region as x_{i_out_l} = [m_{i_l}, n_{i_l}, length_{i_l}, height_{i_l}].
Each difference region X̂_i^l (i = 1, ..., M, l = 1, ..., L_i) is computed to obtain the maximum and minimum of the horizontal axis of the difference region, b_{i_l_max} and b_{i_l_min}, and the maximum and minimum of the vertical axis of the difference region, a_{i_l_max} and a_{i_l_min}.
The center coordinates (m_{i_l}, n_{i_l}), length length_{i_l} and height height_{i_l} of the difference region X̂_i^l are computed as defined below:
m_{i_l} = (b_{i_l_max} + b_{i_l_min})/2
n_{i_l} = (a_{i_l_max} + a_{i_l_min})/2
length_{i_l} = b_{i_l_max} − b_{i_l_min}
height_{i_l} = a_{i_l_max} − a_{i_l_min}
The difference-region information of the difference region X̂_i^l is recorded as x_{i_out_l}:
x_{i_out_l} = [m_{i_l}, n_{i_l}, length_{i_l}, height_{i_l}]
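The formulas of step 107 amount to an axis-aligned bounding box over each region's pixel coordinates; a sketch (the helper name is hypothetical, and pixels are assumed to be (vertical a, horizontal b) coordinate pairs):

```python
import numpy as np

def region_info(pixels):
    """Return [m, n, length, height] for one difference region, following
    the center/length/height definitions of step 107."""
    a_coords = np.array([p[0] for p in pixels])  # vertical-axis values
    b_coords = np.array([p[1] for p in pixels])  # horizontal-axis values
    b_min, b_max = b_coords.min(), b_coords.max()
    a_min, a_max = a_coords.min(), a_coords.max()
    m = (b_max + b_min) / 2                      # horizontal center m_{i_l}
    n = (a_max + a_min) / 2                      # vertical center n_{i_l}
    return [float(m), float(n), int(b_max - b_min), int(a_max - a_min)]

# Region covering rows 2..4, columns 3..7:
pixels = [(r, c) for r in range(2, 5) for c in range(3, 8)]
print(region_info(pixels))  # [5.0, 3.0, 4, 2]
```

Stacking these 4-vectors row-wise for the L_i regions of image i yields the L_i × 4 information matrix of step 108.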
Step 108, using x_{i_out_l} to obtain the difference-region information matrix X_i^out of the i-th image, and labeling the difference regions corresponding to the difference-region information matrix on the registered image X_i.
Using x_{i_out_l}, the difference-region information matrix X_i^out of the i-th image is obtained, with data dimension L_i × 4; according to this difference-region information matrix, the corresponding difference regions are further marked on the originally registered image X_i. The labeled difference regions are the parts where each image differs from the background image, the background image being the common part of the M images (the stationary background in the scene).
From the above analysis it can be seen that the invention mainly solves the problem of detecting the difference regions of multiple images under non-ideal conditions such as different viewing angles, different time phases and different illumination. The invention vectorizes the data of multiple images and assembles the vectorized data into a new matrix; within this new matrix, the part formed by the static background of the scene, which has strong color and structural correlation across the images, can be regarded as a low-rank matrix, while the part formed by the difference regions is a sparse matrix.
The advantages of the present invention can be illustrated by the following measured data:
The same scene was observed with an unmanned aerial vehicle platform, and the two images obtained are shown in fig. 2 (two images are used here for clarity of presentation; the method of the invention also applies to comparisons of more than two images). The images of fig. 2a and fig. 2b were acquired two days apart, with identical acquisition devices. It can be seen that there is a certain viewing-angle difference between the two images and that their illumination conditions differ markedly. Comparing fig. 2a and fig. 2b, many ground-object changes can be seen, such as vehicles on the road.
To enable difference detection, the images must first be registered. Image feature-point matching is usually performed with the SIFT operator or one of its improved algorithms; as shown in fig. 3, feature points on objects with distinctive features, such as houses and automobiles, match well. After the transformation matrix between the images is obtained from the feature points, a projective transformation is applied. Here image A is taken as the registration reference, so image B is projectively transformed; the transformed image is shown in fig. 4b. Fig. 4a shows image A after registration; since image A is registered with reference to itself, no projection is applied. Fig. 4b shows image B after the projective transformation based on the feature-point registration. The transformed image B agrees with image A in geometric relationship, but because of the rotation and warping, part of the image data is missing in the upper-left and upper-right corners, and part of the original data in the upper-left, upper-right and lower-right corners is lost. To register image A completely with the transformed image B, the same data-loss processing is applied to image A, as shown in the upper-left and upper-right corners of fig. 4a.
The method of the present invention is then applied: the multiple images are processed with the robust principal component analysis method to obtain a sparse matrix containing the difference regions. The first column of the sparse matrix is taken out and reshaped back into an image with the same dimensions as the original image A; the resulting image, which is the difference detection result, is shown in fig. 5. Comparing this image with the original image A shows that the regions it contains consist mainly of the difference regions between image A and image B, which verifies the effectiveness of the method of the invention.
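The robust principal component analysis decomposition used above can be sketched with the inexact augmented Lagrange multiplier method, a standard solver for the closely related principal component pursuit problem (X ≈ L_0 + S_0). The parameter choices below (λ = 1/√max(m, n), growing penalty μ) are conventional defaults from the RPCA literature, not values specified by the patent:

```python
import numpy as np

def shrink(M, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, lam=None, mu=None, rho=1.5, n_iter=100, tol=1e-7):
    """Decompose X into a low-rank part L and a sparse part S by
    alternating singular-value thresholding and element-wise shrinkage,
    with an increasing penalty parameter mu (inexact ALM)."""
    m, n = X.shape
    norm_X = np.linalg.norm(X)
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # conventional sparsity weight
    if mu is None:
        mu = 1.25 / (np.linalg.norm(X, 2) + 1e-12)
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding of X - S + Y/mu
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: element-wise shrinkage
        S = shrink(X - L + Y / mu, lam / mu)
        # Dual update and penalty growth
        Y = Y + mu * (X - L - S)
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(X - L - S) <= tol * norm_X:
            break
    return L, S
```

After decomposition, each column of S corresponds to one image's difference points and can be reshaped back to image dimensions.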
To further improve the difference-region detection performance, the result is post-processed. First, edge information is extracted from the detected difference regions, as shown in fig. 6. Second, difference points caused by ground-object changes usually have a certain geometric structure and cluster into closed areas, whose edge information forms closed curves; closed curves containing 150 or more pixel points are therefore filled, as shown in fig. 7. Finally, non-closed edge curves are filtered out, as are difference regions whose closed curves contain fewer than 150 pixels (the threshold is set to 150 here because this is roughly the number of resolution units occupied by one third of a typical car, so difference changes of car-sized targets can still be detected and extracted). The filtered image is shown in fig. 8a. To better verify the detection result, the result of fig. 8a is re-marked on the registered image A, as shown by the solid ellipse markings in fig. 8b; it can be seen that all difference regions are detected and marked. For further comparison, fig. 9a shows the difference-region detection result of a conventional frame-difference method, and fig. 9b marks the result of fig. 9a on the registered image A, with erroneous results framed by dashed lines. Obvious missed detections and false detections are visible, caused mainly by disturbances of image color arising from the differing viewing angles of the unmanned aerial vehicle platform.
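The size-based filtering with threshold T = 150 can be sketched with a plain connected-component pass; this stands in for the patent's closed-curve filling step (Canny detection plus filling is not reproduced here), and the 4-connectivity and function name are assumptions:

```python
import numpy as np
from collections import deque

def filter_regions(diff_mask, threshold=150):
    """Keep only connected difference regions with at least `threshold`
    pixels (T = 150 in the measured example, about one third of a
    typical car); smaller components are treated as noise and zeroed."""
    h, w = diff_mask.shape
    visited = np.zeros((h, w), dtype=bool)
    kept = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if diff_mask[i, j] and not visited[i, j]:
                # Breadth-first search over one 4-connected component
                comp, q = [], deque([(i, j)])
                visited[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and diff_mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= threshold:   # region large enough: keep it
                    for y, x in comp:
                        kept[y, x] = True
    return kept
```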
In summary, the invention exploits the strong color and geometric-structure correlation of the static scene among the multiple images, reduces disturbances caused by factors such as viewing angle, time phase, platform and acquisition conditions, reduces missed and false detections, improves adaptive target detection performance, and achieves robust difference-region detection.
It will be apparent to those skilled in the art that various changes and modifications can be made in the invention without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (4)

1. An image difference detection method based on a robust principal component analysis method, characterized by comprising the following steps:
S1: acquiring a plurality of images of the same scene at different times and from different viewing angles with an optical camera, the number of acquired images being M; the data format of the image information of each image is a three-dimensional array comprising the geometric vertical-axis length, the geometric horizontal-axis length and the image color information;
S2: performing geometric registration on the M images: taking the first image as the reference, sequentially registering the other images using the SIFT operator or an improved algorithm thereof, transforming them to the viewing angle and scene size of the first image, and expressing the registered image data as matrices X_1 to X_M, or X_i with i taking values from 1 to M, where X_i denotes the matrix of registered image data of the i-th image;
S3: performing column vectorization on the matrices X_1 to X_M respectively to obtain column vectors η_1 to η_M, and combining the column vectors η_1 to η_M into a matrix X = [η_1, ..., η_M];
S4: decomposing the matrix X with the robust principal component analysis method to obtain the corresponding sparse matrix S_0 containing the difference point information;
S5: reshaping each column of the sparse matrix S_0 into a matrix of the same dimension as X_i, i taking values from 1 to M; the reshaped matrices are the difference point information of the respective images;
S6: filtering noise points from the difference point information of each image, merging difference points connected within the same region into a difference region, and marking each filtered difference region in sequence, the l-th difference region in the i-th image being recorded accordingly, where l takes values from 1 to L_i and L_i is the number of difference regions obtained in the i-th image; this step comprises:
S61: performing Canny operator edge detection on the difference point information of each image, and confirming the closed curves among the detected edges;
S62: filling the edges belonging to closed curves and counting their pixel points; when the number of pixel points within a filled closed curve is greater than or equal to a threshold T, defining all difference points within the closed curve as a difference region; the threshold T is set according to the following criterion: T is the minimum number of resolution units corresponding to the size of the type of target of interest to be detected in the M registered images;
S63: for non-closed curves, and for closed curves whose number of pixel points is less than the threshold T, setting the edge information or difference region information to zero, i.e. filtering them out, and recording the remaining difference regions retained after filtering in sequence, the l-th difference region in the i-th image being recorded accordingly, where l takes values from 1 to L_i and L_i is the number of difference regions obtained in the i-th image;
S7: calculating, for each difference region, the corresponding center coordinate (m_i_l, n_i_l), length length_i_l and height height_i_l, and recording the difference region information of the region as x_i_out_l = [m_i_l, n_i_l, length_i_l, height_i_l] (i_out_l = i_out_1, ..., i_out_L_i; i = 1, ..., M);
S8: using x_i_out_l (i_out_l = i_out_1, ..., i_out_L_i; i = 1, ..., M) to obtain the difference region information matrix of the i-th image (i = 1, ..., M), and marking the difference regions corresponding to the difference region information matrix on the registered image X_i.
2. The method according to claim 1, wherein in step S4, decomposing the matrix X with the robust principal component analysis method to obtain the corresponding sparse matrix S_0 containing the difference point information specifically comprises:
the decomposition model of the matrix X is X = L_0 + S_0 + N_0, where L_0, S_0 and N_0 are the three sub-matrices after decomposition: L_0 is a low-rank matrix, N_0 represents residual noise, and S_0 is a sparse matrix;
extracting L_0, S_0 and N_0 from the optimization model:
min ||L_0||_* + μ||S_0||_1
s.t. ||X - L_0 - S_0||_F < δ
where ||·||_1 denotes the 1-norm, ||·||_F denotes the F-norm, ||·||_* denotes the nuclear norm, δ is a set constant, μ represents a weight factor with μ > 0, min denotes minimization, and s.t. is the abbreviation of "subject to", expressing a constraint; the whole formula means that, under the constraint ||X - L_0 - S_0||_F < δ, the value of the objective function ||L_0||_* + μ||S_0||_1 is minimized.
3. The method according to claim 1, wherein in step S5, reshaping each column of the sparse matrix S_0 into a matrix of the same dimension as X_i, i taking values from 1 to M, the reshaped matrices being the difference point information of the respective images, specifically comprises:
for the i-th column S_0(i) of the sparse matrix S_0, dividing S_0(i) into b column vectors in order from top to bottom, where b is the horizontal-axis length corresponding to the image X_i and the number of elements of each of the b column vectors is a, where a is the vertical-axis length corresponding to the image X_i; and combining the b column vectors, in the order of division, into the corresponding matrix, i taking values from 1 to M.
4. The method according to claim 1, wherein in step S7, calculating, for each difference region, the corresponding center coordinate (m_i_l, n_i_l), length length_i_l and height height_i_l, and recording the difference region information of the region as x_i_out_l = [m_i_l, n_i_l, length_i_l, height_i_l], specifically comprises:
for each difference region, obtaining the maximum and minimum values of the region on the horizontal axis, b_i_l_max and b_i_l_min, and on the vertical axis, a_i_l_max and a_i_l_min;
calculating the center coordinate (m_i_l, n_i_l), length length_i_l and height height_i_l of the difference region as defined below:
m_i_l = (b_i_l_max + b_i_l_min)/2
n_i_l = (a_i_l_max + a_i_l_min)/2
length_i_l = b_i_l_max - b_i_l_min
height_i_l = a_i_l_max - a_i_l_min
recording the difference region information of the difference region as x_i_out_l:
x_i_out_l = [m_i_l, n_i_l, length_i_l, height_i_l].
CN201710828732.3A 2017-09-14 2017-09-14 Image difference detection method based on robust principal component analysis method Active CN107705295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710828732.3A CN107705295B (en) 2017-09-14 2017-09-14 Image difference detection method based on robust principal component analysis method


Publications (2)

Publication Number Publication Date
CN107705295A CN107705295A (en) 2018-02-16
CN107705295B true CN107705295B (en) 2021-11-30

Family

ID=61172498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710828732.3A Active CN107705295B (en) 2017-09-14 2017-09-14 Image difference detection method based on robust principal component analysis method

Country Status (1)

Country Link
CN (1) CN107705295B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741377B (en) * 2018-11-30 2021-07-06 四川译讯信息科技有限公司 Image difference detection method
CN110782459B (en) * 2019-01-08 2021-02-19 北京嘀嘀无限科技发展有限公司 Image processing method and device
CN113189543B (en) * 2021-04-27 2023-07-14 哈尔滨工程大学 Interference suppression method based on motion compensation robust principal component analysis
CN113393498B (en) * 2021-05-26 2023-07-25 上海联影医疗科技股份有限公司 Image registration method, device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101369310B (en) * 2008-09-27 2011-01-12 北京航空航天大学 Robust human face expression recognition method
CN102509264B (en) * 2011-11-01 2013-08-14 武汉大学 Image-segmentation-based scanning image dedusting method
CN103901416B (en) * 2014-03-31 2016-06-29 西安电子科技大学 A kind of multichannel clutter suppression method based on steadiness factor method
US10755395B2 (en) * 2015-11-27 2020-08-25 Canon Medical Systems Corporation Dynamic image denoising using a sparse representation



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant