CN109584273B - Moving target detection method based on self-adaptive convergence parameters - Google Patents

Moving target detection method based on self-adaptive convergence parameters

Info

Publication number
CN109584273B
CN109584273B
Authority
CN
China
Prior art keywords
matrix
order
sparse
frame
temp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811389171.2A
Other languages
Chinese (zh)
Other versions
CN109584273A (en)
Inventor
曾操
刘清燕
李世东
朱圣棋
廖桂生
李力新
郑鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811389171.2A
Publication of CN109584273A
Application granted
Publication of CN109584273B
Legal status: Active; anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Abstract

The invention belongs to the technical field of image processing and discloses a moving target detection method based on adaptive convergence parameters. The method comprises the following steps: continuously acquiring infrared images of a region to be detected with a thermal infrared imager; adaptively adjusting a preset initial convergence parameter to obtain an adaptive convergence parameter; training the continuous infrared images with a robust principal component analysis method to obtain a sparse image corresponding to each frame of infrared image; and sequentially outputting the sparse image corresponding to each frame of infrared image to complete moving target detection. Because the preset initial convergence parameter is adaptively adjusted before the continuous infrared images are trained with the robust principal component analysis method, the number of iterations needed to obtain the sparse matrix corresponding to each frame of infrared image is reduced, the amount of computation is reduced, and the computational efficiency is improved.

Description

Moving target detection method based on self-adaptive convergence parameters
Technical Field
The invention relates to the technical field of image processing, in particular to a moving target detection method based on self-adaptive convergence parameters.
Background
Infrared sensing technology based on infrared sensors can work around the clock, so it is widely used in both military and civilian fields. Moving target detection has likewise long been an important subject of image processing and computer vision research and is widely applied in many fields. The temperature information of a moving target extracted by an infrared sensor is one of the important characteristics for distinguishing the moving target from a static scene; by processing the thermal infrared image formed from the temperature information of the target, the moving target and the static scene can be distinguished and moving target detection can be realized.
However, in a complex electromagnetic environment the background of a moving target is also complex, and detection and tracking of the moving target face the following challenges: (1) when the moving target is small, such as a micro rotor unmanned aerial vehicle or a person moving in a distant area, the target occupies only a few pixels of the infrared image after infrared imaging; (2) when the contrast between the moving target and the ambient temperature is small, non-target factors cause strong interference in the data after infrared imaging. To address these problems, scholars at home and abroad have carried out a series of studies and proposed the robust principal component analysis method, which decomposes the input data matrix corresponding to the continuous infrared images converted from a video into a low-rank matrix and a sparse matrix, the sparse matrix representing the moving target. However, the decomposition requires many iterations, so the amount of computation is large and the computational efficiency is low.
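To make the low-rank-plus-sparse decomposition concrete, the toy example below (purely illustrative and not taken from the patent) builds a small data matrix D from a rank-1 static background and a sparse moving target, which is exactly the structure D = A + E that robust principal component analysis tries to recover.

```python
import numpy as np

# Toy illustration of the decomposition behind robust PCA: a data matrix D built
# from a static (low-rank) background plus a sparse moving target, D = A + E.
rng = np.random.default_rng(0)
background_column = rng.integers(50, 60, size=16).astype(float)
A = np.tile(background_column[:, None], (1, 4))       # rank-1 "static scene", 16 x 4
E = np.zeros((16, 4))
for frame, pixel in enumerate([2, 5, 8, 11]):         # one hot pixel drifting across the frames
    E[pixel, frame] = 120.0
D = A + E                                             # what the imager actually provides
print(np.linalg.matrix_rank(A), int((E != 0).sum()))  # rank-1 background, 4 sparse entries
```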
Disclosure of Invention
In view of the above, the invention provides a moving target detection method based on adaptive convergence parameters. For each region to be detected, the preset initial convergence parameter is adaptively adjusted before the continuous infrared images are trained with a robust principal component analysis method, so that the number of iterations and the amount of computation during training are reduced and the computational efficiency is improved.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a moving target detection method based on self-adaptive convergence parameters is provided, which comprises the following steps:
step 1, continuously collecting an infrared image of a region to be detected by using a thermal infrared imager.
Step 2, acquiring the initial 4 frames of infrared images collected by the thermal infrared imager, and adaptively adjusting a preset initial convergence parameter ε with these 4 frames to obtain an adaptive convergence parameter ε'.
Step 3, taking the adaptive convergence parameter ε' as the convergence parameter, training every n frames of continuous infrared images collected by the thermal infrared imager with a robust principal component analysis method to obtain the sparse matrix corresponding to each frame of infrared image in every n frames of continuous infrared images, where n is an integer and n ≥ 4.
Step 4, obtaining a sparse image corresponding to each frame of infrared image from the sparse matrix corresponding to that frame, and sequentially outputting the sparse images; in each sparse image, the region formed by the pixels whose gray value is not zero is the region where the moving target is located.
In the moving target detection method based on adaptive convergence parameters, infrared images of the region to be detected are first collected with a thermal infrared imager, and the preset initial convergence parameter is adaptively adjusted to obtain the adaptive convergence parameter. The continuous infrared images are then trained with a robust principal component analysis method to obtain the sparse matrix, and hence the sparse image, corresponding to each frame of infrared image. In each sparse image, the region formed by the pixels whose gray value is not zero is the region where the moving target is located; by outputting the sparse images in the order in which the infrared images were acquired, the motion trend of the moving target can be observed.
In the method, for each region to be detected the initial convergence parameter is adaptively adjusted with the initial 4 frames of infrared images to obtain the adaptive convergence parameter, and the adaptive convergence parameter is then used as the convergence parameter when the continuous infrared images are trained by robust principal component analysis to obtain the sparse matrix corresponding to each frame of infrared image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a moving object detection method based on adaptive convergence parameters according to an embodiment of the present invention;
FIG. 2 is a gray scale image of an infrared image of a region to be detected;
fig. 3 is a detection result diagram obtained by processing the 10 th frame, the 20 th frame, the 30 th frame and the 40 th frame of infrared images by using the method provided by the embodiment of the present invention, where fig. 3 (a) is a sparse image corresponding to the 10 th frame of infrared image, fig. 3 (b) is a sparse image corresponding to the 20 th frame of infrared image, fig. 3 (c) is a sparse image corresponding to the 30 th frame of infrared image, and fig. 3 (d) is a sparse image corresponding to the 40 th frame of infrared image;
fig. 4 is a sparse image and a local enlarged image thereof obtained by processing a certain frame of infrared image by using the method provided by the embodiment of the invention;
FIG. 5 is a graph showing the root mean square error and the operation time variation of the robust principal component analysis method under different convergence parameters.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a moving object detection method based on adaptive convergence parameters according to an embodiment of the present invention.
Referring to fig. 1, the moving object detection method based on the adaptive convergence parameter provided in the embodiment of the present invention includes the following steps:
step 1, continuously collecting an infrared image of a region to be detected by using a thermal infrared imager.
Step 2, acquiring the initial 4 frames of infrared images collected by the thermal infrared imager, and adaptively adjusting the preset initial convergence parameter ε with these 4 frames to obtain the adaptive convergence parameter ε'.
Specifically, the step 2 comprises the following steps:
(2.1) Initialization: acquire the input data matrix D corresponding to the initial 4 frames of infrared images. Let i denote the number of the iterative operation; E_i denotes the sparse matrix in the i-th iterative operation, E_0 is an m×4 matrix whose elements are all 0; A_i denotes the low-rank matrix in the i-th iterative operation, A_0 is an m×4 matrix whose elements are all 0; λ denotes the weight assigned to the sparse matrix (its defining formula is given as an image in the original); Y_i denotes the Lagrange multiplier matrix in the i-th iteration, Y_0 = D/max(||D||_2, λ^(-1)||D||_∞); μ_i denotes the penalty factor of Y_i in the i-th iteration, μ_0 = 1.0/||D||_2; ρ is the adjustment factor of the penalty factor μ, with 0.001 ≤ ρ ≤ 5; ε is the initial convergence parameter, ε = 1×10^(-7); the number of adjustments of the initial convergence parameter is denoted by I, with I = 0.
Here the input data matrix D corresponding to the initial 4 frames of infrared images is an m×4 matrix (its explicit form is given as an image in the original), H is the height of the image, W is the width of the image, H and W are both even, ||D||_2 denotes the two-norm of the input matrix D, ||D||_∞ denotes the infinity norm of the input matrix D, and max(·) denotes the maximum of the elements of the matrix in brackets.
(2.2) let i =1.
(2.3) Calculate the sparse matrix E_i in the i-th iterative operation (the update formula is given as an image in the original), where E_i is an m×4 matrix, B is an m×4 matrix, and each element of B is λ/μ_{i-1}.
(2.4) Calculate the intermediate process matrix Temp_i in the i-th iterative operation:
Temp_i = D − E_i + μ_{i-1}·Y_{i-1},
where Temp_i is an m×4 matrix.
(2.5) Perform singular value decomposition on the intermediate process matrix Temp_i:
(U_i, Σ_i, V_i) = svd(Temp_i),
where U_i is an m×m matrix denoting the left unitary eigenvector matrix obtained by singular value decomposition of Temp_i; V_i is a 4×4 matrix denoting the right unitary eigenvector matrix obtained by singular value decomposition of Temp_i; Σ_i is an m×4 matrix denoting the singular value matrix obtained by singular value decomposition of Temp_i; svd denotes singular value decomposition of a matrix.
(2.6) Calculate the low-rank matrix A_i in the i-th iterative operation:
A_i = U_i·(Σ_i − μ_i)·V_i^T,
where A_i is an m×4 matrix, the superscript T denotes the transpose of a matrix, μ_i here is an m×4 matrix, and each element of μ_i is 1/μ_{i-1} in the i-th iteration.
(2.7) Calculate the Lagrange multiplier matrix Y_i in the i-th iterative operation:
Y_i = Y_{i-1} + μ_{i-1}·(D − E_i − A_i),
where Y_i is an m×4 matrix.
Then calculate the relative error of dynamic/static separation quality corresponding to the i-th iterative operation, ξ_i = ||D − E_i − A_i||_1 / ||D||_1, and judge whether ξ_i is greater than ε: if ξ_i > ε, calculate μ_i = min(μ_{i-1}×ρ, μ_{i-1}×10^(-7)), add 1 to i, and execute step (2.3); if ξ_i ≤ ε, judge whether I equals 0: if I = 0, execute step (2.8); if I ≠ 0, execute step (2.9). Here min(·) denotes taking the minimum value.
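Sub-steps (2.3) to (2.7) are one iteration of an inexact augmented Lagrange multiplier scheme for robust principal component analysis. Because the update formulas for E_i and A_i appear only as images in the original, the sketch below assumes the standard soft-thresholding and singular-value-thresholding forms with the thresholds λ/μ_{i-1} and 1/μ_{i-1} that the text does state, and it reads the μ_{i-1}·Y_{i-1} term of Temp_i as Y_{i-1}/μ_{i-1} (consistent with the 1/μ_{i-1} elements used in (2.6)); treat it as a plausible reading rather than the patented formulas.

```python
import numpy as np

def rpca_iteration(D, A, E, Y, mu, lam):
    """One iteration in the spirit of sub-steps (2.3)-(2.7), using the standard
    inexact-ALM updates (assumed forms; the patent gives E_i and A_i only as images)."""
    # (2.3) sparse matrix: element-wise soft thresholding with threshold lam/mu
    X = D - A + Y / mu
    E = np.sign(X) * np.maximum(np.abs(X) - lam / mu, 0.0)

    # (2.4) intermediate matrix Temp (the mu*Y term of the text is read as Y/mu here)
    temp = D - E + Y / mu

    # (2.5) singular value decomposition (economy SVD; gives the same A as the full SVD)
    U, s, Vt = np.linalg.svd(temp, full_matrices=False)

    # (2.6) low-rank matrix: singular value thresholding with threshold 1/mu
    A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt

    # (2.7) Lagrange multiplier update and relative dynamic/static separation error
    Y = Y + mu * (D - E - A)
    xi = np.abs(D - E - A).sum() / np.abs(D).sum()   # ||.||_1 taken entry-wise
    return A, E, Y, xi
```

In a full loop the penalty μ would also be updated after each pass; the text writes μ_i = min(μ_{i-1}×ρ, μ_{i-1}×10^(-7)), whereas the usual inexact-ALM schedule grows μ toward an upper bound, which is what the complete solver sketched under step 3 below assumes.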
(2.8) Let E_S = E_i and calculate the initial value S of the root mean square error (the formula is given as an image in the original). Then add 1 to I, calculate ε' = 10ε, let ε take the value ε', and execute step (2.2).
Here ||·||_1 denotes the 1-norm of a matrix, sum(·) denotes the sum of all elements of the column vector in brackets, E_i(:,1) denotes the first column of the matrix E_i, E_S(:,1) denotes the first column of the matrix E_S, length(·) denotes the number of elements of the matrix in brackets, and ones(length(E_i(:,1)), 1) denotes a length(E_i(:,1)) × 1 matrix whose elements are all 1.
(2.9) Calculate the root mean square error RMSE_i corresponding to the sparse matrix E_i in the i-th iterative operation (the formula is given as an image in the original), and calculate |RMSE_i − S|. If |RMSE_i − S| is less than 1, then ε' = 10ε; let ε take the value ε', add 1 to I, and execute step (2.2). If |RMSE_i − S| is greater than or equal to 1, then ε' = 0.1ε, and the adaptive convergence parameter ε' is obtained.
It should be noted that the root mean square error RMSE refers to the root mean square error between the sparse matrix calculated under a given convergence parameter and the effective sparse matrix; in the present invention, the sparse matrix calculated when the number of adjustments of the initial convergence parameter is 0 is taken as the effective sparse matrix.
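Taken together, sub-steps (2.2) to (2.9) amount to the following adjustment loop: solve the problem once with the initial ε to obtain the reference (effective) sparse matrix, then repeatedly multiply ε by 10 and re-solve until the root mean square error against the reference drifts by 1 or more, and finally back off by a factor of 10. The sketch below assumes a helper rpca_solve(D, eps) that runs the iteration of sub-steps (2.2) to (2.7) until ξ ≤ eps and returns the sparse matrix (one such helper is sketched under step 3), and it reads the RMSE formulas, which appear only as images, as a plain RMSE between the first columns of the two sparse matrices; both readings are assumptions.

```python
import numpy as np

def column_rmse(E, E_ref):
    """Assumed reading of the RMSE used in (2.8)/(2.9): root mean square difference
    between the first columns of two sparse matrices."""
    d = E[:, 0] - E_ref[:, 0]
    return np.sqrt(np.sum(d ** 2) / d.size)

def adapt_epsilon(D, rpca_solve, eps_init=1e-7, max_adjustments=10):
    """Sub-steps (2.2)-(2.9): enlarge the convergence parameter tenfold while the sparse
    result stays close (|RMSE - S| < 1) to the reference obtained with eps_init, then
    back off by a factor of ten. rpca_solve(D, eps) must return the sparse matrix
    computed with tolerance eps."""
    eps = eps_init
    E_ref = rpca_solve(D, eps)             # adjustment count I = 0: "effective" sparse matrix
    S = column_rmse(E_ref, E_ref)          # initial RMSE value (zero under this reading)
    for _ in range(max_adjustments):       # guard against endless enlargement
        eps_candidate = 10.0 * eps         # epsilon' = 10 * epsilon, I <- I + 1
        E = rpca_solve(D, eps_candidate)
        if abs(column_rmse(E, E_ref) - S) >= 1.0:
            return 0.1 * eps_candidate     # i.e. the last acceptable epsilon
        eps = eps_candidate
    return eps                             # fall back to the last accepted value
```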
Preferably, the input data matrix D is generated as follows:
For l frames of continuous infrared images, acquire the data matrices D_1, D_2, …, D_l corresponding to the individual frames. Extract the data of the even rows and even columns of the data matrix corresponding to each frame to obtain the down-sampled data matrices D_1', D_2', …, D_l'. Stretch the down-sampled data matrix corresponding to each frame into a vector column by column to obtain the column vectors d_1, d_2, …, d_l corresponding to the down-sampled data matrices, and then form the input data matrix D = [d_1, d_2, …, d_l].
Here l ≥ 4, the elements of each data matrix are the pixel values at the corresponding positions of the corresponding frame of infrared image, the data matrices D_1, D_2, …, D_l all have order H×W, the down-sampled data matrices have order (H/2)×(W/2), each column vector d_p has order m×1, and the input data matrix D has order m×l.
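Read concretely, this construction can be sketched as follows; the "even rows and even columns" are taken here in the 1-based numbering of the text (rows and columns 2, 4, …), and the column-wise stretching uses Fortran order, both of which are assumptions about the intended indexing.

```python
import numpy as np

def build_input_matrix(frames):
    """Stack l infrared frames (each H x W, with H and W even) into the m x l input
    data matrix D: keep the even rows and columns (2x down-sampling), then stretch
    each down-sampled frame into a column vector, column by column."""
    cols = []
    for frame in frames:
        ds = np.asarray(frame, dtype=float)[1::2, 1::2]   # even rows/columns, 1-based
        cols.append(ds.flatten(order='F'))                # column-wise stretching
    return np.stack(cols, axis=1)                         # order m x l, m = (H/2) * (W/2)

# Example with synthetic 8 x 8 "frames":
if __name__ == "__main__":
    frames = [np.random.randint(0, 256, (8, 8)) for _ in range(4)]
    D = build_input_matrix(frames)
    print(D.shape)   # (16, 4)
```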
Step 3, taking the adaptive convergence parameter ε' as the convergence parameter, training every n frames of continuous infrared images collected by the thermal infrared imager with a robust principal component analysis method to obtain the sparse matrix corresponding to each frame of infrared image in every n frames of continuous infrared images; n is an integer and n ≥ 4.
Specifically, step 3 comprises the following steps:
(3.1) Initialization: acquire the input data matrix D corresponding to every n frames of continuous infrared images. Let k denote the number of the iterative operation; E_k denotes the sparse matrix in the k-th iterative operation, E_0 is an m×n matrix whose elements are all 0; A_k denotes the low-rank matrix in the k-th iterative operation, A_0 is an m×n matrix whose elements are all 0; λ denotes the weight assigned to the sparse matrix E_k in the inter-frame correlation algorithm (its defining formula is given as an image in the original); Y_k denotes the Lagrange multiplier matrix in the k-th iteration, Y_0 = D/max(||D||_2, λ^(-1)||D||_∞); μ_k denotes the penalty factor of Y_k in the k-th iteration, μ_0 = 1.0/||D||_2; ρ is the adjustment factor of μ_k, with 0.001 ≤ ρ ≤ 5.
(3.2) let k =1.
(3.3) Calculate the sparse matrix E_k in the k-th iterative operation (the update formula is given as an image in the original), where E_k is an m×n matrix, B is an m×n matrix, and each element of B is λ/μ_{k-1}.
(3.4) Calculate the intermediate process matrix Temp_k in the k-th iterative operation:
Temp_k = D − E_k + μ_{k-1}·Y_{k-1},
where Temp_k is an m×n matrix.
(3.5) Perform singular value decomposition on the intermediate process matrix Temp_k:
(U_k, Σ_k, V_k) = svd(Temp_k),
where U_k is an m×m matrix denoting the left unitary eigenvector matrix obtained by singular value decomposition of Temp_k; V_k is an n×n matrix denoting the right unitary eigenvector matrix obtained by singular value decomposition of Temp_k; Σ_k is an m×n matrix denoting the singular value matrix obtained by singular value decomposition of Temp_k.
(3.6) Calculate the low-rank matrix A_k in the k-th iterative operation:
A_k = U_k·(Σ_k − μ_k)·V_k^T,
where A_k is an m×n matrix, μ_k here is an m×n matrix, and each element of μ_k is 1/μ_{k-1} in the k-th iteration.
(3.7) Calculate the Lagrange multiplier matrix Y_k in the k-th iterative operation:
Y_k = Y_{k-1} + μ_{k-1}·(D − E_k − A_k),
where Y_k is an m×n matrix.
(3.8) Calculate the relative error of dynamic/static separation quality corresponding to the k-th iterative operation, ξ_k = ||D − E_k − A_k||_1 / ||D||_1. If ξ_k > ε', calculate μ_k = min(μ_{k-1}×ρ, μ_{k-1}×10^(-7)), add 1 to k, and then execute step (3.3); otherwise, execute step (3.9).
(3.9) Stop the iteration and output the sparse matrix E_k, where E_k is an m×n matrix.
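Assembling the initialization of (3.1) with the updates of (3.3) to (3.8) gives a complete solver of the kind assumed by the ε-adjustment sketch under step 2. As before, the E_k and A_k formulas and the weight λ appear only as images in the original, so the standard robust-PCA forms (soft thresholding, singular value thresholding, λ = 1/√max(m, n)) are assumed, and the penalty schedule μ ← min(ρ·μ, μ_max) replaces the literal min(μ·ρ, μ·10^(-7)) of the text; this is a sketch under those assumptions, not the patented implementation.

```python
import numpy as np

def rpca_solve(D, eps, rho=1.5, max_iter=500):
    """Inexact-ALM robust PCA on an m x n input data matrix D, iterated until the
    relative error xi <= eps; returns the sparse matrix E (spirit of (3.1)-(3.9))."""
    D = np.asarray(D, dtype=float)
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))                  # assumed form of the weight lambda
    two_norm = np.linalg.norm(D, 2)                 # ||D||_2
    Y = D / max(two_norm, np.abs(D).max() / lam)    # Y_0 = D / max(||D||_2, lambda^-1 ||D||_inf)
    mu = 1.0 / two_norm                             # mu_0 = 1.0 / ||D||_2
    mu_max = 1e7 * mu                               # assumed cap for the penalty schedule
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        # (3.3) sparse matrix: soft thresholding with threshold lam/mu
        X = D - A + Y / mu
        E = np.sign(X) * np.maximum(np.abs(X) - lam / mu, 0.0)
        # (3.4)-(3.6) SVD of the intermediate matrix, then singular value thresholding (1/mu)
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # (3.7) Lagrange multiplier update
        Y = Y + mu * (D - E - A)
        # (3.8) relative error of dynamic/static separation
        xi = np.abs(D - E - A).sum() / np.abs(D).sum()
        if xi <= eps:
            break                                   # (3.9) stop and output E
        mu = min(rho * mu, mu_max)                  # assumed penalty schedule
    return E
```

Used together with the earlier sketches, adapt_epsilon(build_input_matrix(frames[:4]), rpca_solve) would return the adaptive parameter ε' of step 2, and the same solver is then called with ε' on each batch of n frames.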
(3.10) Write the sparse matrix E_k as E_k = [e_1, e_2, …, e_n], and restore each column vector e_p to a matrix to obtain the sparse matrices E_1, E_2, …, E_n corresponding to the individual frames of infrared image.
Here e_p denotes the p-th column vector, p ∈ {1, 2, …, n}, and each restored matrix E_p has order h×w, h and w being the height and width of the down-sampled infrared image.
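Sub-step (3.10) simply undoes the column-wise stretching used to build D. A minimal sketch (the reshape order mirrors the Fortran-order stretching assumed in the D-construction sketch above):

```python
import numpy as np

def columns_to_images(E, h, w):
    """Split the m x n sparse matrix E into its n column vectors and restore each one
    to an h x w sparse image (column-wise reshape, the inverse of the stretching
    used when building the input data matrix D)."""
    m, n = E.shape
    assert m == h * w, "each column must hold exactly h*w pixels"
    return [E[:, p].reshape((h, w), order='F') for p in range(n)]
```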
and 4, obtaining a sparse image corresponding to each frame of infrared image according to the sparse matrix corresponding to each frame of infrared image, and further sequentially outputting the sparse image corresponding to each frame of infrared image, wherein the region formed by the pixel points with the gray scale value not being zero in each sparse image is the region where the moving target is located.
It should be noted that sequentially outputting the sparse images corresponding to the frames of infrared image means the following: every n frames of continuous infrared images have their own acquisition order, and each frame has its own corresponding sparse image; the sparse images are output continuously in the acquisition order of the corresponding frames, and the motion trend of the moving target can be learned by observing the region where the moving target is located.
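As one way of reading off the region where the moving target is located for display in acquisition order, the sketch below returns the bounding box of the non-zero pixels of a sparse image; this is merely a presentation choice for illustration, not something prescribed by the patent.

```python
import numpy as np

def target_bounding_box(sparse_image, tol=0.0):
    """Return (row_min, row_max, col_min, col_max) of the pixels whose gray value is
    not zero (|value| > tol), i.e. the region where the moving target is located;
    returns None if the sparse image contains no such pixels."""
    rows, cols = np.nonzero(np.abs(sparse_image) > tol)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```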
With the scheme of the embodiment of the invention, for each region to be detected the preset initial convergence parameter is adaptively adjusted with the initial 4 frames of infrared images to obtain the adaptive convergence parameter; taking the adaptive convergence parameter as the convergence parameter, the continuous infrared images are trained by robust principal component analysis to obtain the sparse matrix corresponding to each frame of infrared image.
Further, the above beneficial effects of the present invention are verified by simulation experiments as follows.
Experiment one:
taking a field behind a certain school teaching building as a to-be-detected area, continuously collecting an infrared image of the to-be-detected area by adopting a Mega MAG-62 online thermal infrared imager to obtain the infrared image of the to-be-detected area, and performing graying processing to obtain a series of grayscale images similar to those shown in FIG. 2. The on-line thermal infrared imager for the Mega MAG-62 adopts a 17-micron and 640 × 480 uncooled focal plane detector, the temperature resolution is 40mk, the size of each frame of an infrared image of a region to be detected is 640 × 480, and n =4 is taken.
40 frames of infrared images collected by the thermal infrared imager are selected and processed with the method provided by the invention to obtain the sparse image corresponding to each frame. The sparse images corresponding to the 10th, 20th, 30th and 40th frames of infrared image are shown in FIG. 3 (a), FIG. 3 (b), FIG. 3 (c) and FIG. 3 (d), respectively.
As can be seen from FIG. 3, the micro rotor unmanned aerial vehicle in the upper right corner has a tendency to move to the left. Although the micro rotor unmanned aerial vehicle is small, generates little heat and occupies few infrared image pixels, the moving target can still be detected with the method provided by the invention. The sparse image corresponding to one of the 40 frames of infrared image is extracted arbitrarily and the moving target in it is enlarged, as shown in FIG. 4. As can be seen from FIG. 4, the rotor unmanned aerial vehicle is not only detected, but its shape contour is substantially consistent with the real object and the target contour is clear.
Experiment two:
processing a certain 4 continuous infrared images in the 40 frames of infrared images collected by the thermal infrared imager by adopting a robust principal component analysis method under different convergence parameters to obtain a root mean square error and operation time change curve chart of the robust principal component analysis method under different convergence parameters, as shown in fig. 5. As can be seen from fig. 5, when epsilon is the initial convergence parameter, i.e., epsilon =1 × 10 -7 The calculation time is 0.923 seconds, and the root mean square error at the moment is taken as the initial root mean square error. Taking the absolute value of the difference between the initial root mean square error and the root mean square error less than 1 as an effective detection result, as can be seen from fig. 5, on the premise of ensuring that the detection result is effective, when epsilon is a convergence parameter adjusted by the method of the present invention, that is, epsilon =1 × 10 -2 The calculation time was 0.319 seconds. Obviously, the method provided by the embodiment of the invention can reduce the operation time and improve the operation efficiency on the premise of ensuring the effective detection result.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (3)

1. A moving target detection method based on self-adaptive convergence parameters is characterized by comprising the following steps:
step 1, continuously collecting an infrared image of a region to be detected by using a thermal infrared imager;
step 2, acquiring initial 4 frames of infrared images acquired by the thermal infrared imager, and performing adaptive adjustment on a preset initial convergence parameter epsilon by using the initial 4 frames of infrared images to obtain an adaptive convergence parameter epsilon';
the step 2 comprises the following steps:
(2.1) initialization: acquiring the input data matrix D corresponding to the initial 4 frames of infrared images; letting i denote the number of the iterative operation; E_i denoting the sparse matrix in the i-th iterative operation, E_0 being an m×4 matrix whose elements are all 0; A_i denoting the low-rank matrix in the i-th iterative operation, A_0 being an m×4 matrix whose elements are all 0; λ denoting the weight assigned to the sparse matrix (its defining formula is given as an image in the original); Y_i denoting the Lagrange multiplier matrix in the i-th iteration, Y_0 = D/max(||D||_2, λ^(-1)||D||_∞); μ_i denoting the penalty factor of Y_i in the i-th iteration, μ_0 = 1.0/||D||_2; ρ being the adjustment factor of the penalty factor μ, with 0.001 ≤ ρ ≤ 5; ε being the initial convergence parameter, ε = 1×10^(-7); the number of adjustments of the initial convergence parameter being denoted by I, I = 0;
wherein the input data matrix D corresponding to the initial 4 frames of infrared images is an m×4 matrix (its explicit form is given as an image in the original), H is the height of the image, W is the width of the image, H and W are both even, ||D||_2 denotes the two-norm of the input matrix D, ||D||_∞ denotes the infinity norm of the input matrix D, and max(·) denotes the maximum of the elements of the matrix in brackets;
(2.2) let i =1;
(2.3) calculating the sparse matrix E_i in the i-th iterative operation (the update formula is given as an image in the original), wherein E_i is an m×4 matrix, B is an m×4 matrix, and each element of B is λ/μ_{i-1};
(2.4) calculating the intermediate process matrix Temp_i in the i-th iterative operation:
Temp_i = D − E_i + μ_{i-1}·Y_{i-1},
wherein Temp_i is an m×4 matrix;
(2.5) performing singular value decomposition on the intermediate process matrix Temp_i:
(U_i, Σ_i, V_i) = svd(Temp_i),
wherein U_i is an m×m matrix denoting the left unitary eigenvector matrix obtained by singular value decomposition of Temp_i; V_i is a 4×4 matrix denoting the right unitary eigenvector matrix obtained by singular value decomposition of Temp_i; Σ_i is an m×4 matrix denoting the singular value matrix obtained by singular value decomposition of Temp_i; svd denotes singular value decomposition of a matrix;
(2.6) calculating the low-rank matrix A_i in the i-th iterative operation:
A_i = U_i·(Σ_i − μ_i)·V_i^T,
wherein A_i is an m×4 matrix, the superscript T denotes the transpose of a matrix, μ_i here is an m×4 matrix, and each element of μ_i is 1/μ_{i-1} in the i-th iteration;
(2.7) calculating the Lagrange multiplier matrix Y_i in the i-th iterative operation:
Y_i = Y_{i-1} + μ_{i-1}·(D − E_i − A_i),
wherein Y_i is an m×4 matrix;
calculating the relative error of dynamic/static separation quality corresponding to the i-th iterative operation, ξ_i = ||D − E_i − A_i||_1 / ||D||_1, and judging whether ξ_i is greater than ε: if ξ_i > ε, calculating μ_i = min(μ_{i-1}×ρ, μ_{i-1}×10^(-7)), adding 1 to i, and executing step (2.3); if ξ_i ≤ ε, judging whether I equals 0: if I = 0, executing step (2.8); if I ≠ 0, executing step (2.9);
wherein min(·) denotes taking the minimum value;
(2.8) letting E_S = E_i and calculating the initial value S of the root mean square error (the formula is given as an image in the original); adding 1 to I, calculating ε' = 10ε, letting ε take the value ε', and executing step (2.2);
wherein ||·||_1 denotes the 1-norm of a matrix, sum(·) denotes the sum of all elements of the column vector in brackets, and E_i(:,1) denotes the first column of the matrix E_i;
(2.9) calculating the root mean square error RMSE_i corresponding to the sparse matrix E_i in the i-th iterative operation (the formula is given as an image in the original), and calculating |RMSE_i − S|; if |RMSE_i − S| is less than 1, then ε' = 10ε, letting ε take the value ε', adding 1 to I, and executing step (2.2); if |RMSE_i − S| is greater than or equal to 1, then ε' = 0.1ε, and the adaptive convergence parameter ε' is obtained;
wherein E_S(:,1) denotes the first column of the matrix E_S, length(·) denotes the number of elements of the matrix in brackets, and ones(length(E_i(:,1)), 1) denotes a length(E_i(:,1)) × 1 matrix whose elements are all 1;
step 3, training each n frames of continuous infrared images acquired by the thermal infrared imager by using the self-adaptive convergence parameter epsilon' as a convergence parameter and adopting a robust principal component analysis method to obtain a sparse matrix corresponding to each frame of infrared image in each n frames of continuous infrared images; wherein n is an integer, and n is more than or equal to 4;
step 4, obtaining a sparse image corresponding to each frame of infrared image from the sparse matrix corresponding to that frame, and sequentially outputting the sparse images, wherein in each sparse image the region formed by the pixels whose gray value is not zero is the region where the moving target is located.
2. The method according to claim 1, wherein step 3 specifically comprises:
(3.1) initialization: acquiring the input data matrix D corresponding to every n frames of continuous infrared images; letting k denote the number of the iterative operation; E_k denoting the sparse matrix in the k-th iterative operation, E_0 being an m×n matrix whose elements are all 0; A_k denoting the low-rank matrix in the k-th iterative operation, A_0 being an m×n matrix whose elements are all 0; λ denoting the weight assigned to the sparse matrix E_k in the inter-frame correlation algorithm (its defining formula is given as an image in the original); Y_k denoting the Lagrange multiplier matrix in the k-th iteration, Y_0 = D/max(||D||_2, λ^(-1)||D||_∞); μ_k denoting the penalty factor of Y_k in the k-th iterative operation, μ_0 = 1.0/||D||_2; ρ being the adjustment factor of μ_k, with 0.001 ≤ ρ ≤ 5;
(3.2) let k =1;
(3.3) calculating the sparse matrix E_k in the k-th iterative operation (the update formula is given as an image in the original), wherein E_k is an m×n matrix, B is an m×n matrix, and each element of B is λ/μ_{k-1};
(3.4) calculating the intermediate process matrix Temp_k in the k-th iterative operation:
Temp_k = D − E_k + μ_{k-1}·Y_{k-1},
wherein Temp_k is an m×n matrix;
(3.5) performing singular value decomposition on the intermediate process matrix Temp_k:
(U_k, Σ_k, V_k) = svd(Temp_k),
wherein U_k is an m×m matrix denoting the left unitary eigenvector matrix obtained by singular value decomposition of Temp_k; V_k is an n×n matrix denoting the right unitary eigenvector matrix obtained by singular value decomposition of Temp_k; Σ_k is an m×n matrix denoting the singular value matrix obtained by singular value decomposition of Temp_k;
(3.6) calculating the low-rank matrix A_k in the k-th iterative operation:
A_k = U_k·(Σ_k − μ_k)·V_k^T,
wherein A_k is an m×n matrix, μ_k here is an m×n matrix, and each element of μ_k is 1/μ_{k-1} in the k-th iteration;
(3.7) calculating the Lagrange multiplier matrix Y_k in the k-th iterative operation:
Y_k = Y_{k-1} + μ_{k-1}·(D − E_k − A_k),
wherein Y_k is an m×n matrix;
(3.8) calculating the relative error of dynamic/static separation quality corresponding to the k-th iterative operation, ξ_k = ||D − E_k − A_k||_1 / ||D||_1; if ξ_k > ε', calculating μ_k = min(μ_{k-1}×ρ, μ_{k-1}×10^(-7)), adding 1 to k, and then executing step (3.3); otherwise, executing step (3.9);
(3.9) stopping the iteration and outputting the sparse matrix E_k, wherein E_k is an m×n matrix;
(3.10) writing the sparse matrix E_k as E_k = [e_1, e_2, …, e_n], and restoring each column vector e_p to a matrix to obtain the sparse matrices E_1, E_2, …, E_n corresponding to the individual frames of infrared image;
wherein e_p denotes the p-th column vector, p ∈ [1, 2, …, n], and each restored matrix E_p has order h×w, h and w being the height and width of the down-sampled infrared image.
3. The method according to claim 1 or 2, wherein the input data matrix D is generated as follows:
for l frames of continuous infrared images, acquiring the data matrices D_1, D_2, …, D_l corresponding to the individual frames of infrared image; extracting the data of the even rows and even columns of the data matrix corresponding to each frame of infrared image to obtain the down-sampled data matrices D_1', D_2', …, D_l'; stretching the down-sampled data matrix corresponding to each frame into a vector column by column to obtain the column vectors d_1, d_2, …, d_l corresponding to the down-sampled data matrices, and then forming the input data matrix D = [d_1, d_2, …, d_l];
wherein l ≥ 4, the elements of each data matrix are the pixel values at the corresponding positions of the corresponding frame of infrared image, the data matrices D_1, D_2, …, D_l all have order H×W, the down-sampled data matrices have order (H/2)×(W/2), each column vector d_p has order m×1, and the input data matrix D has order m×l.
CN201811389171.2A 2018-11-21 2018-11-21 Moving target detection method based on self-adaptive convergence parameters Active CN109584273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811389171.2A CN109584273B (en) 2018-11-21 2018-11-21 Moving target detection method based on self-adaptive convergence parameters


Publications (2)

Publication Number Publication Date
CN109584273A CN109584273A (en) 2019-04-05
CN109584273B true CN109584273B (en) 2022-11-18

Family

ID=65923213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811389171.2A Active CN109584273B (en) 2018-11-21 2018-11-21 Moving target detection method based on self-adaptive convergence parameters

Country Status (1)

Country Link
CN (1) CN109584273B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674258B (en) * 2021-08-26 2022-09-23 展讯通信(上海)有限公司 Image processing method and related equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390514B2 (en) * 2011-06-09 2016-07-12 The Hong Kong University Of Science And Technology Image based tracking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013143102A (en) * 2012-01-12 2013-07-22 Nikon Corp Mobile object detection device, mobile object detection method, and program
CN104361611A (en) * 2014-11-18 2015-02-18 南京信息工程大学 Group sparsity robust PCA-based moving object detecting method
CN104867162A (en) * 2015-05-26 2015-08-26 南京信息工程大学 Motion object detection method based on multi-component robustness PCA
CN105931264A (en) * 2016-04-14 2016-09-07 西安电子科技大学 Sea-surface infrared small object detection method

Also Published As

Publication number Publication date
CN109584273A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
Jin et al. A survey of infrared and visual image fusion methods
He et al. Incremental gradient on the grassmannian for online foreground and background separation in subsampled video
CN107680116B (en) Method for monitoring moving target in video image
Liu et al. Multiple-window anomaly detection for hyperspectral imagery
CN109325446B (en) Infrared weak and small target detection method based on weighted truncation nuclear norm
CN111080675B (en) Target tracking method based on space-time constraint correlation filtering
US7523078B2 (en) Bayesian approach for sensor super-resolution
CN104408742B (en) A kind of moving target detecting method based on space time frequency spectrum Conjoint Analysis
Xu et al. Low-rank decomposition and total variation regularization of hyperspectral video sequences
CN107680120A (en) Tracking Method of IR Small Target based on rarefaction representation and transfer confined-particle filtering
CN109584303B (en) Infrared weak and small target detection method based on Lp norm and nuclear norm
Qian et al. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter
CN104899866A (en) Intelligent infrared small target detection method
CN107609571B (en) Adaptive target tracking method based on LARK features
CN110135344B (en) Infrared dim target detection method based on weighted fixed rank representation
CN110889442A (en) Object material classification method for pulse type ToF depth camera
Pang et al. STTM-SFR: Spatial–temporal tensor modeling with saliency filter regularization for infrared small target detection
CN115937254A (en) Multi-air flight target tracking method and system based on semi-supervised learning
CN109584273B (en) Moving target detection method based on self-adaptive convergence parameters
Hao et al. VDFEFuse: A novel fusion approach to infrared and visible images
Wang et al. A self-supervised deep denoiser for hyperspectral and multispectral image fusion
CN108171124B (en) Face image sharpening method based on similar sample feature fitting
Li et al. Progressive task-based universal network for raw infrared remote sensing imagery ship detection
CN111932452A (en) Infrared image convolution neural network super-resolution method based on visible image enhancement
CN110097579B (en) Multi-scale vehicle tracking method and device based on pavement texture context information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant