CN115564688B - Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target - Google Patents

Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target

Info

Publication number
CN115564688B
CN115564688B (application CN202211445041.2A)
Authority
CN
China
Prior art keywords
image
turbulence
matrix
current
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211445041.2A
Other languages
Chinese (zh)
Other versions
CN115564688A (en
Inventor
彭蓉华
黄飞
余知音
向北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Chaochuang Electronic Technology Co ltd
Original Assignee
Changsha Chaochuang Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Chaochuang Electronic Technology Co ltd filed Critical Changsha Chaochuang Electronic Technology Co ltd
Priority to CN202211445041.2A priority Critical patent/CN115564688B/en
Publication of CN115564688A publication Critical patent/CN115564688A/en
Application granted granted Critical
Publication of CN115564688B publication Critical patent/CN115564688B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction, comprising the following steps: inputting an image sequence subject to turbulent interference; extracting the turbulent part of the image sequence by performing matrix low-rank decomposition to obtain a scene structure map and a turbulence mask map; extracting the moving target of the current image with the SuBSENSE foreground extraction algorithm to form a moving-target mask; for the turbulent part of the current frame, selecting clear image blocks from the sequence images and fusing them to obtain a turbulence-free current image; and fusing the images of the preceding steps to obtain a turbulence-free image containing the current moving target. By using a moving-target extraction algorithm to fuse the moving-target part with the turbulence-free image, the invention eliminates the smearing and blurring of the moving target.

Description

Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target
Technical Field
The invention belongs to the technical field of turbulence removal of video images, and particularly relates to a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction.
Background
Atmospheric turbulence is an irregular random motion of the atmosphere which, when present, causes irregular jitter, target distortion, and blurring in the imaged picture. Many ground-based observation applications, such as intelligent video surveillance, aerospace, laser communication, and high-resolution imaging, are seriously disturbed by turbulence, which interferes with target positioning, detection, and tracking; how to remove the influence of turbulence has therefore become an urgent problem.
Scholars at home and abroad have studied image restoration under atmospheric turbulence from many angles. Traditional algorithms include a region fusion algorithm based on the dual-tree wavelet transform; a de-turbulence algorithm that uses the Sobolev gradient and Laplace operators to reduce turbulence-induced fluctuation and performs multi-frame fusion with lucky regions; an algorithm that corrects turbulence distortion through B-spline non-rigid registration; and a de-turbulence algorithm based on matrix low-rank decomposition. Algorithms based on deep neural networks include TSR-WGAN, built on a generative adversarial network, and TMT, built on a joint temporal-channel attention network. Most of these algorithms target static scenes: they can recover the form of an observed target when it is stationary, but when a moving target is present in the video, smearing and blurring of the moving target appear in the image after turbulence removal.
Disclosure of Invention
In view of the above, a method combining matrix low-rank decomposition and dynamic target extraction to remove turbulence in a motion scene is provided, to solve the problems of smearing and blurring of moving targets in dynamic scenes.
Specifically, the invention discloses a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction, which comprises the following steps of:
inputting a sequence image after turbulence interference;
step two, extracting a turbulence part in the sequence image: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
step three, extracting the moving target of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
selecting clear image blocks from the sequence image for the turbulence part in the current frame, and fusing to obtain a current image without turbulence;
and step five, fusing the images in the step three and the step four to obtain a turbulence-removed image containing the current moving target.
Further, performing matrix low rank decomposition on the sequence image, including:
each frame of image, with width w and height h, is stretched row by row into a (w×h, 1) column vector, denoted vec(I_i), i = 1, 2, …, N, where N is the number of input turbulence-interfered images and I_i is the i-th column vector; the N column vectors are then combined into a matrix, denoted M = {vec(I_1), …, vec(I_N)}; the decomposition of M is expressed as solving the following equation:
min_{A,B} γ||A||_F + ||B||_*
s.t. A + B = M
wherein A is the sparse turbulence matrix and B is the low-rank scene structure matrix; ||·||_F is the Frobenius norm of the matrix, ||·||_* is the nuclear norm of the matrix, and γ is a regularization parameter;
the above formula is then solved with the ADMM (alternating direction method of multipliers), and its augmented Lagrange formula is defined as:
L(A, B, Z) = γ||A||_F + ||B||_* + ⟨Z, M − A − B⟩ + (β/2)||M − A − B||_F²
where Z is the Lagrange multiplier, β > 0 is a penalty factor, and ⟨·,·⟩ represents the matrix inner product.
Further, the solution steps of the augmented Lagrange formula are as follows:
updating A to obtain:
A^{k+1} = argmin_A L(A, B^k, Z^k) = P(M − B^k + Z^k/β)
where P represents the Euclidean projection and k is the number of iterations;
computing the singular value decomposition of the matrix M − A^{k+1} + Z^k/β:
M − A^{k+1} + Z^k/β = U Σ V^T
where U, V are the orthogonal matrices of the singular value decomposition and Σ contains the singular values;
updating B to obtain:
B^{k+1} = U max(Σ − (1/β)I, 0) V^T
updating Z to obtain:
Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M)
when the iteration stop condition:
||A^{k+1} + B^{k+1} − M||_F / ||M||_F ≤ ε
is reached, iterative optimization stops, finally obtaining the sparse turbulence matrix A and the low-rank scene structure matrix B.
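The B-update above is a singular-value thresholding step for the nuclear-norm term. A minimal NumPy sketch of that operator follows; it is illustrative only, not the patent's implementation, and the threshold value passed in (here 1/β in the scheme above) is supplied by the caller:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: shrink each singular value of X by tau
    (set negatives to zero) and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding suppresses the tiny
# noise directions and keeps only the dominant low-rank structure.
rng = np.random.default_rng(0)
u = rng.standard_normal((20, 1))
v = rng.standard_normal((1, 15))
M = u @ v + 0.01 * rng.standard_normal((20, 15))
B = svt(M, tau=0.5)
print(np.linalg.matrix_rank(B, tol=1e-6))
```

Because the noise singular values are far below the threshold, the result collapses to the rank of the underlying scene component.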
Further, the fourth step is as follows:
taking all pixels whose value is 1 in the current sparse turbulence matrix A, denoted Y pixels;
calculating the similarity value between the historical-sequence image blocks and the current-sequence image block at each position:
V_s = ||I_i(r) − I_current(r)||_2
where r denotes the r-th image block in 1, …, M, i ∈ (1, N), I_i(·) denotes the i-th input image, and I_current is the current-sequence image block;
calculating the sharpness value of the historical-sequence image block at each position:
V_g = ||∇I_i(r)||
where ∇ represents the gradient calculation;
calculating the weight of the image block in the H-th frame:
W_{H,r} = exp(−α V_s) × exp(β V_g)
where α is a constant, V_g is the sharpness value, and V_s is the similarity value;
calculating the fused image block:
I_merge(r) = Σ_i W_{H,r} I_i(r) / Σ_i W_{H,r}
obtaining the image of the current frame that retains the moving target according to the output of step three and the singular value decomposition formula, denoted I_wm:
I_wm = I_current * I_maskB
where I_maskB is the moving-target mask; then performing image fusion processing on I_wm and I_merge to obtain the fused image I_m, which retains the turbulence-removal effect while preserving the moving target;
the formula for fusion is:
I_m = I_merge * (1 − I_maskB) + I_wm
Image enhancement: detail enhancement is performed on the image with a multi-scale method.
Further, the specific steps of the detail enhancement are as follows:
separating the base-layer images and the detail-layer images from the fused image obtained in step five, namely obtaining three base-layer images of the original image by three Gaussian filterings:
I_base1 = G_1 * I_m, I_base2 = G_2 * I_m, I_base3 = G_3 * I_m
wherein G_1, G_2, G_3 are Gaussian kernels with standard deviations 1.0, 2.0, 4.0 respectively;
subtracting the base images from the original image to obtain the detail images of the original image:
I_d1 = I_m − I_base1, I_d2 = I_m − I_base2, I_d3 = I_m − I_base3
performing weighted fusion on the original image and the three detail images to obtain the detail-enhanced image:
I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current
wherein w_1, w_2, w_3 are the weighting coefficients.
The invention has the beneficial effects that:
a turbulence-removed image is obtained through matrix low-rank decomposition and clear-image-block fusion; a moving-target extraction algorithm is then used to fuse the moving-target part with the turbulence-removed image, removing the smearing and blurring of the moving target.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a detail enhancement flow diagram;
FIG. 3 shows the turbulence-removal effect of the present invention (static target): (a) original image; (b) image after turbulence removal by the invention;
FIG. 4 shows the turbulence-removal effect of the present invention (flying birds): (a) original image; (b) image after turbulence removal by the invention; (c) original image; (d) image after turbulence removal by the invention.
Detailed Description
The invention is further described with reference to the accompanying drawings, but the invention is not limited in any way, and any alterations or substitutions based on the teaching of the invention are within the scope of the invention.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for combining matrix low rank decomposition and dynamic target extraction de-turbulence, comprising the steps of:
(1) Inputting a sequence image after turbulent interference;
(2) Extracting turbulent parts in the sequence images: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
(3) Extracting a moving object of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
(4) Selecting clear image blocks from the sequence image for the turbulence part in the current frame, and fusing to obtain a current image without turbulence;
(5) Fusing the images of step (4) and step (3) to obtain a turbulence-removed image containing the current moving target.
Example 1
As shown in Figures 1 to 4, the invention constructs a method combining matrix low-rank decomposition and dynamic target extraction to remove turbulence. The basic idea is: decompose the turbulence-interfered sequence images into a scene structure image and a turbulence image using matrix low-rank decomposition; after the turbulent part is obtained, select clear image blocks at the corresponding positions in the sequence images and fuse them to obtain turbulence-free image blocks; to keep the moving target in the scene, obtain a moving-target mask with the SuBSENSE foreground extraction algorithm and fuse the moving target with the smoothed turbulence-free image; finally, enhance image details with a multi-scale detail enhancement algorithm to obtain a clear turbulence-removed image.
SuBSENSE (self-balanced sensitivity segmenter) is a foreground extraction algorithm; because the original SuBSENSE adapts poorly in real, complex scenes and its detection consequently suffers, a SuBSENSE variant whose distance threshold is adaptively corrected by background complexity is adopted here. This variant outperforms the comparison algorithms, with higher robustness and detection accuracy in dynamic scenes.
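SuBSENSE itself is a full pixel-level feedback background model and is not reproduced here. As a rough, hypothetical stand-in for how a binary moving-target mask could be produced from a frame history, the following NumPy sketch uses median-background subtraction; the threshold and history length are illustrative assumptions, not parameters of SuBSENSE:

```python
import numpy as np

def moving_target_mask(frames, current, thresh=25):
    """Binary moving-target mask via median-background subtraction.
    frames: (N, h, w) history stack; current: (h, w) frame."""
    background = np.median(frames, axis=0)           # static-scene estimate
    diff = np.abs(current.astype(float) - background)
    return (diff > thresh).astype(np.uint8)          # 1 = moving target

# Tiny synthetic example: static background with a bright moving block.
h, w = 32, 32
frames = np.full((10, h, w), 50, dtype=np.uint8)
current = frames[0].copy()
current[8:12, 8:12] = 200                            # the "moving target"
mask = moving_target_mask(frames, current)
print(mask.sum())   # 16 pixels flagged
```

In the method of this patent, the mask produced by SuBSENSE plays the role of I_maskB in the later fusion step.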
The method specifically comprises the following steps:
(1) Acquiring the input turbulence-interfered sequence images (I_1, I_2, …, I_N); N is taken as 10 and can be adjusted according to the scene.
(2) Performing matrix low-rank decomposition on the turbulence sequence. Let the width and height of the image be w and h; each frame is stretched row by row into a (w×h, 1) column vector, denoted vec(I_i), i = 1, 2, …, N. The N column vectors are then combined into a matrix M = {vec(I_1), …, vec(I_N)}. The decomposition of M can be expressed as solving formula (1):
min_{A,B} γ||A||_F + ||B||_*, s.t. A + B = M (1)
In formula (1), A is the sparse turbulence matrix and B is the low-rank scene structure matrix; ||·||_F is the Frobenius norm of the matrix, ||·||_* is the nuclear norm of the matrix, and γ is a regularization parameter. Formula (1) is then solved with the ADMM (alternating direction method of multipliers), and the augmented Lagrange formula of formula (1) is defined as formula (2):
L(A, B, Z) = γ||A||_F + ||B||_* + ⟨Z, M − A − B⟩ + (β/2)||M − A − B||_F² (2)
In formula (2), Z is the Lagrange multiplier, β > 0 is a penalty factor, and ⟨·,·⟩ represents the matrix inner product.
The alternating direction method of multipliers (ADMM) is a computational framework for solving separable convex optimization problems. It combines dual decomposition with the augmented Lagrange multiplier method, which gives the algorithm decomposability, good convergence, and high processing speed. ADMM is suited to distributed convex optimization, mainly when the solution space is large: it can solve in blocks and does not demand high absolute accuracy of the solution.
ADMM solves the problem by decomposing and recombining: the original problem is split into several subproblems that are simpler than the original, and the solutions of the subproblems are combined to obtain the global solution of the original problem.
The solving steps are as follows:
updating A gives
A^{k+1} = argmin_A L(A, B^k, Z^k) = P(M − B^k + Z^k/β), (3)
where P represents the Euclidean projection and k is the number of iterations.
Computing the singular value decomposition of the matrix M − A^{k+1} + Z^k/β gives
M − A^{k+1} + Z^k/β = U Σ V^T, (4)
where U, V are the orthogonal matrices of the decomposition and Σ contains the singular values.
Updating B gives
B^{k+1} = U max(Σ − (1/β)I, 0) V^T. (5)
Updating Z gives
Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M). (6)
When the iteration stop condition
||A^{k+1} + B^{k+1} − M||_F / ||M||_F ≤ ε (7)
is reached, iterative optimization stops, finally yielding the sparse turbulence matrix A and the low-rank scene structure matrix B.
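The iteration (3)–(7) can be sketched in NumPy as follows. This is an illustrative reading of the scheme, not the patent's code: the A-update is written as the proximal (shrinkage) step for the Frobenius-norm penalty, the B-update as singular-value thresholding, and the γ, β, and tolerance values are assumed choices:

```python
import numpy as np

def low_rank_turbulence_split(M, gamma=0.1, beta=1.0, tol=1e-6, max_iter=200):
    """Split M into sparse turbulence A and low-rank scene B with A + B ≈ M."""
    A = np.zeros_like(M)
    B = np.zeros_like(M)
    Z = np.zeros_like(M)
    for _ in range(max_iter):
        # A-update: prox of gamma*||.||_F shrinks the whole residual toward 0
        G = M - B + Z / beta
        nG = np.linalg.norm(G)
        A = (max(0.0, 1.0 - gamma / (beta * nG)) * G) if nG > 0 else G
        # B-update: singular-value thresholding of the remaining residual
        U, s, Vt = np.linalg.svd(M - A + Z / beta, full_matrices=False)
        B = U @ np.diag(np.maximum(s - 1.0 / beta, 0.0)) @ Vt
        # Z-update (dual step), then relative-residual stop test, eq. (7)
        R = A + B - M
        Z = Z - beta * R
        if np.linalg.norm(R) / max(np.linalg.norm(M), 1e-12) < tol:
            break
    return A, B

# Columns are vectorized frames: a shared scene plus sparse "turbulence".
rng = np.random.default_rng(1)
scene = np.tile(rng.standard_normal((100, 1)), (1, 10))   # rank-1 scene
turb = np.zeros((100, 10))
turb[rng.integers(0, 100, 20), rng.integers(0, 10, 20)] = 3.0
A, B = low_rank_turbulence_split(scene + turb, gamma=0.2, beta=1.0)
```

At convergence A + B reproduces the input matrix, with B capturing the low-rank scene structure and A the remainder attributed to turbulence.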
(3) Obtaining the moving-target mask: the sequence images are input, and the SuBSENSE foreground extraction algorithm is adopted to acquire the moving-target mask of the current frame, denoted I_maskB.
(4) Carrying out clear-image-block fusion on the turbulence image to obtain a turbulence-removed image: after the sparse turbulence matrix is acquired in step (2), the position information of the turbulence is obtained, and image blocks are then fused within the sequence images according to the sharpness and similarity at the corresponding positions to form a turbulence-free image. The specific steps are as follows:
taking all pixels with the pixel value of 1 in the current sparse turbulence matrix A, and recording as Y pixels
Calculating the similarity value of the image block of the historical sequence and the image block of the current sequence at each position:
Figure 341888DEST_PATH_IMAGE062
(8)
wherein r represents
Figure 607785DEST_PATH_IMAGE022
The r-th image block in (1),
Figure 682532DEST_PATH_IMAGE063
Figure 59287DEST_PATH_IMAGE064
representing the input ith image.
Calculating the definition value of the image block in the historical sequence at each position:
Figure 538810DEST_PATH_IMAGE065
(9)
wherein
Figure 975608DEST_PATH_IMAGE066
Indicating a gradient calculation.
And (3) calculating the weight of the image block at the H time:
Figure 540581DEST_PATH_IMAGE028
(10)
wherein the content of the first and second substances,
Figure 721027DEST_PATH_IMAGE029
is a constant number, V g To a clarity value, V s Is a similarity value;
calculating a fusion image block:
Figure 55056DEST_PATH_IMAGE067
(11)
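Steps (8)–(11) — similarity, sharpness, exponential weighting, and weighted averaging of co-located blocks — can be sketched as follows. The gradient measure in (9) and the normalization of the weights in (11) are assumed forms where the patent's equation images are not reproduced:

```python
import numpy as np

def fuse_blocks(history, current, alpha=0.05, beta=0.05):
    """Fuse co-located candidate blocks into one de-turbulenced block.
    history: (N, bh, bw) stack of candidate blocks; current: (bh, bw)."""
    hist = history.astype(float)
    cur = current.astype(float)
    # Similarity value V_s: L2 distance to the current block (eq. 8)
    v_s = np.linalg.norm(hist - cur, axis=(1, 2))
    # Sharpness value V_g: mean gradient magnitude per block (eq. 9, assumed)
    gy, gx = np.gradient(hist, axis=(1, 2))
    v_g = np.sqrt(gy**2 + gx**2).mean(axis=(1, 2))
    # Weight (eq. 10): similar blocks and sharp blocks score higher
    w = np.exp(-alpha * v_s) * np.exp(beta * v_g)
    # Weighted average (eq. 11), normalized so the weights sum to 1
    w /= w.sum()
    return np.tensordot(w, hist, axes=(0, 0))

rng = np.random.default_rng(2)
blocks = rng.standard_normal((8, 16, 16))
fused = fuse_blocks(blocks, blocks[0])
```

When all candidate blocks are identical, the weights are uniform and the fused block reproduces the input block exactly, which is a useful sanity check for the normalization.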
(5) The image of the current frame that retains the moving target, obtained from the output of step (3) according to formula (4), is denoted I_wm:
I_wm = I_current * I_maskB, (12)
then I_wm and I_merge undergo image fusion processing to obtain the fused image I_m, which retains the turbulence-removal effect while preserving the moving target. The formula for fusion is:
I_m = I_merge * (1 − I_maskB) + I_wm. (13)
(6) Image enhancement: a multi-scale method is adopted to enhance the details of the image. The specific steps are as follows:
The base-layer images and the detail-layer images are separated from the fused image obtained in step (5); that is, three Gaussian filterings, with standard deviations σ of 1.0, 2.0, and 4.0 in turn, give three base-layer images of the original image:
I_base1 = G_1 * I_m, I_base2 = G_2 * I_m, I_base3 = G_3 * I_m, (14)
where G_1, G_2, G_3 are Gaussian kernels with standard deviations 1.0, 2.0, 4.0 respectively.
Subtracting the base images from the original image gives the detail images:
I_d1 = I_m − I_base1, I_d2 = I_m − I_base2, I_d3 = I_m − I_base3. (15)
Weighted fusion of the original image with the three detail images gives the detail-enhanced image:
I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current, (16)
where w_1, w_2, w_3 are the weighting coefficients.
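The multi-scale enhancement of (14)–(16) can be sketched with separable Gaussian filtering. The kernel below is a plain NumPy implementation (truncation at 3σ and reflect padding are assumed details), and as a simplification the detail layers here are added back to the image being enhanced rather than recombined with the raw current frame:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter with reflect padding, truncated at 3*sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                   # normalized kernel
    pad = np.pad(img, radius, mode="reflect")
    # Horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def detail_enhance(i_m, weights=(0.5, 0.3, 0.2), sigmas=(1.0, 2.0, 4.0)):
    """Eqs. (14)-(16): base layers, detail layers, weighted recombination."""
    bases = [gaussian_blur(i_m, s) for s in sigmas]   # I_base1..3
    details = [i_m - b for b in bases]                # I_d1..3
    out = i_m.copy()
    for w, d in zip(weights, details):
        out += w * d                                  # accumulate I_e
    return out

rng = np.random.default_rng(3)
img = rng.standard_normal((64, 64))
enhanced = detail_enhance(img)
```

A constant image has zero detail layers at every scale, so it passes through unchanged; for natural images the weighted detail layers amplify edges and fine texture.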
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to include either of the permutations as a matter of course. That is, if X employs A; b is used as X; or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing examples.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or a plurality of or more than one unit are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Each apparatus or system described above may execute the storage method in the corresponding method embodiment.
In summary, the above-mentioned embodiment is an implementation manner of the present invention, but the implementation manner of the present invention is not limited by the above-mentioned embodiment, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be regarded as equivalent replacements within the protection scope of the present invention.

Claims (2)

1. A method for combining matrix low rank decomposition and dynamic target extraction de-turbulence, comprising the steps of:
inputting a sequence image after turbulence interference;
step two, extracting a turbulence part in the sequence image: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
the sequence image is subjected to matrix low-rank decomposition, and the matrix low-rank decomposition comprises the following steps:
each frame of image, with width w and height h, is stretched row by row into a column vector of size (w×h, 1), denoted vec(I_i), i = 1, 2, …, N, where N is the number of input turbulence-interfered sequence images and I_i is the i-th column vector; the N column vectors are then combined into a matrix, denoted M = {vec(I_1), …, vec(I_N)}; M is decomposed as follows:
min_{A,B} γ||A||_F + ||B||_*
s.t. A + B = M
wherein A is the sparse turbulence matrix and B is the low-rank scene structure matrix; ||·||_F is the Frobenius norm of the matrix, ||·||_* is the nuclear norm of the matrix, and γ is the regularization parameter;
solving the above formula with the ADMM (alternating direction method of multipliers), the augmented Lagrange formula of the formula min_{A,B} γ||A||_F + ||B||_*, s.t. A + B = M being defined as:
L(A, B, Z) = γ||A||_F + ||B||_* + ⟨Z, M − A − B⟩ + (β/2)||M − A − B||_F²
wherein Z is the Lagrange multiplier, β > 0 is a penalty factor, and ⟨·,·⟩ represents the matrix inner product;
the solution steps of the augmented Lagrange formula are as follows:
updating A to obtain:
A^{k+1} = argmin_A L(A, B^k, Z^k) = P(M − B^k + Z^k/β)
where P represents the Euclidean projection and k is the number of iterations;
computing the singular value decomposition of the matrix M − A^{k+1} + Z^k/β:
M − A^{k+1} + Z^k/β = U Σ V^T
where U, V are the orthogonal matrices of the singular value decomposition and Σ contains the singular values;
updating B to obtain:
B^{k+1} = U max(Σ − (1/β)I, 0) V^T
updating Z to obtain:
Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M)
when the iteration stop condition:
||A^{k+1} + B^{k+1} − M||_F / ||M||_F ≤ ε
is reached, stopping iterative optimization, finally obtaining the sparse turbulence matrix A and the low-rank scene structure matrix B;
step three, extracting the moving target of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
selecting clear image blocks from the sequence image for the turbulence part in the current frame to fuse to obtain a turbulence-removed current image;
taking all pixels whose value is 1 in the current sparse turbulence matrix A, denoted Y pixels;
calculating the similarity value between the historical-sequence image blocks and the current-sequence image block at each position:
V_s = ||I_i(r) − I_current(r)||_2
where r denotes the r-th image block in 1, …, M, i ∈ (1, N), I_i(·) denotes the i-th input image, and I_current is the current-sequence image block;
calculating the sharpness value of the historical-sequence image block at each position:
V_g = ||∇I_i(r)||
where ∇ represents the gradient calculation;
calculating the weight of the image block in the H-th frame:
W_{H,r} = exp(−αV_s) × exp(βV_g)
wherein α is a constant, V_g is the sharpness value, and V_s is the similarity value;
calculating the fused image block:
I_merge(r) = Σ_i W_{H,r} I_i(r) / Σ_i W_{H,r}
obtaining the image of the current frame retaining the moving target according to the moving-target mask output in step three and the singular value decomposition formula, denoted I_wm; then performing image fusion processing on I_wm and I_merge to obtain the fused image I_m that retains the turbulence-removal effect on the moving target:
I_wm = I_current * I_maskB
where I_maskB is the moving-target mask;
the formula for fusion is: I_m = I_merge * (1 − I_maskB) + I_wm
Image enhancement: performing detail enhancement on the image by adopting a multi-scale method;
and step five, fusing the images in the step three and the step four to obtain a map which contains the current moving target and is free of turbulence.
2. The method for combining matrix low rank decomposition and dynamic target extraction turbulence according to claim 1, wherein the detail enhancement comprises the following specific steps:
and D, separating the basic layer image from the detail layer image of the fused image obtained in the step five, namely obtaining three basic layer images of the original image by using three times of Gaussian filtering:
I base1 =G 1 *I m ,I basP2 =G 2 *I m ,I base3 =G 3 *I m
wherein G is 1 ,G 2 ,G 3 Gaussian kernels with a standard deviation of 1.0,2.0,4.0, respectively;
and (3) subtracting the original image and the basic image to obtain a detailed image of the original image:
I d1 =I m -I base1 ,I d2 =I m -I base2 ,I d3 =I m -I base3
and performing weighted fusion on the original image and the three detail images to obtain the detail-enhanced image:
I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current
wherein w_1, w_2, w_3 are the weighting coefficients.
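The multi-scale detail enhancement of claim 2 can be sketched with a separable Gaussian blur. The weights `w1..w3` below are illustrative placeholders (the patent does not give values), and the sketch adds the weighted details back onto the fused image itself rather than onto I_current as in the claim's formula:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filtering with reflected borders."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()  # normalise so a constant image is unchanged
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def enhance_details(i_m, weights=(0.5, 0.5, 0.5), sigmas=(1.0, 2.0, 4.0)):
    """Three base layers I_base_k = G_k * I_m (sigmas 1.0/2.0/4.0 as in
    the claim), detail layers I_dk = I_m - I_base_k, and the weighted
    details added back. Weights are illustrative, not from the patent."""
    details = [i_m - gaussian_blur(i_m, s) for s in sigmas]
    return i_m + sum(w * d for w, d in zip(weights, details))
```

Using three increasing standard deviations separates fine, medium, and coarse detail bands, so each band can be boosted independently before recombination.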
CN202211445041.2A 2022-11-18 2022-11-18 Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target Active CN115564688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211445041.2A CN115564688B (en) 2022-11-18 2022-11-18 Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target


Publications (2)

Publication Number Publication Date
CN115564688A CN115564688A (en) 2023-01-03
CN115564688B true CN115564688B (en) 2023-03-21

Family

ID=84769655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211445041.2A Active CN115564688B (en) 2022-11-18 2022-11-18 Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target

Country Status (1)

Country Link
CN (1) CN115564688B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115358953A (en) * 2022-10-21 2022-11-18 长沙超创电子科技有限公司 Turbulence removing method based on image registration and dynamic target fusion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102122082B (en) * 2011-03-23 2012-11-07 中国科学院光电技术研究所 Phase shift error correction device for sparse optical synthetic aperture imaging system
CN106408530A (en) * 2016-09-07 2017-02-15 厦门大学 Sparse and low-rank matrix approximation-based hyperspectral image restoration method
US10600158B2 (en) * 2017-12-04 2020-03-24 Canon Kabushiki Kaisha Method of video stabilization using background subtraction
CN110874827B (en) * 2020-01-19 2020-06-30 长沙超创电子科技有限公司 Turbulent image restoration method and device, terminal equipment and computer readable medium
US11928811B2 (en) * 2021-03-30 2024-03-12 Rtx Corporation System and method for structural vibration mode identification
CN113963301A (en) * 2021-11-04 2022-01-21 西安邮电大学 Space-time feature fused video fire and smoke detection method and system



Similar Documents

Publication Publication Date Title
CN110176027B (en) Video target tracking method, device, equipment and storage medium
CN108665496B (en) End-to-end semantic instant positioning and mapping method based on deep learning
CN109685045B (en) Moving target video tracking method and system
CN108133456A (en) Face super-resolution reconstruction method, reconstructing apparatus and computer system
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
Ye et al. Gaussian grouping: Segment and edit anything in 3d scenes
CN115358953B (en) Turbulence removing method based on image registration and dynamic target fusion
WO2016030305A1 (en) Method and device for registering an image to a model
CN113657387B (en) Semi-supervised three-dimensional point cloud semantic segmentation method based on neural network
CN113111751B (en) Three-dimensional target detection method capable of adaptively fusing visible light and point cloud data
CN110147816B (en) Method and device for acquiring color depth image and computer storage medium
Zheng et al. Edge-conditioned feature transform network for hyperspectral and multispectral image fusion
CN107705295B (en) Image difference detection method based on robust principal component analysis method
CN115953513A (en) Method, device, equipment and medium for reconstructing drivable three-dimensional human head model
Zhou et al. PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes
Precioso et al. B-spline active contour with handling of topology changes for fast video segmentation
Sun et al. Adaptive image dehazing and object tracking in UAV videos based on the template updating Siamese network
CN115564688B (en) Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target
Mathew et al. Self-attention dense depth estimation network for unrectified video sequences
CN111951191B (en) Video image snow removing method and device and storage medium
CN115187768A (en) Fisheye image target detection method based on improved YOLOv5
CN106550173A (en) Based on SURF and the video image stabilization method of fuzzy clustering
Zhang et al. A Self-Supervised Monocular Depth Estimation Approach Based on UAV Aerial Images
CN111461141A (en) Equipment pose calculation method device and equipment
CN117315274B (en) Visual SLAM method based on self-adaptive feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant