CN115564688B - Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target - Google Patents
- Publication number
- CN115564688B (publication) · CN202211445041.2A / CN202211445041A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- turbulence
- matrix
- current
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction, which comprises the following steps: input an image sequence subjected to turbulence interference; extract the turbulence part of the image sequence by performing matrix low-rank decomposition on it, obtaining a scene structure map and a turbulence mask map of the image; extract the moving target of the current image with the SuBSENSE foreground extraction algorithm, forming a moving-target mask; for the turbulence part of the current frame, select clear image blocks from the sequence images and fuse them to obtain a turbulence-free current image; finally, fuse the images from the preceding steps to obtain a turbulence-free image containing the current moving target. By using a moving-target extraction algorithm to fuse the moving-target part with the turbulence-free image, the invention eliminates the smear and blur of moving targets.
Description
Technical Field
The invention belongs to the technical field of turbulence removal of video images, and particularly relates to a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction.
Background
Atmospheric turbulence is an irregular random motion of the atmosphere which, when present, causes irregular jitter, target distortion, and blur in the captured image. Numerous ground-based observation applications, such as intelligent video surveillance, aerospace, laser communication, and high-resolution imaging, are seriously disturbed by turbulence; its influence on target positioning, detection, and tracking, and how to remove that influence, have become urgent problems.
Researchers in China and abroad have studied image restoration under atmospheric turbulence from different angles. Among traditional algorithms there are a region-fusion algorithm based on the dual-tree wavelet transform; a turbulence-removal algorithm that uses the Sobolev gradient operator and the Laplace operator to reduce turbulence-induced fluctuation and performs multi-frame fusion combined with lucky regions; an algorithm that corrects turbulence distortion via B-spline non-rigid registration; and a turbulence-removal algorithm based on matrix low-rank decomposition. Among deep-neural-network algorithms are the TSR-WGAN algorithm based on a generative adversarial network, the TMT algorithm based on a temporal-channel joint attention network, and the like. These algorithms are mostly aimed at static scenes: the form of a static observed target can be recovered, but when a moving target is present in the video, smearing and moving-target blur appear in the image after turbulence removal.
Disclosure of Invention
In view of the above, a method for extracting and removing turbulence by combining matrix low rank decomposition and dynamic targets in a motion scene is provided to solve the problems of smearing and blurring of the motion targets in the dynamic scene.
Specifically, the invention discloses a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction, which comprises the following steps of:
inputting a sequence image after turbulence interference;
step two, extracting a turbulence part in the sequence image: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
step three, extracting the moving target of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
selecting clear image blocks from the sequence image for the turbulence part in the current frame, and fusing to obtain a current image without turbulence;
and step five, fusing the images in the step three and the step four to obtain a turbulence-removed image containing the current moving target.
Further, performing matrix low rank decomposition on the sequence image, including:
each frame of image with width and height of w and h respectively is stretched line by line intoOne isIs given asThe N column vectors are then formed into a matrix, denoted asThe decomposition of M is represented as solving the following equation
Wherein A is a sparse turbulence matrix, and B is a low-rank scene structure matrix;is the Frobenius norm of the matrix,is the kernel norm of the matrix and,is a regularization parameter;
then, the ADM alternative direction multiplier method is used for solving the above formula, and the augmented Lagrange formula of the above formula is defined as follows:
where Z is the lagrange multiplier,is a penalty factor that is a function of,representing the inner product of the matrix.
Further, the solution steps of the augmented Lagrange formula are as follows:

Updating A:

A^{k+1} = P_{γ/β}(M − B^k + Z^k/β)

wherein P represents the Euclidean projection (the proximal operator of the Frobenius norm) and k is the number of iterations;

Updating B by singular value thresholding:

B^{k+1} = argmin_B ||B||_* + (β/2)||A^{k+1} + B − M − Z^k/β||_F²

Updating Z:

Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M)

When the stopping criterion is met, the iterative optimization ends, finally obtaining the sparse turbulence matrix A and the low-rank scene structure matrix B.
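The three-step iteration above can be sketched in NumPy. The closed-form shrinkage for A and the singular-value thresholding for B are the standard proximal steps implied by the formulation; the patent text does not spell them out, so treat the details below as an assumption-laden sketch:

```python
import numpy as np

def low_rank_decompose(M, gamma=0.1, beta=1.0, iters=300):
    """ADMM sketch for  min_{A,B} gamma*||A||_F + ||B||_*  s.t. A + B = M.

    A is the sparse turbulence matrix, B the low-rank scene structure
    matrix. Parameter values are illustrative, not from the patent.
    """
    A = np.zeros_like(M, dtype=float)
    B = np.zeros_like(M, dtype=float)
    Z = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # A-update: proximal operator of (gamma/beta)*||.||_F,
        # a shrinkage of R = M - B + Z/beta toward zero
        R = M - B + Z / beta
        nrm = np.linalg.norm(R)
        A = max(0.0, 1.0 - gamma / (beta * nrm)) * R if nrm > 0 else R
        # B-update: singular value thresholding with threshold 1/beta
        U, s, Vt = np.linalg.svd(M - A + Z / beta, full_matrices=False)
        B = (U * np.maximum(s - 1.0 / beta, 0.0)) @ Vt
        # Z-update: Z^{k+1} = Z^k - beta*(A^{k+1} + B^{k+1} - M)
        Z = Z - beta * (A + B - M)
    return A, B
```

With enough iterations the multiplier update drives A + B toward M, so the decomposition becomes feasible while B stays low-rank.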
Further, the fourth step is as follows:

Take all pixels with pixel value 1 in the current sparse turbulence matrix A, denoted as the Y pixels;

Calculate the similarity value between the historical-sequence image block and the current-sequence image block at each position:

V_s = ||I_i(r) − I_current(r)||_2

where r denotes the rth image block, i ∈ (1, N), I_i is the ith input image, and I_current is the current sequence image block;

Calculate the sharpness value V_g of the image block in the historical sequence at each position;

Calculate the weight of the image block at the Hth time:

W_{H,r} = exp(−αV_s) × exp(βV_g)

where α is a constant, V_g is the sharpness value, and V_s is the similarity value;

Calculate the fused image block I_merge;

According to the moving-target mask output by step three and a singular value decomposition formula, obtain the image of the current frame that retains the moving target, denoted I_wm; then perform image fusion on I_wm and I_merge to obtain a fusion image that retains the moving target while removing turbulence;

Image enhancement: perform detail enhancement on the image with a multi-scale method.
Further, the specific steps of the detail enhancement are as follows:

Separate the base-layer images and detail-layer images from the fused image obtained in step five, i.e., obtain three base-layer images of the original image by three Gaussian filterings:

I_base1 = G_1 * I_m, I_base2 = G_2 * I_m, I_base3 = G_3 * I_m

where G_1, G_2, G_3 are Gaussian kernels with standard deviations of 1.0, 2.0, and 4.0, respectively;

Subtract the base images from the original image to obtain the detail images of the original image:

I_d1 = I_m − I_base1, I_d2 = I_m − I_base2, I_d3 = I_m − I_base3

Perform weighted fusion on the original image and the three detail images to obtain the detail-enhanced image:

I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current

where w_1, w_2, w_3 are the respective weighting coefficients.
The invention has the following beneficial effects: a turbulence-removed image is obtained by matrix low-rank decomposition and clear-image-block fusion; a moving-target extraction algorithm is used to fuse the moving-target part with the turbulence-removed image, eliminating moving-target smear and blur.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a detail enhancement flow diagram;
FIG. 3 shows the turbulence-removal effect of the present invention (static target): (a) original image; (b) image after turbulence removal by the present invention;
FIG. 4 shows the turbulence-removal effect of the present invention (flying birds): (a) original image; (b) image after turbulence removal by the present invention; (c) original image; (d) image after turbulence removal by the present invention.
Detailed Description
The invention is further described with reference to the accompanying drawings, but the invention is not limited in any way, and any alterations or substitutions based on the teaching of the invention are within the scope of the invention.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for combining matrix low rank decomposition and dynamic target extraction de-turbulence, comprising the steps of:
(1) Inputting a sequence image after turbulent interference;
(2) Extracting turbulent parts in the sequence images: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
(3) Extracting a moving object of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
(4) Selecting clear image blocks from the sequence image for the turbulence part in the current frame, and fusing to obtain a current image without turbulence;
(5) Fusing the moving region from step (3) with the image from step (4) to obtain a turbulence-removed image containing the current moving target.
Example 1
As shown in figures 1 to 4, the invention constructs a method for removing turbulence by combining matrix low-rank decomposition and dynamic target extraction. The basic idea is to decompose the turbulence-interfered sequence images into a scene structure image and a turbulence image using matrix low-rank decomposition. Once the turbulence part is obtained, clear image blocks at the corresponding positions in the sequence images are selected and fused to obtain turbulence-free image blocks. To retain the moving target in the scene, the SuBSENSE foreground extraction algorithm is used to obtain a moving-target mask, and the moving target is fused with the smoothed turbulence-free image. Finally, a multi-scale detail enhancement algorithm is used to enhance image details, yielding a clear turbulence-removed image.
The SuBSENSE (Self-Balanced SENsitivity SEgmenter) foreground extraction algorithm used here addresses the poor adaptability, and hence poor detection performance, of the original SuBSENSE algorithm in real complex scenes: the distance threshold is corrected adaptively according to background complexity. The improved algorithm outperforms comparison algorithms and offers higher robustness and detection accuracy in dynamic scenes.
The method specifically comprises the following steps:
(1) Acquire the input turbulence-interfered sequence images I_1, …, I_N. Here N takes the value 10, which can be adjusted according to the scene.
(2) Perform matrix low-rank decomposition on the turbulence sequence. Let the image width and height be w and h; each frame is stretched line by line into a (w × h, 1) column vector vec(I_i), and the N column vectors form the matrix M = {vec(I_1), …, vec(I_N)}. The decomposition of M can be expressed as solving formula (1):

min_{A,B} γ||A||_F + ||B||_*  s.t. A + B = M   (1)

In formula (1), A is a sparse turbulence matrix and B is a low-rank scene structure matrix; ||·||_F is the Frobenius norm of the matrix, ||·||_* is the nuclear norm of the matrix, and γ is a regularization parameter. Formula (1) is then solved with the ADMM alternating direction method of multipliers; the augmented Lagrange formula of formula (1) is defined as formula (2):

L(A, B, Z) = γ||A||_F + ||B||_* − ⟨Z, A + B − M⟩ + (β/2)||A + B − M||_F²   (2)

In formula (2), Z is the Lagrange multiplier, β > 0 is a penalty factor, and ⟨·,·⟩ represents the matrix inner product.
The alternating direction method of multipliers (ADMM) is a computational framework for solving separable convex optimization problems. It combines dual decomposition with the augmented Lagrange multiplier method, giving the algorithm decomposability, good convergence, and high processing speed. ADMM is well suited to distributed convex optimization, is mainly applied when the solution space is large, supports block-wise solution, and does not demand high absolute precision of the solution.

ADMM solves the problem by decomposition and combination: the original problem is decomposed into several sub-problems, each simpler than the original, and the solutions of the sub-problems are combined to obtain the global solution of the original problem.
The solving steps are as follows:

Update A: A^{k+1} = P_{γ/β}(M − B^k + Z^k/β), where P represents the Euclidean projection (the proximal operator of the Frobenius norm) and k is the iteration count;

Update B by singular value thresholding: B^{k+1} = argmin_B ||B||_* + (β/2)||A^{k+1} + B − M − Z^k/β||_F²;

Update Z: Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M).

When the stopping criterion is met, the iterative optimization ends, finally obtaining the sparse turbulence matrix A and the low-rank scene structure matrix B.
(3) Obtain the moving-target mask: input the sequence images and acquire the moving-target mask of the current frame with the SuBSENSE foreground extraction algorithm, denoted I_maskB.
(4) Fuse clear image blocks for the turbulence image to obtain a turbulence-removed image: after the sparse turbulence matrix is acquired in step two, the position information of the turbulence is known; image blocks are then fused within the sequence images at the corresponding positions according to sharpness and similarity, forming a turbulence-free image. The specific steps are:

Take all pixels with pixel value 1 in the current sparse turbulence matrix A, denoted as the Y pixels;

Calculate the similarity value between the historical-sequence image block and the current-sequence image block at each position:

V_s = ||I_i(r) − I_current(r)||_2

Calculate the sharpness value V_g of the image block in the historical sequence at each position;

Calculate the weight of the image block at the Hth time:

W_{H,r} = exp(−αV_s) × exp(βV_g)

where α is a constant, V_g is the sharpness value, and V_s is the similarity value;

Calculate the fused image block I_merge.
(5) According to the moving-target mask output by step three, obtain the image of the current frame that retains the moving target as in formula (4), I_wm = I_current * I_maskB; then perform image fusion on I_wm and I_merge to obtain the fusion image I_m that retains the moving target with turbulence removed: I_m = I_merge × (1 − I_maskB) + I_wm.
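A minimal sketch of this step-five fusion, using the formulas above:

```python
import numpy as np

def fuse_moving_target(i_merge, i_current, mask):
    """Keep the moving target from the current frame and the
    de-turbulenced background everywhere else.

    mask is the binary moving-target mask I_maskB from SuBSENSE.
    """
    i_wm = i_current * mask                 # I_wm = I_current * I_maskB
    return i_merge * (1.0 - mask) + i_wm    # I_m = I_merge*(1-I_maskB) + I_wm
```

Where the mask is 1 the current frame's pixels pass through unchanged, so the moving target is neither smeared nor blurred by the multi-frame fusion.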
(6) Image enhancement: the details of the image are enhanced with a multi-scale method, as follows:

Separate the base-layer and detail-layer images from the fused image obtained in step five, i.e., apply Gaussian filtering three times, with standard deviations σ of 1.0, 2.0, and 4.0 in turn, obtaining three base-layer images of the original image:

I_base1 = G_1 * I_m, I_base2 = G_2 * I_m, I_base3 = G_3 * I_m

Subtract the base images from the original image to obtain the detail images of the original image:

I_d1 = I_m − I_base1, I_d2 = I_m − I_base2, I_d3 = I_m − I_base3

Perform weighted fusion on the original image and the three detail images to obtain the detail-enhanced image:

I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current
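The multi-scale enhancement can be sketched as follows; the weighting coefficients w_1..w_3 are free parameters of the method and the values below are illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian filtering with edge padding
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    conv = lambda v: np.convolve(np.pad(v, radius, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def detail_enhance(i_m, i_current, weights=(0.5, 0.5, 0.25)):
    """Three base layers (sigma = 1.0, 2.0, 4.0), detail layers by
    subtraction, then weighted fusion as in the claims:
    I_e = w1*I_d1 + w2*I_d2 + w3*I_d3 + I_current."""
    details = [i_m - gaussian_blur(i_m, s) for s in (1.0, 2.0, 4.0)]
    w1, w2, w3 = weights
    return w1 * details[0] + w2 * details[1] + w3 * details[2] + i_current
```

Smaller σ captures fine detail and larger σ coarse detail, so the weighted sum lets the three scales be boosted independently.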
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to include any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing examples.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., one that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be realized in hardware or as a software functional module. If implemented as a software functional module and sold or used as a separate product, the integrated module may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, etc. Each apparatus or system described above may execute the storage method in the corresponding method embodiment.
In summary, the above-mentioned embodiment is an implementation manner of the present invention, but the implementation manner of the present invention is not limited by the above-mentioned embodiment, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be regarded as equivalent replacements within the protection scope of the present invention.
Claims (2)
1. A method for combining matrix low rank decomposition and dynamic target extraction de-turbulence, comprising the steps of:
inputting a sequence image after turbulence interference;
step two, extracting a turbulence part in the sequence image: performing matrix low-rank decomposition on the sequence image to obtain a scene structure diagram and a turbulence mask diagram of the image;
the matrix low-rank decomposition of the sequence image comprises:

each frame of image, of width w and height h, is stretched line by line into a (w × h, 1) column vector, denoted vec(I_i), i = 1, 2, …, N, where N is the number of input turbulent interference sequence images and I_i is the ith column vector; the N column vectors are then combined into a matrix M = {vec(I_1), …, vec(I_N)}; M is decomposed as follows:

min_{A,B} γ||A||_F + ||B||_*
s.t. A + B = M

wherein A is a sparse turbulence matrix and B is a low-rank scene structure matrix; ||·||_F is the Frobenius norm of the matrix, ||·||_* is the nuclear norm of the matrix, and γ is the regularization parameter;

the above formula is solved with the ADMM alternating direction method of multipliers; the augmented Lagrange formula of min_{A,B} γ||A||_F + ||B||_* s.t. A + B = M is:

L(A, B, Z) = γ||A||_F + ||B||_* − ⟨Z, A + B − M⟩ + (β/2)||A + B − M||_F²

wherein Z is a Lagrange multiplier, β > 0 is a penalty factor, and ⟨·,·⟩ represents the inner product of the matrix;

the solution steps of the augmented Lagrange formula are as follows:

updating A:

A^{k+1} = P_{γ/β}(M − B^k + Z^k/β)

where P represents the Euclidean projection and k is the number of iterations;

updating B by singular value thresholding:

B^{k+1} = argmin_B ||B||_* + (β/2)||A^{k+1} + B − M − Z^k/β||_F²

updating Z:

Z^{k+1} = Z^k − β(A^{k+1} + B^{k+1} − M)

stopping the iterative optimization when converged, finally obtaining the sparse turbulence matrix A and the low-rank scene structure matrix B;
step three, extracting the moving target of the current image: extracting a moving target part in a scene by adopting a SuBSENSE foreground extraction algorithm to form a moving target mask;
selecting clear image blocks from the sequence image for the turbulence part in the current frame to fuse to obtain a turbulence-removed current image;
taking all pixels with pixel value 1 in the current sparse turbulence matrix A, denoted as the Y pixels;

calculating the similarity value between the image blocks of the historical sequence and of the current sequence at each position:

V_s = ||I_i(r) − I_current(r)||_2

where r represents the rth image block in 1, …, M, i ∈ (1, N), I_i(·) represents the ith input image, and I_current is the current sequence image block;

calculating the sharpness value of the image block in the historical sequence at each position;

calculating the weight of the image block at the Hth time:

W_{H,r} = exp(−αV_s) × exp(βV_g)

wherein α is a constant, V_g is the sharpness value, and V_s is the similarity value;

calculating the fusion image block I_merge;

obtaining the image of the current frame that retains the moving target according to the moving-target mask output in step three and a singular value decomposition formula, denoted I_wm; then performing image fusion on I_wm and I_merge to obtain a fusion image I_m that retains the moving target with turbulence removed:

I_wm = I_current * I_maskB

where I_maskB is the moving-target mask;

the formula for fusion is: I_m = I_merge * (1 − I_maskB) + I_wm
Image enhancement: performing detail enhancement on the image by adopting a multi-scale method;
and step five, fusing the images in the step three and the step four to obtain a map which contains the current moving target and is free of turbulence.
2. The method for combining matrix low-rank decomposition and dynamic target extraction to remove turbulence according to claim 1, wherein the detail enhancement comprises the following specific steps:

separating the base-layer images and detail-layer images from the fused image obtained in step five, i.e., obtaining three base-layer images of the original image by three Gaussian filterings:

I_base1 = G_1 * I_m, I_base2 = G_2 * I_m, I_base3 = G_3 * I_m

wherein G_1, G_2, G_3 are Gaussian kernels with standard deviations of 1.0, 2.0, and 4.0, respectively;

subtracting the base images from the original image to obtain the detail images of the original image:

I_d1 = I_m − I_base1, I_d2 = I_m − I_base2, I_d3 = I_m − I_base3

performing weighted fusion on the original image and the three detail images to obtain the detail-enhanced image:

I_e = w_1 × I_d1 + w_2 × I_d2 + w_3 × I_d3 + I_current

wherein w_1, w_2, w_3 are the respective weighting coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211445041.2A CN115564688B (en) | 2022-11-18 | 2022-11-18 | Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211445041.2A CN115564688B (en) | 2022-11-18 | 2022-11-18 | Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115564688A CN115564688A (en) | 2023-01-03 |
CN115564688B true CN115564688B (en) | 2023-03-21 |
Family
ID=84769655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211445041.2A Active CN115564688B (en) | 2022-11-18 | 2022-11-18 | Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115564688B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115358953A (en) * | 2022-10-21 | 2022-11-18 | 长沙超创电子科技有限公司 | Turbulence removing method based on image registration and dynamic target fusion |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102122082B (en) * | 2011-03-23 | 2012-11-07 | 中国科学院光电技术研究所 | Phase shift error correction device for sparse optical synthetic aperture imaging system |
CN106408530A (en) * | 2016-09-07 | 2017-02-15 | 厦门大学 | Sparse and low-rank matrix approximation-based hyperspectral image restoration method |
US10600158B2 (en) * | 2017-12-04 | 2020-03-24 | Canon Kabushiki Kaisha | Method of video stabilization using background subtraction |
CN110874827B (en) * | 2020-01-19 | 2020-06-30 | 长沙超创电子科技有限公司 | Turbulent image restoration method and device, terminal equipment and computer readable medium |
US11928811B2 (en) * | 2021-03-30 | 2024-03-12 | Rtx Corporation | System and method for structural vibration mode identification |
CN113963301A (en) * | 2021-11-04 | 2022-01-21 | 西安邮电大学 | Space-time feature fused video fire and smoke detection method and system |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115358953A (en) * | 2022-10-21 | 2022-11-18 | 长沙超创电子科技有限公司 | Turbulence removing method based on image registration and dynamic target fusion |
Also Published As
Publication number | Publication date |
---|---|
CN115564688A (en) | 2023-01-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110176027B (en) | Video target tracking method, device, equipment and storage medium | |
CN108665496B (en) | End-to-end semantic instant positioning and mapping method based on deep learning | |
CN109685045B (en) | Moving target video tracking method and system | |
CN108133456A (en) | Face super-resolution reconstruction method, reconstructing apparatus and computer system | |
CN110910421B (en) | Weak and small moving object detection method based on block characterization and variable neighborhood clustering | |
Ye et al. | Gaussian grouping: Segment and edit anything in 3d scenes | |
CN115358953B (en) | Turbulence removing method based on image registration and dynamic target fusion | |
WO2016030305A1 (en) | Method and device for registering an image to a model | |
CN113657387B (en) | Semi-supervised three-dimensional point cloud semantic segmentation method based on neural network | |
CN113111751B (en) | Three-dimensional target detection method capable of adaptively fusing visible light and point cloud data | |
CN110147816B (en) | Method and device for acquiring color depth image and computer storage medium | |
Zheng et al. | Edge-conditioned feature transform network for hyperspectral and multispectral image fusion | |
CN107705295B (en) | Image difference detection method based on robust principal component analysis method | |
CN115953513A (en) | Method, device, equipment and medium for reconstructing drivable three-dimensional human head model | |
Zhou et al. | PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes | |
Precioso et al. | B-spline active contour with handling of topology changes for fast video segmentation | |
Sun et al. | Adaptive image dehazing and object tracking in UAV videos based on the template updating Siamese network | |
CN115564688B (en) | Method for extracting turbulence by combining matrix low-rank decomposition and dynamic target | |
Mathew et al. | Self-attention dense depth estimation network for unrectified video sequences | |
CN111951191B (en) | Video image snow removing method and device and storage medium | |
CN115187768A (en) | Fisheye image target detection method based on improved YOLOv5 | |
CN106550173A (en) | Based on SURF and the video image stabilization method of fuzzy clustering | |
Zhang et al. | A Self-Supervised Monocular Depth Estimation Approach Based on UAV Aerial Images | |
CN111461141A (en) | Equipment pose calculation method device and equipment | |
CN117315274B (en) | Visual SLAM method based on self-adaptive feature extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |