CN106657713A - Video motion amplification method - Google Patents

Video motion amplification method

Info

Publication number
CN106657713A
Authority
CN
China
Prior art keywords
matrix
row
video
sampling
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611264001.2A
Other languages
Chinese (zh)
Other versions
CN106657713B (en)
Inventor
轩建平 (Xuan Jianping)
李锐 (Li Rui)
刘超峰 (Liu Chaofeng)
翟康 (Zhai Kang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201611264001.2A priority Critical patent/CN106657713B/en
Publication of CN106657713A publication Critical patent/CN106657713A/en
Application granted granted Critical
Publication of CN106657713B publication Critical patent/CN106657713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/148Video amplifiers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The invention provides a video motion amplification method, belonging to the field of image processing methods. It solves the problems that, in the existing linear Eulerian video motion amplification method, increasing the amplification factor greatly amplifies noise, and that the video motion amplification method based on the complex steerable pyramid suffers from motion distortion. The method comprises the steps of video image decomposition, video frame color space conversion, N-layer pyramid decomposition, temporal band-pass filtering, amplification of the second spatial frequency band group, acquisition of the fourth spatial frequency band group, reconstruction of the luminance matrix, restoration of the video frame color space, and output of the video image. The method solves the noise and motion-distortion problems of the existing methods and is suitable for amplifying small spatio-temporal motions.

Description

Video motion amplification method
Technical field
The invention belongs to the field of image processing methods, and in particular relates to a video motion amplification method for amplifying small spatio-temporal motions.
Background technology
The human eye has limited spatio-temporal sensitivity: many real motions and color changes cannot be perceived directly. For example, blood flowing beneath the skin causes slight, periodic changes in skin color driven by the heartbeat, yet the eye can observe neither this color change nor the beating of the pulse. A video motion amplification method can magnify such motions that produce no visual response, allowing these phenomena to be observed more clearly.
Mallat and Meyer proposed wavelet multiresolution analysis (MRA) in the late 1980s. MRA builds bases in subspaces of the Hilbert space L²(R) through scaling and translation of a wavelet function and a scaling function, and extends these subspace bases, through scaling and translation, to L²(R). In a wavelet multiresolution decomposition of a signal, the scaling function φ_{j,k} and the wavelet function ψ_{j,k} jointly constitute the wavelet decomposition, where j is the scale factor and k is the shift factor; the scaling function characterizes the low-frequency part of the signal and the wavelet function characterizes its detail. See references 1 and 2: S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 11 (1989), 674-693, published July 1989, USA; and S. G. Mallat, "Multiresolution approximations and wavelet orthonormal bases of L²(R)," Transactions of the American Mathematical Society, 315 (1989), 69-87, published September 1989, USA. Wavelet decomposition and reconstruction at different scales are obtained by wavelet scaling, and together they constitute wavelet pyramid decomposition and reconstruction.
Existing video motion amplification methods fall broadly into two categories. The first is the linear Eulerian video motion amplification method, as in reference 3, published August 5, 2012, USA: H. Wu, M. Rubinstein, E. Shih, J. Guttag, F. Durand, W. Freeman, "Eulerian video magnification for revealing subtle changes in the world," ACM Trans. Graph., 31 (2012), 1-8. It uses the Laplacian pyramid algorithm for spatial decomposition and reconstruction and has two defects: first, when the spatial wavelength in the video is small it supports only a small amplification factor, that is, the magnification is limited; second, increasing the amplification factor greatly amplifies noise. The second category is based on the complex steerable pyramid, as in reference 4, published 2013, USA: N. Wadhwa, M. Rubinstein, F. Durand, W. T. Freeman, "Phase-based video motion processing," ACM Trans. Graph., 32 (2013), 1-10. Although this method overcomes the two defects of the linear Eulerian method, it introduces new ones: first, the amplified motion becomes distorted and disordered; second, it is more sensitive to vibration of the video acquisition system, that is, micro-vibrations of the camera show up in the output file.
Given that Eulerian video magnification is sensitive to noise, and that phase-based steerable pyramid motion amplification disorders the motion it processes, the applicant sought a method that solves both problems. Because wavelets have advantages in noise handling, the applicant applied wavelet decomposition to video motion amplification. The Coiflet wavelet filter has linear phase, which greatly reduces phase-induced signal distortion during reconstruction; the wavelet adopted in this method is therefore the Coiflet wavelet. Building on this prior work, the applicant proposes a new motion amplification method, termed the Coiflet-wavelet-based video motion amplification method. The method first exploits the multiscale analysis capability of wavelets, spatially decomposing each video frame into spatial frequency bands of different scales using the wavelet pyramid algorithm. Then, to extract a specific motion, each spatial frequency band is temporally filtered with a band-pass filter matched to the motion frequency. The filtered spatial frequency bands are amplified by a given factor α and added back to the original signal. Finally the video is reconstructed using the wavelet pyramid algorithm, turning small, invisible fluctuations into obvious ones.
To aid understanding of the present invention, the relevant concepts are further explained below:
RGB color space: Video frames are generally stored in the RGB color space, which obtains a wide variety of colors by varying and superimposing the three color channels red (R), green (G) and blue (B). Typically a color is represented by three unsigned integers (commonly uint8, 8-bit unsigned); R, G and B each have 256 brightness levels, numbered 0, 1, 2, ... up to 255 in order of increasing brightness, so up to 256³ colors can be formed by combination. The RGB color space thus covers almost all colors perceivable by human vision and is one of the most widely used color systems.
YIQ color space: The YIQ color space is generally adopted by North American television systems and belongs to the NTSC (National Television Standards Committee) standard. In the YIQ color space, the Y component carries the luminance information of the image while the I and Q components carry the chrominance: the I component represents the color variation from orange to cyan, and the Q component the variation from purple to yellow-green. Compared with other color spaces, YIQ has the advantage that the luminance component can be separated and extracted, and this luminance component can be used to identify moving targets against the complex backgrounds recorded in field conditions. It is therefore necessary to convert the video from the RGB color space to the YIQ color space;
Common wavelet function has Haar, Daubechies, Coiflets, Symlets etc.;With Coiflets wavelet functions As a example by, the entitled coif of abbreviation of the wavelet function is different according to the length for supporting, and coif1, coif2, coif3 are divided into again, Coif4 and coif5;So-called bearing length, refers to that small echo high-pass filter or low pass filter are a time finite mechanism sequences Row, the signal value non-zero in bearing length is interval, support Interval external signal value is zero, and the interval length of non-zero is its support length Degree;
Wavelet high-pass filter coefficients and wavelet low-pass filter coefficients: the wavelet high-pass filter coefficients form one vector and the wavelet low-pass filter coefficients form another; the coefficients corresponding to different wavelet functions can be obtained by consulting the relevant literature, e.g., Wavelet Theory, Algorithms and Filter Banks (《小波理论算法与滤波器组》), pp. 160-192, Science Press, Beijing, 1st edition, June 2011.
When the wavelet decomposition or reconstruction algorithm low-pass or high-pass filters a row or column, this in essence means convolving the elements of that row or column with the wavelet low-pass or high-pass filter coefficients; when the length of the filter coefficients is less than the number of elements in the row or column, the missing positions are padded with zeros.
Computer files: every computer file consists of two parts, a header and a data section. The header of a video file generally contains key information such as the width, height and storage size of the video frames, the format type, the frame rate, the data compression scheme and declarations; the data section holds the data described by the header. When a computer reads a video file, it first reads the header to obtain this key information, in particular how the data are organized, e.g., the width, height and frame rate of the video frames. Processing video frames means processing the data, e.g., color space conversion, wavelet pyramid decomposition, wavelet pyramid reconstruction and temporal filtering.
Summary of the invention
The present invention provides a video motion amplification method, which belongs to the class of linear Eulerian video motion amplification methods. It solves the problem that increasing the amplification factor in the existing linear Eulerian video motion amplification method greatly amplifies noise, as well as the motion distortion of the method based on the complex steerable pyramid.
The video motion amplification method provided by the present invention comprises a video image decomposition step, a video frame color space conversion step, an N-layer pyramid decomposition step, a temporal band-pass filtering step, a second spatial frequency band group amplification step, a fourth spatial frequency band group acquisition step, a luminance matrix reconstruction step, a video frame color space restoration step and a video image output step, and is characterized as follows:
(1) video image decomposition step:
Record an RGB color space video image of a subject undergoing small spatial movements, decompose the RGB color space video image into individual RGB color space video frames in temporal order, and read all RGB color space video frames;
(2) frame of video color space conversion step:
Represent each RGB color space video frame by three two-dimensional matrices, an R matrix, a G matrix and a B matrix, where the R matrix represents the red intensity of each pixel, the G matrix the green intensity, and the B matrix the blue intensity;
Perform the following matrix operations on the R, G and B matrices to obtain the three two-dimensional matrices of the YIQ color space video frame: the Y matrix, the I matrix and the Q matrix:
Y=0.299R+0.587G+0.114B,
I=0.596R-0.275G-0.321B,
Q=0.212R-0.523G+0.311B;
where the Y matrix represents the luminance of each pixel of the YIQ color space video frame, the I matrix represents the orange-to-cyan color intensity of each pixel, and the Q matrix represents the purple-to-yellow-green color intensity of each pixel;
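The conversion of step (2) is a per-pixel linear map. A minimal NumPy sketch (function and variable names are ours, not from the patent):

```python
import numpy as np

# RGB -> YIQ matrix taken from the patent's step (2) equations.
RGB2YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.275, -0.321],   # I: orange-to-cyan chrominance
    [0.212, -0.523,  0.311],   # Q: purple-to-yellow-green chrominance
])

def rgb_to_yiq(frame):
    """Split an (H, W, 3) RGB frame into the Y, I and Q matrices."""
    yiq = frame @ RGB2YIQ.T    # matrix applied pixel-wise
    return yiq[..., 0], yiq[..., 1], yiq[..., 2]
```

Applying `rgb_to_yiq` to every frame yields the Y matrices decomposed in step (3), with I and Q held aside for step (8).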
(3) N layers pyramid decomposition step:
Using the wavelet decomposition algorithm, decompose the Y matrix of each YIQ color space video frame into four matrices: the 1st approximation coefficient matrix cA1, the 1st horizontal detail matrix cH1, the 1st vertical detail matrix cV1 and the 1st diagonal detail matrix cD1; this process is the first-layer pyramid decomposition, and cH1, cV1 and cD1 constitute the first pyramid layer;

Then apply the wavelet decomposition algorithm to the 1st approximation coefficient matrix cA1, decomposing it into the 2nd approximation coefficient matrix cA2, the 2nd horizontal detail matrix cH2, the 2nd vertical detail matrix cV2 and the 2nd diagonal detail matrix cD2; this process is the second-layer pyramid decomposition, and cH2, cV2 and cD2 constitute the second pyramid layer;

Proceeding in this way through N decompositions finally yields the Nth approximation coefficient matrix cAN, the Nth horizontal detail matrix cHN, the Nth vertical detail matrix cVN and the Nth diagonal detail matrix cDN; cAN, cHN, cVN and cDN constitute the Nth pyramid layer; N ≥ 3, and the larger the picture size, the larger N should be, and vice versa;

After performing the N pyramid decompositions on all YIQ color space video frames, the nth horizontal detail matrices cHn of all frames form the horizontal detail spatial frequency band of scale n, the nth vertical detail matrices cVn form the vertical detail spatial frequency band of scale n, and the nth diagonal detail matrices cDn form the diagonal detail spatial frequency band of scale n, where n is called the scale and denotes the pyramid layer, n = 1, 2, ..., N; the Nth approximation coefficient matrices cAN of all frames form the approximation coefficient spatial frequency band; together, these spatial frequency bands constitute the first spatial frequency band group;
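As an illustration of the N-layer decomposition above, the following sketch builds the pyramid recursively. For brevity it uses the 2×2 Haar transform as a stand-in for the Coiflet filters the patent prescribes, and all names are ours:

```python
import numpy as np

def dwt2_haar(A):
    """One-level 2D Haar DWT (a stand-in for the patent's Coiflet filters).
    Returns approximation cA and horizontal/vertical/diagonal details."""
    a, b = A[0::2, 0::2], A[0::2, 1::2]
    c, d = A[1::2, 0::2], A[1::2, 1::2]
    cA = (a + b + c + d) / 2
    cH = (a + b - c - d) / 2
    cV = (a - b + c - d) / 2
    cD = (a - b - c + d) / 2
    return cA, cH, cV, cD

def pyramid(Y, N):
    """N-layer pyramid of step (3): each layer keeps (cHn, cVn, cDn);
    the final approximation cAN is returned separately."""
    layers, cA = [], Y
    for _ in range(N):
        cA, cH, cV, cD = dwt2_haar(cA)
        layers.append((cH, cV, cD))
    return layers, cA
```

Collecting, for each scale n, the nth-layer matrices of every frame gives the spatial frequency bands of the first spatial frequency band group.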
(4) Temporal band-pass filtering step:

Temporally filter each spatial frequency band of the first spatial frequency band group with a first low-pass IIR (infinite impulse response) digital filter; the filtered spatial frequency bands constitute the first temporary spatial frequency band group;

Temporally filter each spatial frequency band of the first spatial frequency band group with a second low-pass IIR digital filter; the filtered spatial frequency bands constitute the second temporary spatial frequency band group;

Subtract each spatial frequency band of the second temporary spatial frequency band group from the corresponding spatial frequency band of the first temporary spatial frequency band group; the resulting spatial frequency bands together constitute the second spatial frequency band group;

The cut-off frequencies of the first and second low-pass IIR digital filters are f1 and f2 respectively, with 0 < f2 < f1 < fs/2, where fs is the recording frame rate of the video image;
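The band-pass of step (4) is formed as the difference of two first-order low-pass IIR filters. A sketch follows; the recursion and r = 2πfc/fs come from the patent's sub-steps (4.1)-(4.2), while seeding Y(0) = 0 is our assumption, since the patent's initial value is not reproduced in this text:

```python
import numpy as np

def lowpass_iir(X, fc, fs):
    """First-order low-pass IIR of step (4): Y(m) = (1-r)Y(m-1) + r X(m),
    with r = 2*pi*fc/fs and time running along axis 0. Y(0) = 0 is an
    assumption; the patent's initial value is not given in this text."""
    r = 2 * np.pi * fc / fs
    X = np.asarray(X, dtype=float)
    Y = np.empty_like(X)
    Y[0] = r * X[0]                      # Y(1) = (1-r)*0 + r*X(1)
    for m in range(1, X.shape[0]):
        Y[m] = (1 - r) * Y[m - 1] + r * X[m]
    return Y

def bandpass(X, f1, f2, fs):
    """Band-pass as the difference of two low-passes, 0 < f2 < f1 < fs/2."""
    return lowpass_iir(X, f1, fs) - lowpass_iir(X, f2, fs)
```

In the method, `X` would be one spatial frequency band stacked over time (frames along axis 0), so the same recursion filters every coefficient in parallel.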
(5) Second spatial frequency band group amplification step:

Compute the maximum amplification factor α(n)max of each pyramid layer; for each layer, if α(n)max ≤ α, take α(n)max as that layer's actual amplification factor, otherwise take α as that layer's actual amplification factor, where α is the set amplification factor, 5 < α < 30;

Multiply each spatial frequency band of the second spatial frequency band group by the actual amplification factor of the pyramid layer it belongs to; the resulting spatial frequency bands together constitute the third spatial frequency band group;
(6) Fourth spatial frequency band group acquisition step:

Add each spatial frequency band of the third spatial frequency band group to the corresponding spatial frequency band of the first spatial frequency band group by matrix addition; the resulting spatial frequency bands together constitute the fourth spatial frequency band group; the matrix addition is performed between the matrices of corresponding scale and corresponding position in the two groups;
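Steps (5) and (6) reduce to a scale-and-add over corresponding bands. A minimal sketch (names ours), under the convention that each group is a list of per-layer matrices:

```python
import numpy as np

def amplify_and_add(first_group, second_group, alpha, alpha_max):
    """Steps (5)-(6): each filtered band of layer n is scaled by
    min(alpha, alpha_max[n]) and added back to the corresponding
    original band, yielding the fourth spatial frequency band group."""
    fourth_group = []
    for n, (orig, filt) in enumerate(zip(first_group, second_group)):
        a = min(alpha, alpha_max[n])   # the layer's actual amplification factor
        fourth_group.append(orig + a * filt)
    return fourth_group
```

The per-layer clamp is what lets coarse layers use the full set factor α while fine layers stay within their α(n)max bound.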
(7) Luminance matrix reconstruction step:

Reconstruct the fourth spatial frequency band group into the Y^(4) matrix of each YIQ color space video frame using the wavelet reconstruction algorithm, as follows:

The pyramid layers in the fourth spatial frequency band group are reconstructed layer by layer, starting from the Nth layer: using the wavelet reconstruction algorithm, the Nth approximation coefficient matrix cA_N^(4), the Nth horizontal detail matrix cH_N^(4), the Nth vertical detail matrix cV_N^(4) and the Nth diagonal detail matrix cD_N^(4) of each frame in the fourth spatial frequency band group are reconstructed into the (N-1)th approximation coefficient matrix cA_(N-1)^(4);

The (N-1)th pyramid layer is then reconstructed: for each frame, the wavelet reconstruction algorithm combines the (N-1)th approximation coefficient matrix cA_(N-1)^(4) reconstructed from the previous layer with the (N-1)th horizontal detail matrix cH_(N-1)^(4), the (N-1)th vertical detail matrix cV_(N-1)^(4) and the (N-1)th diagonal detail matrix cD_(N-1)^(4) to reconstruct the (N-2)th approximation coefficient matrix cA_(N-2)^(4);

This proceeds until the reconstructed 1st approximation coefficient matrix cA_1^(4) is obtained; finally the 1st pyramid layer is reconstructed: for each frame, the wavelet reconstruction algorithm combines cA_1^(4) with the 1st horizontal detail matrix cH_1^(4), the 1st vertical detail matrix cV_1^(4) and the 1st diagonal detail matrix cD_1^(4) into the Y^(4) matrix;
(8) Video frame color space restoration step:

Perform the following matrix operations on the Y^(4) matrix reconstructed in step (7) and the I and Q matrices from step (2) to obtain new R, G and B matrices:

R=1.000Y^(4)+0.956I+0.621Q,

G=1.000Y^(4)-0.272I-0.647Q,

B=1.000Y^(4)-1.106I+1.703Q;

The new R, G and B matrices are synthesized into a new RGB color space video frame;

Applying the above conversion to each frame in turn restores the YIQ color space video frames to RGB color space video frames;
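The restoration of step (8) is again a per-pixel linear map, the inverse of step (2). A minimal sketch using the patent's matrix (names ours):

```python
import numpy as np

# YIQ -> RGB matrix taken from the patent's step (8) equations.
YIQ2RGB = np.array([
    [1.000,  0.956,  0.621],
    [1.000, -0.272, -0.647],
    [1.000, -1.106,  1.703],
])

def yiq_to_rgb(Y, I, Q):
    """Recombine the (amplified) Y matrix with the original I and Q
    matrices into an (H, W, 3) RGB frame."""
    return np.stack([Y, I, Q], axis=-1) @ YIQ2RGB.T
```

Only Y is modified by the amplification; I and Q pass through unchanged from step (2), so chrominance is preserved.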
(9) Video image output step:

Construct a video image file comprising a header and a data section: first set the storage location of the video image file, then construct the header of the output video image according to the header format of the input video image and write it to the specified location on the hard disk, and then write each frame of video data to the specified location on the hard disk, thereby completing the construction of the RGB color space video image.
In the described video motion amplification method, in step (3), decomposing the Y matrix into four matrices with the wavelet decomposition algorithm comprises the following sub-steps:
(3.1) Low-pass filter each row of the Y matrix, then down-sample the columns; then low-pass filter each column and down-sample the rows to obtain the 1st approximation coefficient matrix cA1;

(3.2) Low-pass filter each row of the Y matrix, then down-sample the columns; then high-pass filter each column and down-sample the rows to obtain the 1st horizontal detail matrix cH1;

(3.3) High-pass filter each row of the Y matrix, then down-sample the columns; then low-pass filter each column and down-sample the rows to obtain the 1st vertical detail matrix cV1;

(3.4) High-pass filter each row of the Y matrix, then down-sample the columns; then high-pass filter each column and down-sample the rows to obtain the 1st diagonal detail matrix cD1;

The (n-1)th approximation coefficient matrix cA_(n-1) is decomposed with the wavelet decomposition algorithm by the same sub-steps, n = 1, 2, ..., N;

Column down-sampling means retaining the even columns and discarding the odd columns; row down-sampling means retaining the even rows and discarding the odd rows;
Low-pass filtering a row means convolving the elements of that row with the wavelet low-pass filter coefficients, yielding a new row of element values that replaces the original row; high-pass filtering a row means convolving the row elements with the wavelet high-pass filter coefficients in the same way. Low-pass (or high-pass) filtering a column likewise means convolving the elements of that column with the wavelet low-pass (or high-pass) filter coefficients, yielding a new column that replaces the original one. The wavelet low-pass and high-pass filter coefficients can be obtained by consulting the relevant literature.
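Sub-steps (3.1)-(3.4) can be sketched directly as convolution plus down-sampling. Haar coefficients stand in for the Coiflet coefficients the patent prescribes, truncated full convolution models the zero-padding described above, and the choice of which alternate rows/columns to keep is an implementation detail:

```python
import numpy as np

LO = np.array([1.0, 1.0]) / np.sqrt(2)   # wavelet low-pass coefficients (Haar)
HI = np.array([1.0, -1.0]) / np.sqrt(2)  # wavelet high-pass coefficients (Haar)

def conv_trunc(v, h):
    """Convolve a row/column with the filter coefficients, keeping len(v) samples."""
    return np.convolve(v, h)[:len(v)]

def analyze(Y):
    """Sub-steps (3.1)-(3.4): row filter -> column down-sample ->
    column filter -> row down-sample, for all four filter pairings."""
    rows = lambda M, h: np.array([conv_trunc(r, h) for r in M])
    cols = lambda M, h: rows(M.T, h).T
    L = rows(Y, LO)[:, 1::2]     # keep every second column
    H = rows(Y, HI)[:, 1::2]
    cA = cols(L, LO)[1::2, :]    # keep every second row
    cH = cols(L, HI)[1::2, :]
    cV = cols(H, LO)[1::2, :]
    cD = cols(H, HI)[1::2, :]
    return cA, cH, cV, cD
```

Note the shared first stage: cA and cH both start from the row-low-passed matrix, cV and cD from the row-high-passed one, exactly as the sub-steps describe.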
In the described video motion amplification method, in step (4), temporally filtering each spatial frequency band of the first spatial frequency band group with the first low-pass IIR digital filter comprises the following sub-steps:
(4.1) Compute the filter coefficient r1 of the first low-pass IIR filter:

r1 = 2πf1/fs,

where f1 is the cut-off frequency of the first low-pass IIR digital filter and fs is the recording frame rate of the video image;
(4.2) Compute the output Y(m) of the first low-pass IIR digital filter:

Y(m) = (1 - r1) × Y(m-1) + r1 × X(m),

where X(m) is the filter input, m is the video frame index, m = 1, 2, ..., K, and K is the total number of video frames; when m = 1, X(1) is known, and Y(0) is:

Temporally filtering each spatial frequency band of the first spatial frequency band group with the second low-pass IIR digital filter follows the same procedure, the only difference being that f1 is replaced by f2.
In the described video motion amplification method, in step (5), computing the maximum amplification factor α(n)max of each pyramid layer comprises the following sub-steps:

(5.1) Compute the spatial wavelength λ(n) of each pyramid layer:

where Wn and Hn are respectively the width and height of the nth pyramid layer, in pixels;

(5.2) Compute the displacement function δ(t):

where λc is the spatial critical wavelength, taking a value of 16 to 20 pixels;

(5.3) Compute the maximum amplification factor α(n)max of each pyramid layer.
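The printed formulas of sub-steps (5.1)-(5.3) are not reproduced in this text version of the patent. For orientation only: in the Eulerian video magnification reference the patent cites (Wu et al., 2012), the corresponding amplification bound takes the form below; whether the patent uses exactly this form is not confirmed here:

```latex
% One common choice for the spatial wavelength of layer n from its size:
% \lambda(n) \sim \sqrt{W_n^2 + H_n^2}
% Amplification bound from the cited Eulerian video magnification work:
(1 + \alpha)\,\delta(t) < \frac{\lambda}{8}
\quad\Longrightarrow\quad
\alpha(n)_{\max} = \frac{\lambda(n)}{8\,\delta(t)} - 1
```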
In the described video motion amplification method, in step (7), reconstructing the (N-1)th approximation coefficient matrix cA_(N-1)^(4) from the Nth approximation coefficient matrix cA_N^(4), the Nth horizontal detail matrix cH_N^(4), the Nth vertical detail matrix cV_N^(4) and the Nth diagonal detail matrix cD_N^(4) of each frame in the fourth spatial frequency band group using the wavelet reconstruction algorithm comprises the following sub-steps:
(7.1) Up-sample the rows of the cA_N^(4) matrix, then low-pass filter each column of the row-up-sampled matrix; then up-sample the columns of the filtered matrix and low-pass filter each row of the column-up-sampled matrix to obtain matrix AN1;

(7.2) Up-sample the rows of the cH_N^(4) matrix, then high-pass filter each column; then up-sample the columns and low-pass filter each row to obtain matrix AN2;

(7.3) Up-sample the rows of the cV_N^(4) matrix, then low-pass filter each column; then up-sample the columns and high-pass filter each row to obtain matrix AN3;

(7.4) Up-sample the rows of the cD_N^(4) matrix, then high-pass filter each column; then up-sample the columns and high-pass filter each row to obtain matrix AN4;

(7.5) Add the matrices AN1, AN2, AN3 and AN4 by matrix addition to obtain the (N-1)th approximation coefficient matrix cA_(N-1)^(4);

Row up-sampling of a matrix means doubling its number of rows: a zero row is inserted between every two rows of the original matrix, and finally a zero row is padded at the end of the new matrix; column up-sampling means doubling its number of columns: a zero column is inserted between every two columns, and finally a zero column is padded at the end of the new matrix;
Low-pass (or high-pass) filtering a row means convolving the elements of that row with the wavelet low-pass (or high-pass) filter coefficients, yielding a new row that replaces the original one; filtering a column likewise means convolving the elements of that column with the corresponding filter coefficients, yielding a new column that replaces the original one. The wavelet low-pass and high-pass filter coefficients can be obtained by consulting the relevant literature.
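Sub-steps (7.1)-(7.5) mirror the decomposition. The sketch below again uses Haar coefficients in place of the Coiflet ones and our own naming, with up-sampling inserting zeros as described above:

```python
import numpy as np

LO = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar stand-ins for the Coiflet filters
HI = np.array([1.0, -1.0]) / np.sqrt(2)

def conv_trunc(v, h):
    return np.convolve(v, h)[:len(v)]

def up_rows(M):
    """Double the row count by interleaving zero rows."""
    U = np.zeros((2 * M.shape[0], M.shape[1]))
    U[0::2] = M
    return U

def up_cols(M):
    """Double the column count by interleaving zero columns."""
    U = np.zeros((M.shape[0], 2 * M.shape[1]))
    U[:, 0::2] = M
    return U

def synthesize(cA, cH, cV, cD):
    """Sub-steps (7.1)-(7.5): row up-sample -> column filter ->
    column up-sample -> row filter, then sum the four branches."""
    rows = lambda M, h: np.array([conv_trunc(r, h) for r in M])
    cols = lambda M, h: rows(M.T, h).T
    branch = lambda C, hc, hr: rows(up_cols(cols(up_rows(C), hc)), hr)
    return (branch(cA, LO, LO) + branch(cH, HI, LO)
            + branch(cV, LO, HI) + branch(cD, HI, HI))
```

Each branch undoes one analysis pairing (e.g. the cH branch column-high-passes and row-low-passes), and the sum of the four branches is the next approximation matrix.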
The phase gradient of a video image is directly related to the motion in the video image. The basic idea of the existing phase-based complex steerable pyramid motion amplification method is to amplify the phase gradient of the image so as to amplify the motion in the image; because the phase gradient of the image becomes distorted, the amplified motion is distorted as well. The present invention does not involve the phase gradient of the image, so no motion distortion occurs.
The existing linear Eulerian video motion amplification method has the problem that increasing the amplification factor greatly amplifies noise. In step (4) of the present invention, the difference equation of the first low-pass IIR digital filter is Y(m) = (1 - r1) × Y(m-1) + r1 × X(m). In practical operation, the existing linear Eulerian method discards the initial filter input X(1) and output Y(0), carrying out the temporal filtering directly from X(2) and Y(1), with Y(1) = X(1); its concrete calculation is:
Y(1) = X(1)
Y(2) = (1 - r1) × Y(1) + r1 × X(2)
Y(3) = (1 - r1) × Y(2) + r1 × X(3)
...
Y(m) = (1 - r1) × Y(m-1) + r1 × X(m)
...
Y(K) = (1 - r1) × Y(K-1) + r1 × X(K)
The concrete calculation of step (4) of the present invention is:
Y(1) = (1 - r1) × Y(0) + r1 × X(1)
Y(2) = (1 - r1) × Y(1) + r1 × X(2)
...
Y(m) = (1 - r1) × Y(m-1) + r1 × X(m)
...
Y(K) = (1 - r1) × Y(K-1) + r1 × X(K)
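The two initializations can be compared side by side in a small sketch (names ours); y0 = 0 for the patent's variant is our assumption, since the value of Y(0) is not reproduced in this text:

```python
import numpy as np

def iir(X, r, patent_init=False, y0=0.0):
    """Run Y(m) = (1-r)Y(m-1) + r X(m) over X.
    patent_init=False : prior method, seeds Y(1) = X(1).
    patent_init=True  : patent's step (4), Y(1) = (1-r)Y(0) + r X(1),
                        with Y(0) = y0 assumed (value not given in this text)."""
    X = np.asarray(X, dtype=float)
    Y = np.empty(len(X))
    Y[0] = ((1 - r) * y0 + r * X[0]) if patent_init else X[0]
    for m in range(1, len(X)):
        Y[m] = (1 - r) * Y[m - 1] + r * X[m]
    return Y
```

Only the first sample differs between the two recursions, but that seed propagates through the IIR state and shapes the transient at the start of the output video.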
Noise analysis of the output video of the existing linear Eulerian video motion amplification method shows that its noise is initially small, then increases rapidly, and then decays slowly. Test results of the present invention show that the noise in the output video is very steady, remaining essentially constant; moreover, in terms of magnitude, the present invention greatly reduces the noise in the output video.
In summary, the present invention solves the problem that increasing the amplification factor in the existing linear Eulerian video motion amplification method greatly amplifies noise, as well as the motion distortion of the existing method based on the complex steerable pyramid.
Description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 shows the original video frame sequence;
Fig. 3 is a schematic diagram of the wavelet pyramid decomposition;
Fig. 4(A) is the original image;
Fig. 4(B) is a schematic diagram of a three-layer wavelet pyramid structure;
Fig. 5 is a schematic diagram of the first spatial frequency band group;
Fig. 6 is a schematic diagram of the wavelet decomposition;
Fig. 7(A) is the frequency response of the first low-pass IIR digital filter;
Fig. 7(B) is the frequency response of the second low-pass IIR digital filter;
Fig. 7(C) is the frequency response of the band-pass filter constructed from the first and second low-pass IIR digital filters with their two different cut-off frequencies;
Fig. 8 shows the fourth spatial frequency band group;
Fig. 9 is a schematic diagram of the wavelet reconstruction;
Fig. 10 shows the reconstructed video sequence.
Specific embodiment
The present invention is further described below with reference to the drawings and embodiments.
As shown in Fig. 1, the present invention comprises a video image decomposition step, a video frame color space conversion step, an N-layer pyramid decomposition step, a time-domain band-pass filtering step, a second spatial frequency band group amplification step, a step of obtaining the fourth spatial frequency band group, a luminance matrix reconstruction step, a video frame color space restoration step, and a video image output step.
An embodiment of the present invention comprises the following steps:
(1) video image decomposition step:
An RGB color space video image of a moving subject undergoing small motions in space is recorded; the RGB color space video image is then decomposed, in chronological order, into individual RGB color space video frames, and all RGB color space video frames are read. As shown in Fig. 2, the applicant recorded a video of two ginkgo leaves making small motions in the air with an SLR camera; the frame rate fs of the video is 25 frames per second and the video size is 512 × 512 pixels.
(2) video frame color space conversion step:
Each RGB color space video frame is represented by three two-dimensional matrices, an R matrix, a G matrix and a B matrix: the R matrix represents the red color intensity of each pixel, the G matrix represents the green intensity of each pixel, and the B matrix represents the blue intensity of each pixel;
A matrix operation is performed on the R, G and B matrices according to the following formulas, to obtain the three two-dimensional matrices of the YIQ color space video frame, the Y matrix, the I matrix and the Q matrix:
Y=0.299R+0.587G+0.114B,
I=0.596R-0.275G-0.321B,
Q=0.212R-0.523G+0.311B;
Here the Y matrix represents the brightness value of each pixel of the YIQ color space video frame, the I matrix represents each pixel's color intensity from orange to cyan, and the Q matrix represents each pixel's color intensity from purple to yellow-green;
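The per-pixel matrix operation of step (2) can be sketched for whole frames as follows; NumPy and the function name are illustrative assumptions:

```python
import numpy as np

# Rows of the transform produce Y, I and Q from (R, G, B), per step (2).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.275, -0.321],
                    [0.212, -0.523,  0.311]])

def rgb_to_yiq(R, G, B):
    """Convert the three H x W channel matrices of one frame to Y, I, Q."""
    rgb = np.stack([R, G, B], axis=-1)   # H x W x 3
    yiq = rgb @ RGB2YIQ.T                # apply the 3x3 transform per pixel
    return yiq[..., 0], yiq[..., 1], yiq[..., 2]
```

Step (8) applies the stated inverse coefficients; with these rounded values the round trip is only approximately the identity, which is why the patent lists both directions explicitly.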
(3) N-layer pyramid decomposition step:
The Y matrix of each YIQ color space video frame is decomposed, using a wavelet decomposition algorithm, into four matrices: the 1st approximation coefficient matrix cA1, the 1st horizontal detail matrix cH1, the 1st vertical detail matrix cV1 and the 1st diagonal detail matrix cD1; this process is the first-layer pyramid decomposition, and cH1, cV1 and cD1 constitute the first pyramid layer;
The wavelet decomposition algorithm is then applied to the 1st approximation coefficient matrix cA1, decomposing it into the 2nd approximation coefficient matrix cA2, the 2nd horizontal detail matrix cH2, the 2nd vertical detail matrix cV2 and the 2nd diagonal detail matrix cD2; this process is the second-layer pyramid decomposition, and cH2, cV2 and cD2 constitute the second pyramid layer;
Decomposition proceeds in this way N times, finally yielding the N-th approximation coefficient matrix cAN, the N-th horizontal detail matrix cHN, the N-th vertical detail matrix cVN and the N-th diagonal detail matrix cDN; cAN, cHN, cVN and cDN constitute the N-th pyramid layer; N ≥ 3, and the larger the picture size, the larger the value of N, and vice versa;
After all YIQ color space video frames have undergone the N pyramid decompositions, the n-th horizontal detail matrices cHn of all frames form the horizontal detail spatial frequency band of scale n, the n-th vertical detail matrices cVn of all frames form the vertical detail spatial frequency band of scale n, and the n-th diagonal detail matrices cDn of all frames form the diagonal detail spatial frequency band of scale n, where n, called the scale, denotes the pyramid layer, n = 1, 2, …, N; the N-th approximation coefficient matrices cAN of all frames form the approximation coefficient spatial frequency band; together these spatial frequency bands constitute the first spatial frequency band group;
Fig. 3 shows the storage format of image data decomposed by the wavelet pyramid algorithm; the figure corresponds to a wavelet pyramid decomposition of a 512 × 512 pixel picture, where C holds the decomposed matrices and S records the dimension information of each matrix. Fig. 4(B) shows the three-layer wavelet pyramid structure after wavelet decomposition: the picture is first decomposed by the wavelet decomposition algorithm into cA1, cH1, cV1 and cD1 (the upper-left corner of the figure is cA1); cA1 is then decomposed in a second-layer pyramid decomposition into cA2, cH2, cV2 and cD2, which replace cA1; cA2 is further decomposed in a third-layer pyramid decomposition into cA3, cH3, cV3 and cD3, which replace cA2, yielding the three-layer pyramid structure. Fig. 5 illustrates the first spatial frequency band group; its layout is similar to Fig. 4(B), but for simplicity only one layer of pyramid decomposition was performed, so the total number of pyramid layers is 1;
As shown in Fig. 6, decomposing the Y matrix into four matrices with the wavelet decomposition algorithm comprises the following sub-steps:
(3.1) low-pass filter each row of the Y matrix, then downsample the columns, then low-pass filter each column, and finally downsample the rows to obtain the 1st approximation coefficient matrix cA1;
(3.2) low-pass filter each row of the Y matrix, then downsample the columns, then high-pass filter each column, and finally downsample the rows to obtain the 1st horizontal detail matrix cH1;
(3.3) high-pass filter each row of the Y matrix, then downsample the columns, then low-pass filter each column, and finally downsample the rows to obtain the 1st vertical detail matrix cV1;
(3.4) high-pass filter each row of the Y matrix, then downsample the columns, then high-pass filter each column, and finally downsample the rows to obtain the 1st diagonal detail matrix cD1;
The approximation coefficient matrix cAn-1 is decomposed with the wavelet decomposition algorithm using the same sub-steps, n = 1, 2, …, N;
Column downsampling retains the even columns and discards the odd columns; row downsampling retains the even rows and discards the odd rows;
Low-pass filtering a row means convolving the row's elements with the wavelet low-pass filter coefficients to obtain a new row of element values that replaces the original row;
High-pass filtering a row means convolving the row's elements with the wavelet high-pass filter coefficients to obtain a new row of element values that replaces the original row;
Low-pass filtering a column means convolving the column's elements with the wavelet low-pass filter coefficients to obtain a new column of element values that replaces the original column;
High-pass filtering a column means convolving the column's elements with the wavelet high-pass filter coefficients to obtain a new column of element values that replaces the original column;
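A minimal sketch of sub-steps (3.1)-(3.4), assuming NumPy and plain 'same'-mode convolution at the boundaries (the patent's exact boundary handling and even/odd indexing follow the cited references):

```python
import numpy as np

def dwt2_level(Y, lo, hi):
    """One pyramid level: filter rows, downsample columns, filter columns,
    downsample rows. lo/hi are the wavelet low-/high-pass filter
    coefficients (coif5 in the embodiment, Table 1)."""
    def rows(M, h):  # convolve every row with filter h
        return np.apply_along_axis(lambda r: np.convolve(r, h, mode='same'), 1, M)
    def cols(M, h):  # convolve every column with filter h
        return np.apply_along_axis(lambda c: np.convolve(c, h, mode='same'), 0, M)
    L = rows(Y, lo)[:, ::2]      # low-pass rows, drop half the columns
    H = rows(Y, hi)[:, ::2]      # high-pass rows, drop half the columns
    cA = cols(L, lo)[::2, :]     # (3.1) 1st approximation matrix cA1
    cH = cols(L, hi)[::2, :]     # (3.2) 1st horizontal detail matrix cH1
    cV = cols(H, lo)[::2, :]     # (3.3) 1st vertical detail matrix cV1
    cD = cols(H, hi)[::2, :]     # (3.4) 1st diagonal detail matrix cD1
    return cA, cH, cV, cD
```

Applying the same function to each successive approximation matrix cAn-1 produces the N-layer pyramid of step (3).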
In this embodiment, the wavelet function adopted is coif5; the low-pass filter and high-pass filter coefficients were obtained by consulting reference papers 1 and 2 and are listed in Table 1;
Reference paper 1: Xuan Jianping et al., "Construction and application of high-order Coiflet wavelet series", Journal of Vibration Engineering, (2008) 521-529; Reference paper 2: Yu Rujie, "Research on solution methods for wave propagation problems based on zero-moment scaling functions", Master's thesis, Huazhong University of Science and Technology, 2013, pp. 1-52;
(4) time-domain band-pass filtering step:
A first low-pass IIR (Infinite Impulse Response) digital filter is used to time-domain filter each spatial frequency band of the first spatial frequency band group; the filtered spatial frequency bands constitute the first temporary spatial frequency band group;
A second low-pass IIR digital filter is used to time-domain filter each spatial frequency band of the first spatial frequency band group; the filtered spatial frequency bands constitute the second temporary spatial frequency band group;
From each spatial frequency band in the first temporary spatial frequency band group, the corresponding spatial frequency band in the second temporary spatial frequency band group is subtracted; the resulting spatial frequency bands together constitute the second spatial frequency band group;
In this embodiment, the cut-off frequencies f1 and f2 of the first and second low-pass IIR digital filters are 0.4 Hz and 0.2 Hz respectively; Fig. 7(A) shows the frequency response of the first low-pass IIR digital filter, Fig. 7(B) that of the second, and Fig. 7(C) the response of the band-pass filter formed from the first and second low-pass IIR digital filters.
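The band-pass filter of step (4) is the difference of two such low-pass filters; a sketch, assuming NumPy, the mean initialization of Y(0) used by the invention, and the coefficient formula r = 2πf/fs given with claim 3:

```python
import numpy as np

def lowpass(X, fc, fs):
    """First-order low-pass IIR with r = 2*pi*fc/fs and Y(0) = mean(X)."""
    r = 2.0 * np.pi * fc / fs
    Y, prev = np.empty(len(X)), np.mean(X)
    for m, x in enumerate(X):
        prev = (1.0 - r) * prev + r * x
        Y[m] = prev
    return Y

def temporal_bandpass(X, f1=0.4, f2=0.2, fs=25.0):
    """Step (4): first temporary band minus second temporary band (f2 < f1)."""
    return lowpass(X, f1, fs) - lowpass(X, f2, fs)
```

With the embodiment's values (f1 = 0.4 Hz, f2 = 0.2 Hz, fs = 25 fps) a constant signal passes through both low-pass filters unchanged, so the band-pass output is zero, as expected for a DC input.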
(5) second spatial frequency band group amplification step:
The maximum amplification factor α(n)max of each pyramid layer is calculated, and for each layer it is checked whether α(n)max ≤ α: if so, α(n)max is used as the actual amplification factor of that layer; otherwise α is used; here α is the set amplification factor, 5 < α < 30;
Each spatial frequency band of the second spatial frequency band group is multiplied by the actual amplification factor of the pyramid layer it belongs to; the resulting spatial frequency bands together constitute the third spatial frequency band group;
In this embodiment, calculating the maximum amplification factor α(n)max of each pyramid layer comprises the following sub-steps:
(5.1) calculate the spatial wavelength λ(n) of each pyramid layer:
λ(n) = √(Wn² + Hn²), n = 1, 2, …, N,
where Wn and Hn are the width and height of pyramid layer n, in pixels; taking layers 1 and 2 as examples, the first-layer width W1 and height H1 are 256 and 256, and the second-layer width W2 and height H2 are 128 and 128, cf. Fig. 3;
(5.2) calculate the displacement function δ(t):
δ(t) = λc / (8(1 + α)),
where λc is the spatial critical wavelength, taken here as 16 pixels, and the set amplification factor α is taken as 15;
(5.3) calculate the maximum amplification factor α(n)max of each pyramid layer:
α(n)max = λ(n) / (3 × 8 × δ(t)) − 1;
Taking layers 1 and 2 as examples, the detailed calculation is as follows: substituting the known first-layer width W1 = 256 and height H1 = 256 and second-layer width W2 = 128 and height H2 = 128 into the formula gives the spatial wavelengths λ(1) = 361.984 and λ(2) = 180.992; from the set spatial critical wavelength λc = 16 and set amplification factor α = 15, δ(t) = 0.125; substituting these values finally gives the maximum amplification factors of the first and second layers: α(1)max = 119.661 and α(2)max = 59.331.
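The worked example above can be reproduced with a short function; the names are illustrative, the formulas are those of sub-steps (5.1)-(5.3):

```python
import math

def actual_gain(W_n, H_n, alpha=15.0, lambda_c=16.0):
    """Per-layer amplification of step (5): compute alpha(n)_max and
    clip the set factor alpha to it."""
    lam = math.sqrt(W_n ** 2 + H_n ** 2)          # (5.1) spatial wavelength
    delta = lambda_c / (8.0 * (1.0 + alpha))      # (5.2) displacement function
    alpha_max = lam / (3.0 * 8.0 * delta) - 1.0   # (5.3) maximum amplification
    return min(alpha, alpha_max), alpha_max
```

For layer 1 (256 × 256) this gives α(1)max ≈ 119.7 and for layer 2 (128 × 128) α(2)max ≈ 59.3, matching the embodiment up to rounding, so the actual factor of both layers is the set α = 15.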
(6) step of obtaining the fourth spatial frequency band group:
Each spatial frequency band of the third spatial frequency band group is added, by matrix addition, to the corresponding spatial frequency band of the first spatial frequency band group; the resulting spatial frequency bands together constitute the fourth spatial frequency band group; the matrix addition is performed between the two groups at corresponding scales and corresponding positions;
As shown in Fig. 8, which shows the spatial frequency bands after superposition, this addition is the core of the linear Eulerian approximation. In linear Eulerian video motion amplification, the core idea can be expressed as follows:
Assume the intensity I(x) of a pixel in a one-dimensional image varies over time as I(x, t), with initial value I(x, 0) = h(x); the intensity of that pixel at any time can then be written as I(x, t) = h(x + δ(t)). When the motion of the object is small, a Taylor expansion gives Equation 1:
I(x, t) = h(x + δ(t)) ≈ h(x) + δ(t) ∂h(x)/∂x.   (1)
In Equation 1, h(x + δ(t)) is the original pixel intensity of the image and characterizes the first spatial frequency band, while δ(t) ∂h(x)/∂x characterizes the intensity change caused by the motion of the object. To obtain the motion of interest, a band-pass filter is applied in the time domain to the first spatial frequency band; this time-domain processing yields the second spatial frequency band, which can be written as B(x, t) = δ(t) ∂h(x)/∂x. Multiplying the intensity values of the second spatial frequency band by an amplification factor α gives the third spatial frequency band α B(x, t); adding it back to the original first spatial frequency band is expressed by Equation 2:
Ĩ(x, t) = I(x, t) + α B(x, t).   (2)
The right-hand side of Equation 2 can be approximated as:
Ĩ(x, t) ≈ h(x) + (1 + α) δ(t) ∂h(x)/∂x ≈ h(x + (1 + α) δ(t)).   (3)
This addition yields the fourth spatial frequency band; comparing the left-hand side of Equation 1 with the right-hand side of Equation 3 shows that the motion, characterized by the change over time, has been amplified by a factor of (1 + α).
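The Taylor-expansion argument can be checked numerically on a one-dimensional signal (a hypothetical Gaussian intensity profile; NumPy assumed): amplifying the band-passed difference by α and adding it back shifts the profile by approximately (1 + α)δ.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
h = np.exp(-x ** 2)                    # h(x): intensity at t = 0
delta, alpha = 0.01, 9.0
I_t = np.exp(-(x + delta) ** 2)        # I(x, t) = h(x + delta)

B = I_t - h                            # ~ delta * dh/dx  (Equation 1)
amplified = I_t + alpha * B            # Equations 2 and 3

# Target: the profile moved by the amplified displacement (1 + alpha) * delta.
target = np.exp(-(x + (1 + alpha) * delta) ** 2)
err = np.max(np.abs(amplified - target))
```

For this small motion the residual `err` stays far below the signal amplitude, and the peak of `amplified` sits near x = -(1 + α)δ = -0.1, i.e. the motion has been magnified tenfold.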
(7) luminance matrix reconstruction step:
The wavelet reconstruction algorithm is used to reconstruct the fourth spatial frequency band group into the Y4 matrix of each YIQ color space video frame; the detailed process is as follows:
The pyramid layers in the fourth spatial frequency band group are reconstructed layer by layer; starting from pyramid layer N, for each YIQ color space video frame the wavelet reconstruction algorithm reconstructs the (N-1)-th approximation coefficient matrix cA(N-1)4 from the N-th approximation coefficient matrix cAN4, the N-th horizontal detail matrix cHN4, the N-th vertical detail matrix cVN4 and the N-th diagonal detail matrix cDN4 of the fourth spatial frequency band group;
The layer N-1 reconstruction is then performed: for each YIQ color space video frame, the wavelet reconstruction algorithm reconstructs the (N-2)-th approximation coefficient matrix cA(N-2)4 from the cA(N-1)4 matrix reconstructed in the previous layer together with cH(N-1)4, cV(N-1)4 and cD(N-1)4;
Reconstruction proceeds in this way until the reconstructed 1st approximation coefficient matrix cA14 is obtained; the layer-1 reconstruction is then performed: for each YIQ color space video frame, the wavelet reconstruction algorithm reconstructs the Y4 matrix from cA14, cH14, cV14 and cD14;
As shown in Fig. 9, in this embodiment the wavelet reconstruction algorithm reconstructs the (N-1)-th approximation coefficient matrix cA(N-1)4 from the cAN4, cHN4, cVN4 and cDN4 matrices of each YIQ color space video frame in the fourth spatial frequency band group through the following sub-steps:
(7.1) row-upsample the cAN4 matrix, then low-pass filter each column of the row-upsampled matrix, then column-upsample the filtered matrix, and then low-pass filter each row of the matrix obtained by column upsampling to obtain the AN1 matrix;
(7.2) row-upsample the cHN4 matrix, then high-pass filter each column of the row-upsampled matrix, then column-upsample the filtered matrix, and then low-pass filter each row of the matrix obtained by column upsampling to obtain the AN2 matrix;
(7.3) row-upsample the cVN4 matrix, then low-pass filter each column of the row-upsampled matrix, then column-upsample the filtered matrix, and then high-pass filter each row of the matrix obtained by column upsampling to obtain the AN3 matrix;
(7.4) row-upsample the cDN4 matrix, then high-pass filter each column of the row-upsampled matrix, then column-upsample the filtered matrix, and then high-pass filter each row of the matrix obtained by column upsampling to obtain the AN4 matrix;
(7.5) the AN1, AN2, AN3 and AN4 matrices are combined by matrix addition to obtain the (N-1)-th approximation coefficient matrix cA(N-1)4;
Row upsampling of a matrix doubles the number of rows: a zero row is inserted between every two rows of the original matrix, and a zero row is finally appended after the last row of the new matrix; column upsampling doubles the number of columns: a zero column is inserted between every two columns of the original matrix, and a zero column is finally appended after the last column of the new matrix;
Low-pass filtering a row means convolving the row's elements with the wavelet low-pass filter coefficients to obtain a new row of element values that replaces the original row;
High-pass filtering a row means convolving the row's elements with the wavelet high-pass filter coefficients to obtain a new row of element values that replaces the original row;
Low-pass filtering a column means convolving the column's elements with the wavelet low-pass filter coefficients to obtain a new column of element values that replaces the original column;
High-pass filtering a column means convolving the column's elements with the wavelet high-pass filter coefficients to obtain a new column of element values that replaces the original column;
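A sketch of sub-steps (7.1)-(7.5), mirroring the decomposition sub-steps; NumPy is assumed, 'same'-mode convolution again stands in for the exact boundary handling, and perfect reconstruction would additionally require the proper synthesis filter pair from the cited references:

```python
import numpy as np

def up_rows(M):
    """Row upsampling: a zero row after every row doubles the row count."""
    out = np.zeros((2 * M.shape[0], M.shape[1]))
    out[0::2] = M
    return out

def up_cols(M):
    """Column upsampling: a zero column after every column."""
    out = np.zeros((M.shape[0], 2 * M.shape[1]))
    out[:, 0::2] = M
    return out

def branch(C, col_h, row_h):
    """One branch: row-upsample, filter columns, column-upsample, filter rows."""
    M = np.apply_along_axis(lambda c: np.convolve(c, col_h, mode='same'), 0, up_rows(C))
    return np.apply_along_axis(lambda r: np.convolve(r, row_h, mode='same'), 1, up_cols(M))

def idwt2_level(cA, cH, cV, cD, lo, hi):
    """Sub-step (7.5): the four branches AN1..AN4 summed give cA(N-1)."""
    return (branch(cA, lo, lo) + branch(cH, hi, lo)
            + branch(cV, lo, hi) + branch(cD, hi, hi))
```

Each call doubles both matrix dimensions, undoing one level of the pyramid; iterating from layer N down to layer 1 yields the Y4 matrix of step (7).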
In this embodiment, the wavelet function adopted is coif5; the wavelet low-pass filter coefficients and wavelet high-pass filter coefficients, obtained from the aforementioned reference papers 1 and 2, are listed in Table 1;
(8) video frame color space restoration step:
A matrix operation is performed, according to the following formulas, on the Y4 matrix reconstructed in step (7) and the I and Q matrices from step (2), to obtain new R, G and B matrices:
R=1.000Y4+ 0.956I+0.621Q,
G=1.000Y4- 0.272I-0.647Q,
B=1.000Y4-1.106I+1.703Q;
The new R, G and B matrices are synthesized into a new RGB color space video frame;
The above conversion is performed on each new YIQ color space video frame in turn, restoring the YIQ color space video frames to RGB color space video frames;
(9) video image output step:
A video image file is constructed, comprising a header part and a data part; the storage location of the video image file is set first, then the header of the output video image is constructed according to the header format of the input video image and written to the specified location on the hard disk, and then each frame of video data is written to the specified location on the hard disk, completing the construction of the RGB color space video image. The constructed RGB color space video image is shown in Fig. 10; comparing it with Fig. 2, it can be clearly seen that the motion in the video has been amplified.
Table 1. coif5 low-pass filter coefficients and high-pass filter coefficients

Claims (5)

1. A video motion amplification method, comprising a video image decomposition step, a video frame color space conversion step, an N-layer pyramid decomposition step, a time-domain band-pass filtering step, a second spatial frequency band group amplification step, a step of obtaining the fourth spatial frequency band group, a luminance matrix reconstruction step, a video frame color space restoration step, and a video image output step, characterized in that:
(1) video image decomposition step:
An RGB color space video image of a moving subject undergoing small motions in space is recorded; the RGB color space video image is then decomposed, in chronological order, into individual RGB color space video frames, and all RGB color space video frames are read;
(2) video frame color space conversion step:
Each RGB color space video frame is represented by three two-dimensional matrices, an R matrix, a G matrix and a B matrix: the R matrix represents the red color intensity of each pixel, the G matrix represents the green intensity of each pixel, and the B matrix represents the blue intensity of each pixel;
A matrix operation is performed on the R, G and B matrices according to the following formulas, to obtain the three two-dimensional matrices of the YIQ color space video frame, the Y matrix, the I matrix and the Q matrix:
Y=0.299R+0.587G+0.114B,
I=0.596R-0.275G-0.321B,
Q=0.212R-0.523G+0.311B;
Here the Y matrix represents the brightness value of each pixel of the YIQ color space video frame, the I matrix represents each pixel's color intensity from orange to cyan, and the Q matrix represents each pixel's color intensity from purple to yellow-green;
(3) N-layer pyramid decomposition step:
The Y matrix of each YIQ color space video frame is decomposed, using a wavelet decomposition algorithm, into four matrices: the 1st approximation coefficient matrix cA1, the 1st horizontal detail matrix cH1, the 1st vertical detail matrix cV1 and the 1st diagonal detail matrix cD1; this process is the first-layer pyramid decomposition, and cH1, cV1 and cD1 constitute the first pyramid layer;
The wavelet decomposition algorithm is then applied to the 1st approximation coefficient matrix cA1, decomposing it into the 2nd approximation coefficient matrix cA2, the 2nd horizontal detail matrix cH2, the 2nd vertical detail matrix cV2 and the 2nd diagonal detail matrix cD2; this process is the second-layer pyramid decomposition, and cH2, cV2 and cD2 constitute the second pyramid layer;
Decomposition proceeds in this way N times, finally yielding the N-th approximation coefficient matrix cAN, the N-th horizontal detail matrix cHN, the N-th vertical detail matrix cVN and the N-th diagonal detail matrix cDN; cAN, cHN, cVN and cDN constitute the N-th pyramid layer; N ≥ 3, and the larger the picture size, the larger the value of N, and vice versa;
After all YIQ color space video frames have undergone the N pyramid decompositions, the n-th horizontal detail matrices cHn of all frames form the horizontal detail spatial frequency band of scale n, the n-th vertical detail matrices cVn of all frames form the vertical detail spatial frequency band of scale n, and the n-th diagonal detail matrices cDn of all frames form the diagonal detail spatial frequency band of scale n, where n, called the scale, denotes the pyramid layer, n = 1, 2, …, N; the N-th approximation coefficient matrices cAN of all frames form the approximation coefficient spatial frequency band; together these spatial frequency bands constitute the first spatial frequency band group;
(4) time-domain band-pass filtering step:
A first low-pass IIR digital filter is used to time-domain filter each spatial frequency band of the first spatial frequency band group; the filtered spatial frequency bands constitute the first temporary spatial frequency band group;
A second low-pass IIR digital filter is used to time-domain filter each spatial frequency band of the first spatial frequency band group; the filtered spatial frequency bands constitute the second temporary spatial frequency band group;
From each spatial frequency band in the first temporary spatial frequency band group, the corresponding spatial frequency band in the second temporary spatial frequency band group is subtracted; the resulting spatial frequency bands together constitute the second spatial frequency band group;
The cut-off frequencies of the first and second low-pass IIR digital filters are f1 and f2 respectively, with 0 < f2 < f1 < fs/2, where fs is the recording frame rate of the video image;
(5) second spatial frequency band group amplification step:
The maximum amplification factor α(n)max of each pyramid layer is calculated, and for each layer it is checked whether α(n)max ≤ α: if so, α(n)max is used as the actual amplification factor of that layer; otherwise α is used; here α is the set amplification factor, 5 < α < 30;
Each spatial frequency band of the second spatial frequency band group is multiplied by the actual amplification factor of the pyramid layer it belongs to; the resulting spatial frequency bands together constitute the third spatial frequency band group;
(6) step of obtaining the fourth spatial frequency band group:
Each spatial frequency band of the third spatial frequency band group is added, by matrix addition, to the corresponding spatial frequency band of the first spatial frequency band group; the resulting spatial frequency bands together constitute the fourth spatial frequency band group; the matrix addition is performed between the two groups at corresponding scales and corresponding positions;
(7) luminance matrix reconstruction step:
The wavelet reconstruction algorithm is used to reconstruct the fourth spatial frequency band group into the Y4 matrix of each YIQ color space video frame; the detailed process is as follows:
The pyramid layers in the fourth spatial frequency band group are reconstructed layer by layer; starting from pyramid layer N, for each YIQ color space video frame the wavelet reconstruction algorithm reconstructs the (N-1)-th approximation coefficient matrix cA(N-1)4 from the N-th approximation coefficient matrix cAN4, the N-th horizontal detail matrix cHN4, the N-th vertical detail matrix cVN4 and the N-th diagonal detail matrix cDN4 of the fourth spatial frequency band group;
The layer N-1 reconstruction is then performed: for each YIQ color space video frame, the wavelet reconstruction algorithm reconstructs the (N-2)-th approximation coefficient matrix cA(N-2)4 from the cA(N-1)4 matrix reconstructed in the previous layer together with cH(N-1)4, cV(N-1)4 and cD(N-1)4;
Reconstruction proceeds in this way until the reconstructed 1st approximation coefficient matrix cA14 is obtained; the layer-1 reconstruction is then performed: for each YIQ color space video frame, the wavelet reconstruction algorithm reconstructs the Y4 matrix from cA14, cH14, cV14 and cD14;
(8) video frame color space restoration step:
A matrix operation is performed, according to the following formulas, on the Y4 matrix reconstructed in step (7) and the I and Q matrices from step (2), to obtain new R, G and B matrices:
R=1.000Y4+ 0.956I+0.621Q,
G=1.000Y4- 0.272I-0.647Q,
B=1.000Y4-1.106I+1.703Q;
The new R, G and B matrices are synthesized into a new RGB color space video frame;
The above conversion is performed on each new YIQ color space video frame in turn, restoring the YIQ color space video frames to RGB color space video frames;
(9) video image output step:
A video image file is constructed, comprising a header part and a data part; the storage location of the video image file is set first, then the header of the output video image is constructed according to the header format of the input video image and written to the specified location on the hard disk, and then each frame of video data is written to the specified location on the hard disk, completing the construction of the RGB color space video image.
2. The video motion amplification method according to claim 1, characterized in that:
In step (3), decomposing the Y matrix into four matrices with the wavelet decomposition algorithm comprises the following sub-steps:
(3.1) low-pass filter each row of the Y matrix, then downsample the columns, then low-pass filter each column, and finally downsample the rows to obtain the 1st approximation coefficient matrix cA1;
(3.2) low-pass filter each row of the Y matrix, then downsample the columns, then high-pass filter each column, and finally downsample the rows to obtain the 1st horizontal detail matrix cH1;
(3.3) high-pass filter each row of the Y matrix, then downsample the columns, then low-pass filter each column, and finally downsample the rows to obtain the 1st vertical detail matrix cV1;
(3.4) high-pass filter each row of the Y matrix, then downsample the columns, then high-pass filter each column, and finally downsample the rows to obtain the 1st diagonal detail matrix cD1;
The approximation coefficient matrix cAn-1 is decomposed with the wavelet decomposition algorithm using the same sub-steps, n = 1, 2, …, N;
Column downsampling retains the even columns and discards the odd columns; row downsampling retains the even rows and discards the odd rows;
Low-pass filtering a row means convolving the row's elements with the wavelet low-pass filter coefficients to obtain a new row of element values that replaces the original row;
High-pass filtering a row means convolving the row's elements with the wavelet high-pass filter coefficients to obtain a new row of element values that replaces the original row;
Low-pass filtering a column means convolving the column's elements with the wavelet low-pass filter coefficients to obtain a new column of element values that replaces the original column;
High-pass filtering a column means convolving the column's elements with the wavelet high-pass filter coefficients to obtain a new column of element values that replaces the original column;
The wavelet low-pass filter coefficients and wavelet high-pass filter coefficients can be obtained by consulting the relevant literature.
3. The video motion amplification method as claimed in claim 1, characterised in that:
In step (4), the temporal filtering of each spatial frequency band in the first spatial frequency band group with the first low-pass IIR digital filter comprises the following sub-steps:
(4.1) Calculate the filter coefficient r1 of the low-pass IIR filter:
r1 = 2πf1/fs,
where f1 is the cut-off frequency of the first low-pass IIR digital filter and fs is the recording frame rate of the video;
(4.2) Calculate the output Y(m) of the first low-pass IIR digital filter:
Y(m) = (1 - r1) × Y(m-1) + r1 × X(m),
where X(m) is the filter input, m is the video frame index, m = 1, 2, ..., K, and K is the total number of video frames; X(1) is known, and Y(0) is
Y(0) = (1/K) × Σ_{m=1}^{K} X(m);
The temporal filtering of each spatial frequency band in the first spatial frequency band group with the second low-pass IIR digital filter follows the same procedure, the only difference being that f1 is replaced by f2.
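Sub-steps (4.1) and (4.2) amount to a first-order recursive low-pass filter initialized at the mean of the input sequence. A minimal sketch (function name and argument names are illustrative, not from the patent):

```python
import math

def temporal_lowpass(x, f_cut, fs):
    """First-order low-pass IIR filter per sub-steps (4.1)-(4.2).

    x     : per-frame input signal X(1..K) for one spatial frequency band
    f_cut : cut-off frequency f1 in Hz
    fs    : video recording frame rate in frames per second
    """
    r = 2.0 * math.pi * f_cut / fs     # (4.1): r1 = 2*pi*f1/fs
    y_prev = sum(x) / len(x)           # Y(0): mean over all K input frames
    y = []
    for xm in x:                       # (4.2): Y(m) = (1-r1)*Y(m-1) + r1*X(m)
        y_prev = (1.0 - r) * y_prev + r * xm
        y.append(y_prev)
    return y
```

Running the same procedure with cut-off f2 and subtracting the two outputs gives the band-pass behaviour the two-filter arrangement implies.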
4. The video motion amplification method as claimed in claim 1, characterised in that:
In step (5), calculating the maximum amplification factor α(n)max of each pyramid level comprises the following sub-steps:
(5.1) Calculate the spatial wavelength λ(n) of each pyramid level:
λ(n) = √(Wn² + Hn²), n = 1, 2, ..., N;
where Wn and Hn are the width and height of the n-th pyramid level, in pixels;
(5.2) Calculate the displacement function δ(t):
δ(t) = λc / (8(1 + α)),
where λc is the spatial critical wavelength, with a value of 16 to 20 pixels;
(5.3) Calculate the maximum amplification factor α(n)max of each pyramid level:
α(n)max = λ(n) / (3 × 8 × δ(t)) - 1.
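Sub-steps (5.1)-(5.3) combine into a short per-level computation. A sketch, with illustrative names (the patent defines only the symbols Wn, Hn, α, λc):

```python
import math

def max_amplification(widths, heights, alpha, lambda_c=16.0):
    """Per-level maximum amplification factor alpha(n)_max per sub-steps (5.1)-(5.3).

    widths, heights : pyramid level sizes W_n, H_n in pixels
    alpha           : requested amplification factor
    lambda_c        : spatial critical wavelength, 16-20 px per the claim
    """
    delta_t = lambda_c / (8.0 * (1.0 + alpha))           # (5.2)
    out = []
    for w, h in zip(widths, heights):
        lam = math.hypot(w, h)                           # (5.1): sqrt(W^2 + H^2)
        out.append(lam / (3.0 * 8.0 * delta_t) - 1.0)    # (5.3)
    return out
```

Coarser pyramid levels (smaller Wn, Hn) get a smaller λ(n) and hence a lower amplification ceiling, which is the usual Eulerian-magnification bound on how much motion each spatial band can tolerate.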
5. The video motion amplification method as claimed in claim 1, characterised in that:
In step (7), the wavelet reconstruction algorithm reconstructs, for each YIQ-colour-space video frame in the fourth spatial frequency band group, the (N-1)-th approximation coefficient matrix cA(N-1)^4 from the N-th approximation coefficient matrix cAN^4, the N-th horizontal detail matrix cHN^4, the N-th vertical detail matrix cVN^4 and the N-th diagonal detail matrix cDN^4, comprising the following sub-steps:
(7.1) Row up-sampling is applied to the cAN^4 matrix; low-pass filtering is then applied to each column of the row-up-sampled matrix; column up-sampling is then applied to the filtered matrix; finally, low-pass filtering of each row of the column-up-sampled matrix yields the AN1 matrix;
(7.2) Row up-sampling is applied to the cHN^4 matrix; high-pass filtering is then applied to each column of the row-up-sampled matrix; column up-sampling is then applied to the filtered matrix; finally, low-pass filtering of each row of the column-up-sampled matrix yields the AN2 matrix;
(7.3) Row up-sampling is applied to the cVN^4 matrix; low-pass filtering is then applied to each column of the row-up-sampled matrix; column up-sampling is then applied to the filtered matrix; finally, high-pass filtering of each row of the column-up-sampled matrix yields the AN3 matrix;
(7.4) Row up-sampling is applied to the cDN^4 matrix; high-pass filtering is then applied to each column of the row-up-sampled matrix; column up-sampling is then applied to the filtered matrix; finally, high-pass filtering of each row of the column-up-sampled matrix yields the AN4 matrix;
(7.5) The AN1, AN2, AN3 and AN4 matrices are summed by matrix addition to obtain the (N-1)-th approximation coefficient matrix cA(N-1)^4;
Row up-sampling of a matrix doubles its number of rows: a zero row is inserted between every two adjacent rows of the original matrix, and a zero row is appended after the last row of the new matrix; column up-sampling of a matrix doubles its number of columns: a zero column is inserted between every two adjacent columns of the original matrix, and a zero column is appended after the last column of the new matrix;
Low-pass filtering a row means convolving the elements of that row with the wavelet low-pass filter coefficients to obtain a new row of element values that replaces the original row;
High-pass filtering a row means convolving the elements of that row with the wavelet high-pass filter coefficients to obtain a new row of element values that replaces the original row;
Low-pass filtering a column means convolving the elements of that column with the wavelet low-pass filter coefficients to obtain a new column of element values that replaces the original column;
High-pass filtering a column means convolving the elements of that column with the wavelet high-pass filter coefficients to obtain a new column of element values that replaces the original column;
The wavelet low-pass filter coefficients and wavelet high-pass filter coefficients can be obtained from the relevant literature.
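The reconstruction sub-steps (7.1)-(7.5) can be sketched as the mirror of the decomposition: row up-sample, filter columns, column up-sample, filter rows, then sum the four branches. This sketch only mirrors the claimed data flow; the normalized Haar taps `LO`/`HI` are assumed placeholders for the unspecified wavelet coefficients, and with arbitrary taps and this boundary handling it is not guaranteed to form a perfect-reconstruction pair with any particular decomposition.

```python
import numpy as np

# Assumed illustrative Haar taps; the claim defers the actual coefficients
# to the literature.
LO = np.array([1.0, 1.0]) / np.sqrt(2.0)
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)

def filter_rows(m, taps):
    """Convolve every row with the filter taps; the result replaces the row."""
    return np.apply_along_axis(lambda r: np.convolve(r, taps, mode="full"), 1, m)

def filter_cols(m, taps):
    """Convolve every column with the filter taps; the result replaces the column."""
    return np.apply_along_axis(lambda c: np.convolve(c, taps, mode="full"), 0, m)

def up_rows(m):
    """Row up-sampling: zero row between adjacent rows, plus a trailing zero row."""
    out = np.zeros((2 * m.shape[0], m.shape[1]))
    out[0::2, :] = m
    return out

def up_cols(m):
    """Column up-sampling: zero column between adjacent columns, plus a trailing one."""
    out = np.zeros((m.shape[0], 2 * m.shape[1]))
    out[:, 0::2] = m
    return out

def idwt2_level(cA, cH, cV, cD):
    """One reconstruction level mirroring sub-steps (7.1)-(7.5)."""
    def branch(m, col_taps, row_taps):
        m = filter_cols(up_rows(m), col_taps)  # row up-sample, filter each column
        m = filter_rows(up_cols(m), row_taps)  # column up-sample, filter each row
        return m
    return (branch(cA, LO, LO) + branch(cH, HI, LO)    # A_N1 + A_N2
            + branch(cV, LO, HI) + branch(cD, HI, HI)) # + A_N3 + A_N4
```

All four branches produce matrices of the same (roughly doubled) size, so the matrix addition of sub-step (7.5) is well defined.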
CN201611264001.2A 2016-12-30 2016-12-30 A kind of video motion amplification method Active CN106657713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611264001.2A CN106657713B (en) 2016-12-30 2016-12-30 A kind of video motion amplification method


Publications (2)

Publication Number Publication Date
CN106657713A true CN106657713A (en) 2017-05-10
CN106657713B CN106657713B (en) 2019-03-22

Family

ID=58837315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611264001.2A Active CN106657713B (en) 2016-12-30 2016-12-30 A kind of video motion amplification method

Country Status (1)

Country Link
CN (1) CN106657713B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101023662A (en) * 2004-07-20 2007-08-22 高通股份有限公司 Method and apparatus for motion vector processing
CN101957983A (en) * 2009-07-15 2011-01-26 吴云东 Light homogenizing method for digital image
CN103873875A (en) * 2014-03-25 2014-06-18 山东大学 Layering sub pixel motion estimation method for image super resolution
CN105303535A (en) * 2015-11-15 2016-02-03 中国人民解放军空军航空大学 Global subdivision pyramid model based on wavelet transformation
CN105956632A (en) * 2016-05-20 2016-09-21 浙江宇视科技有限公司 Target detection method and device


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108433727A (en) * 2018-03-15 2018-08-24 广东工业大学 A kind of method and device of monitoring baby breathing
CN109063763A (en) * 2018-07-26 2018-12-21 合肥工业大学 Video minor change amplification method based on PCA
CN110595603B (en) * 2019-04-26 2022-04-19 深圳市豪视智能科技有限公司 Video-based vibration analysis method and related product
CN110517266A (en) * 2019-04-26 2019-11-29 深圳市豪视智能科技有限公司 Rope vibrations detection method and relevant apparatus
CN110595749B (en) * 2019-04-26 2021-08-20 深圳市豪视智能科技有限公司 Method and device for detecting vibration fault of electric fan
CN110617965A (en) * 2019-04-26 2019-12-27 深圳市豪视智能科技有限公司 Method for detecting gear set abnormality and related product
CN110631812A (en) * 2019-04-26 2019-12-31 深圳市豪视智能科技有限公司 Track vibration detection method and device and vibration detection equipment
CN110595603A (en) * 2019-04-26 2019-12-20 深圳市豪视智能科技有限公司 Video-based vibration analysis method and related product
WO2021042907A1 (en) * 2019-04-26 2021-03-11 深圳市豪视智能科技有限公司 Rope vibration measurement method and related apparatus
WO2021036637A1 (en) * 2019-04-26 2021-03-04 深圳市豪视智能科技有限公司 Gear set abnormality detection method and related product
CN110595749A (en) * 2019-04-26 2019-12-20 深圳市豪视智能科技有限公司 Method and device for detecting vibration fault of electric fan
CN111277833B (en) * 2020-01-20 2022-04-15 合肥工业大学 Multi-passband filter-based multi-target micro-vibration video amplification method
CN111277833A (en) * 2020-01-20 2020-06-12 合肥工业大学 Multi-passband filter-based multi-target micro-vibration video amplification method
CN111476715A (en) * 2020-04-03 2020-07-31 三峡大学 Lagrange video motion amplification method based on image deformation technology
CN112597836A (en) * 2020-12-11 2021-04-02 昆明理工大学 Method for amplifying solar low-amplitude oscillation signal
CN112597836B (en) * 2020-12-11 2023-07-07 昆明理工大学 Amplifying method of solar low-amplitude oscillation signal
CN112672072A (en) * 2020-12-18 2021-04-16 南京邮电大学 Segmented steady video amplification method based on improved Euler amplification
CN114222033A (en) * 2021-11-01 2022-03-22 三峡大学 Adaptive Euler video amplification method based on empirical mode decomposition
CN114222033B (en) * 2021-11-01 2023-07-11 三峡大学 Adaptive Euler video amplification method based on empirical mode decomposition
CN113791140A (en) * 2021-11-18 2021-12-14 湖南大学 Bridge bottom interior nondestructive testing method and system based on local vibration response
CN113791140B (en) * 2021-11-18 2022-02-25 湖南大学 Bridge bottom interior nondestructive testing method and system based on local vibration response
CN114646381B (en) * 2022-03-30 2023-01-24 西安交通大学 Rotary mechanical vibration measuring method, system, equipment and storage medium
CN114646381A (en) * 2022-03-30 2022-06-21 西安交通大学 Rotary mechanical vibration measuring method, system, equipment and storage medium
CN115797335A (en) * 2023-01-31 2023-03-14 武汉地震工程研究院有限公司 Euler movement amplification effect evaluation and optimization method for bridge vibration measurement

Also Published As

Publication number Publication date
CN106657713B (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN106657713B (en) A kind of video motion amplification method
CN102722865B (en) Super-resolution sparse representation method
CN108734660A (en) A kind of image super-resolution rebuilding method and device based on deep learning
US10019642B1 (en) Image upsampling system, training method thereof and image upsampling method
Ramanath et al. Adaptive demosaicking
Ellmauthaler et al. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks
Petrovic et al. Gradient-based multiresolution image fusion
CN104680485B (en) A kind of image de-noising method and device based on multiresolution
CN109889800B (en) Image enhancement method and device, electronic equipment and storage medium
US20020118887A1 (en) Multiresolution based method for removing noise from digital images
CN106485764B (en) The quick exact reconstruction methods of MRI image
JP2015008414A (en) Image processing apparatus, image processing method, and image processing program
CN111784582A (en) DEC-SE-based low-illumination image super-resolution reconstruction method
DE102006038646A1 (en) Image color image data processing apparatus and color image data image processing apparatus
Ye et al. A geometric construction of multivariate sinc functions
CN112270646B (en) Super-resolution enhancement method based on residual dense jump network
CN106056565B (en) A kind of MRI and PET image fusion method decomposed based on Multiscale Morphological bilateral filtering and contrast is compressed
CN107203968A (en) Single image super resolution ratio reconstruction method based on improved subspace tracing algorithm
CN106254720B (en) A kind of video super-resolution method for reconstructing based on joint regularization
Ekanayake et al. Multi-branch Cascaded Swin Transformers with Attention to k-space Sampling Pattern for Accelerated MRI Reconstruction
CN113128583B (en) Medical image fusion method and medium based on multi-scale mechanism and residual attention
Rathi et al. A New Generic Progressive Approach based on Spectral Difference for Single-sensor Multispectral Imaging System.
EP2955691B1 (en) Device for determining of colour fraction of an image pixel of a BAYER matrix
Vega et al. A Bayesian super-resolution approach to demosaicing of blurred images
Ammar et al. Image Zooming and Multiplexing Techniques based on k-space Transformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant