CN108900864A - Full-reference video quality assessment method based on motion trajectory - Google Patents

Full-reference video quality assessment method based on motion trajectory Download PDF

Info

Publication number
CN108900864A
CN108900864A CN201810812287.6A CN201810812287A CN108900864A
Authority
CN
China
Prior art keywords
segment
track
test sequence
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810812287.6A
Other languages
Chinese (zh)
Other versions
CN108900864B (en)
Inventor
吴金建
刘永旭
谢雪梅
石光明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201810812287.6A priority Critical patent/CN108900864B/en
Publication of CN108900864A publication Critical patent/CN108900864A/en
Application granted granted Critical
Publication of CN108900864B publication Critical patent/CN108900864B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a full-reference video quality assessment method based on motion trajectories, which mainly addresses the low accuracy and high computational complexity of prior-art assessment. The implementation is: divide the test video and the reference video into sequence segments of equal length; compute the static quality of each test sequence segment; sample points from the first frame of the test sequence segment; compute the optical flow of the test sequence segment and of the reference sequence segment separately; compute the motion trajectories of the sampled points within the reference sequence segment and reject invalid trajectories; extract the optical-flow features of the test and reference sequence segments along the trajectories and compute their feature difference; compute the dynamic quality of the test sequence segment; and compute the quality of the test video from the static and dynamic qualities of the segments. The invention improves the accuracy of quality assessment while reducing computational complexity, and can be used for quality monitoring in video compression, transmission, and processing.

Description

Full-reference video quality assessment method based on motion trajectory
Technical field
The invention belongs to the field of image and video processing, and in particular relates to a full-reference video quality assessment method that can be used for quality monitoring in video compression, transmission, and processing.
Technical background
With the development of handheld smart devices and advances in network communication technology, WeChat video, mobile-phone photography, live streaming, and the like have become part of daily life. Constrained by network bandwidth and storage capacity, video usually has to be compressed before transmission and processing. On the one hand, limited by hardware, video captured during shooting inevitably suffers noise interference that degrades its quality; on the other hand, compression itself loses some of the original information. Moreover, during transmission, network packet loss may occur at any time depending on network conditions, affecting subsequent viewing and processing and even causing key information to be missed or misjudged. Monitoring video quality by human subjective judgment, although accurate, wastes a great deal of time and manpower, is inefficient, and cannot be automated.
Given these shortcomings of manual judgment, researchers have worked to establish objective video quality assessment methods to replace it. According to the amount of reference-video information required, existing objective methods fall into three classes:
The first class comprises no-reference methods, which require no original reference video at all and predict the quality of the test video directly with a computational model. Because no reference information is available, the results are poor, which limits practical application.
The second class comprises reduced-reference methods, which extract a certain number of features from the original reference video and assess the test video by comparing the feature differences between reference and test video. Because only part of the reference information is available, the results are still unsatisfactory and the field of application is narrow.
The third class comprises full-reference methods, which use all of the original reference video information and score the test video by comparing the degree of difference between the original reference video and the noise-polluted video. Because the full reference information is used, this class achieves the best accuracy of the three. Around such methods the prior art has proposed many technical solutions, for example:
1. Seshadrinathan et al. decompose the video in the spatio-temporal domain with three-dimensional Gabor filters and compute the test-video quality by measuring the filter-coefficient differences between reference and test video. Manasa et al. observe that video noise perturbs the optical flow between adjacent frames and compute the quality of the test video from the optical-flow similarity between reference and test video. However, the computational complexity of these methods is too high and their running time too long, which limits their practical application.
2. Peng et al. propose a video quality assessment method based on spatio-temporal texture features. Chandler et al. treat the video as a three-dimensional cube and compute the difference between test and reference video on two-dimensional planes from different viewpoints, i.e. the xy, yt, and xt planes. Because these methods do not account for human motion perception, their results are hard to reconcile with human subjective perception, and the stability of the assessment is poor.
Summary of the invention
The present invention aims at the deficiencies of the above full-reference methods and, drawing on the motion-perception characteristics of the human visual system, proposes a full-reference video quality assessment method based on motion trajectories, so as to improve the accuracy of video quality assessment while reducing computational complexity.
The technical idea of the invention is to divide the test video into multiple sequence segments, estimate quality separately from the static and dynamic information of each segment, take the product of static and dynamic quality as the segment's overall quality estimate, and finally take the mean of the segment qualities as the final quality estimate of the test video. The implementation steps are as follows:
(1) Divide the test video V^d under assessment and its reference video V^r into n sequence segments of equal length: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r the i-th reference sequence segment, i = 1, 2, ..., n, and n ≥ 2 is an integer;
(2) Compute the static quality estimate Qs_i of the i-th test sequence segment V_i^d;
(3) Compute the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d:
(3a) Sample points from the first frame of V_i^d, obtaining the set P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k denotes the k-th sampled point, k = 1, ..., N, N is the number of sampled points, and N ≥ 5 is an integer;
(3b) Compute the dense optical flow between every pair of adjacent frames of the reference segment V_i^r and the test segment V_i^d: F^r = {F_1^r, ..., F_{L-1}^r}, F^d = {F_1^d, ..., F_{L-1}^d}, where F^r is the flow set of V_i^r, F_m^r the flow between its m-th and (m+1)-th frames, F^d the flow set of V_i^d, F_m^d the flow between its m-th and (m+1)-th frames, m = 1, ..., L−1, L is the number of frames per segment, and L ≥ 12 is an integer;
(3c) Compute the motion trajectories R of the sampled points within the reference segment V_i^r;
(3d) Reject invalid trajectories from R to obtain the valid trajectories R'_1, ..., R'_{n'};
(3e) For the j-th valid trajectory R'_j, j = 1, ..., n', where n' denotes the number of valid trajectories, extract the optical-flow feature H_j^r of the reference segment V_i^r along R'_j and the optical-flow feature H_j^d of the test segment V_i^d along R'_j;
(3f) Compute the similarity deviation of the two optical-flow features H_j^r and H_j^d of (3e), and take its mean as the dynamic quality qt_j of the test segment V_i^d along R'_j;
(3g) Take the sum of the mean and the standard deviation of the dynamic qualities {qt_1, ..., qt_{n'}} of V_i^d along all valid trajectories as the dynamic quality value Qt_i of the test segment V_i^d;
(4) Take the product of the static quality value Qs_i and the dynamic quality value Qt_i of the i-th test sequence segment as its overall quality Q_i;
(5) Take the mean of all segment qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} as the quality value Q of the test video V^d;
(6) Judge the quality of the test video from the quality value Q:
if Q = 0, the test video is not polluted by noise;
if 0 < Q ≤ 0.005, the test video is slightly polluted by noise;
if 0.005 < Q ≤ 0.01, the test video is moderately polluted by noise;
if Q > 0.01, the test video is severely polluted by noise.
Compared with the prior art, the present invention has the following advantages:
1) The proposed full-reference method treats the quality of a video as the accumulation of the qualities of its sequence segments, which better matches how people perceive video quality.
2) The invention divides the video into a static part and a dynamic part and estimates their qualities separately, while for the dynamic part it attends only to how the moving content changes along its motion trajectories. This agrees better with the motion-perception process of human vision, so the assessment results are more consistent with human subjective scores.
3) The invention involves neither complex three-dimensional filtering nor the storage of long video sections, and thus reduces computational complexity and memory consumption.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention.
Specific embodiment
The present invention is described in further detail below with reference to the drawing.
Referring to Fig. 1, the specific implementation steps of the invention are as follows:
Step 1: divide the reference video V^r and the test video V^d into sequence segments of equal length.
A test video V^d and its reference video V^r typically have 150 to 600 frames. This example uses a segment length of L = 18 frames and divides the test and reference videos into n sequence segments each: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r the i-th reference sequence segment, i = 1, 2, ..., n, and n ≥ 2 is an integer.
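As a sketch of step 1, assuming videos are held as (frames, height, width) NumPy arrays (the patent does not prescribe a data layout) and that trailing frames short of one full segment are simply dropped (also not stated in the text):

```python
import numpy as np

def split_into_segments(video: np.ndarray, seg_len: int = 18) -> list:
    """Split a (T, H, W) video array into non-overlapping segments of
    seg_len frames each, dropping any trailing remainder."""
    n = video.shape[0] // seg_len
    return [video[i * seg_len:(i + 1) * seg_len] for i in range(n)]

# Example: a 150-frame video yields 8 segments of 18 frames (6 frames dropped).
video = np.zeros((150, 64, 64))
segments = split_into_segments(video)
```

The same call applied to the reference video gives the paired segments V_i^r.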
Step 2: compute the static quality estimate Qs_i of the i-th test sequence segment V_i^d.
The static information is presented as the image content of each frame, so the static quality can be estimated by applying a still-image quality metric to every frame of V_i^d. Existing still-image metrics include multi-scale structural similarity (MS-SSIM), feature similarity (FSIM), and gradient magnitude similarity deviation (GMSD); this example estimates the static quality with the gradient similarity deviation, implemented as follows:
(2a) For each frame I of the i-th reference segment V_i^r and the i-th test segment V_i^d, extract the gradient components Gx and Gy:
Gx = I * dx, Gy = I * dy,
where * denotes linear convolution and dx, dy are the horizontal and vertical convolution kernels, respectively;
(2b) Compute the gradient magnitude from the result of (2a): GM = √(Gx² + Gy²);
(2c) Denote the gradient magnitude of the w-th frame of the i-th reference segment V_i^r by GM_w^r and that of the w-th frame of the i-th test segment V_i^d by GM_w^d. The gradient similarity deviation of the w-th frame of V_i^d is then
GMSD_w = Stdev((2·GM_w^r ⊙ GM_w^d + c) ⊘ (GM_w^r ⊙ GM_w^r + GM_w^d ⊙ GM_w^d + c)),
where w = 1, 2, ..., L, ⊙ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, c is a constant set to 255, and Stdev(·) denotes the standard deviation of its argument;
(2d) Take the mean of the gradient similarity deviations of all frames of the i-th test segment V_i^d as its static quality estimate Qs_i.
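Steps (2a)-(2d) can be sketched with NumPy as below. Two assumptions to note: the exact convolution kernels dx and dy are not reproduced in this text, so `np.gradient` stands in for them, and the GMS map follows the standard gradient-magnitude-similarity form implied by the element-wise operations of step (2c):

```python
import numpy as np

def gmsd_frame(ref: np.ndarray, dst: np.ndarray, c: float = 255.0) -> float:
    """Gradient-magnitude similarity deviation between one reference frame
    and one test frame; 0.0 when the frames are identical."""
    gy_r, gx_r = np.gradient(ref.astype(float))    # step (2a): gradients
    gy_d, gx_d = np.gradient(dst.astype(float))
    gm_r = np.sqrt(gx_r ** 2 + gy_r ** 2)          # step (2b): magnitudes
    gm_d = np.sqrt(gx_d ** 2 + gy_d ** 2)
    gms = (2.0 * gm_r * gm_d + c) / (gm_r ** 2 + gm_d ** 2 + c)  # step (2c)
    return float(np.std(gms))                      # Stdev of the GMS map

def static_quality(ref_seg, dst_seg) -> float:
    """Step (2d): mean per-frame deviation over the frames of a segment."""
    return float(np.mean([gmsd_frame(r, d) for r, d in zip(ref_seg, dst_seg)]))

frame = (np.arange(64.0 * 64.0) % 17.0).reshape(64, 64)
seg = np.stack([frame] * 3)                        # toy 3-frame segment
identical = static_quality(seg, seg)               # 0.0: no distortion
```

A distorted segment yields a positive Qs_i, consistent with larger Q meaning heavier pollution in step 5.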
Step 3: compute the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d.
The dynamic information is the moving content of the video; its quality is estimated as follows:
(3a) Sample points from the first frame of V_i^d. Sampling schemes include dense interval sampling and sampling based on corner response; this example uses corner-response sampling and obtains the point set P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k is the k-th sampled point, k = 1, ..., N, N is the number of sampled points, and N ≥ 5 is an integer;
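Step (3a) leaves the choice of sampler open: the example in the text uses corner-response sampling, but the dense-interval variant it also mentions is easy to sketch (the grid spacing `step` is an assumed parameter, not from the patent):

```python
import numpy as np

def grid_sample_points(h: int, w: int, step: int = 16) -> np.ndarray:
    """Dense interval sampling: one point every `step` pixels, returned as
    an (N, 2) array of (x, y) coordinates. A corner detector (e.g. a
    Harris-style response) could replace this to match the text's example."""
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    return np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

points = grid_sample_points(64, 64, step=16)   # 4 x 4 grid of points
```

Either sampler just has to deliver N ≥ 5 starting points for the trajectories of step (3c).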
(3b) Compute the dense optical flow between every pair of adjacent frames of the reference segment V_i^r and the test segment V_i^d. Optical-flow methods include the Farneback, BA, and LK algorithms; this example uses the Farneback algorithm, giving
F^r = {F_1^r, ..., F_{L-1}^r}, F^d = {F_1^d, ..., F_{L-1}^d},
where F^r is the flow set of V_i^r, F_m^r the flow between its m-th and (m+1)-th frames, F^d the flow set of V_i^d, F_m^d the flow between its m-th and (m+1)-th frames, m = 1, ..., L−1, L is the number of frames per segment, and L ≥ 12 is an integer;
(3c) Compute the motion trajectories R of the sampled points within the reference segment V_i^r:
(3c1) Let (x_1^k, y_1^k) denote the coordinates of the k-th sampled point p_k from step (3a). As m runs from 1 to L−1, iterate the new coordinates by
x_{m+1}^k = x_m^k + u_m^r(x_m^k, y_m^k), y_{m+1}^k = y_m^k + v_m^r(x_m^k, y_m^k),
where x_m^k and y_m^k are the x- and y-coordinates of p_k in frame m, and u_m^r(x_m^k, y_m^k) and v_m^r(x_m^k, y_m^k) are the horizontal and vertical components of the reference flow F_m^r at coordinate (x_m^k, y_m^k);
(3c2) Connect the L points of (3c1) in temporal order into one series, the k-th motion trajectory R_k = {(x_w^k, y_w^k)}, w = 1, 2, ..., L;
(3c3) Running k from 1 to N in (3c2) constitutes the motion trajectories R = {R_1, R_2, ..., R_k, ..., R_N} of the i-th reference segment V_i^r.
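The iteration of (3c1)-(3c3) — chaining each point through the reference flow fields — can be sketched as below. Nearest-pixel rounding when sampling the flow, and clamping to the image border, are simplifications assumed here (the text does not say how sub-pixel positions are handled):

```python
import numpy as np

def track_point(p0, flows):
    """Iterate p_{m+1} = p_m + F_m(p_m) through a list of (H, W, 2) flow
    fields holding (horizontal, vertical) components; returns the L points
    of one motion trajectory R_k."""
    traj = [tuple(p0)]
    x, y = p0
    for f in flows:
        xi = int(round(min(max(x, 0), f.shape[1] - 1)))  # clamp, then round
        yi = int(round(min(max(y, 0), f.shape[0] - 1)))
        x = x + f[yi, xi, 0]   # horizontal component u_m(x_m, y_m)
        y = y + f[yi, xi, 1]   # vertical component v_m(x_m, y_m)
        traj.append((x, y))
    return traj

# A constant flow of (1, 2) px/frame moves the point linearly.
flows = [np.full((32, 32, 2), (1.0, 2.0)) for _ in range(5)]
traj = track_point((4.0, 4.0), flows)
```

With L−1 = 5 flow fields the trajectory holds L = 6 points, matching step (3c2).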
(3d) Reject invalid trajectories from R to obtain the valid trajectories.
Since only the moving content matters when computing the dynamic quality, completely stationary trajectories must be rejected; moreover, because the optical-flow algorithm has errors, the obtained trajectories may contain erroneous and duplicated ones. This example rejects erroneous and duplicated trajectories by setting three thresholds, as follows:
(3d1) Set the adjacency threshold D1 = 10, the standard-deviation threshold D2 = 10, and the duplication threshold D3 = 345.6, and judge whether the k-th trajectory R_k is invalid by the following conditions:
if R_k is completely stationary within the reference segment V_i^r, R_k is invalid;
if the Euclidean distance between any two adjacent points of R_k exceeds the adjacency threshold D1, R_k is invalid;
if the standard deviation of R_k exceeds the standard-deviation threshold D2, R_k is invalid;
if the Euclidean distance between R_k and any trajectory other than R_k itself does not exceed the duplication threshold D3, R_k is invalid (a duplicate);
(3d2) Reject the invalid trajectories from R to obtain the valid trajectories R'_1, ..., R'_{n'}.
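Three of the four rejection tests of (3d1) can be sketched as a single predicate; how the trajectory's standard deviation is taken is not spelled out in the text, so per-coordinate over all points is assumed here, and the duplicate-track test against D3 (which needs all the other tracks) is omitted:

```python
import numpy as np

def is_invalid(track, d1=10.0, d2=10.0):
    """Reject a track that is completely stationary, has an adjacent jump
    larger than D1, or has a coordinate standard deviation above D2."""
    t = np.asarray(track, dtype=float)              # shape (L, 2)
    steps = np.linalg.norm(np.diff(t, axis=0), axis=1)
    if np.all(steps == 0):                          # totally stationary
        return True
    if np.any(steps > d1):                          # adjacent jump beyond D1
        return True
    if np.std(t, axis=0).max() > d2:                # spread beyond D2
        return True
    return False

# A stationary track is rejected; a smooth diagonal one is kept.
rejected = is_invalid([(5.0, 5.0)] * 6)
kept = not is_invalid([(float(i), float(i)) for i in range(6)])
```

The surviving tracks are the R'_j fed to the feature extraction of step (3e).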
(3e) For the j-th valid trajectory R'_j, extract the optical-flow feature H_j^r of the reference segment V_i^r along R'_j and the optical-flow feature H_j^d of the test segment V_i^d along R'_j, where j = 1, ..., n' and n' denotes the number of valid trajectories:
(3e1) Let (x_m^j, y_m^j) denote the m-th coordinate of trajectory R'_j. Project the M × M region of the reference flow F_m^r centered at (x_m^j, y_m^j) onto a B-bin reference flow histogram h_m^r. Running m from 1 to L−1 constitutes the reference flow histogram set {h_1^r, ..., h_{L-1}^r}, where M = 48 and B = 32;
(3e2) Accumulate the histograms of the reference set dimension by dimension to obtain the optical-flow feature H_j^r of V_i^r along R'_j;
(3e3) Likewise project the M × M region of the test flow F_m^d centered at (x_m^j, y_m^j) onto a B-bin test flow histogram h_m^d, running m from 1 to L−1 to constitute the test flow histogram set {h_1^d, ..., h_{L-1}^d}, where M = 48 and B = 32;
(3e4) Accumulate the histograms of the test set dimension by dimension to obtain the optical-flow feature H_j^d of V_i^d along R'_j.
(3f) Compute the similarity deviation d_j of the two optical-flow features H_j^r and H_j^d of (3e) by element-wise multiplication and division of the features, where the constant T, set to 0.0001, prevents a zero denominator;
(3g) Compute the dynamic quality qt_j of the test segment V_i^d along R'_j:
qt_j = Mean(d_j),
where Mean(·) averages all elements of its argument and d_j is the similarity deviation of (3f);
(3h) Compute the dynamic quality value Qt_i of the test segment V_i^d as the sum of the mean and the standard deviation of the set {qt_1, ..., qt_{n'}} of dynamic qualities of V_i^d along all valid trajectories.
Step 4: compute the final quality value Q of the test video V^d.
(4a) From the static quality estimate Qs_i of step 2 and the dynamic quality estimate Qt_i of step 3, compute the quality Q_i of the i-th test segment V_i^d as
Q_i = Qs_i × Qt_i,
where × denotes scalar multiplication;
(4b) Running i from 1 to n, take the mean of all segment qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} as the quality value Q of the test video V^d.
Step 5: judge the quality of the test video from the quality value Q.
Since the computed quality value Q of each test sample lies between 0 and 1, with larger Q indicating heavier pollution, the test video can be judged by the size of Q:
if Q = 0, the test video is not polluted by noise;
if 0 < Q ≤ 0.005, the test video is slightly polluted by noise;
if 0.005 < Q ≤ 0.01, the test video is moderately polluted by noise;
if Q > 0.01, the test video is severely polluted by noise.
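Step 5's thresholding reads directly as a small function; the level names are informal labels for the four cases, not terms from the patent:

```python
def pollution_level(q: float) -> str:
    """Map the final quality value Q (0 <= Q <= 1, larger = worse) of
    step 5 to a noise-pollution level."""
    if q == 0:
        return "none"          # not polluted by noise
    if q <= 0.005:
        return "slight"
    if q <= 0.01:
        return "moderate"
    return "severe"
```

Note that the upper branches rely on the earlier returns, so each comparison implicitly carries the lower bound of its interval.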
The effect of the invention is further illustrated by the following simulation experiments:
Simulation 1: accuracy. On the public video quality assessment dataset LIVE, the Spearman rank-order correlation coefficient (SROCC) between the results of the present invention and human subjective perception is 0.88, better than existing full-reference video quality assessment algorithms, showing that the invention's results are more consistent with human perception.
Simulation 2: speed. On a test platform with Windows 10 and a 3.6 GHz i7 CPU, the invention needs only 17 s for a 250-frame test video of resolution 768×432. This is faster than most existing full-reference quality assessment algorithms, such as Seshadrinathan's MOVIE method, Manasa's optical-flow-similarity method, Chandler's ViS3 method, and Peng's method, showing that the invention has lower computational complexity than those algorithms.
The above description is only an example of the present invention and does not limit it in any way. Clearly, after understanding the content and principle of the invention, professionals in the field may make various modifications and changes in form and detail without departing from its principle and structure, but such modifications and changes based on the inventive concept remain within the scope of the claims of the present invention.

Claims (7)

1. A full-reference video quality assessment method based on motion trajectories, comprising:
(1) dividing the test video V^d under assessment and its reference video V^r into n sequence segments of equal length: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r the i-th reference sequence segment, i = 1, 2, ..., n, and n ≥ 2 is an integer;
(2) computing the static quality estimate Qs_i of the i-th test sequence segment V_i^d;
(3) computing the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d:
(3a) sampling points from the first frame of V_i^d to obtain the set P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k denotes the k-th sampled point, k = 1, ..., N, N is the number of sampled points, and N ≥ 5 is an integer;
(3b) computing the dense optical flow between every pair of adjacent frames of the reference segment V_i^r and the test segment V_i^d: F^r = {F_1^r, ..., F_{L-1}^r}, F^d = {F_1^d, ..., F_{L-1}^d}, where F^r is the flow set of V_i^r, F_m^r the flow between its m-th and (m+1)-th frames, F^d the flow set of V_i^d, F_m^d the flow between its m-th and (m+1)-th frames, m = 1, ..., L−1, L is the number of frames per segment, and L ≥ 12 is an integer;
(3c) computing the motion trajectories R of the sampled points within the reference segment V_i^r;
(3d) rejecting invalid trajectories from R to obtain the valid trajectories R'_1, ..., R'_{n'};
(3e) for the j-th valid trajectory R'_j, j = 1, ..., n', where n' denotes the number of valid trajectories, extracting the optical-flow feature H_j^r of the reference segment V_i^r along R'_j and the optical-flow feature H_j^d of the test segment V_i^d along R'_j;
(3f) computing the similarity deviation of the two optical-flow features H_j^r and H_j^d of (3e), and taking its mean as the dynamic quality qt_j of V_i^d along R'_j;
(3g) taking the sum of the mean and the standard deviation of the dynamic qualities {qt_1, ..., qt_{n'}} of V_i^d along all valid trajectories as the dynamic quality value Qt_i of V_i^d;
(4) taking the product of the static quality value Qs_i and the dynamic quality value Qt_i of the i-th test sequence segment as its overall quality Q_i;
(5) taking the mean of all segment qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} as the quality value Q of the test video V^d;
(6) judging the quality of the test video from the quality value Q:
if Q = 0, the test video is not polluted by noise;
if 0 < Q ≤ 0.005, the test video is slightly polluted by noise;
if 0.005 < Q ≤ 0.01, the test video is moderately polluted by noise;
if Q > 0.01, the test video is severely polluted by noise.
2. The method of claim 1, wherein the static quality estimate Qs_i of the i-th segment V_i^d in step (2) is computed as follows:
(2a) for each frame I of the i-th reference segment V_i^r and the i-th test segment V_i^d, extracting the gradient components Gx and Gy:
Gx = I * dx, Gy = I * dy,
where * denotes linear convolution and dx, dy are the horizontal and vertical convolution kernels, respectively;
(2b) computing the gradient magnitude from the result of (2a): GM = √(Gx² + Gy²);
(2c) denoting the gradient magnitude of the w-th frame of V_i^r by GM_w^r and that of the w-th frame of V_i^d by GM_w^d, the gradient similarity deviation of the w-th frame of V_i^d being
GMSD_w = Stdev((2·GM_w^r ⊙ GM_w^d + c) ⊘ (GM_w^r ⊙ GM_w^r + GM_w^d ⊙ GM_w^d + c)),
where w = 1, 2, ..., L, ⊙ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, c is a constant set to 255, and Stdev(·) denotes the standard deviation of its argument;
(2d) taking the mean of the gradient similarity deviations of all frames of V_i^d as its static quality estimate Qs_i.
3. The method of claim 1, wherein the motion trajectories R of the sampled points in the i-th reference segment V_i^r in step (3c) are computed as follows:
(3c1) letting (x_1^k, y_1^k) denote the coordinates of the k-th sampled point p_k from step (3a), and as m runs from 1 to L−1 iterating the new coordinates by
x_{m+1}^k = x_m^k + u_m^r(x_m^k, y_m^k), y_{m+1}^k = y_m^k + v_m^r(x_m^k, y_m^k),
where x_m^k and y_m^k are the x- and y-coordinates of p_k in frame m, and u_m^r(x_m^k, y_m^k) and v_m^r(x_m^k, y_m^k) are the horizontal and vertical components of the reference flow F_m^r at coordinate (x_m^k, y_m^k);
(3c2) connecting the L points of (3c1) in temporal order into one series, the k-th motion trajectory R_k = {(x_w^k, y_w^k)}, w = 1, 2, ..., L;
(3c3) running k from 1 to N in (3c2) to constitute the motion trajectories R = {R_1, R_2, ..., R_k, ..., R_N} of the i-th reference segment V_i^r.
4. The method of claim 1, wherein the invalid trajectories are rejected from R in step (3d) to obtain the valid trajectories as follows:
(3d1) setting the adjacency threshold D1 = 10, the standard-deviation threshold D2 = 10, and the duplication threshold D3 = 345.6, and judging whether the k-th trajectory R_k is invalid by the following conditions:
if R_k is completely stationary within the reference segment V_i^r, R_k is invalid;
if the Euclidean distance between any two adjacent points of R_k exceeds the adjacency threshold D1, R_k is invalid;
if the standard deviation of R_k exceeds the standard-deviation threshold D2, R_k is invalid;
if the Euclidean distance between R_k and any trajectory other than R_k itself does not exceed the duplication threshold D3, R_k is invalid;
(3d2) rejecting the invalid trajectories from R to obtain the valid trajectories R'_1, ..., R'_{n'}.
5. The method of claim 1, wherein the optical-flow feature H_j^r of the reference segment V_i^r along the valid trajectory R'_j is extracted in step (3e) as follows:
(3e1) letting (x_m^j, y_m^j) denote the m-th coordinate of trajectory R'_j, projecting the M × M region of the reference flow F_m^r centered at (x_m^j, y_m^j) onto a B-bin reference flow histogram h_m^r, and running m from 1 to L−1 to obtain the reference flow histogram set {h_1^r, ..., h_{L-1}^r}, where M = 48 and B = 32;
(3e2) accumulating the histograms of the reference set dimension by dimension as the optical-flow feature H_j^r of V_i^r along R'_j.
6. The method of claim 1, wherein the optical-flow feature H_j^d of the test segment V_i^d along the valid trajectory R'_j is extracted in step (3e) as follows:
(3e3) letting (x_m^j, y_m^j) denote the m-th coordinate of trajectory R'_j, projecting the M × M region of the test flow F_m^d centered at (x_m^j, y_m^j) onto a B-bin test flow histogram h_m^d, and running m from 1 to L−1 to obtain the test flow histogram set {h_1^d, ..., h_{L-1}^d}, where M = 48 and B = 32;
(3e4) accumulating the histograms of the test set dimension by dimension as the optical-flow feature H_j^d of V_i^d along R'_j.
7. The method as described in claim 1, wherein in step (3f) the similarity deviation between the two optical-flow features extracted in step (3e) is calculated as follows:
(3f1) Compute the similarity deviation dj of the reference and test optical-flow features by combining the two feature matrices element-wise: one operator performs element-wise multiplication, the other performs element-wise division, and T is a small constant, set to 0.0001, that prevents the denominator of the division from being zero.
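The exact similarity-deviation formula appears only as an image in the original publication, so the ratio form below is purely an assumption assembled from the operations the claim names: element-wise multiplication, element-wise division, and the constant T = 0.0001 guarding the denominator.

```python
import numpy as np

T = 0.0001  # small constant from the claim; keeps the denominator nonzero

def similarity_deviation(f_ref, f_test):
    """Assumed SSIM-style similarity deviation between two optical-flow
    features: per-element ratio of twice the product to the sum of
    squares, stabilized by T, reported as mean shortfall from 1."""
    f_ref = np.asarray(f_ref, dtype=float)
    f_test = np.asarray(f_test, dtype=float)
    sim = (2.0 * f_ref * f_test + T) / (f_ref**2 + f_test**2 + T)
    return float(np.mean(1.0 - sim))  # 0 for identical features
```

Identical features give a deviation of 0; features with no overlapping energy approach 1.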
CN201810812287.6A 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail Active CN108900864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810812287.6A CN108900864B (en) 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail


Publications (2)

Publication Number Publication Date
CN108900864A true CN108900864A (en) 2018-11-27
CN108900864B CN108900864B (en) 2019-12-10

Family

ID=64351584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810812287.6A Active CN108900864B (en) 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail

Country Status (1)

Country Link
CN (1) CN108900864B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365966A (en) * 2019-06-11 2019-10-22 北京航空航天大学 A kind of method for evaluating video quality and device based on form
CN110880184A (en) * 2019-10-03 2020-03-13 上海淡竹体育科技有限公司 Method and device for carrying out automatic camera inspection based on optical flow field
CN111583304A (en) * 2020-05-09 2020-08-25 南京邮电大学 Feature extraction method for optimizing motion trail of key characters in video
CN112395542A (en) * 2020-11-19 2021-02-23 西安电子科技大学 Method for evaluating full-track position error
CN114449343A (en) * 2022-01-28 2022-05-06 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN117041625A (en) * 2023-08-02 2023-11-10 成都梵辰科技有限公司 Method and system for constructing ultra-high definition video image quality detection network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101420618A (en) * 2008-12-02 2009-04-29 西安交通大学 Adaptive telescopic video encoding and decoding construction design method based on interest zone
CN101489031A (en) * 2009-01-16 2009-07-22 西安电子科技大学 Adaptive frame rate up-conversion method based on motion classification
US7852948B2 (en) * 2005-05-27 2010-12-14 Fujifilm Corporation Moving picture real-time communications terminal, control method for moving picture real-time communications terminal, and control program for moving picture real-time communications terminal
CN104159104A (en) * 2014-08-29 2014-11-19 电子科技大学 Full-reference video quality evaluation method based on multi-stage gradient similarity
CN105812788A (en) * 2016-03-24 2016-07-27 北京理工大学 Video stability quality assessment method based on interframe motion amplitude statistics
US9619714B2 (en) * 2015-09-10 2017-04-11 Sony Corporation Device and method for video generation
CN107318014A (en) * 2017-07-25 2017-11-03 西安电子科技大学 The video quality evaluation method of view-based access control model marking area and space-time characterisation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BO-HAO CHEN ET AL: "Accurate Detection of Moving Objects in Traffic Video Streams over Limited Bandwidth Networks", 《2013 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA》 *
LU FANGFANG: "Video quality assessment based on spatio-temporal distortion measures", 《Journal of Shanghai University of Electric Power》 *


Also Published As

Publication number Publication date
CN108900864B (en) 2019-12-10

Similar Documents

Publication Publication Date Title
CN108900864A Full-reference video quality assessment method based on motion trajectory
US7590287B2 Method for generating a quality oriented significance map for assessing the quality of an image or video
CN109522854A Pedestrian traffic statistics method based on deep learning and multi-target tracking
CN104079925B Objective quality assessment method for ultra-high-definition video images based on visual perception characteristics
CN106815839B Blind image quality assessment method
CN104781848B Image monitoring apparatus for estimating gradient of singleton, and method therefor
CN103533367B No-reference video quality assessment method and device
Choutas et al. Accurate 3D body shape regression using metric and semantic attributes
CN106341677B Virtual-viewpoint video quality assessment method
CN108537157B Video scene judgment method and device based on artificial-intelligence classification
CN103763552A No-reference quality assessment method for stereoscopic images based on visual perception characteristics
CN110738154A Pedestrian fall detection method based on human pose estimation
CN107742307A Transmission-line galloping feature extraction and parameter analysis method based on an improved frame-difference method
CN105338343A No-reference stereoscopic image quality assessment method based on binocular perception
CN102034267A Attention-based three-dimensional reconstruction method for targets
CN105787895B Statistical compressed-sensing image reconstruction method based on hierarchical GMM
CN111160210A Video-based water flow velocity detection method and system
CN104036485A Method for detecting image resampling tampering
CN104408741A Video global motion estimation method with sequential consistency constraint
CN105049838A Objective assessment method for compressed stereoscopic video quality
CN109902550A Pedestrian attribute recognition method and device
CN104063871A Image sequence scene segmentation method for wearable devices
CN108921023A Method and device for determining low-quality portrait data
CN108259893B Virtual reality video quality assessment method based on a two-stream convolutional neural network
CN104574424B No-reference image blur assessment method based on multi-resolution DCT edge gradient statistics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant