CN108900864B - Full-reference video quality evaluation method based on motion trail - Google Patents

Full-reference video quality evaluation method based on motion trail

Info

Publication number
CN108900864B (application CN201810812287.6A)
Authority
CN
China
Prior art keywords
track
test
sequence fragment
optical flow
quality
Prior art date
Legal status
Active
Application number
CN201810812287.6A
Other languages
Chinese (zh)
Other versions
CN108900864A (en)
Inventor
吴金建
刘永旭
谢雪梅
石光明
Current Assignee
Xian University of Electronic Science and Technology
Original Assignee
Xian University of Electronic Science and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Electronic Science and Technology
Priority to CN201810812287.6A
Publication of CN108900864A
Application granted
Publication of CN108900864B
Status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/23418 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics (server side)
    • H04N 17/00 — Diagnosis, testing or measuring for television systems or their details
    • H04N 21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream (client side)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a full-reference video quality evaluation method based on motion trajectories, which mainly addresses the low evaluation accuracy and high computational complexity of the prior art. The implementation scheme is as follows: divide the test video and the reference video into sequence segments of equal length; calculate the static quality of each test sequence segment; point-sample the first frame of the test sequence segment; calculate the optical flows of the test and reference sequence segments; calculate the motion trajectories of the sample points in the reference sequence segment and eliminate the invalid trajectories; extract the optical-flow features of the test and reference sequence segments along the motion trajectories and calculate their feature difference; calculate the dynamic quality of the test sequence segment; and compute the quality of the test video from the static and dynamic qualities of the sequence segments. The invention improves the accuracy of quality evaluation while reducing computational complexity, and can be used for quality detection and evaluation in video compression, transmission, and processing.

Description

Full-reference video quality evaluation method based on motion trail
Technical Field
The invention belongs to the technical field of image and video processing, and particularly relates to a full-reference video quality evaluation method which can be used for quality detection and evaluation in video compression, transmission and processing.
Background Art
With the development of handheld smart devices and advances in network communication technology, WeChat video, mobile-phone photography, live streaming, and the like have become part of daily life. Because of the limits of network bandwidth and storage-device capacity, video usually must be compression-encoded before transmission and processing. On the one hand, owing to hardware limitations, video is inevitably disturbed by noise during capture, which degrades its quality; on the other hand, compression also loses some of the original information. Meanwhile, during transmission, network packet loss can occur at any time depending on the network environment, affecting subsequent viewing and processing and even causing key information to be missed or misjudged. Although subjective monitoring of video quality by humans is accurate, it consumes a great deal of time and labor, is inefficient, and cannot be automated.
Given these drawbacks of manual judgment, researchers have worked to establish objective video quality evaluation methods to replace it. According to the amount of reference-video information required, existing objective video quality evaluation methods fall into three categories:
The first category: no-reference video quality evaluation methods. These require no original reference video at all and predict the quality of the test video directly with a computational model. Because no reference information is available, the evaluation results are poor, which limits practical application.
The second category: partial-reference video quality evaluation methods. These extract a certain number of features from the original reference video and evaluate the quality of the test video by comparing the feature difference between the reference and test videos. Because only part of the reference information is available, the evaluation results are still unsatisfactory and the range of application is narrow.
The third category: full-reference video quality evaluation methods. These require all of the original reference video information and produce a quality score for the test video by comparing the degree of difference between the original reference video and the noise-contaminated video. Because all reference information is used, their evaluation accuracy is the best of the three categories. Around such methods the prior art has proposed numerous technical solutions, for example:
Seshadrinathan et al. adopt a three-dimensional Gabor filter to decompose the video in the spatio-temporal domain and calculate the quality of the test video by measuring the difference in filter coefficients between the reference and test videos; Manasa et al. hold that video noise causes differences in the optical flow between adjacent frames and calculate the quality of the test video by measuring the degree of optical-flow similarity between the reference and test videos. However, the computational complexity of these methods is too high and their computation time too long, which limits their practical application.
Peng et al. propose a video quality evaluation method based on spatio-temporal texture features; Chandler et al. regard the video as a three-dimensional cube and compute the difference between the test and reference videos on two-dimensional planes at different orientations, namely the xy, yt, and xt planes. However, these methods do not take human motion-perception characteristics into account, so it is difficult for their quality evaluations to remain consistent with human subjective perception, and their stability is poor.
Disclosure of Invention
The invention aims to provide a full-reference video quality evaluation method based on motion trajectories that addresses the above deficiencies of full-reference video quality evaluation and incorporates the motion-perception characteristics of the human visual system, thereby improving the accuracy of video quality evaluation while reducing computational complexity.
The technical idea of the invention is as follows: divide the test video into several sequence segments; estimate the quality of the static and dynamic information of each sequence segment; take the product of static quality and dynamic quality as the total quality estimate of the segment; and finally take the mean of the segment qualities as the final quality estimate of the test video. The implementation steps are as follows:
(1) Divide the test video V^d and its reference video V^r into n sequence segments of equal length: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r denotes the i-th reference sequence segment, i = 1, 2, ..., n, and n is an integer not less than 2;
(2) Calculate the static quality estimate Qs_i of the i-th test sequence segment V_i^d;
(3) Calculate the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d:
(3a) Point-sample the first frame of V_i^d to obtain a set of sample points P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k denotes the k-th sample point, k = 1, ..., N, N is the number of sampled points, N ≥ 5, and N is an integer;
(3b) Separately compute the dense optical flow between each pair of adjacent frames of the reference sequence segment V_i^r and of the test sequence segment V_i^d: F^r = {F_1^r, ..., F_m^r, ..., F_(L-1)^r}, F^d = {F_1^d, ..., F_m^d, ..., F_(L-1)^d}, where F^r denotes the optical-flow set of the reference sequence segment V_i^r and F_m^r the optical flow between its m-th and (m+1)-th frames, F^d denotes the optical-flow set of the test sequence segment V_i^d and F_m^d the optical flow between its m-th and (m+1)-th frames, m = 1, ..., L-1, L is the number of frames in a sequence segment, L ≥ 12, and L is an integer;
(3c) Compute the motion trajectories R of the sample points in the reference sequence segment V_i^r;
(3d) Remove the invalid trajectories from the motion trajectories R to obtain the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')};
(3e) For the j-th valid trajectory R'_j, j = 1, ..., N', where N' denotes the number of valid trajectories, extract the optical-flow feature H_j^r of the reference sequence segment V_i^r along R'_j and the optical-flow feature H_j^d of the test sequence segment V_i^d along R'_j;
(3f) Compute the similarity deviation between the two optical-flow features H_j^r and H_j^d of (3e), and take its mean as the dynamic quality qt_j of the test sequence segment V_i^d along the valid trajectory R'_j;
(3g) Take the sum of the mean and the standard deviation of the dynamic qualities {qt_1, ..., qt_j, ..., qt_(N')} of the test sequence segment V_i^d along all valid trajectories as its dynamic quality value Qt_i;
(4) Take the product of the static quality value Qs_i and the dynamic quality value Qt_i of the i-th test sequence segment as the total quality Q_i of the i-th test sequence segment V_i^d;
(5) Take the mean of the qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} of all test sequence segments as the quality value Q of the test video V^d;
(6) Judge the quality of the test video according to the quality value Q:
If Q = 0, the test video is not contaminated by noise;
If 0 < Q ≤ 0.005, the test video is lightly contaminated by noise;
If 0.005 < Q ≤ 0.01, the test video is moderately contaminated by noise;
If Q > 0.01, the test video is heavily contaminated by noise.
Compared with the prior art, the invention has the following advantages:
1) The full-reference video quality evaluation method of the invention treats the video quality as the accumulation of the qualities of a sequence of video segments, which better matches how humans perceive video quality.
2) The invention divides the video into a static part and a dynamic part for separate quality estimation and, for the dynamic part, attends only to the change of motion-content information along the motion trajectories. This is more consistent with visual motion perception, so the quality evaluation results agree better with human subjective evaluation.
3) Because the invention requires neither complex three-dimensional filtering nor the storage of long video segments, it reduces computational complexity and memory consumption.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings.
Referring to FIG. 1, the specific implementation steps of the present invention are as follows:
Step 1: divide the reference video V^r and the test video V^d into sequence segments of equal length.
The test video V^d and the reference video V^r typically contain 150 to 600 frames. This example divides the test and reference videos into n sequence segments of L = 18 frames each: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r denotes the i-th reference sequence segment, and i = 1, 2, ..., n.
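As a concrete illustration, this segmentation reduces to slicing the frame list. The following minimal Python sketch assumes the frames are already decoded into arrays; dropping a tail shorter than L frames is an assumption, since the patent does not say how a remainder is handled.

```python
def split_into_segments(frames, L=18):
    # Step 1: split a frame list into n segments of L frames each.
    # Dropping a tail shorter than L frames is an assumption.
    n = len(frames) // L
    return [frames[i * L:(i + 1) * L] for i in range(n)]

# ref_frames / test_frames are lists of grayscale frames (H x W arrays):
# ref_segments = split_into_segments(ref_frames)    # {V_1^r, ..., V_n^r}
# test_segments = split_into_segments(test_frames)  # {V_1^d, ..., V_n^d}
```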
Step 2: calculate the static quality estimate Qs_i of the i-th test sequence segment V_i^d.
The static information is presented as the image of each frame, so the static quality can be measured by applying a still-image quality evaluation method to every frame of the test sequence segment V_i^d. Existing still-image quality evaluation methods include multi-scale structural similarity (MS-SSIM), feature similarity (FSIM), and gradient magnitude similarity deviation (GMSD); this example adopts the gradient magnitude similarity deviation, implemented as follows:
(2a) For each frame I of the i-th reference sequence segment V_i^r and of the i-th test sequence segment V_i^d, extract the gradient components Gx and Gy:
Gx = I * dx, Gy = I * dy,
where * denotes convolution and dx and dy are the convolution kernels in the horizontal and vertical directions, respectively;
(2b) From the result of (2a), compute the gradient magnitude GM:
GM = sqrt(Gx^2 + Gy^2);
(2c) Denote the gradient magnitude of the w-th frame of the i-th reference sequence segment V_i^r as GM_w^r, and that of the w-th frame of the i-th test sequence segment V_i^d as GM_w^d. The gradient magnitude similarity deviation of the i-th test sequence segment V_i^d at the w-th frame is then
S_w = Stdev((2 · GM_w^r ∘ GM_w^d + c) ⊘ ((GM_w^r)^2 + (GM_w^d)^2 + c)),
where w = 1, 2, ..., L, ∘ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, c is a constant set to 255, and Stdev(·) computes the standard deviation of the elements in parentheses;
(2d) Take the mean of the gradient magnitude similarity deviations of all frames of the i-th test sequence segment V_i^d as the static quality estimate Qs_i of the i-th test video sequence segment V_i^d.
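A minimal Python sketch of this per-frame GMSD computation with OpenCV and NumPy follows; the Prewitt-style kernels stand in for the patent's dx and dy, whose exact values appear only in the unreproduced figures.

```python
import cv2
import numpy as np

def gmsd(ref_frame, test_frame, c=255.0):
    # Gradient magnitude similarity deviation between two grayscale frames.
    # Prewitt-style kernels are an assumed stand-in for the patent's dx, dy.
    dx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], np.float32) / 3.0
    dy = dx.T

    def grad_mag(img):
        gx = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, dx)  # Gx = I * dx
        gy = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, dy)  # Gy = I * dy
        return np.sqrt(gx ** 2 + gy ** 2)                          # GM

    gm_r, gm_d = grad_mag(ref_frame), grad_mag(test_frame)
    gms = (2 * gm_r * gm_d + c) / (gm_r ** 2 + gm_d ** 2 + c)  # similarity map
    return float(gms.std())  # deviation: 0 when the frames are identical

# Static quality of segment i, as in (2d):
# Qs_i = np.mean([gmsd(r, d) for r, d in zip(ref_segment, test_segment)])
```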
Step 3: calculate the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d.
The dynamic information is the motion-content information in the video; the dynamic quality is estimated according to the following steps:
(3a) Point-sample the first frame of V_i^d. Candidate sampling schemes include dense interval sampling and sampling based on corner response values; this example adopts corner-response-based sampling, which yields a set of sample points P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k denotes the k-th sample point, k = 1, ..., N, N is the number of sampled points, N ≥ 5, and N is an integer;
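For instance, corner-response-based sampling can be realized with OpenCV's Shi-Tomasi detector; the detector choice and its parameter values here are assumptions, since the patent does not name a specific corner-response method.

```python
import cv2

# (3a) Point-sample the first frame of V_i^d by corner response.
first_frame = test_segment[0]  # grayscale H x W uint8 frame
corners = cv2.goodFeaturesToTrack(first_frame, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
P = corners.reshape(-1, 2)  # N x 2 array of (x, y) sample points, N >= 5
```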
(3b) Separately compute the dense optical flow between each pair of adjacent frames of the reference sequence segment V_i^r and of the test sequence segment V_i^d.
Available optical-flow methods include the Farneback, BA, and LK algorithms; this example adopts the Farneback algorithm, with the result:
F^r = {F_1^r, ..., F_m^r, ..., F_(L-1)^r}, F^d = {F_1^d, ..., F_m^d, ..., F_(L-1)^d},
where F^r denotes the optical-flow set of the reference sequence segment V_i^r and F_m^r the optical flow between its m-th and (m+1)-th frames, F^d denotes the optical-flow set of the test sequence segment V_i^d and F_m^d the optical flow between its m-th and (m+1)-th frames, m = 1, ..., L-1, L is the number of frames in a sequence segment, L ≥ 12, and L is an integer;
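OpenCV ships a Farneback implementation, so this step can be sketched as below; the parameter values are common defaults rather than values taken from the patent.

```python
import cv2

def dense_flows(frames):
    # (3b) Dense optical flow between each pair of adjacent frames.
    flows = []
    for m in range(len(frames) - 1):
        flow = cv2.calcOpticalFlowFarneback(
            frames[m], frames[m + 1], None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)  # H x W x 2 array: horizontal (u) and vertical (v)
    return flows

F_r = dense_flows(ref_segment)   # reference-flow set F^r
F_d = dense_flows(test_segment)  # test-flow set F^d
```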
(3c) Compute the motion trajectories R of the sample points in the reference sequence segment V_i^r:
(3c1) Let (x_k^m, y_k^m) denote the position of the k-th sample point p_k of step (3a) in the m-th frame. Taking m from 1 to L-1 in turn, the next position is computed iteratively by
x_k^(m+1) = x_k^m + u_m^r(x_k^m, y_k^m),
y_k^(m+1) = y_k^m + v_m^r(x_k^m, y_k^m),
where x_k^m denotes the x-coordinate of the k-th sample point p_k in the m-th frame, y_k^m its y-coordinate, u_m^r(x_k^m, y_k^m) the horizontal component of the m-th reference optical flow F_m^r at the coordinate (x_k^m, y_k^m), and v_m^r(x_k^m, y_k^m) its vertical component;
(3c2) Concatenate the L points of (3c1) in temporal order into a set that forms the k-th motion trajectory R_k = {(x_k^w, y_k^w)}, where w = 1, 2, ..., L;
(3c3) Taking k from 1 to N in (3c2) forms the motion trajectories of the i-th reference sequence segment V_i^r: R = {R_1, R_2, ..., R_k, ..., R_N};
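Under this notation, building a trajectory amounts to repeatedly sampling the reference flow at the current point and advancing by it. A minimal sketch follows, in which nearest-pixel flow sampling and border clamping are assumptions (the patent does not state its interpolation scheme).

```python
import numpy as np

def track_points(P, flows, height, width):
    # (3c) Propagate each sample point through the reference flows.
    trajectories = []
    for x0, y0 in P:
        x, y = float(x0), float(y0)
        traj = [(x, y)]
        for flow in flows:  # m = 1, ..., L-1
            xi = int(np.clip(round(x), 0, width - 1))
            yi = int(np.clip(round(y), 0, height - 1))
            u, v = flow[yi, xi]   # flow components at the current point
            x, y = x + u, y + v   # x_k^(m+1) = x_k^m + u, y_k^(m+1) = y_k^m + v
            traj.append((x, y))
        trajectories.append(np.array(traj))  # L points per trajectory
    return trajectories

h, w = ref_segment[0].shape[:2]
R = track_points(P, F_r, h, w)  # R = {R_1, ..., R_N}
```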
(3d) Eliminate the invalid trajectories from the motion trajectories R to obtain the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')}:
Only the motion content matters when computing the dynamic quality, so completely static trajectories must be eliminated. In addition, because the optical-flow algorithm has errors, the obtained trajectories may include erroneous and repeated ones; this example eliminates them by setting three thresholds. The specific implementation is as follows:
(3d1) Set the adjacency threshold D_1 = 10, the standard-deviation threshold D_2 = 10, and the repetition threshold D_3 = 345.6, and judge whether the k-th trajectory R_k is an invalid trajectory according to the following conditions:
If the k-th trajectory R_k is completely static in the reference sequence segment V_i^r, R_k is an invalid trajectory;
If the Euclidean distance between any two adjacent points of the k-th trajectory R_k exceeds the adjacency threshold D_1, R_k is an invalid trajectory;
If the standard deviation of the k-th trajectory R_k is greater than the standard-deviation threshold D_2, R_k is an invalid trajectory;
If the Euclidean distance from the k-th trajectory R_k to any trajectory other than itself does not exceed the repetition threshold D_3, R_k is an invalid trajectory;
(3d2) Removing the invalid trajectories from the motion trajectories R yields the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')};
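The four conditions translate directly into array tests. In the sketch below the threshold values follow the partially garbled numbers above and should be treated as assumptions, as should the reading of the distance between two trajectories as the norm of their pointwise difference.

```python
import numpy as np

def valid_tracks(R, d1=10.0, d2=10.0, d3=345.6):
    # (3d1)/(3d2) Keep only trajectories passing the four invalidity tests.
    kept = []
    for traj in R:
        steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        if np.allclose(steps, 0):          # completely static
            continue
        if steps.max() > d1:               # adjacent points too far apart
            continue
        if traj.std(axis=0).max() > d2:    # coordinate spread too large
            continue
        if any(np.linalg.norm(traj - other) <= d3  # repeat of a kept track
               for other in kept):
            continue
        kept.append(traj)
    return kept

R_valid = valid_tracks(R)  # R' = {R'_1, ..., R'_{N'}}
```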
(3e) For the j-th valid trajectory R'_j, extract the optical-flow feature H_j^r of the reference sequence segment V_i^r along R'_j and the optical-flow feature H_j^d of the test sequence segment V_i^d along R'_j, where j = 1, ..., N' and N' denotes the number of valid trajectories:
(3e1) Let (x_j^m, y_j^m) denote the m-th coordinate position of the trajectory R'_j. Project the region of size M centred at the point (x_j^m, y_j^m) of the m-th reference optical flow F_m^r onto a B-dimensional reference optical-flow histogram h_m^r. Taking m from 1 to L-1 forms the reference optical-flow histogram set {h_1^r, ..., h_m^r, ..., h_(L-1)^r}, where M = 48 and B = 32;
(3e2) Accumulate each histogram in the reference optical-flow histogram set bin by bin to obtain the optical-flow feature H_j^r of the reference sequence segment V_i^r along the valid trajectory R'_j;
(3e3) Likewise, let (x_j^m, y_j^m) denote the m-th coordinate position of the trajectory R'_j. Project the region of size M centred at the point (x_j^m, y_j^m) of the m-th test optical flow F_m^d onto a B-dimensional test optical-flow histogram h_m^d. Taking m from 1 to L-1 forms the test optical-flow histogram set {h_1^d, ..., h_m^d, ..., h_(L-1)^d}, where M = 48 and B = 32;
(3e4) Accumulate each histogram in the test optical-flow histogram set bin by bin to obtain the optical-flow feature H_j^d of the test sequence segment V_i^d along the valid trajectory R'_j;
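The patent does not spell out how a flow patch is "projected" onto a B-dimensional histogram; a magnitude-weighted orientation histogram over the M × M patch, used in the sketch below, is one plausible reading and is an assumption.

```python
import numpy as np

def flow_histogram(flow, cx, cy, M=48, B=32):
    # Project the M x M flow patch centred at (cx, cy) onto a B-bin
    # histogram (magnitude-weighted orientation binning is assumed).
    h, w = flow.shape[:2]
    half = M // 2
    x0, x1 = max(int(cx) - half, 0), min(int(cx) + half, w)
    y0, y1 = max(int(cy) - half, 0), min(int(cy) + half, h)
    patch = flow[y0:y1, x0:x1]
    angles = np.arctan2(patch[..., 1], patch[..., 0])  # flow orientation
    mags = np.linalg.norm(patch, axis=-1)              # flow magnitude
    hist, _ = np.histogram(angles, bins=B, range=(-np.pi, np.pi), weights=mags)
    return hist

def track_feature(flows, traj, B=32):
    # (3e2)/(3e4) Accumulate the per-frame histograms bin by bin.
    H = np.zeros(B)
    for m, (x, y) in enumerate(traj[:-1]):  # m = 1, ..., L-1
        H += flow_histogram(flows[m], x, y)
    return H

# H_r = track_feature(F_r, traj)  # feature of V_i^r along the trajectory
# H_d = track_feature(F_d, traj)  # feature of V_i^d along the trajectory
```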
(3f) Compute the similarity deviation d_j between the two optical-flow features H_j^r and H_j^d of (3e):
d_j = 1 − (2 · H_j^r ∘ H_j^d + T) ⊘ ((H_j^r)^2 + (H_j^d)^2 + T),
where ∘ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, and T is set to 0.0001 to prevent the denominator from being zero;
(3g) Compute the dynamic quality qt_j of the test sequence segment V_i^d along the valid trajectory R'_j:
qt_j = Mean(d_j),
where Mean(·) averages all elements in parentheses and d_j is the similarity deviation of (3f);
(3h) Compute the dynamic quality value Qt_i of the test sequence segment V_i^d:
Qt_i = Mean({qt_1, ..., qt_j, ..., qt_(N')}) + Stdev({qt_1, ..., qt_j, ..., qt_(N')}),
where qt_j is the dynamic quality of the test sequence segment V_i^d along the valid trajectory R'_j from (3g), and {qt_1, ..., qt_j, ..., qt_(N')} are the dynamic qualities of V_i^d along all valid trajectories.
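Putting (3f)–(3h) together gives the per-segment dynamic quality. In this sketch the "1 − similarity" ratio form of d_j is a reconstruction chosen to be consistent with Q = 0 meaning an uncontaminated video, since the original formula survives only as text around a lost figure.

```python
import numpy as np

T = 1e-4  # keeps the denominator away from zero, as in (3f)

def similarity_deviation(H_r, H_d):
    # Bin-wise deviation between the two features; the '1 - similarity'
    # form is a reconstruction, not a verbatim copy of the patent.
    sim = (2 * H_r * H_d + T) / (H_r ** 2 + H_d ** 2 + T)
    return 1.0 - sim  # zero wherever the features agree

qt = []
for traj in R_valid:
    H_r = track_feature(F_r, traj)
    H_d = track_feature(F_d, traj)
    qt.append(similarity_deviation(H_r, H_d).mean())  # qt_j = Mean(d_j)

Qt_i = float(np.mean(qt) + np.std(qt))  # mean plus standard deviation, (3h)
```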
Step 4: calculate the final quality value Q of the test video V^d.
(4a) From the static quality estimate Qs_i calculated in step 2 and the dynamic quality estimate Qt_i calculated in step 3, compute the quality estimate Q_i of the i-th test video sequence segment V_i^d by
Q_i = Qs_i × Qt_i,
where × denotes numerical multiplication;
(4b) Taking i from 1 to n, use the mean of the qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} of all test sequence segments as the quality value Q of the test video V^d: Q = (1/n) Σ_(i=1)^n Q_i.
Step 5: judge the quality of the test video according to the quality value Q.
Because the quality value Q calculated for each test sample lies between 0 and 1, and a larger Q indicates more serious contamination, the quality of the test video can be judged from the magnitude of Q:
If Q = 0, the test video is not contaminated by noise;
If 0 < Q ≤ 0.005, the test video is lightly contaminated by noise;
If 0.005 < Q ≤ 0.01, the test video is moderately contaminated by noise;
If Q > 0.01, the test video is heavily contaminated by noise.
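Steps 4 and 5 reduce to a few lines once the per-segment qualities are available; Qs_list and Qt_list below are hypothetical names for the values produced in steps 2 and 3.

```python
import numpy as np

# Qs_list and Qt_list are hypothetical per-segment values from steps 2 and 3.
Q_segments = [Qs * Qt for Qs, Qt in zip(Qs_list, Qt_list)]  # Q_i = Qs_i x Qt_i
Q = float(np.mean(Q_segments))                              # video-level score

if Q == 0:
    verdict = "not contaminated by noise"
elif Q <= 0.005:
    verdict = "lightly contaminated by noise"
elif Q <= 0.01:
    verdict = "moderately contaminated by noise"
else:
    verdict = "heavily contaminated by noise"
```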
The effect of the invention can be further illustrated by the following simulation experiments:
Simulation 1: the invention was tested for quality-evaluation accuracy on the public video quality assessment dataset LIVE. The Spearman rank-order correlation coefficient (SROCC) between its results and human subjective perception scores is 0.88, better than the results of existing full-reference video quality evaluation algorithms, showing that the invention's evaluations are more consistent with human perception.
Simulation 2: the invention was tested for computation speed on a platform running Windows 10 with a 3.6 GHz i7 CPU. Evaluating a 768 × 432, 250-frame test video takes only 17 s, which is faster than most existing full-reference quality evaluation algorithms, such as Seshadrinathan's MOVIE method, Manasa's optical-flow similarity method, Chandler's ViS3 method, and Peng's method, showing that the invention has lower computational complexity than these algorithms.
The above description is only one specific example of the present invention and should not be construed as limiting it in any way. It will be apparent to those skilled in the relevant art that, with the benefit of this disclosure and its principles, various modifications and changes in form and detail can be made without departing from the principles and structure of the invention; such modifications, however, remain within the scope of the appended claims.

Claims (7)

1. A full-reference video quality evaluation method based on motion trail, comprising the following steps:
(1) Divide the test video V^d and its reference video V^r into n sequence segments of equal length: V^d = {V_1^d, ..., V_i^d, ..., V_n^d}, V^r = {V_1^r, ..., V_i^r, ..., V_n^r}, where V_i^d denotes the i-th test sequence segment, V_i^r denotes the i-th reference sequence segment, i = 1, 2, ..., n, and n is an integer not less than 2;
(2) Calculate the static quality estimate Qs_i of the i-th test sequence segment V_i^d;
(3) Calculate the dynamic quality estimate Qt_i of the i-th test sequence segment V_i^d:
(3a) Point-sample the first frame of V_i^d to obtain a set of sample points P = {p_1, p_2, ..., p_k, ..., p_N}, where p_k denotes the k-th sample point, k = 1, ..., N, N is the number of sampled points, N ≥ 5, and N is an integer;
(3b) Separately compute the dense optical flow between each pair of adjacent frames of the reference sequence segment V_i^r and of the test sequence segment V_i^d: F^r = {F_1^r, ..., F_m^r, ..., F_(L-1)^r}, F^d = {F_1^d, ..., F_m^d, ..., F_(L-1)^d}, where F^r denotes the optical-flow set of the reference sequence segment V_i^r and F_m^r the optical flow between its m-th and (m+1)-th frames, F^d denotes the optical-flow set of the test sequence segment V_i^d and F_m^d the optical flow between its m-th and (m+1)-th frames, m = 1, ..., L-1, L is the number of frames in a sequence segment, L ≥ 12, and L is an integer;
(3c) Compute the motion trajectories R of the sample points in the reference sequence segment V_i^r;
(3d) Remove the invalid trajectories from the motion trajectories R to obtain the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')};
(3e) For the j-th valid trajectory R'_j, j = 1, ..., N', where N' denotes the number of valid trajectories, extract the optical-flow feature H_j^r of the reference sequence segment V_i^r along R'_j and the optical-flow feature H_j^d of the test sequence segment V_i^d along R'_j;
(3f) Compute the similarity deviation between the optical-flow feature H_j^r of the reference sequence segment V_i^r along R'_j and the optical-flow feature H_j^d of the test sequence segment V_i^d along R'_j obtained in (3e), and take its mean as the dynamic quality qt_j of the test sequence segment V_i^d along the valid trajectory R'_j;
(3g) Take the sum of the mean and the standard deviation of the dynamic qualities {qt_1, ..., qt_j, ..., qt_(N')} of the test sequence segment V_i^d along all valid trajectories as its dynamic quality value Qt_i;
(4) Take the product of the static quality value Qs_i and the dynamic quality value Qt_i of the i-th test sequence segment as the total quality Q_i of the i-th test sequence segment V_i^d;
(5) Take the mean of the qualities {Q_1, Q_2, ..., Q_i, ..., Q_n} of all test sequence segments as the quality value Q of the test video V^d;
(6) Judge the quality of the test video according to the quality value Q:
If Q = 0, the test video is not contaminated by noise;
If 0 < Q ≤ 0.005, the test video is lightly contaminated by noise;
If 0.005 < Q ≤ 0.01, the test video is moderately contaminated by noise;
If Q > 0.01, the test video is heavily contaminated by noise.
2. The method of claim 1, wherein the static quality estimate Qs_i of the i-th test sequence segment V_i^d is calculated in step (2) according to the following steps:
(2a) For each frame I of the i-th reference sequence segment V_i^r and of the i-th test sequence segment V_i^d, extract the gradient components Gx and Gy:
Gx = I * dx, Gy = I * dy,
where * denotes convolution and dx and dy are the convolution kernels in the horizontal and vertical directions, respectively;
(2b) From the result of (2a), compute the gradient magnitude GM:
GM = sqrt(Gx^2 + Gy^2);
(2c) Denote the gradient magnitude of the w-th frame of the i-th reference sequence segment V_i^r as GM_w^r, and that of the w-th frame of the i-th test sequence segment V_i^d as GM_w^d. The gradient magnitude similarity deviation of the i-th test sequence segment V_i^d at the w-th frame is then
S_w = Stdev((2 · GM_w^r ∘ GM_w^d + c) ⊘ ((GM_w^r)^2 + (GM_w^d)^2 + c)),
where w = 1, 2, ..., L, ∘ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, c is a constant set to 255, and Stdev(·) computes the standard deviation of the elements in parentheses;
(2d) Take the mean of the gradient magnitude similarity deviations of all frames of the i-th test sequence segment V_i^d as the static quality estimate Qs_i of the i-th test video sequence segment V_i^d.
3. The method of claim 1, wherein the motion trajectories R of the sample points in the i-th reference sequence segment V_i^r are calculated in step (3c) according to the following steps:
(3c1) Let (x_k^m, y_k^m) denote the position of the k-th sample point p_k of step (3a) in the m-th frame. Taking m from 1 to L-1 in turn, the next position is computed iteratively by
x_k^(m+1) = x_k^m + u_m^r(x_k^m, y_k^m),
y_k^(m+1) = y_k^m + v_m^r(x_k^m, y_k^m),
where x_k^m denotes the x-coordinate of the k-th sample point p_k in the m-th frame, y_k^m its y-coordinate, u_m^r(x_k^m, y_k^m) the horizontal component of the m-th reference optical flow F_m^r at the coordinate (x_k^m, y_k^m), and v_m^r(x_k^m, y_k^m) its vertical component;
(3c2) Concatenate the L points of (3c1) in temporal order into a set that forms the k-th motion trajectory R_k = {(x_k^w, y_k^w)}, where w = 1, 2, ..., L;
(3c3) Taking k from 1 to N in (3c2) forms the motion trajectories of the i-th reference sequence segment V_i^r: R = {R_1, R_2, ..., R_k, ..., R_N}.
4. The method of claim 1, wherein the invalid trajectories are eliminated from the motion trajectories R in step (3d) to obtain the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')} according to the following steps:
(3d1) Set the adjacency threshold D_1 = 10, the standard-deviation threshold D_2 = 10, and the repetition threshold D_3 = 345.6, and judge whether the k-th trajectory R_k is an invalid trajectory according to the following conditions:
If the k-th trajectory R_k is completely static in the reference sequence segment V_i^r, R_k is an invalid trajectory;
If the Euclidean distance between any two adjacent points of the k-th trajectory R_k exceeds the adjacency threshold D_1, R_k is an invalid trajectory;
If the standard deviation of the k-th trajectory R_k is greater than the standard-deviation threshold D_2, R_k is an invalid trajectory;
If the Euclidean distance from the k-th trajectory R_k to any trajectory other than itself does not exceed the repetition threshold D_3, R_k is an invalid trajectory;
(3d2) Removing the invalid trajectories from the motion trajectories R yields the valid trajectories R' = {R'_1, ..., R'_j, ..., R'_(N')}.
5. The method of claim 1, wherein the optical-flow feature H_j^r of the reference sequence segment V_i^r along the valid trajectory R'_j is extracted in step (3e) according to the following steps:
(3e1) Let (x_j^m, y_j^m) denote the m-th coordinate position of the trajectory R'_j. Project the region of size M centred at the point (x_j^m, y_j^m) of the m-th reference optical flow F_m^r onto a B-dimensional reference optical-flow histogram h_m^r. Taking m from 1 to L-1 forms the reference optical-flow histogram set {h_1^r, ..., h_m^r, ..., h_(L-1)^r}, where M = 48 and B = 32;
(3e2) Accumulate each histogram in the reference optical-flow histogram set bin by bin to serve as the optical-flow feature H_j^r of the reference sequence segment V_i^r along the valid trajectory R'_j.
6. The method of claim 1, wherein the optical-flow feature H_j^d of the test sequence segment V_i^d along the valid trajectory R'_j is extracted in step (3e) according to the following steps:
(3e3) Let (x_j^m, y_j^m) denote the m-th coordinate position of the trajectory R'_j. Project the region of size M centred at the point (x_j^m, y_j^m) of the m-th test optical flow F_m^d onto a B-dimensional test optical-flow histogram h_m^d. Taking m from 1 to L-1 forms the test optical-flow histogram set {h_1^d, ..., h_m^d, ..., h_(L-1)^d}, where M = 48 and B = 32;
(3e4) Accumulate each histogram in the test optical-flow histogram set bin by bin to serve as the optical-flow feature H_j^d of the test sequence segment V_i^d along the valid trajectory R'_j.
7. The method of claim 1, wherein the similarity deviation between the two optical-flow features H_j^r and H_j^d of (3e) is computed in step (3f) by the following formula:
(3f1) Compute the similarity deviation d_j between the optical-flow features H_j^r and H_j^d:
d_j = 1 − (2 · H_j^r ∘ H_j^d + T) ⊘ ((H_j^r)^2 + (H_j^d)^2 + T),
where ∘ denotes element-wise multiplication of matrices, ⊘ denotes element-wise division, and T is set to 0.0001 to prevent the denominator from being zero.
CN201810812287.6A 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail Active CN108900864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810812287.6A CN108900864B (en) 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail


Publications (2)

Publication Number Publication Date
CN108900864A CN108900864A (en) 2018-11-27
CN108900864B true CN108900864B (en) 2019-12-10

Family

ID=64351584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810812287.6A Active CN108900864B (en) 2018-07-23 2018-07-23 full-reference video quality evaluation method based on motion trail

Country Status (1)

Country Link
CN (1) CN108900864B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110365966B (en) * 2019-06-11 2020-07-28 北京航空航天大学 Video quality evaluation method and device based on window
CN110880184B (en) * 2019-10-03 2023-07-21 上海淡竹体育科技有限公司 Method and device for automatically inspecting camera based on optical flow field
CN111583304B (en) * 2020-05-09 2023-06-09 南京邮电大学 Feature extraction method for optimizing motion trail of video key character
CN112395542B (en) * 2020-11-19 2024-02-20 西安电子科技大学 Full track position error evaluation method
CN114449343A (en) * 2022-01-28 2022-05-06 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN116668737B (en) * 2023-08-02 2023-10-20 成都梵辰科技有限公司 Ultra-high definition video definition testing method and system based on deep learning


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7852948B2 (en) * 2005-05-27 2010-12-14 Fujifilm Corporation Moving picture real-time communications terminal, control method for moving picture real-time communications terminal, and control program for moving picture real-time communications terminal
CN101420618A (en) * 2008-12-02 2009-04-29 西安交通大学 Adaptive telescopic video encoding and decoding construction design method based on interest zone
CN101489031A (en) * 2009-01-16 2009-07-22 西安电子科技大学 Adaptive frame rate up-conversion method based on motion classification
CN104159104A (en) * 2014-08-29 2014-11-19 电子科技大学 Full-reference video quality evaluation method based on multi-stage gradient similarity
US9619714B2 (en) * 2015-09-10 2017-04-11 Sony Corporation Device and method for video generation
CN105812788A (en) * 2016-03-24 2016-07-27 北京理工大学 Video stability quality assessment method based on interframe motion amplitude statistics
CN107318014A (en) * 2017-07-25 2017-11-03 西安电子科技大学 The video quality evaluation method of view-based access control model marking area and space-time characterisation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video quality evaluation based on spatial-temporal distortion measures (基于空时域失真测度的视频质量评价); 卢芳芳; 《上海电力学院学报》 (Journal of Shanghai University of Electric Power); 2014-04-15; full text *

Also Published As

Publication number Publication date
CN108900864A (en) 2018-11-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant