CN101594543A - Error concealment method for entire video frame loss based on a dynamic texture model - Google Patents
Abstract
The present invention relates to the field of error control for video transmission, and in particular to an error concealment method for entire video frame loss based on a dynamic texture model. According to the invention, a new frame is synthesized from the n frames preceding the current lost frame (n ≥ 2) using a dynamic texture model; whether the motion vector of the co-located block in the previous frame exceeds a given threshold T1 is judged, and a suitable motion vector is selected according to the result; whether the n frames preceding the frame to be recovered were correctly received is judged, and the reference frame is chosen accordingly; finally, whether the co-located motion vector in the previous frame exceeds a given threshold T2 determines whether post-processing is performed. With this method the lost frame can be recovered well at the decoder, improving the quality of the video service.
Description
Technical field
The present invention relates to the field of error control for video transmission, and in particular to an error concealment method for entire video frame loss based on a dynamic texture model.
Background art
When a video stream is transmitted over a network, bit errors or packet losses can occur because of limited network bandwidth and varying transmission conditions, which degrades the decoded video quality; some error control technique is therefore needed to recover the corrupted video signal. A simple approach is to introduce an error concealment mechanism at the decoder. Error concealment is a technique that exploits the temporal or spatial correlation of the video signal to recover damaged or lost parts of it.
Existing video error concealment methods exploit the correlation of the video signal in time and space, and recover the lost image information to some extent by post-processing at the decoder. However, most existing methods are only applicable to the loss of one or several local regions within a frame, and their recovery performance is unsatisfactory when an entire frame is lost.
Although traditional error concealment methods are certainly used in video transmission systems, they are rarely applied in low-bit-rate Internet video services. The reason is that at low bit rates the amount of data in one frame is often smaller than a basic network transmission unit. In this case, when a packet is lost during transmission, the loss usually corresponds to the content of an entire frame.
Among existing error concealment methods, one approach (Reference 1: S. K. Bandyopadhyay, Z. Y. Wu, P. Pandit et al., "An error concealment scheme for entire frame losses for H.264/AVC," pp. 97-100) exploits the temporal correlation between the lost frame and the previous frame and recovers the lost frame by directly copying the previous frame; however, when the correlation between adjacent frames and the current lost frame is weak, its concealment quality is poor. Another approach (Reference 2: Z. Y. Wu, J. M. Boyce, and IEEE, "An error concealment scheme for entire frame losses based on H.264/AVC," pp. 4463-4466) exploits the motion trend of the image and introduces extrapolated motion vectors, using the extrapolated motion vector of each block to perform motion search in the previous frame and recover the lost frame; however, because the motion-vector decision criterion is inaccurate under entire-frame-loss conditions, the extrapolated motion vectors are not very accurate and the concealed image quality is poor.
Therefore, since most current error concealment methods cannot achieve a good technical effect, designing an error concealment method with good performance for entire frame loss is of great importance for video transmission services at low bit rates.
Summary of the invention
The purpose of the present invention is to provide an error concealment method for entire video frame loss based on a dynamic texture model, so as to improve video transmission services at low bit rates.
To achieve the above purpose, the present invention adopts the following technical scheme:
(1) Synthesize a new frame from the n frames preceding the current lost frame according to a dynamic texture model, where n ≥ 2; the new frame and the frame immediately preceding the lost frame together form the candidate reference frame set for the frame to be recovered;
(2) Judge whether the motion vector of the co-located block in the previous frame of the current block to be recovered is greater than a given threshold T1; if it is not, take the motion vector of the co-located 4 × 4 block in the previous frame as the motion vector of the current block; if it is, go to the next step;
(3) Compute the median of the motion vectors of the 8 blocks surrounding the co-located 4 × 4 block in the previous frame and use it as the motion vector of the current block to be recovered;
(4) Judge whether the n frames preceding the frame to be recovered were correctly received; if they were, use the new frame synthesized by the dynamic texture model as the reference frame for recovering the current block; if they were not, use the frame immediately preceding the frame to be recovered as the reference frame;
(5) Judge whether the motion vector of the co-located block in the previous frame of the current block is greater than a given threshold T2; if it is, compute the median of the motion vectors of the 8 neighboring blocks of the current block and use the pixel values obtained by motion compensation with this median motion vector in the reference frame as the final error concealment values; if it is not, perform no post-processing.
Step (1) comprises the following substeps:
1. Let Y be the matrix of the luminance and chrominance values of the two frames (y12, y13) preceding the current lost frame, each column holding in turn the luminance and chrominance values of all pixels of one frame; svd denotes singular value decomposition, which yields the three matrices U, S and V:
[U,S,V]=svd(Y,0);
2. Take all elements of columns 1-2 of matrix U to form matrix Chat:
Chat=U(:,1:2);
3. Form matrix S(1:2,1:2) from the elements of rows 1-2 and columns 1-2 of matrix S; form matrix V(:,1:2) from all elements of columns 1-2 of matrix V and take its transpose to obtain (V(:,1:2))'; then multiply S(1:2,1:2) by (V(:,1:2))' to obtain matrix Xhat:
Xhat=S(1:2,1:2)*(V(:,1:2))';
4. Take all elements of column 2 of matrix Xhat to form matrix Xhat(:,2); take all elements of columns 1:(2-1) (i.e. column 1) to form matrix Xhat(:,1) and compute its generalized inverse pinv(Xhat(:,1)); then multiply Xhat(:,2) by pinv(Xhat(:,1)) to obtain matrix Ahat:
Ahat=Xhat(:,2)*pinv(Xhat(:,1));
5. Take the first column of matrix Xhat to form matrix x0:
x0=Xhat(:,1);
6. Use x0 as the first column of matrix x:
x(:,1)=x0;
7. Let t be a loop variable and run the loop below twice (t = 1, 2): form x(:,t) from column t of matrix x and multiply it by matrix Ahat, storing the result as column t+1 of x, i.e. x(:,t+1); then multiply x(:,t+1) by matrix Chat and store the result as column t+1 of matrix I, i.e. I(:,t+1):
for t = 1:2
    x(:,t+1) = Ahat*x(:,t);    % propagate the state of the dynamic texture model
    I(:,t+1) = Chat*x(:,t+1);  % synthesize the corresponding frame column
end
8. Take column 3 of the resulting matrix I as the luminance and chrominance matrix of the finally synthesized dynamic-texture frame:
I=I(:,3);
The luminance and chrominance values in matrix I are stored in the reference frame buffer and denoted Y_I; together with the frame Y_13 immediately preceding the lost frame, Y_I forms the candidate reference frame set for the frame to be recovered.
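For readability, the substeps above can be collected into a single MATLAB function. The sketch below is only an illustration of step (1): the function name dt_extrapolate is introduced here, and Y is assumed to hold one vectorized frame (luminance followed by chrominance) per column, exactly as described in substep 1.
% Sketch of the dynamic-texture extrapolation of step (1); Y has one column per received frame.
function Inew = dt_extrapolate(Y)
    [U, S, V] = svd(Y, 0);                  % economy-size singular value decomposition
    Chat = U(:, 1:2);                       % observation matrix
    Xhat = S(1:2, 1:2) * (V(:, 1:2))';      % state sequence
    Ahat = Xhat(:, 2) * pinv(Xhat(:, 1));   % state-transition matrix
    x = Xhat(:, 1);                         % initial state (first column of Xhat)
    I = zeros(size(Y, 1), 3);
    for t = 1:2
        x(:, t+1) = Ahat * x(:, t);         % propagate the state
        I(:, t+1) = Chat * x(:, t+1);       % synthesize the corresponding frame column
    end
    Inew = I(:, 3);                         % extrapolated (new) frame
end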
In step (3), the 8 motion vectors surrounding the co-located 4 × 4 block in the previous frame are the motion vectors of the eight 4 × 4 blocks directly above, directly below, to the left, to the right, and to the upper left, upper right, lower left and lower right of the co-located 4 × 4 block of the block to be recovered.
In step (5), the 8 neighboring blocks of the current block are the eight 4 × 4 blocks directly above, directly below, to the left, to the right, and to the upper left, upper right, lower left and lower right of the current 4 × 4 block.
In step (2), the threshold T1 is set according to the motion intensity of the test sequence: 20 for slow motion, 60 for medium motion, and 90 for vigorous motion.
In step (5), the threshold T2 is set according to the content of the test sequence.
The present invention has the following advantages and beneficial effects:
1) The lost frame can be recovered well at the decoder, which improves the quality of the video service;
2) The reference frame and the motion vector are selected adaptively according to the motion intensity of the image.
Description of drawings
Fig. 1 is a flowchart of the method provided by the invention.
Fig. 2 shows the simulation results of the present invention.
Embodiment
The invention is further described below in conjunction with the accompanying drawings and a specific embodiment:
Fig. 1 shows the flowchart of the error concealment method for entire video frame loss based on a dynamic texture model provided by the invention.
In step S1, a new frame is synthesized from the n frames (n ≥ 2) preceding the current lost frame according to the dynamic texture model, and this new frame together with the frame immediately preceding the lost frame forms the candidate reference frame set for the frame to be recovered. In step S2, it is judged whether the motion vector of the co-located 4 × 4 block in the previous frame of the current block to be recovered is greater than the given threshold T1. If it is not, step S3 is executed and the motion vector of the co-located 4 × 4 block in the previous frame is used as the motion vector of the current block; if it is, step S4 is executed and the median of the motion vectors of the 8 blocks surrounding the co-located 4 × 4 block in the previous frame is computed and used as the motion vector. In step S5, it is judged whether the n frames preceding the frame to be recovered were correctly received. If they were not, step S6 is executed and the frame immediately preceding the frame to be recovered is used as the reference frame for recovering the current block; if they were, step S7 is executed and the new frame synthesized by the dynamic texture model is used as the reference frame. In step S8, it is judged whether the motion vector of the co-located block in the previous frame is greater than the given threshold T2. If it is, step S9 is executed: the median of the motion vectors of the 8 neighboring blocks of the current block is computed, and the pixel values obtained by motion compensation with this median motion vector in the reference frame are used as the final error concealment values; if it is not, no further processing is performed.
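A minimal MATLAB-style sketch of the per-block decision logic of steps S2-S9 is given below. It is an illustration only, not the patent's reference implementation: mv_prev (the motion-vector field of the previous frame, one 1×2 row per 4 × 4 block), neighbor_indices, motion_compensate, prev_frame, dt_frame and prev_frames_ok are hypothetical names introduced here, and the comparison of a motion vector with a scalar threshold is assumed to be made on its magnitude.
% Sketch of the per-block concealment decision (steps S2-S9); see the assumptions above.
function rec_block = conceal_block(k, mv_prev, prev_frame, dt_frame, prev_frames_ok, T1, T2)
    mv = mv_prev(k, :);                              % MV of the co-located block (step S2)
    if norm(mv) > T1                                 % large motion: take the median of the 8 neighbours (S4)
        mv = median(mv_prev(neighbor_indices(k), :), 1);
    end                                              % otherwise the co-located MV is kept (S3)
    if prev_frames_ok                                % reference-frame selection (S5-S7)
        ref = dt_frame;                              % frame synthesized by the dynamic texture model
    else
        ref = prev_frame;                            % frame immediately preceding the lost frame
    end
    rec_block = motion_compensate(ref, k, mv);       % provisional recovery of the 4x4 block
    if norm(mv_prev(k, :)) > T2                      % post-processing (S8-S9)
        mv_med = median(mv_prev(neighbor_indices(k), :), 1);
        rec_block = motion_compensate(ref, k, mv_med);
    end
end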
The error concealment method for entire video frame loss based on a dynamic texture model provided by the invention is further described below with a specific embodiment.
The implementation environment of this embodiment is as follows: the H.264 reference software JM12.4 is used as the codec; the Internet transmission error patterns provided by the ITU-T standardization organization are used as the network packet-loss model, with the packet loss rate set to 5%; the "stefan" sequence at CIF resolution (352 × 288) is chosen as the test sequence. Taking the loss of the 14th frame during decoding of the stefan sequence as an example, the steps of the invention are described as follows:
(1) A new frame is extrapolated from the two frames preceding the current lost frame according to the dynamic texture model, which is accomplished by the following substeps:
1. Let Y be the matrix of the luminance and chrominance values of the two frames (y12, y13) preceding the current lost frame, each column holding in turn the luminance and chrominance values of all pixels of one frame; svd denotes singular value decomposition, which yields the three matrices U, S and V:
[U,S,V]=svd(Y,0);
2. Take all elements of columns 1-2 of matrix U to form matrix Chat:
Chat=U(:,1:2);
3. Form matrix S(1:2,1:2) from the elements of rows 1-2 and columns 1-2 of matrix S; form matrix V(:,1:2) from all elements of columns 1-2 of matrix V and take its transpose to obtain (V(:,1:2))'; then multiply S(1:2,1:2) by (V(:,1:2))' to obtain matrix Xhat:
Xhat=S(1:2,1:2)*(V(:,1:2))';
4. Take all elements of column 2 of matrix Xhat to form matrix Xhat(:,2); take all elements of columns 1:(2-1) (i.e. column 1) to form matrix Xhat(:,1) and compute its generalized inverse pinv(Xhat(:,1)); then multiply Xhat(:,2) by pinv(Xhat(:,1)) to obtain matrix Ahat:
Ahat=Xhat(:,2)*pinv(Xhat(:,1));
5. Take the first column of matrix Xhat to form matrix x0:
x0=Xhat(:,1);
6. Use x0 as the first column of matrix x:
x(:,1)=x0;
7. Let t be a loop variable and run the loop below twice (t = 1, 2): form x(:,t) from column t of matrix x and multiply it by matrix Ahat, storing the result as column t+1 of x, i.e. x(:,t+1); then multiply x(:,t+1) by matrix Chat and store the result as column t+1 of matrix I, i.e. I(:,t+1):
for t = 1:2
    x(:,t+1) = Ahat*x(:,t);    % propagate the state of the dynamic texture model
    I(:,t+1) = Chat*x(:,t+1);  % synthesize the corresponding frame column
end
8. Take column 3 of the resulting matrix I as the luminance and chrominance matrix of the extrapolated frame:
I=I(:,3);
The luminance and chrominance values in matrix I are stored in the reference frame buffer and denoted Y_I; together with the frame Y_13 immediately preceding the lost frame, Y_I forms the candidate reference frame set for the frame to be recovered.
(2) Judge whether the motion vector MV_(n-1)^k of the co-located 4 × 4 block in the previous frame of the current 4 × 4 block to be recovered, Mb_n^k, is greater than the given threshold Threshold1. If it is not, the motion vector MV_(n-1)^k of the co-located 4 × 4 block Mb_(n-1)^k in the previous frame is used as the motion vector of the current block; otherwise the next step is executed. In this embodiment Threshold1 is set to 60.
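The text does not state explicitly how the two-component motion vector MV_(n-1)^k is compared with the scalar Threshold1; a common convention, assumed here purely for illustration, is to compare its magnitude. A minimal MATLAB-style sketch (mv_prev and the block index k are hypothetical variables):
% Step (2) decision for block k; mv_prev(k,:) = [MVx, MVy] of the co-located block.
Threshold1 = 60;                             % value used in this embodiment
mv_colocated = mv_prev(k, :);
if norm(mv_colocated) <= Threshold1
    mv_current = mv_colocated;               % reuse the co-located motion vector directly
else
    % otherwise proceed to step (3): median of the 8 neighbouring motion vectors
end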
(3) Let the eight 4 × 4 blocks surrounding the co-located position of the current 4 × 4 block Mb_n^k in the previous frame be: Mb_(n-1)^left to the left with motion vector MV_(n-1)^left, Mb_(n-1)^right to the right with motion vector MV_(n-1)^right, Mb_(n-1)^up above with motion vector MV_(n-1)^up, Mb_(n-1)^down below with motion vector MV_(n-1)^down, Mb_(n-1)^left-up to the upper left with motion vector MV_(n-1)^left-up, Mb_(n-1)^right-up to the upper right with motion vector MV_(n-1)^right-up, Mb_(n-1)^left-down to the lower left with motion vector MV_(n-1)^left-down, and Mb_(n-1)^right-down to the lower right with motion vector MV_(n-1)^right-down. The median MV* of these 8 motion vectors is computed and used as the motion vector of the current 4 × 4 block Mb_n^k to be recovered.
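The median of a set of two-component motion vectors can be defined in several ways; since the formula is not reproduced in the text, the component-wise median is assumed here for illustration (the variable names are hypothetical):
% Component-wise median of the 8 neighbouring motion vectors (assumed interpretation).
MV_neighbours = [MV_left; MV_right; MV_up; MV_down; ...
                 MV_left_up; MV_right_up; MV_left_down; MV_right_down];  % 8x2 matrix
MV_star = median(MV_neighbours, 1);          % [median of x components, median of y components]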
(4) Judge whether the two frames preceding the frame to be recovered were correctly received. If they were, the new frame Y_I synthesized by dynamic texture extrapolation is used as the reference frame for recovering the current block; otherwise the frame Y_13 immediately preceding the frame to be recovered is used as the reference frame.
(5) Post-processing is applied to the entire recovered frame Y_14. As in step (3), let the eight 4 × 4 blocks surrounding the co-located position of the current 4 × 4 block Mb_n^k in the previous frame be Mb_(n-1)^left, Mb_(n-1)^right, Mb_(n-1)^up, Mb_(n-1)^down, Mb_(n-1)^left-up, Mb_(n-1)^right-up, Mb_(n-1)^left-down and Mb_(n-1)^right-down, with motion vectors MV_(n-1)^left, MV_(n-1)^right, MV_(n-1)^up, MV_(n-1)^down, MV_(n-1)^left-up, MV_(n-1)^right-up, MV_(n-1)^left-down and MV_(n-1)^right-down, respectively. If the motion vector MV_(n-1)^k of the co-located block of the current block Mb_n^k in the previous frame satisfies MV_(n-1)^k > Threshold2, the median MV* of the motion vectors of these 8 neighboring blocks is computed, and the pixel values obtained by motion compensation with MV* in the reference frame are used as the final error concealment values; otherwise no post-processing is performed. In this embodiment Threshold2 is set to 0.
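The motion compensation in step (5) amounts to copying, for each 4 × 4 block, the block of the reference frame displaced by the median motion vector MV*. A minimal MATLAB-style sketch under simplifying assumptions (ref is the reference luminance array, rec the frame being recovered, (r0, c0) the top-left sample of the current 4 × 4 block, and MV_star is taken in integer pixels; the quarter-pel interpolation used by H.264 is omitted):
% Motion-compensated concealment of one 4x4 block (integer-pel sketch).
dx = round(MV_star(1));                      % horizontal displacement in pixels
dy = round(MV_star(2));                      % vertical displacement in pixels
rows = (r0:r0+3) + dy;                       % rows of the matched block in the reference frame
cols = (c0:c0+3) + dx;                       % columns of the matched block
rows = min(max(rows, 1), size(ref, 1));      % clip to the picture boundary
cols = min(max(cols, 1), size(ref, 2));
rec(r0:r0+3, c0:c0+3) = ref(rows, cols);     % final error concealment value of the block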
In this embodiment the stefan sequence in CIF format is tested; 90 frames are encoded at a frame rate of 30 frames per second with the coding structure IPPPPPP and a packet loss rate of 5%. At the decoder, the lost frame is concealed with the direct previous-frame copy method of Reference 1, the extrapolated-motion-vector method of Reference 2, and the method proposed by the present invention, respectively, and the resulting peak signal-to-noise ratio (PSNR) after concealment is shown in Fig. 2.
As can be seen from the simulation results in Fig. 2, the PSNR of the video obtained by the present invention is consistently higher than that of the two comparison methods, so a better video service quality is obtained.
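For reference, the PSNR plotted in Fig. 2 is the standard peak signal-to-noise ratio for 8-bit video; it can be computed per frame as sketched below (orig and rec denote the original and concealed luminance frames as double arrays; this is the usual definition, not code taken from the patent):
% Per-frame luminance PSNR for 8-bit video.
mse        = mean((orig(:) - rec(:)).^2);    % mean squared error over all pixels
psnr_frame = 10 * log10(255^2 / mse);        % peak value 255 for 8-bit samples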
Claims (6)
1. An error concealment method for entire video frame loss based on a dynamic texture model, characterized in that it comprises the following steps:
(1) Synthesize a new frame from the n frames preceding the current lost frame according to a dynamic texture model, where n ≥ 2; the new frame and the frame immediately preceding the lost frame together form the candidate reference frame set for the frame to be recovered;
(2) Judge whether the motion vector of the co-located block in the previous frame of the current block to be recovered is greater than a given threshold T1; if it is not, take the motion vector of the co-located 4 × 4 block in the previous frame as the motion vector of the current block; if it is, go to the next step;
(3) Compute the median of the motion vectors of the 8 blocks surrounding the co-located 4 × 4 block in the previous frame and use it as the motion vector of the current block to be recovered;
(4) Judge whether the n frames preceding the frame to be recovered were correctly received; if they were, use the new frame synthesized by the dynamic texture model as the reference frame for recovering the current block; if they were not, use the frame immediately preceding the frame to be recovered as the reference frame;
(5) Judge whether the motion vector of the co-located block in the previous frame of the current block is greater than a given threshold T2; if it is, compute the median of the motion vectors of the 8 neighboring blocks of the current block and use the pixel values obtained by motion compensation with this median motion vector in the reference frame as the final error concealment values; if it is not, perform no post-processing.
2. The error concealment method for entire video frame loss based on a dynamic texture model according to claim 1, characterized in that step (1) comprises the following substeps:
1. Let Y be the matrix of the luminance and chrominance values of the two frames (y12, y13) preceding the current lost frame, each column holding in turn the luminance and chrominance values of all pixels of one frame; svd denotes singular value decomposition, which yields the three matrices U, S and V:
[U,S,V]=svd(Y,0);
2. Take all elements of columns 1-2 of matrix U to form matrix Chat:
Chat=U(:,1:2);
3. Form matrix S(1:2,1:2) from the elements of rows 1-2 and columns 1-2 of matrix S; form matrix V(:,1:2) from all elements of columns 1-2 of matrix V and take its transpose to obtain (V(:,1:2))'; then multiply S(1:2,1:2) by (V(:,1:2))' to obtain matrix Xhat:
Xhat=S(1:2,1:2)*(V(:,1:2))';
4. Take all elements of column 2 of matrix Xhat to form matrix Xhat(:,2); take all elements of columns 1:(2-1) (i.e. column 1) to form matrix Xhat(:,1) and compute its generalized inverse pinv(Xhat(:,1)); then multiply Xhat(:,2) by pinv(Xhat(:,1)) to obtain matrix Ahat:
Ahat=Xhat(:,2)*pinv(Xhat(:,1));
5. Take the first column of matrix Xhat to form matrix x0:
x0=Xhat(:,1);
6. Use x0 as the first column of matrix x:
x(:,1)=x0;
7. Let t be a loop variable and run the loop below twice (t = 1, 2): form x(:,t) from column t of matrix x and multiply it by matrix Ahat, storing the result as column t+1 of x, i.e. x(:,t+1); then multiply x(:,t+1) by matrix Chat and store the result as column t+1 of matrix I, i.e. I(:,t+1):
for t = 1:2
    x(:,t+1) = Ahat*x(:,t);
    I(:,t+1) = Chat*x(:,t+1);
end
8. Take column 3 of the resulting matrix I as the luminance and chrominance matrix of the finally synthesized dynamic-texture frame:
I=I(:,3);
The luminance and chrominance values in matrix I are stored in the reference frame buffer and denoted Y_I; together with the frame Y_13 immediately preceding the lost frame, Y_I forms the candidate reference frame set for the frame to be recovered.
3. The error concealment method for entire video frame loss based on a dynamic texture model according to claim 1 or 2, characterized in that:
In step (3), the 8 motion vectors surrounding the co-located 4 × 4 block in the previous frame are the motion vectors of the eight 4 × 4 blocks directly above, directly below, to the left, to the right, and to the upper left, upper right, lower left and lower right of the co-located 4 × 4 block of the block to be recovered.
4. The error concealment method for entire video frame loss based on a dynamic texture model according to claim 1 or 2, characterized in that:
In step (5), the 8 neighboring blocks of the current block are the eight 4 × 4 blocks directly above, directly below, to the left, to the right, and to the upper left, upper right, lower left and lower right of the current 4 × 4 block.
5. The error concealment method for entire video frame loss based on a dynamic texture model according to claim 1 or 2, characterized in that:
In step (2), the threshold T1 is set according to the motion intensity of the test sequence: 20 for slow motion, 60 for medium motion, and 90 for vigorous motion.
6. The error concealment method for entire video frame loss based on a dynamic texture model according to claim 1 or 2, characterized in that:
In step (5), the threshold T2 is set according to the content of the test sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200910062868 CN101594543B (en) | 2009-06-26 | 2009-06-26 | Error concealment method of video frame loss based on dynamic texture model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101594543A (en) | 2009-12-02
CN101594543B CN101594543B (en) | 2010-11-10 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1269359C (en) * | 2003-03-03 | 2006-08-09 | 西南交通大学 | Video error blanketing method based on motion vector extrapolation and motion vector search |
CN101370145B (en) * | 2007-08-13 | 2010-06-09 | 中兴通讯股份有限公司 | Shielding method and apparatus for image frame |
CN100542299C (en) * | 2007-08-31 | 2009-09-16 | 广东威创视讯科技股份有限公司 | The concealing method of video image error |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102752670A (en) * | 2012-06-13 | 2012-10-24 | 广东威创视讯科技股份有限公司 | Method, device and system for reducing phenomena of mosaics in network video transmission |
CN102752670B (en) * | 2012-06-13 | 2015-11-25 | 广东威创视讯科技股份有限公司 | Reduce method, the Apparatus and system of mosaic phenomenon in Network Video Transmission |
CN104602028A (en) * | 2015-01-19 | 2015-05-06 | 宁波大学 | Entire frame loss error concealment method for B frame of stereoscopic video |
CN104602028B (en) * | 2015-01-19 | 2017-09-29 | 宁波大学 | A kind of three-dimensional video-frequency B frames entire frame loss error concealing method |
CN105931274A (en) * | 2016-05-09 | 2016-09-07 | 中国科学院信息工程研究所 | Method for rapidly segmenting and tracing object based on motion vector locus |
CN105931274B (en) * | 2016-05-09 | 2019-02-15 | 中国科学院信息工程研究所 | A kind of quick object segmentation and method for tracing based on motion vector track |
CN107277549A (en) * | 2017-06-07 | 2017-10-20 | 南京邮电大学 | A kind of HEVC intracoded frame error concealing methods based on grain angle predictive mode |
CN107277549B (en) * | 2017-06-07 | 2020-05-12 | 南京邮电大学 | HEVC intra-frame coding frame error concealment method based on texture angle prediction mode |
CN111556334A (en) * | 2020-03-27 | 2020-08-18 | 李惠芳 | Internet video smoothing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20101110; Termination date: 20210626 |