CN103856779A - Packet switching network oriented multi-view video transmission distortion prediction method - Google Patents


Publication number
CN103856779A (application CN201410098310.1A)
Authority
CN
China
Prior art keywords
viewpoint, frame, distortion, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410098310.1A
Other languages
Chinese (zh)
Inventor
周圆 (Zhou Yuan)
陈莹 (Chen Ying)
庞勃 (Pang Bo)
崔波 (Cui Bo)
侯春萍 (Hou Chunping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201410098310.1A priority Critical patent/CN103856779A/en
Publication of CN103856779A publication Critical patent/CN103856779A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The packet-switching-network-oriented multi-view video transmission distortion prediction method includes the following steps. A multi-view video sequence is encoded with H.264/MVC, where s is the view index and t the frame index. Distortion measurements under different packet loss rates are obtained by simulation, and three macroblock ratios are computed at the encoder: the ratio V of inter-view-predicted macroblocks, the ratio Q of intra-coded macroblocks, and the ratio U of macroblocks concealed by motion-compensated temporal concealment. The pixel values of pixel i of frame (s, t) reconstructed at the encoder and at the decoder are measured, and the transmission parameters λa, λb, μa and μb are fitted by the least-squares method. The distortion value D_c-test(s, t) of each view is then computed iteratively, and the multi-view video transmission distortion is predicted as PSNR and MSE values. The method can correctly and effectively estimate the distortion caused by packet loss during multi-view video streaming while keeping computational complexity low, predicts the error propagation from an error in any frame of any view to future frames, and closely tracks the actual frame distortion level: the predicted values essentially agree with the measured values.

Description

Multi-view video transmission distortion prediction method for packet-switched networks
Technical field
The present invention relates to a transmission distortion prediction method for stereoscopic video, and in particular to a multi-view video transmission distortion prediction method for packet-switched networks.
Background technology
In a packet-switched IP network, packets may be lost when the buffer on a node overflows, and a packet whose delay is too long may also be treated as lost. A compressed video signal, and in particular an encoded stereoscopic video signal, must rely on inter-frame coding to achieve acceptable efficiency at low bit rates, and is therefore highly fragile in the face of transmission errors. A coding structure that uses motion compensation and disparity compensation creates strong spatio-temporal dependencies through inter-frame prediction. When studying the relation between end-to-end packet loss and video communication quality in IP packet networks, the most critical task is to establish a video transmission distortion model suited to the packet-loss characteristics of IP networks. The main purpose of transmission distortion modeling is to accurately predict the video decoding distortion caused by packet loss.
Although several distortion computation methods have been proposed in the literature, most of them apply only to single-view video encoders under a block-based motion-compensated prediction framework and are not suited to the transmission of multi-view stereoscopic video. Low-complexity transmission distortion models mostly analyze distortion considering intra coding and spatial loop filtering; such models are mainly applicable at low error rates and usually cannot meet the accuracy requirements of rate-distortion (R-D) performance optimization. Accurate transmission distortion estimators of moderate complexity usually decide the coding mode of the current macroblock from the total distortion accumulated over preceding frames. For example, the recursive optimal per-pixel estimate (ROPE) algorithm of Yang and Rose, and its extensions, recursively compute the first and second moments of each decoded pixel value to determine the total distortion (mean squared error) of each pixel. He et al. established a distortion prediction model for bit errors under motion compensation and applied it to source-adaptive intra-mode selection and joint source-channel rate control under time-varying channel conditions.
To date, no work on multi-view video transmission distortion modeling has been reported in published papers or documents at home or abroad; the modeling of network transmission distortion for multi-view stereoscopic video is thus an almost blank research field. Directly extending single-view (planar) video distortion prediction models to multi-view video requires computing the propagation distortion along every propagation path, which causes a huge increase in complexity. Analyzing the transmission distortion of multi-view video in a lossy IP network and establishing a mathematical model for distortion estimation is therefore a challenging problem.
Summary of the invention
The technical problem to be solved by the present invention is to provide a multi-view video transmission distortion prediction method for packet-switched networks that accounts for error propagation both within and between views, relates the transmission distortion of the current frame to the distortion of the preceding frame of the same view and of the same frame of the neighbouring view, and can effectively simulate, at low complexity, the complex two-dimensional distortion propagation pattern of multi-view stereoscopic video.
The technical solution adopted by the present invention is a multi-view video transmission distortion prediction method for packet-switched networks, comprising the following steps:
1) encode the multi-view video sequence with H.264/MVC, with s denoting the view index and t the frame index within a view;
2) first obtain, by simulation under different packet loss rates, the measured distortion D_c(s, t), defined as the distortion actually measured in simulation, i.e. the pixel-value difference of the same frame of the same view before and after transmission; at the encoder, compute the ratio V of inter-view-predicted macroblocks, the ratio Q of intra-coded macroblocks, and the percentage U of macroblocks concealed by motion-compensated temporal concealment;
3) measure the pixel value F_i(s, t) of pixel i of frame (s, t), and the pixel values F̂_i(s, t) and F̃_i(s, t) of pixel i of frame (s, t) reconstructed at the encoder and at the decoder, respectively;
4) compute the mean squared difference D_TEC(s, t) between adjacent frames within a view and the mean squared difference D_VEC(s, t) between the same frames of adjacent views;
5) compute λa and λb by the least-squares method:

(λa, λb) = argmin_{λa, λb} Σ_{(s,t)∈Received} (D_c(s,t) − D_R(s,t))²

D_R(s,t) = (1−Q)·[V·λb·D_c(s,t−1) + (1−V)·λa·D_c(s−1,t)]
where (s, t) runs over the frames received at the decoder.
In particular, the frames of view 0 (excluding the first I frame) all use inter prediction within the view, so V = 1 for them; the first frames of all views other than view 0 all use inter-view prediction, so V = 0 for them;
6) compute μa and μb by the least-squares method:

(μa, μb) = argmin_{μa, μb} Σ_{(s,t)∈Lost} (D_c(s,t) − D_L(s,t))²

D_L(s,t) = U·(D_TEC(s,t) + μb·D_c(s,t−1)) + (1−U)·(D_VEC(s,t) + μa·D_c(s−1,t))
         = [U·D_TEC(s,t) + (1−U)·D_VEC(s,t)] + [U·μb·D_c(s,t−1) + (1−U)·μa·D_c(s−1,t)]
where (s, t) runs over the frames lost at the decoder.
In particular, for any lost P frame in the video sequence of view 0, set U = 1; for the first frame of each view other than view 0, set U = 0 if that frame is lost;
7) iterate the distortion value D_c-test(s, t) of each view from the received-frame distortion D_R(s, t) and the lost-frame distortion D_L(s, t); the average transmission distortion D_c-test(s, t) of frame (s, t) is thereby obtained by iterative computation from D_c(s, t−1) and D_c(s−1, t);
8) predict the propagated distortion of the multi-view video from the V, Q and U values of the different frames, i.e. the peak signal-to-noise ratio (PSNR) and MSE values.
The intra-view mean squared difference D_TEC(s, t) between adjacent frames and the inter-view mean squared difference D_VEC(s, t) between the same frames of adjacent views, described in step (4), are obtained respectively by:

D_TEC(s,t) = E{[F̂_i(s,t) − F̂_i(s,t−1)]²};  D_VEC(s,t) = E{[F̂_i(s,t) − F̂_i(s−1,t)]²}.
In step (8):
The MSE value is the mean squared error between the reference image and the reconstructed image and represents the distortion of the reconstructed image:

MSE = (1/(M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} [f(x,y) − f₀(x,y)]²

where f(x, y) is the pixel value of the reconstructed image and f₀(x, y) is the pixel value of the reference image; in the transmission distortion prediction, f(x, y) − f₀(x, y) is given by the value of D_c-test(s, t);
The peak signal-to-noise ratio is expressed in decibels:

PSNR = 10 log₁₀[(2ⁿ − 1)² / MSE]

where (2ⁿ − 1)² is the square of the peak pixel amplitude, n is the number of bits per pixel, and M and N are the horizontal and vertical pixel counts.
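The MSE and PSNR computations above can be sketched in a few lines of NumPy (a minimal sketch; the function names are illustrative and not part of the patent):

```python
import numpy as np

def mse(f, f0):
    """Mean squared error between a reconstructed frame f and a reference frame f0 (M x N arrays)."""
    f = np.asarray(f, dtype=np.float64)
    f0 = np.asarray(f0, dtype=np.float64)
    return float(np.mean((f - f0) ** 2))

def psnr(mse_value, n_bits=8):
    """PSNR in dB: 10*log10((2**n - 1)**2 / MSE), with (2**n - 1) the peak pixel amplitude."""
    peak_sq = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak_sq / mse_value)
```

For 8-bit video (n = 8), an MSE of 1.0 corresponds to a PSNR of roughly 48.13 dB.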
The multi-view video transmission distortion prediction method for packet-switched networks of the present invention can, while keeping computational complexity low, predict the error propagation to future frames caused by an error in any frame of any view, and simulates the actual frame distortion level: the simulated and measured values essentially agree. The present invention can correctly and effectively simulate, at low complexity, the complex two-dimensional error propagation pattern of multi-view stereoscopic video and estimate the distortion caused by packet loss during streaming of the encoded multi-view video.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is the MSE of Ballroom sequence the 0th viewpoint;
Fig. 3 is the MSE of Ballroom sequence the 1st viewpoint;
Fig. 4 is the MSE of Ballroom sequence the 2nd viewpoint;
Fig. 5 is the MSE of Ballroom sequence the 3rd viewpoint;
Fig. 6 is the MSE of Ballroom sequence the 4th viewpoint;
Fig. 7 is the MSE of Ballroom sequence the 5th viewpoint;
Fig. 8 is the MSE of Ballroom sequence the 6th viewpoint;
Fig. 9 is the MSE of Ballroom sequence the 7th viewpoint.
Detailed description of embodiments
The multi-view video transmission distortion prediction method for packet-switched networks of the present invention is described in detail below with reference to an embodiment and the accompanying drawings.
As shown in Fig. 1, the multi-view video transmission distortion prediction method for packet-switched networks of the present invention comprises the following steps:
1) encode the multi-view video sequence with H.264/MVC, with s denoting the view index and t the frame index within a view;
2) first obtain, by simulation under different packet loss rates, the measured distortion D_c(s, t), defined as the distortion actually measured in simulation, i.e. the pixel-value difference of the same frame of the same view before and after transmission; at the encoder, compute the ratio V of inter-view-predicted macroblocks, the ratio Q of intra-coded macroblocks, and the percentage U of macroblocks concealed by motion-compensated temporal concealment;
3) measure the pixel value F_i(s, t) of pixel i of frame (s, t), and the pixel values F̂_i(s, t) and F̃_i(s, t) of pixel i of frame (s, t) reconstructed at the encoder and at the decoder, respectively;
4) compute, respectively, the intra-view mean squared difference D_TEC(s, t) between adjacent frames and the inter-view mean squared difference D_VEC(s, t) between the same frames of adjacent views:

D_TEC(s,t) = E{[F̂_i(s,t) − F̂_i(s,t−1)]²};  D_VEC(s,t) = E{[F̂_i(s,t) − F̂_i(s−1,t)]²};
5) compute λa and λb by the least-squares method:

(λa, λb) = argmin_{λa, λb} Σ_{(s,t)∈Received} (D_c(s,t) − D_R(s,t))²

D_R(s,t) = (1−Q)·[V·λb·D_c(s,t−1) + (1−V)·λa·D_c(s−1,t)]
where (s, t) runs over the frames received at the decoder.
In particular, the frames of view 0 (excluding the first I frame) all use inter prediction within the view, so V = 1 for them; the first frames of all views other than view 0 all use inter-view prediction, so V = 0 for them;
6) compute μa and μb by the least-squares method:

(μa, μb) = argmin_{μa, μb} Σ_{(s,t)∈Lost} (D_c(s,t) − D_L(s,t))²

D_L(s,t) = U·(D_TEC(s,t) + μb·D_c(s,t−1)) + (1−U)·(D_VEC(s,t) + μa·D_c(s−1,t))
         = [U·D_TEC(s,t) + (1−U)·D_VEC(s,t)] + [U·μb·D_c(s,t−1) + (1−U)·μa·D_c(s−1,t)]
where (s, t) runs over the frames lost at the decoder.
In particular, for any lost P frame in the video sequence of view 0, set U = 1; for the first frame of each view other than view 0, set U = 0 if that frame is lost;
7) iterate the distortion value D_c-test(s, t) of each view from the received-frame distortion D_R(s, t) and the lost-frame distortion D_L(s, t); the average transmission distortion D_c-test(s, t) of frame (s, t) is thereby obtained by iterative computation from D_c(s, t−1) and D_c(s−1, t);
8) predict the propagated distortion of the multi-view video from the V, Q and U values of the different frames, i.e. the PSNR and MSE values:
The MSE value is the mean squared error between the reference image and the reconstructed image and represents the distortion of the reconstructed image:

MSE = (1/(M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} [f(x,y) − f₀(x,y)]²

where f(x, y) is the pixel value of the reconstructed image and f₀(x, y) is the pixel value of the reference image; in the transmission distortion prediction, f(x, y) − f₀(x, y) is given by the value of D_c-test(s, t);
The peak signal-to-noise ratio is expressed in decibels:

PSNR = 10 log₁₀[(2ⁿ − 1)² / MSE]

where (2ⁿ − 1)² is the square of the peak pixel amplitude, n is the number of bits per pixel, and M and N are the horizontal and vertical pixel counts.
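The prediction loop of steps 5)-7) can be sketched as follows. One caution: the combining rule of step 7) appears only as an unrendered formula in the source, so this sketch assumes the average distortion weights the received-frame estimate D_R and the lost-frame estimate D_L by the packet loss rate p; all function names and the data layout are illustrative, not taken from the patent.

```python
import numpy as np

def predict_distortion(S, T, p, V, Q, U, D_TEC, D_VEC,
                       lam_a, lam_b, mu_a, mu_b):
    """Iteratively predict the per-frame transmission distortion D_c-test(s, t).

    S, T                   : number of views / number of frames per view
    p                      : packet loss rate
    V, Q, U, D_TEC, D_VEC  : per-frame quantities, indexed [s][t]
    lam_*, mu_*            : model parameters fitted by least squares

    Assumes D(s, -1) = D(-1, t) = 0 (the single I frame is error-free), and
    combines D_R and D_L by the loss probability -- an assumption, since the
    combining formula of step 7) is not recoverable from the source text.
    """
    D = np.zeros((S, T))
    for s in range(S):
        for t in range(T):
            Dt = D[s, t - 1] if t > 0 else 0.0   # D_c(s, t-1), temporal predecessor
            Dv = D[s - 1, t] if s > 0 else 0.0   # D_c(s-1, t), inter-view predecessor
            # distortion if the frame's packet is received (error propagation only)
            D_R = (1 - Q[s][t]) * (V[s][t] * lam_b * Dt + (1 - V[s][t]) * lam_a * Dv)
            # distortion if the packet is lost (concealment mismatch + propagation)
            D_L = (U[s][t] * D_TEC[s][t] + (1 - U[s][t]) * D_VEC[s][t]) \
                + (U[s][t] * mu_b * Dt + (1 - U[s][t]) * mu_a * Dv)
            D[s, t] = (1 - p) * D_R + p * D_L
    return D
```

Each frame's prediction only reads D(s, t-1) and D(s-1, t), so a single raster pass over the (s, t) grid suffices, which is the source of the method's low complexity.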
The multi-view video transmission distortion prediction method for packet-switched networks of the present invention is verified below. The predictions of the algorithm are compared with simulation results obtained by transmitting multi-view test sequences over a lossy packet network, and the validity of the algorithm is confirmed by a large number of experiments, with results expressed as mean squared error (MSE) and peak signal-to-noise ratio (PSNR).
Four multi-view test sequences are used to evaluate the performance of the distortion prediction algorithm: one fast-motion sequence, "Ballroom"; two medium-motion sequences, "Vassar" and "Exit"; and one slow-motion sequence, "Lotus". Their quantization parameters (QP) are 32, 32, 25 and 41, respectively.
In the experiments each multi-view sequence contains only one I frame, which is assumed to be error-free. Each P frame corresponds to a single slice group, each slice group to a single packet, and the length of each packet does not exceed the Ethernet maximum transmission unit (MTU). The packet loss rate is chosen as 5%.
1. First, the sequences are encoded with the H.264/MVC reference software (JMVM 8.0). In each P frame, the ratio V of inter-view-predicted macroblocks and the ratio Q of intra-coded macroblocks are computed at the encoder. Table 1 gives the percentages of intra-mode and inter-view-mode macroblocks for the four test sequences. The error concealment mode is "frame copy", so the D_TEC(s, t) of every frame can be precomputed by

D_TEC(s,t) = E{[F̂_i(s,t) − F̂_i(s,t−1)]²}

and likewise D_VEC(s, t) by

D_VEC(s,t) = E{[F̂_i(s,t) − F̂_i(s−1,t)]²}.
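Under frame-copy concealment both quantities are plain mean squared frame differences that the encoder can precompute; a minimal sketch, assuming `frames_hat[s][t]` holds the encoder-reconstructed pixel array of frame t of view s (the variable name is an assumption):

```python
import numpy as np

def d_tec(frames_hat, s, t):
    """Temporal concealment distortion: mean squared difference between the
    encoder-reconstructed frame (s, t) and the previous frame (s, t-1) of the same view."""
    d = frames_hat[s][t].astype(np.float64) - frames_hat[s][t - 1].astype(np.float64)
    return float(np.mean(d ** 2))

def d_vec(frames_hat, s, t):
    """Inter-view concealment distortion: mean squared difference between
    frame (s, t) and the same frame (s-1, t) of the neighbouring view."""
    d = frames_hat[s][t].astype(np.float64) - frames_hat[s - 1][t].astype(np.float64)
    return float(np.mean(d ** 2))
```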
Table 1: intra-mode ratio Q, inter-view prediction ratio V and temporal error concealment ratio U of the test sequences
Sequence Ballroom Exit Vassar Lotus
Q 3.59% 3.19% 0.49% 0.16%
V 89.44% 97.28% 98.22% 90.19%
U 89.81% 97.7% 98.45% 91.43%
2. When the frame-copy error concealment method is used, μa = 1 and μb = 1. The typical values of the algorithm parameters for each sequence are given in Table 2.
Table 2: typical values of the algorithm parameters
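Because D_R(s, t) is linear in λa and λb, the arg-min of step 5) reduces to an ordinary linear least-squares problem (the fit of μa, μb over the lost frames has the same form); a sketch, where the tuple layout of `received` is illustrative rather than taken from the patent:

```python
import numpy as np

def fit_lambda(received):
    """Fit (lam_a, lam_b) by linear least squares over the received frames.

    `received` is a list of tuples (D_c, D_t, D_v, V, Q): the measured
    distortion of frame (s, t), the distortions of frames (s, t-1) and
    (s-1, t), and that frame's V and Q ratios (illustrative layout).  Since
        D_R = (1-Q) * [V*lam_b*D_t + (1-V)*lam_a*D_v]
    is linear in (lam_a, lam_b), minimizing the squared residual against the
    measured D_c is an ordinary least-squares problem.
    """
    A, y = [], []
    for D_c, D_t, D_v, V, Q in received:
        A.append([(1 - Q) * (1 - V) * D_v,   # coefficient of lam_a
                  (1 - Q) * V * D_t])        # coefficient of lam_b
        y.append(D_c)
    lam_a, lam_b = np.linalg.lstsq(np.asarray(A), np.asarray(y), rcond=None)[0]
    return float(lam_a), float(lam_b)
```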
3. Packet loss is simulated with the JVT SVC/AVC packet loss pattern, which is derived from measured error traces in packet networks and reflects real loss behaviour such as burst loss. Figs. 2-9 compare the predicted and simulated values, plotting the MSE of every frame in each view of the "Ballroom" sequence.
4. The mean PSNR of each view of the four multi-view test sequences is measured. Table 3 gives the predicted and measured average distortion of each view of the four test sequences "Ballroom", "Vassar", "Exit" and "Lotus".
Table 3: comparison of model predictions and measured values for the multi-view sequences under 2% to 10% packet loss

Claims (3)

1. A multi-view video transmission distortion prediction method for packet-switched networks, characterized by comprising the following steps:
1) encode the multi-view video sequence with H.264/MVC, with s denoting the view index and t the frame index within a view;
2) first obtain, by simulation under different packet loss rates, the measured distortion D_c(s, t), defined as the distortion actually measured in simulation, i.e. the pixel-value difference of the same frame of the same view before and after transmission; at the encoder, compute the ratio V of inter-view-predicted macroblocks, the ratio Q of intra-coded macroblocks, and the percentage U of macroblocks concealed by motion-compensated temporal concealment;
3) measure the pixel value F_i(s, t) of pixel i of frame (s, t), and the pixel values F̂_i(s, t) and F̃_i(s, t) of pixel i of frame (s, t) reconstructed at the encoder and at the decoder, respectively;
4) compute the mean squared difference D_TEC(s, t) between adjacent frames within a view and the mean squared difference D_VEC(s, t) between the same frames of adjacent views;
5) compute λa and λb by the least-squares method:

(λa, λb) = argmin_{λa, λb} Σ_{(s,t)∈Received} (D_c(s,t) − D_R(s,t))²

D_R(s,t) = (1−Q)·[V·λb·D_c(s,t−1) + (1−V)·λa·D_c(s−1,t)]
where (s, t) runs over the frames received at the decoder.
In particular, the frames of view 0 (excluding the first I frame) all use inter prediction within the view, so V = 1 for them; the first frames of all views other than view 0 all use inter-view prediction, so V = 0 for them;
6) compute μa and μb by the least-squares method:

(μa, μb) = argmin_{μa, μb} Σ_{(s,t)∈Lost} (D_c(s,t) − D_L(s,t))²

D_L(s,t) = U·(D_TEC(s,t) + μb·D_c(s,t−1)) + (1−U)·(D_VEC(s,t) + μa·D_c(s−1,t))
         = [U·D_TEC(s,t) + (1−U)·D_VEC(s,t)] + [U·μb·D_c(s,t−1) + (1−U)·μa·D_c(s−1,t)]
where (s, t) runs over the frames lost at the decoder.
In particular, for any lost P frame in the video sequence of view 0, set U = 1; for the first frame of each view other than view 0, set U = 0 if that frame is lost;
7) iterate the distortion value D_c-test(s, t) of each view from the received-frame distortion D_R(s, t) and the lost-frame distortion D_L(s, t); the average transmission distortion D_c-test(s, t) of frame (s, t) is thereby obtained by iterative computation from D_c(s, t−1) and D_c(s−1, t);
8) predict the propagated distortion of the multi-view video from the V, Q and U values of the different frames, i.e. the peak signal-to-noise ratio (PSNR) and MSE values.
2. The multi-view video transmission distortion prediction method for packet-switched networks according to claim 1, characterized in that the intra-view mean squared difference D_TEC(s, t) between adjacent frames and the inter-view mean squared difference D_VEC(s, t) between the same frames of adjacent views described in step (4) are obtained respectively by:

D_TEC(s,t) = E{[F̂_i(s,t) − F̂_i(s,t−1)]²};  D_VEC(s,t) = E{[F̂_i(s,t) − F̂_i(s−1,t)]²}.
3. The multi-view video transmission distortion prediction method for packet-switched networks according to claim 1, characterized in that, in step (8):
The MSE value is the mean squared error between the reference image and the reconstructed image and represents the distortion of the reconstructed image:

MSE = (1/(M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} [f(x,y) − f₀(x,y)]²

where f(x, y) is the pixel value of the reconstructed image and f₀(x, y) is the pixel value of the reference image; in the transmission distortion prediction, f(x, y) − f₀(x, y) is given by the value of D_c-test(s, t);
The peak signal-to-noise ratio is expressed in decibels:

PSNR = 10 log₁₀[(2ⁿ − 1)² / MSE]

where (2ⁿ − 1)² is the square of the peak pixel amplitude, n is the number of bits per pixel, and M and N are the horizontal and vertical pixel counts.
CN201410098310.1A 2014-03-18 2014-03-18 Packet switching network oriented multi-view video transmission distortion prediction method Pending CN103856779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410098310.1A CN103856779A (en) 2014-03-18 2014-03-18 Packet switching network oriented multi-view video transmission distortion predication method


Publications (1)

Publication Number Publication Date
CN103856779A true CN103856779A (en) 2014-06-11

Family

ID=50863919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410098310.1A Pending CN103856779A (en) 2014-03-18 2014-03-18 Packet switching network oriented multi-view video transmission distortion predication method

Country Status (1)

Country Link
CN (1) CN103856779A (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周圆 (Zhou Yuan): "面向IP网络的多视点立体视频传输失真分析与建模" ("Analysis and modeling of multi-view stereoscopic video transmission distortion for IP networks"), China Doctoral Dissertations Full-text Database, Information Science and Technology series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105657399A (en) * 2016-01-04 2016-06-08 浙江万里学院 3D medical video transmission method in wireless network environment
CN113489981A (en) * 2021-07-06 2021-10-08 电子科技大学 Zero-delay code rate control method considering time domain rate distortion optimization
CN113489981B (en) * 2021-07-06 2023-02-03 电子科技大学 Zero-delay code rate control method considering time domain rate distortion optimization

Similar Documents

Publication Publication Date Title
CN101107860B (en) Method and apparatus for estimating channel induced distortion
CN102137263B (en) Distributed video coding and decoding methods based on classification of key frames of correlation noise model (CNM)
CN103873861B (en) Coding mode selection method for HEVC (high efficiency video coding)
CN101534436B (en) Allocation method of video image macro-block-level self-adaptive code-rates
CN101729891B (en) Method for encoding multi-view depth video
CN105120290B (en) A kind of deep video fast encoding method
CN105120282A (en) Code rate control bit distribution method of temporal dependency
CN106937116A (en) Low-complexity video coding method based on random training set adaptive learning
CN108989802A (en) A kind of quality estimation method and system of the HEVC video flowing using inter-frame relation
CN101562750B (en) Device and method for fast selecting video coding mode
CN102256133A (en) Distributed video coding and decoding method based on side information refining
CN102647591A (en) Fault-tolerance rate distortion optimization video coding method and device based on structure similarity (SSIM) evaluation
CN103475879A (en) Side information generation method in distribution type video encoding
CN102740081B (en) Method for controlling transmission errors of multiview video based on distributed coding technology
CN106534855B (en) A kind of Lagrange factor calculation method towards SATD
CN104244009A (en) Method for controlling code rate in distributed video coding
CN103856779A (en) Packet switching network oriented multi-view video transmission distortion predication method
CN101888561A (en) Multi-view video transmission error control method for rate distortion optimization dynamic regulation
CN107343202A (en) Feedback-less distributed video decoding method and mobile terminal based on additional code check
CN104363461B (en) The error concealing method of frame of video and apply its video encoding/decoding method
CN106210747A (en) A kind of low-complexity video coding method based on quaternary tree probabilistic forecasting
CN101557519B (en) Multi-view video coding method
CN100493194C (en) Leaking motion compensation process for video interesting area coding/decoding
CN104618714B (en) A kind of stereo video frame importance appraisal procedure
CN103517078A (en) Side information generating method in distribution type video code

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140611