CN104661021A - Quality assessment method and device for video streaming


Info

Publication number
CN104661021A
Authority
CN
China
Prior art keywords
video
sequence
original
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510077001.0A
Other languages
Chinese (zh)
Other versions
CN104661021B (en
Inventor
叶露
周城
冯伟东
肖治华
王俊曦
周正
高志荣
熊承义
田昕
董亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
South Central Minzu University
Original Assignee
State Grid Corp of China SGCC
South Central University for Nationalities
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, South Central University for Nationalities, Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201510077001.0A priority Critical patent/CN104661021B/en
Publication of CN104661021A publication Critical patent/CN104661021A/en
Application granted granted Critical
Publication of CN104661021B publication Critical patent/CN104661021B/en
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a quality assessment method and device for video streaming. The quality assessment method comprises the following steps: acquiring a first compressed video stream generated by processing an original video sequence, the first compressed video stream carrying a sequence identifier and an image serial number corresponding to the original video sequence; acquiring the original video sequence corresponding to the first compressed video stream according to the sequence identifier; acquiring an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image serial number; and assessing the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream. With this method and device, full-reference assessment of the objective video quality of a video stream at its different processing stages can be realized without increasing the network transmission burden.

Description

Quality evaluation method and device for video stream
Technical Field
The present invention relates to multimedia communication technologies in the field of communications, and in particular, to a method and an apparatus for evaluating quality of a video stream.
Background
At present, with the rapid development of the Internet and mobile communication networks, the demand for video services is growing, and services such as video surveillance, video conferencing and online video playback are increasingly widespread. A basic feature common to all types of video processing and transmission systems is the transmission of a video stream generated by one node to another node over a network. The links affecting the quality of the video stream comprise the generation, transmission and reconstruction stages of the video stream. Because the data volume of the original video source is huge, a lossy compression standard is adopted in the generation stage of the video stream, which greatly reduces the amount of data to be transmitted but also degrades the video quality; network packet loss, delay, jitter and similar problems during network transmission further reduce the video quality; in addition, in the video stream reconstruction stage, factors such as the display quality of the terminal device and the lighting environment also affect the video quality. Because these types of video degradation differ from one another, video quality evaluation at the video receiving end is very difficult.
Current video quality evaluation methods are classified, according to whether the evaluation conclusion is given by human observation, into video subjective quality assessment methods (VSQA for short) and video objective quality assessment methods (VOQA for short).
Video subjective quality assessment plays a series of test video sequences in a test environment specified by international standards (such as ITU-R BT.500) and has testers give subjective scores for the quality of the test video sequences. Since the subjective scores given to the test video sequences are the values perceived by human vision, the results of subjective evaluation are considered accurate. However, the subjective evaluation process is complicated and time-consuming, and the test results obtained have no scalability, so the method cannot be used in fields with high real-time requirements.
The video objective quality assessment method is simple and quick to operate, can meet the real-time requirement and is widely applied.
Video objective quality assessment methods are further classified into full-reference, partial-reference and no-reference video quality assessment methods. Full-reference and partial-reference methods generally need to refer to all or part of the information of the original video sequence, and in practical applications the receiving end often has difficulty obtaining this information. No-reference methods do not require any information of the original video sequence to be transmitted and can directly estimate the degree of video distortion from distortion characteristics of the video code stream received at the receiving end; however, methods of this type are still at the research stage, cannot accurately obtain the real degree of video distortion, and therefore have certain limitations in application.
In the practical application of various network video services, what is needed is an evaluation method which is easy to configure, simple and efficient, and can accurately detect the objective quality of video in the generation, transmission and reconstruction stages of video streams respectively.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and an apparatus for evaluating the quality of a video stream, which can realize full reference evaluation of objective quality of video at different processing stages of the video stream respectively without increasing network transmission load.
The quality evaluation method of the video stream comprises the following steps:
acquiring a first compressed video stream generated by processing an original video sequence; the first compressed video stream carries a sequence identifier and an image sequence number corresponding to the original video sequence;
acquiring the original video sequence corresponding to the first compressed video stream according to the sequence identifier;
acquiring an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number;
and evaluating the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream.
The invention also provides a quality evaluation method of the video stream, which comprises the following steps:
acquiring an original video sequence, a sequence identifier corresponding to the original video sequence and an image sequence number corresponding to the original video sequence;
and generating an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number.
The present invention also provides a quality evaluation device for video streams, comprising:
a first acquisition unit that acquires a first compressed video stream generated by processing an original video sequence; the first compressed video stream carries a sequence identifier and an image sequence number corresponding to the original video sequence;
the second acquisition unit is used for acquiring the original video sequence corresponding to the first compressed video stream according to the sequence identifier;
a third obtaining unit, configured to obtain an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number;
and the first evaluation unit evaluates the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream.
The present invention also provides a quality evaluation device for video streams, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an original video sequence, a sequence identifier corresponding to the original video sequence and an image serial number corresponding to the original video sequence;
and the generating unit is used for generating an original compressed video stream corresponding to the original video sequence according to the sequence identification and the image sequence number.
The technical scheme of the invention has the following beneficial effects:
the invention can identify the corresponding compressed video code stream and the original video sequence at the receiving end under the condition of not increasing the network transmission burden, thereby realizing the full reference evaluation of the objective quality of the video respectively aiming at different processing stages of the video stream, having the characteristics of flexible application, higher precision, objective evaluation and the like, and being widely applied to the video field.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the present invention is further described below with reference to the accompanying drawings and the embodiments. Obviously, the described embodiments of the present invention are some of the embodiments of the present invention, and all other embodiments obtained by those skilled in the art without any inventive work are within the scope of the present invention based on the described embodiments of the present invention.
Fig. 1 is a schematic flow chart of a method for evaluating the quality of a video stream according to an embodiment of the present invention;
fig. 2 is a schematic connection diagram of a quality evaluation apparatus for video streams according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for objective quality assessment of video streams according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a method of a video stream generation unit according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a method of a video capture and frame information identification unit according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a method of an objective quality calculation unit for a video stream according to an embodiment of the present invention;
fig. 7 is a schematic view of an application scenario for video streaming quality assessment according to an embodiment of the present invention;
fig. 8 is a schematic view of an application scenario for compression quality evaluation of a video encoding apparatus according to an embodiment of the present invention.
Fig. 9 is a schematic view of an application scenario for reconstructing quality assessment of a video stream according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the present invention provides a method for evaluating the quality of a video stream, which includes:
step 11, acquiring a first compressed video stream generated by processing an original video sequence; and the first compressed video stream carries the sequence identification and the image sequence number corresponding to the original video sequence. That is, each video sequence is composed of a series of video images, and the playing at a certain frame rate forms a moving image. The invention has a plurality of original video sequences, and different original video sequences correspond to different sequence identifications. The same original video sequence corresponds to different video images, and different video images of the same original video sequence correspond to different image serial numbers.
Wherein the processing of the original video sequence comprises: encoding processing of the original video sequence; or, the transmission processing of the original compressed video stream generated by the original video sequence; or, a decoding process of the received original compressed video stream generated from the original video sequence. Correspondingly, step 11 is: step 11A, receiving a video of the original compressed video stream after transmission processing as a first compressed video stream; or, step 11B, receiving the video of the original compressed video stream after decoding processing as a first compressed video stream; or, step 11C, capturing a video displayed on the original compressed video stream as a first compressed video stream. Taking processing as an example of transmission processing, an original video sequence and a corresponding original compressed video stream are located at a transmitting end, and a first video sequence and a corresponding first compressed video stream are located at a receiving end.
Step 12, obtaining the original video sequence corresponding to the first compressed video stream according to the sequence identifier;
step 13, obtaining an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number;
and step 14, evaluating the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream.
The method further comprises the following steps:
step 15, acquiring a first video sequence corresponding to the first compressed video stream; the step 15 specifically comprises:
step 15A, decoding the first compressed video stream to generate a first video sequence; or
And step 15B, collecting the video displayed on the original compressed video stream to generate a first video sequence.
And step 16, evaluating the video quality of the first video sequence according to the first video sequence and the original video sequence.
Optionally, before step 16, the method further includes:
step 16A, determining whether the first video sequence has a frame loss:
step 16B, if a frame is lost, filling up the lost video frame to generate a first video sequence after frame filling;
step 16 specifically comprises: and evaluating the video quality of the first video sequence after the frame is supplemented according to the first video sequence after the frame is supplemented and the original video sequence.
In one embodiment, step 12 comprises:
step 121A, extracting a sequence identifier carried by the first compressed video stream;
and step 122A, acquiring an original video sequence corresponding to the sequence identifier according to the corresponding relationship between the sequence identifier and the original video sequence.
In another embodiment, step 12 comprises:
step 121B, extracting a sequence identifier carried by the first compressed video stream;
and step 122B, generating an original video sequence corresponding to the sequence identifier according to the sequence identifier.
In one embodiment, step 13 comprises:
step 131A, extracting a sequence identifier and an image sequence number carried by the first compressed video stream;
step 132A, obtaining the original compressed video stream corresponding to the sequence identifier and the image sequence number according to the corresponding relationship between the sequence identifier and the image sequence number and the original compressed video stream.
In another embodiment, step 13 comprises:
step 131B, extracting sequence identifiers and image sequence numbers carried by the first compressed video stream;
step 132B, superimposing the sequence identifier and the image sequence number on the frames of the original video sequence to generate an original compressed video stream.
Step 14 specifically comprises the following steps: step 141, calculating a video packet loss ratio of the first compressed video stream relative to the original compressed video stream according to the first compressed video stream and the original compressed video stream.
Or, the step 14 is specifically a step 142 of calculating a video source error rate of the first compressed video stream relative to the original compressed video stream according to the first compressed video stream and the original compressed video stream.
Step 141 specifically comprises: determining, from the image sequence numbers, which video packets of the original compressed video stream are missing from the first compressed video stream, and taking the ratio of the number of lost packets to the total number of packets as the video packet loss ratio.
Step 142 specifically comprises: calculating the number of error bits. The method for calculating the number of error bits comprises: directly comparing the first compressed video stream with the original compressed video stream to determine the number of error bits, the comparison method being: comparing the two binary files and counting the number of bits in which they differ.
Step 16 specifically comprises: a step 161 of calculating a mean square error of the first video sequence relative to the original video sequence based on the first video sequence and the original video sequence;
alternatively, step 16 specifically includes: step 162, calculating a peak signal-to-noise ratio of the first video sequence relative to the original video sequence according to the first video sequence and the original video sequence;
alternatively, step 16 specifically includes: step 163, calculating a structural similarity mean of the first video sequence with respect to the original video sequence according to the first video sequence and the original video sequence.
Step 161 specifically comprises:
MSE = \frac{1}{NM}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl[X(i,j)-Y(i,j)\bigr]^{2} \quad (3);
wherein MSE is the mean square error; the frame resolution of the first compressed video stream is M pixels × N pixels; X(i, j) is the pixel value of one frame image of the original video sequence at the point (i, j); Y(i, j) represents the pixel value of the corresponding frame of the first video sequence at the point (i, j); the pixel values may be gray scale values or color differences.
Step 162 specifically comprises:
PSNR = 10\log\left[\frac{N\times M\times E^{2}}{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl[X(i,j)-Y(i,j)\bigr]^{2}}\right] \quad (4)
wherein PSNR is the peak signal-to-noise ratio; the frame resolution of the first compressed video stream is M pixels × N pixels; X(i, j) is the pixel value of one frame image of the original video sequence at the point (i, j); Y(i, j) represents the pixel value of the corresponding frame of the first video sequence at the point (i, j); E is the peak amplitude of the video signal under the sampling condition of a predetermined bit depth;
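A minimal NumPy sketch of equations (3) and (4) follows, assuming the compared frames are 8-bit grayscale images of identical resolution stored as arrays; the default peak value of 255 stands in for E and is only one possible sampling condition.

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean square error of a test frame Y against an original frame X, equation (3)."""
    diff = x.astype(np.float64) - y.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, equation (4); `peak` plays the role of E."""
    err = mse(x, y)
    if err == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / err)
```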
step 163 specifically comprises:
MSSIM = \frac{1}{M}\sum_{k=1}^{M}\mathrm{SSIM}(x_{k},y_{k}) \quad (5);
wherein SSIM is the structural similarity parameter; k is the sequence number of a local window of one frame image of the first video sequence; M is the total number of local windows of one frame image of the first video sequence; x_k and y_k are the contents of the video frames within the k-th local window, where "content" is a collective term for all the image samples within the window.
SSIM is calculated according to the following formula;
SSIM = [l(x,y)]^{\alpha}\cdot[c(x,y)]^{\beta}\cdot[s(x,y)]^{\gamma} \quad (6);
wherein α, β, γ >0, which are weighting coefficients of the luminance comparison function l (x, y), the contrast comparison function c (x, y), and the structure information comparison function s (x, y), respectively;
l(x,y) = \frac{2\mu_{x}\mu_{y}+C_{1}}{\mu_{x}^{2}+\mu_{y}^{2}+C_{1}},\quad C_{1}=(K_{1}L)^{2} \quad (7);
c(x,y) = \frac{2\sigma_{x}\sigma_{y}+C_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}},\quad C_{2}=(K_{2}L)^{2} \quad (8);
s(x,y) = \frac{\sigma_{xy}+C_{3}}{\sigma_{x}\sigma_{y}+C_{3}},\quad C_{3}=C_{2}/2 \quad (9);
wherein μ_x represents the average luminance of the original video sequence and μ_y represents the average luminance of the first video sequence; σ_x represents the standard deviation of the original video sequence and σ_y represents the standard deviation of the first video sequence; σ_xy represents the covariance of the original video sequence and the first video sequence; C_1, C_2 and C_3 are constants; K_1, K_2 << 1; L represents the dynamic variation range of the pixel values; N is the total number of frames of the original video sequence.
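The window-based SSIM/MSSIM computation of equations (5)-(9) can be sketched as follows. The 8×8 non-overlapping windows, K_1 = 0.01, K_2 = 0.03 and α = β = γ = 1 are commonly used defaults rather than values fixed by the text, and grayscale frames stored as 2-D NumPy arrays are assumed.

```python
import numpy as np

def ssim_window(x: np.ndarray, y: np.ndarray,
                K1: float = 0.01, K2: float = 0.03, L: float = 255.0) -> float:
    """SSIM of one pair of local windows, equations (6)-(9) with alpha = beta = gamma = 1."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(ddof=1), y.std(ddof=1)
    sigma_xy = float(np.cov(x, y, ddof=1)[0, 1])
    lum = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)            # l(x, y)
    con = (2 * sigma_x * sigma_y + C2) / (sigma_x ** 2 + sigma_y ** 2 + C2)  # c(x, y)
    struct = (sigma_xy + C3) / (sigma_x * sigma_y + C3)                      # s(x, y)
    return lum * con * struct

def mssim(x_frame: np.ndarray, y_frame: np.ndarray, win: int = 8) -> float:
    """Structural similarity mean over non-overlapping win x win windows, equation (5)."""
    h, w = x_frame.shape
    scores = [ssim_window(x_frame[i:i + win, j:j + win],
                          y_frame[i:i + win, j:j + win])
              for i in range(0, h - win + 1, win)
              for j in range(0, w - win + 1, win)]
    return float(np.mean(scores))
```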
As shown in fig. 2, the present invention further provides a quality evaluation apparatus for a video stream, comprising:
a first acquisition unit 21 that acquires a first compressed video stream generated by processing an original video sequence; the first compressed video stream carries a sequence identifier and an image sequence number corresponding to the original video sequence;
a second obtaining unit 22, obtaining the original video sequence corresponding to the first compressed video stream according to the sequence identifier;
a third obtaining unit 23, obtaining an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number;
the first evaluation unit 24 evaluates the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream.
The apparatus further comprises:
a fourth obtaining unit 25, which obtains a first video sequence corresponding to the first compressed video stream;
a second evaluation unit 26 evaluates the video quality of the first video sequence based on the first video sequence and the original video sequence.
The first acquisition unit 21 includes:
the transmission module receives a video of the original compressed video stream after transmission processing as a first compressed video stream;
the decoding module receives a video of the original compressed video stream after decoding processing as a first compressed video stream; or
And the first acquisition module acquires a video displayed on the original compressed video stream as a first compressed video stream.
The fourth acquiring unit 25 includes:
a decoding module, decoding the first compressed video stream to generate a first video sequence; or
And the second acquisition module acquires the video displayed on the original compressed video stream to generate a first video sequence.
The second acquisition unit 22 includes:
the first extraction module is used for extracting the sequence identification carried by the first compressed video stream;
and the first acquisition module acquires the original video sequence corresponding to the sequence identifier according to the corresponding relation between the sequence identifier and the original video sequence.
Optionally, the second obtaining unit 22 includes:
the second extraction module is used for extracting the sequence identification carried by the first compressed video stream;
and the first generation module generates an original video sequence corresponding to the sequence identifier according to the sequence identifier.
The third acquiring unit 23 includes:
the third extraction module is used for extracting the sequence identification and the image sequence number carried by the first compressed video stream;
and the second acquisition module acquires the original compressed video stream corresponding to the sequence identifier and the image sequence number according to the corresponding relation between the sequence identifier and the image sequence number and the original compressed video stream.
Optionally, the third obtaining unit 23 includes:
the fourth extraction module is used for extracting the sequence identification and the image sequence number carried by the first compressed video stream;
and the second generation module is used for superposing the sequence identification and the image sequence number on the frames of the original video sequence to generate an original compressed video stream.
The embodiment of the invention can identify the corresponding compressed video code stream and the original video sequence at the receiving end under the condition of not increasing network transmission burden through different configuration modes, thereby realizing full reference evaluation aiming at the objective quality of the video at the generation, transmission and reconstruction stages of the video stream respectively, having the characteristics of flexible application, higher precision, objective evaluation and the like, and being widely applied to the field of videos.
The following describes an application scenario of the method of the present invention.
As shown in fig. 4, the present invention provides a method for generating a test-specific video stream (equivalent to the original compressed video stream described above), including:
First, the original video test sequences (equivalent to the original video sequences described above) are divided into four categories (still, fast motion, slow motion and zooming) according to the lens motion characteristics, and all sequences are numbered with letters (equivalent to the sequence identifiers); the number of test sequences and the encoding control parameters can be determined according to the specific test environment.
Then, according to the selected video compression mode and reference frame structure, the pictures of each single test sequence are numbered with their Picture Order Count (POC for short; equivalent to the image serial numbers), and the sequence letter number and POC number are superimposed at a specific position of each original video frame (the position can be determined according to the actual situation).
Finally, the required video compression standard is selected, a test-specific compressed video stream is generated according to the coding parameter table recommended by that standard (mainly comprising coding control parameters such as the profile/level, the quantization step size and the preset code rate), and the average peak signal-to-noise ratio and the code rate of the generated video stream are recorded at the same time.
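As a rough illustration of this generation step, the sketch below overlays a sequence letter and a frame number on each frame with OpenCV and re-encodes the result. The file paths, codec tag, overlay position, and the use of the display-order frame index in place of the codec's POC are all assumptions made for the example, not requirements of the method.

```python
import cv2

def make_test_stream(src_path: str, dst_path: str, seq_id: str,
                     fps: float = 25.0, fourcc: str = "avc1") -> None:
    """Overlay 'sequence letter + picture number' on every frame and re-encode."""
    cap = cv2.VideoCapture(src_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read source sequence: " + src_path)
    h, w = frame.shape[:2]
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))
    poc = 0
    while ok:
        label = f"{seq_id}-{poc:04d}"              # sequence identifier + image serial number
        cv2.putText(frame, label, (16, 32),        # agreed overlay position (placeholder)
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        writer.write(frame)
        poc += 1
        ok, frame = cap.read()
    cap.release()
    writer.release()
```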
The embodiment of the invention also provides a full-reference evaluation apparatus for the objective quality of a video code stream, which can be used to test two user sides that need to carry out video communication with each other. The apparatus comprises: a video stream generation unit, a video stream receiving and analyzing unit, a video stream reconstruction and display unit, a video acquisition and frame information identification unit, and a video objective quality calculation unit. The video stream generation unit is connected with the video stream reconstruction and display unit and the video objective quality calculation unit; the video stream receiving and analyzing unit is connected with the video stream reconstruction and display unit; the video stream reconstruction and display unit is connected with the video acquisition and frame information identification unit; the video acquisition and frame information identification unit is connected with the video objective quality calculation unit; and the video objective quality calculation unit is connected with the video stream generation unit.
The video stream generation unit is deployed at the sending end and the receiving end simultaneously to generate a compressed video stream special for testing;
the video stream receiving and analyzing unit is used for extracting, after the video stream has been received at the receiving end, parameters from the Network Abstraction Layer Units (NALU for short) of the video stream, such as the coding profile/level, the quantization step size, the preset code rate control parameters and the POC (Picture Order Count), for analysis, and for synchronously generating a compressed video stream consistent with the currently received video stream according to the received parameters, for use by the video stream objective quality calculation unit;
the video stream reconstruction and display unit completes the decoding and display functions of the video stream;
the video acquisition and frame information identification unit is used for collecting and storing the locally captured video sequence and for identifying the sequence number and POC number of the currently received video by capturing the output of the video display unit and recognizing the image signal in a designated area, for use by the video objective quality calculation unit;
the video objective quality calculation unit is used for detecting and identifying the currently received video stream so that the video stream generation unit at the receiving end synchronously generates a consistent video stream for objective video quality evaluation and calculation; or for directly extracting the consistent video stream and the original video data stored at the receiving end for objective video quality evaluation and calculation; meanwhile, objective quality indexes such as the packet loss rate and the bit error rate of the video code stream during channel transmission can be calculated using the consistent video code stream.
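As an example of the first step performed by the video stream receiving and analyzing unit, the sketch below locates NAL units in a received byte stream and reads their type. It assumes H.264/AVC with Annex-B start-code framing, which the patent does not mandate; reading POC, quantization step size and similar parameters would additionally require parsing the parameter sets and slice headers.

```python
def iter_nal_units(stream: bytes):
    """Yield (nal_unit_type, nal_bytes) from an H.264/AVC Annex-B byte stream."""
    offsets, i = [], 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)        # 3-byte start code (also matches the 4-byte form)
        if i < 0:
            break
        offsets.append(i + 3)
        i += 3
    for k, start in enumerate(offsets):
        end = offsets[k + 1] - 3 if k + 1 < len(offsets) else len(stream)
        nal = stream[start:end].rstrip(b"\x00")    # drop the leading zero of a 4-byte start code
        if nal:
            # First NAL byte: forbidden_zero_bit(1) | nal_ref_idc(2) | nal_unit_type(5)
            yield nal[0] & 0x1F, nal
```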
The invention has the following beneficial effects:
on one hand, the invention directly superposes the type information of the video sequence and the corresponding image serial number as a part of the test video sequence image, thereby facilitating the identification of the video information at the receiving end; meanwhile, objective quality detection of the video stream under the full reference condition can be realized under the condition that the original video stream does not need to be transmitted independently and the network transmission load is not increased.
On the other hand, the device disclosed by the invention has the functions of outputting the original video stream, outputting the compressed video stream and receiving the input of the video stream, so that a single video communication node can be independently tested; or, a plurality of video communication nodes are tested simultaneously, the test environment can be configured separately aiming at the generation, transmission and reconstruction stages of the video stream, and a flexible configuration mode is provided.
On the other hand, the method can objectively and quantitatively analyze the video quality degradation introduced in the video compression, transmission and reconstruction processes, and avoids the subjectivity introduced by a subjective test method; meanwhile, the authoritative objective evaluation index can be adopted to accurately describe the video stream quality and the reliability of the video transmission system. The invention is not only suitable for various video communication systems, but also can be used for equipment evaluation of video acquisition and coding systems.
Embodiments of the present invention are described below.
The first embodiment is as follows:
the embodiment is a method for accurately detecting objective quality of a video stream transmitted through a network, and a hardware system used in the method comprises: at a sending end, the video code stream objective quality assessment device according to the embodiment of the present invention is connected to a video code stream sending device, as shown in fig. 7; at the receiving end, the video stream receiving apparatus is connected to the video stream objective quality assessment apparatus according to the embodiment of the present invention, as shown in fig. 7.
As shown in fig. 7 and fig. 4, the basic principle of the method of this embodiment is:
at a sending end, firstly, determining the video category and the specific sequence used by the current test, and simultaneously selecting corresponding coding control parameters according to the selected video compression method;
then, a video stream generating unit of the video code stream objective quality evaluation device is used for selecting a corresponding original video sequence, generating a video sequence number and an image serial number, overlaying the video sequence number and the image serial number on the original video sequence, and compressing the video sequence number and the image serial number into a compressed video stream special for testing by using a corresponding video encoder.
In order to save video stream generation time, each type of original video sequence can be compressed in advance into each type of video stream according to the determined parameters and stored; in use, the corresponding video stream is directly selected for output according to the video coding control parameters. Finally, the video stream transmitting device is used to cyclically send the test-specific compressed video stream of not less than 10 s into the network for transmission. The cycle time may be determined according to the specific test requirements, and the transport format and protocol may be determined according to the specific network physical-layer transport protocol.
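A minimal sketch of this cyclic transmission step is given below. It uses plain UDP with a fixed packet size and pacing purely as an illustration; the destination address, port and packetization are placeholders, and a real test would follow the transport protocol of the network under test (for example RTP).

```python
import socket
import time

def loop_send(stream_path: str, dst=("192.0.2.10", 5004),
              packet_size: int = 1316, cycles: int = 10,
              inter_packet_gap: float = 0.001) -> None:
    """Repeatedly push a test-specific compressed video stream into the network."""
    with open(stream_path, "rb") as f:
        data = f.read()                            # the >= 10 s test stream
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(cycles):                    # cycle count per test requirements
            for off in range(0, len(data), packet_size):
                sock.sendto(data[off:off + packet_size], dst)
                time.sleep(inter_packet_gap)       # crude pacing between packets
    finally:
        sock.close()
```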
At the receiving end, the method of the invention proceeds as follows: first, the video stream receiving device receives the stream according to the specific network transport protocol and restores a video stream sequence that can be processed by a video decoder;
then, the received video stream is sent to the video code stream objective quality evaluation device for processing. The video stream receiving and analyzing unit extracts the NALU units of the video code stream and analyzes the basic parameters of the extracted video stream, such as the frame type, POC number, video resolution and quantization step size, for subsequent analysis;
the video stream reconstruction and display unit decodes and stores the received compressed video stream and transmits the compressed video stream to the display equipment for playing;
the video acquisition and frame information identification unit acquires and stores the decoded video played on the display device, and extracts the information characters at the designated position to obtain the type and POC number of the decoded video for subsequent analysis;
the video object quality calculating unit extracts the corresponding original video sequence and the compressed video stream in the video stream generating unit according to the video stream parameters and the video sequences of various types output by the units, and calculates the objective quality evaluation result of the output video stream.
As shown in fig. 5, the method for extracting the corresponding original video sequence and the compressed video stream includes: binary code stream comparison and video frame information extraction. Both types of methods may be used separately or together to authenticate each other depending on the configuration of the detection environment.
The binary code stream comparison method requires each type of original video sequence to be compressed in advance into each type of video stream and stored at the receiving end; alternatively, after the type and coding parameters of the received video stream have been obtained, the receiving-end video stream generation unit is invoked to generate the corresponding compressed video stream. The method specifically comprises the following steps: the binary code stream comparison method uses the parameters extracted by the video stream receiving and analyzing unit to limit the analysis range of the compressed video stream at the video receiving end. After the code stream structure of a test-specific compressed video stream of not less than 10 s has been obtained, the video objective quality calculation unit compares each NALU unit of the code stream with the compressed video streams stored at the receiving end using a binary comparator, obtains the compressed video stream that matches the received video code stream, and extracts the corresponding original video sequence and compressed video stream for objective video quality calculation.
The video frame information extraction method directly extracts the video stream type and the picture sequence number from the decoded video sequence. Specifically, it extracts the letter images of the video sequence number superimposed at the designated position of the video frame, performs correlation matching with the letter features in a character feature library to determine the letter information and the corresponding sequence number, and extracts the corresponding original video sequence and compressed video stream accordingly for objective video quality calculation. This method is susceptible to video degradation caused by network packet loss; if the reading is inaccurate, an accurate identification result can be obtained, after the network becomes stable again, by extending the observation time.
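One simple way to realize the character matching described above is normalized template matching over the designated overlay region, as sketched below. The region coordinates and the character template library are placeholders, and recognizing a full label would repeat the match for each character position.

```python
import cv2
import numpy as np

def identify_character(frame_gray: np.ndarray, templates: dict,
                       roi=(0, 0, 200, 48)) -> str:
    """Match the designated region of a decoded frame against a character feature library.

    `templates` maps a character to its grayscale template image; `roi` is the
    (x, y, width, height) of the agreed overlay position (placeholder values).
    """
    x, y, w, h = roi
    region = frame_gray[y:y + h, x:x + w]
    best_char, best_score = "", -1.0
    for ch, tmpl in templates.items():
        score = float(cv2.matchTemplate(region, tmpl, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_char, best_score = ch, score
    return best_char
```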
Wherein the step of calculating the objective quality of the video from the received compressed video stream and the extracted corresponding original video sequence comprises:
decoding a received compressed video stream unit of not less than 10s to generate a video sequence;
judging, according to the POC numbers extracted by the video stream receiving and analyzing unit, whether packet loss has occurred in the compressed video stream;
If there is packet loss, the lost video frames need to be filled into the decoded video sequence. The padding can directly use a frame copy method or a motion compensation method: the frame copy method directly copies the previous video frame to the position of the currently lost frame; the motion compensation method reconstructs the lost frame from the motion compensation relationship between the preceding and following frames.
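The frame-copy variant of this padding step can be sketched as follows, assuming the decoded frames are keyed by their recognized POC numbers; motion-compensated concealment would replace the simple copy.

```python
def pad_lost_frames(decoded_frames: dict, expected_pocs: list) -> list:
    """Fill lost frames by copying the most recent received frame (frame-copy padding).

    `decoded_frames` maps each received POC number to its decoded frame;
    `expected_pocs` is the complete, ordered POC list of the original sequence.
    """
    padded, last = [], None
    for poc in expected_pocs:
        if poc in decoded_frames:
            last = decoded_frames[poc]
        elif last is None:
            raise ValueError("first frame lost; nothing available to copy")
        padded.append(last)
    return padded
```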
As shown in FIG. 6, after the decoded video sequence has been aligned, the objective error between the decoded video sequence and the corresponding original video sequence can be calculated. Full-reference video objective quality evaluation parameters such as the Mean Squared Error (MSE), the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Mean (MSSIM) may be used.
Assuming that the video frame resolution is M × N (pixels), X denotes the original video sequence and Y denotes the decoded video sequence, the above evaluation parameters can be calculated as follows:
the MSE can be obtained as equation (10):
MSE = \frac{1}{NM}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl[X(i,j)-Y(i,j)\bigr]^{2} \quad (10)
wherein, X (i, j) is the pixel value of a certain frame image of the original video sequence at the point (i, j), and Y (i, j) represents the pixel value of the image of the corresponding frame of the decoded video sequence at the point (i, j).
PSNR can be obtained as in equation (11):
PSNR = 10\log\left[\frac{N\times M\times 255^{2}}{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl[X(i,j)-Y(i,j)\bigr]^{2}}\right] \quad (11)
where X (i, j) is the pixel value of a certain frame image of the original video sequence at point (i, j), Y (i, j) represents the pixel value of the image of the corresponding frame of the decoded video sequence at point (i, j), and 255 is the peak amplitude of the video signal under the sampling condition of 8 bits.
The MSSIM can be obtained according to equations (12), (13), (14), (15), (16):
first, a Structural Similarity Index (SSIM) parameter is calculated, and the expression is calculated as follows:
SSIM = [l(x,y)]^{\alpha}\cdot[c(x,y)]^{\beta}\cdot[s(x,y)]^{\gamma} \quad (12)
where α, β, γ >0 are weighting coefficients of the luminance comparison function l (x, y), the contrast comparison function c (x, y), and the structure information comparison function s (x, y), respectively. The computational expressions for these three functions are as follows:
l(x,y) = \frac{2\mu_{x}\mu_{y}+C_{1}}{\mu_{x}^{2}+\mu_{y}^{2}+C_{1}},\quad C_{1}=(K_{1}L)^{2} \quad (13)
c(x,y) = \frac{2\sigma_{x}\sigma_{y}+C_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}},\quad C_{2}=(K_{2}L)^{2} \quad (14)
s(x,y) = \frac{\sigma_{xy}+C_{3}}{\sigma_{x}\sigma_{y}+C_{3}},\quad C_{3}=C_{2}/2 \quad (15)
wherein μ_x represents the average luminance of the original video sequence and μ_y represents the average luminance of the decoded video sequence; σ_x = ((1/(N-1)) Σ_{i=1}^{N} (x_i - μ_x)²)^{1/2} represents the standard deviation of the original video sequence and σ_y = ((1/(N-1)) Σ_{i=1}^{N} (y_i - μ_y)²)^{1/2} represents the standard deviation of the decoded video sequence; σ_xy = (1/(N-1)) Σ_{i=1}^{N} (x_i - μ_x)(y_i - μ_y) represents the covariance of the original video sequence and the decoded video sequence; C_1, C_2 and C_3 are constants, K_1, K_2 << 1, and L represents the dynamic range of the pixel values. When L is 255, the video image is an 8-bit image.
The quality assessment value of the entire video frame is expressed by a structural similarity Mean (MSSIM):
MSSIM = \frac{1}{M}\sum_{k=1}^{M}\mathrm{SSIM}(x_{k},y_{k}) \quad (16)
where M is the number of local windows in one frame of the video image (the size of a local window is 8×8 pixels), and x_k and y_k are the contents of the k-th local window of the original and decoded video frames respectively.
The step of calculating the video network transmission quality according to the received compressed video stream and the extracted corresponding locally stored compressed video stream specifically comprises: analyzing, according to the POC numbers extracted by the video stream receiving and analyzing unit over a compressed video stream unit of not less than 10 s, whether packet loss has occurred in the received compressed video stream. The packet loss rate during video network transmission can then be calculated according to equation (17):
\text{packet loss rate} = \frac{N_{\text{lost}}}{N_{\text{sent}}} \times 100\% \quad (17)
where N_sent is the total number of video packets in the transmitted test stream and N_lost is the number of packets found missing from the received stream.
Meanwhile, the bit error rate of the video compression source in network transmission can be calculated according to equation (18):
\text{bit error rate} = \frac{N_{\text{err}}}{N_{\text{bits}}} \times 100\% \quad (18)
where N_err is the number of differing bits obtained by binary comparison of the received and locally stored compressed streams and N_bits is the total number of bits compared.
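Under the POC-based bookkeeping described above, equation (17) reduces to a set difference, as sketched below; equation (18) follows from the error-bit count of the binary comparison (see the earlier sketch) divided by the total number of compared bits.

```python
def packet_loss_rate(expected_pocs, received_pocs) -> float:
    """Packet loss rate of equation (17), as a percentage."""
    expected, received = set(expected_pocs), set(received_pocs)
    lost = len(expected - received)
    return 100.0 * lost / len(expected)

def bit_error_rate(error_bits: int, total_bits: int) -> float:
    """Source bit error rate of equation (18), as a percentage."""
    return 100.0 * error_bits / total_bits
```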
Because the network state is unstable during network transmission, in order to ensure the accuracy of the test results, a large number of tests can be carried out cyclically, and the objective quality evaluation results of the video sequences decoded from a large number of compressed video stream units of not less than 10 s are arithmetically averaged before being output.
Example two:
the embodiment is an evaluation method for accurately detecting the compression quality of a video encoder in a video communication system. The hardware system used by the method comprises: the objective quality assessment device for video code streams according to the embodiment of the present invention is connected to a video encoder device, as shown in fig. 8.
As shown in fig. 8 and 3, the method of the embodiment includes:
firstly, generating an original video sequence by a video stream generating unit in the video code stream objective quality evaluation device according to the input test video category and parameters;
then, the sequence is sent to a video coding device to be evaluated to generate a compressed video stream;
and then, the compressed video stream is sent back to the video code stream objective quality evaluation device and processed by the video stream receiving and analyzing unit and the video stream reconstruction and display unit to generate a decoded video sequence.
And finally, the video objective quality calculation unit refers to the original video sequence and the decoded video sequence, and calculates full-reference video objective quality evaluation parameters such as MSE, PSNR and MSSIM.
The method can conveniently realize the evaluation of the compression quality of the video encoder in the video stream generation stage in the video communication system.
Example three:
the embodiment is an evaluation method for accurately detecting the video reconstruction quality of video playing and display equipment in the video code stream reconstruction stage in a video communication system. The hardware system used by the method comprises: the objective quality evaluation device for video code streams in the embodiment of the invention is connected with an independent playing and displaying device, as shown in fig. 9.
As shown in fig. 9 and fig. 3, the method of this embodiment includes:
firstly, a video stream generating unit in the video code stream objective quality evaluation device generates an original video sequence according to the input test video category and parameters;
then, the sequence is sent to a video stream reconstruction and display unit, and the unit controls an external independent video playing and display device to play the original video sequence;
then, a video acquisition and frame information identification unit is used for acquiring, identifying and storing a video sequence acquired from an external independent video playing and displaying device;
and finally, calculating full-reference video objective quality evaluation parameters such as MSE, PSNR, MSSIM and the like by the video objective quality calculation unit by referring to the original video sequence and the stored playing video sequence.
The video sequence number and the image serial number are superimposed during the generation of the original video sequence, so that identification and alignment of the captured video sequence with the original video sequence can be realized during objective video quality evaluation; quality evaluation of the video produced in the video reconstruction stage of a video communication system can thus be conveniently realized.
The invention also provides a quality evaluation method of the video stream, which can be applied to a sending end and comprises the following steps:
acquiring an original video sequence, a sequence identifier corresponding to the original video sequence and an image sequence number corresponding to the original video sequence;
and generating an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number.
The step of generating the original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number specifically includes:
and superposing the sequence identification and the image sequence number on the frames of the original video sequence to generate an original compressed video stream.
The invention also provides a quality evaluation device of video stream, which can be arranged at a sending end and comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an original video sequence, a sequence identifier corresponding to the original video sequence and an image serial number corresponding to the original video sequence;
and the generating unit is used for generating an original compressed video stream corresponding to the original video sequence according to the sequence identification and the image sequence number.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method for quality assessment of a video stream, comprising:
acquiring a first compressed video stream generated by processing an original video sequence; the first compressed video stream carries a sequence identifier and an image sequence number corresponding to the original video sequence;
acquiring the original video sequence corresponding to the first compressed video stream according to the sequence identifier;
acquiring an original compressed video stream corresponding to the original video sequence according to the sequence identifier and the image sequence number;
and evaluating the video quality of the first compressed video stream according to the first compressed video stream and the original compressed video stream.
2. The method of claim 1, further comprising:
acquiring a first video sequence corresponding to the first compressed video stream;
and evaluating the video quality of the first video sequence according to the first video sequence and the original video sequence.
3. The method of claim 2, wherein, before the step of evaluating the video quality of the first video sequence according to the first video sequence and the original video sequence, the method further comprises:
judging whether the first video sequence has frame loss, and if frames are lost, supplementing the lost video frames to generate a frame-supplemented first video sequence;
wherein the evaluating the video quality of the first video sequence according to the first video sequence and the original video sequence specifically comprises: evaluating the video quality of the frame-supplemented first video sequence according to the frame-supplemented first video sequence and the original video sequence.
4. The method of claim 1, wherein the processing of the original video sequence comprises:
encoding processing of the original video sequence;
transmission processing of an original compressed video stream generated from the original video sequence; or
decoding processing of a received original compressed video stream generated from the original video sequence.
5. The method of claim 1, wherein the step of obtaining the first compressed video stream generated by processing the original video sequence comprises:
receiving, as the first compressed video stream, the video of the original compressed video stream after transmission processing;
receiving, as the first compressed video stream, the video of the original compressed video stream after decoding processing; or
capturing, as the first compressed video stream, the video displayed from the original compressed video stream.
6. The method according to claim 2, wherein the step of obtaining the first video sequence corresponding to the first compressed video stream specifically comprises:
decoding the first compressed video stream to generate a first video sequence; or
capturing the video displayed from the original compressed video stream to generate the first video sequence.
7. The method according to claim 1, wherein the step of obtaining the original video sequence corresponding to the first compressed video stream according to the sequence identifier comprises:
extracting a sequence identifier carried by the first compressed video stream;
and acquiring the original video sequence corresponding to the sequence identifier according to the corresponding relation between the sequence identifier and the original video sequence.
8. The method according to claim 1, wherein the step of obtaining the original video sequence corresponding to the first compressed video stream according to the sequence identifier comprises:
extracting a sequence identifier carried by the first compressed video stream;
and generating an original video sequence corresponding to the sequence identifier according to the sequence identifier.
9. The method according to claim 1, wherein the step of obtaining the original compressed video stream corresponding to the first compressed video stream according to the sequence identifier and the image sequence number comprises:
extracting a sequence identifier and an image sequence number carried by the first compressed video stream;
and acquiring the original compressed video stream corresponding to the sequence identifier and the image sequence number according to the corresponding relation between the sequence identifier and the image sequence number and the original compressed video stream.
10. The method according to claim 1, wherein the step of obtaining the original compressed video stream corresponding to the first compressed video stream comprises:
extracting a sequence identifier and an image sequence number carried by the first compressed video stream;
and superposing the sequence identifier and the image sequence number on the frames of the original video sequence to generate the original compressed video stream.
CN201510077001.0A 2015-02-12 2015-02-12 Quality evaluation method for a video stream Expired - Fee Related CN104661021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510077001.0A CN104661021B (en) Quality evaluation method for a video stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510077001.0A CN104661021B (en) Quality evaluation method for a video stream

Publications (2)

Publication Number Publication Date
CN104661021A true CN104661021A (en) 2015-05-27
CN104661021B CN104661021B (en) 2017-03-08

Family

ID=53251649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510077001.0A Expired - Fee Related CN104661021B (en) Quality evaluation method for a video stream

Country Status (1)

Country Link
CN (1) CN104661021B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668215A (en) * 2003-02-18 2010-03-10 诺基亚有限公司 Picture decoding method
CN1859584A (en) * 2005-11-14 2006-11-08 华为技术有限公司 Video frequency broadcast quality detecting method for medium broadcast terminal device
CN101616315A (en) * 2008-06-25 2009-12-30 华为技术有限公司 A kind of method for evaluating video quality, device and system
CN101448173A (en) * 2008-10-24 2009-06-03 华为技术有限公司 Method for evaluating Internet video quality, device and system thereof
CN101588498A (en) * 2009-06-23 2009-11-25 硅谷数模半导体(北京)有限公司 Video image data compression and decompression method and device
CN102158881A (en) * 2011-04-28 2011-08-17 武汉虹信通信技术有限责任公司 Method and device for completely evaluating 3G visual telephone quality
CN102256130A (en) * 2011-07-27 2011-11-23 武汉大学 Method for marking video frame image sequence number based on inserted macro block brightness particular values
CN102932649A (en) * 2011-08-08 2013-02-13 华为软件技术有限公司 Video decoding quality detection method and device of set top box
CN104253996A (en) * 2014-09-18 2014-12-31 中安消技术有限公司 Video data sending and receiving methods, video data sending and receiving devices and video data transmission system

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107295330A (en) * 2016-04-11 2017-10-24 华为技术有限公司 A kind of method and apparatus of marking video frame
CN107295330B (en) * 2016-04-11 2019-02-01 华为技术有限公司 A kind of method and apparatus of marking video frame
US11363346B2 (en) 2016-06-16 2022-06-14 Huawei Technologies Co., Ltd. Video service quality assessment method and apparatus
US11006185B2 (en) 2016-06-16 2021-05-11 Huawei Technologies Co., Ltd. Video service quality assessment method and apparatus
CN107545214A (en) * 2016-06-28 2018-01-05 阿里巴巴集团控股有限公司 Image sequence number determines method, the method to set up of feature, device and smart machine
CN106713905B (en) * 2016-12-26 2019-02-26 百富计算机技术(深圳)有限公司 The method and apparatus of detection image transmission quality
CN108668167B (en) * 2017-03-28 2021-01-15 中国移动通信有限公司研究院 Video restoration method and device
CN108668167A (en) * 2017-03-28 2018-10-16 中国移动通信有限公司研究院 A kind of method and device of video reduction
CN107454387A (en) * 2017-08-28 2017-12-08 西安万像电子科技有限公司 Mass parameter acquisition methods and device for image coding and decoding Transmission system
CN109756730A (en) * 2017-11-03 2019-05-14 腾讯科技(深圳)有限公司 Evaluation process method, apparatus, smart machine and storage medium based on video
CN109756730B (en) * 2017-11-03 2021-07-27 腾讯科技(深圳)有限公司 Evaluation processing method and device based on video, intelligent equipment and storage medium
CN110662019A (en) * 2018-06-28 2020-01-07 统一专利有限责任两合公司 Method and system for assessing the quality of video transmission over a network
CN109726693A (en) * 2019-01-02 2019-05-07 京东方科技集团股份有限公司 For the method, apparatus of assessment equipment ambient noise, medium and electronic equipment
US11508145B2 (en) 2019-01-02 2022-11-22 Beijing Boe Optoelectronics Technology Co., Ltd. Method for evaluating environmental noise of device, apparatus, medium and electronic device
CN110049313A (en) * 2019-04-17 2019-07-23 微梦创科网络科技(中国)有限公司 A kind of video measurement method and system
CN110213573A (en) * 2019-06-14 2019-09-06 北京字节跳动网络技术有限公司 A kind of video quality evaluation method, device and electronic equipment
CN110738657A (en) * 2019-10-28 2020-01-31 北京字节跳动网络技术有限公司 Video quality evaluation method and device, electronic equipment and storage medium
CN110738657B (en) * 2019-10-28 2022-06-03 北京字节跳动网络技术有限公司 Video quality evaluation method and device, electronic equipment and storage medium
CN110913213A (en) * 2019-12-30 2020-03-24 广州酷狗计算机科技有限公司 Method, device and system for evaluating and processing video quality
CN110913213B (en) * 2019-12-30 2021-07-06 广州酷狗计算机科技有限公司 Method, device and system for evaluating and processing video quality
CN111510766A (en) * 2020-04-16 2020-08-07 中国航空无线电电子研究所 Video coding real-time evaluation and playing tool
CN114222066A (en) * 2021-12-23 2022-03-22 上海商米科技集团股份有限公司 System, method and computer readable medium for detecting network video quality

Also Published As

Publication number Publication date
CN104661021B (en) 2017-03-08

Similar Documents

Publication Publication Date Title
CN104661021B (en) Quality evaluation method for a video stream
CN100559881C (en) A kind of method for evaluating video quality based on artificial neural net
JP3529305B2 (en) Image quality analysis method and apparatus
CN108933935B (en) Detection method and device of video communication system, storage medium and computer equipment
US8031770B2 (en) Systems and methods for objective video quality measurements
US7768937B2 (en) Video quality assessment
Ries et al. Video Quality Estimation for Mobile H.264/AVC Video Streaming.
Ma et al. Reduced-reference video quality assessment of compressed video sequences
CN103283239B (en) The objective video quality appraisal procedure of continuous estimation and equipment based on data-bag lost visibility
CN105049838B (en) Objective evaluation method for compressing stereoscopic video quality
HUE028719T2 (en) Method and apparatus for temporally synchronizing the input bit stream of a video decoder with the processed video sequence decoded by the video decoder
KR101327709B1 (en) Apparatus for monitoring video quality and method thereof
CN114598864A (en) Full-reference ultrahigh-definition video quality objective evaluation method based on deep learning
CN114915777A (en) Non-reference ultrahigh-definition video quality objective evaluation method based on deep reinforcement learning
TWI403169B (en) Method for watermarking a digital data set and device implementing said method
Pullano et al. PSNR evaluation and alignment recovery for mobile satellite video broadcasting
CN112422956B (en) Data testing system and method
Ma et al. Reduced reference video quality assessment based on spatial HVS mutual masking and temporal motion estimation
CN115225961B (en) No-reference network video quality evaluation method and device
JP2003250155A (en) Moving picture encoding evaluation apparatus and charging system
Alvarez et al. A flexible QoE framework for video streaming services
Ong et al. Video quality monitoring of streamed videos
Martínez-Rach et al. On the performance of video quality assessment metrics under different compression and packet loss scenarios
CN109783475B (en) Method for constructing large-scale database of video distortion effect markers
Singhal et al. Machine learning based subjective quality estimation for video streaming over wireless networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Luo Gongyin; Tian Cuan; Luo Yiwei; Dong Liang; He Yi; Zhang Yong; Sun Jun; Zhan Peng; Tang Ge; Liu Fangfang; Li Lei; Zhou Cheng; Chen Jialin; Li Zhenli; Jiao Hanlin; Li Xinde; Zhou Zheng; Feng Weidong; Ye Lu; Wang Junxi; Gao Zhirong; Xiong Chengyi

Inventor before: Ye Lu; Dong Liang; Zhou Cheng; Feng Weidong; Xiao Zhihua; Wang Junxi; Zhou Zheng; Gao Zhirong; Xiong Chengyi; Tian Cuan

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170308

Termination date: 20210212

CF01 Termination of patent right due to non-payment of annual fee