CN114401400A - Video quality evaluation method and system based on visual saliency coding effect perception - Google Patents
- Publication number: CN114401400A (application CN202210057728.2A)
- Authority: CN (China)
- Prior art keywords: video, saliency, coding effect, coding, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N17/04—Diagnosis, testing or measuring for television systems or their details, for receivers
- H04N19/149—Data rate or code amount at the encoder output, by estimating the code amount by means of a model, e.g. mathematical model or statistical model
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
- H04N19/184—Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Algebra (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention provides a video quality evaluation method and system based on visual saliency coding effect perception. First, a visual saliency model is introduced to extract a video saliency map. Next, the contrast of the salient region is enhanced through an image gray-level transformation technique so that the salient region can be extracted from the saliency map more accurately. Finally, the proposed coding effect detection model measures the compression effects in the salient region, maps the compression-effect intensity values to video quality, and thereby constructs a compressed video quality evaluation model. Experimental results demonstrate the superiority of the proposed model in evaluating the quality of compressed video.
Description
Technical Field
The invention belongs to the technical field of image quality evaluation, and particularly relates to a video quality evaluation method and system based on visual saliency coding effect perception.
Background
Video coding techniques greatly reduce storage capacity and transmission bandwidth requirements. However, lossy compression and unreliable channel transmission inevitably introduce various distortions. Compressed video therefore tends to exhibit visually objectionable artifacts (i.e., coding effects) that strongly affect perceived quality. To analyze and improve the user experience effectively, the visual quality of a video must be evaluated accurately. Subjective video quality assessment (VQA) is the most accurate and reliable reflection of human perception, because the quality is scored directly by viewers. At present, the accuracy of an objective quality evaluation method can only be measured against the results of subjective quality evaluation. According to International Telecommunication Union standards, the MOS and DMOS are used to express the subjective quality of video, and they are therefore the most reliable quality indicators for validating the objective quality of video. However, subjective experiments are tedious, time-consuming and expensive, so establishing a reliable objective VQA index is imperative. Most existing no-reference video quality assessment (NR-VQA) algorithms target traditional video; some address transmission distortions caused by channel errors, such as packet loss and frame freezing.
Disclosure of Invention
To address the gaps and deficiencies of the prior art, further improve the performance of NR-VQA, and realize the perception of multiple coding effects, the invention provides a video quality evaluation method and system based on visual saliency coding effect perception to evaluate the quality of compressed video.
First, a visual saliency model is introduced to extract a video saliency map. Then, the contrast of the salient region is enhanced through an image gray-level transformation technique so that the salient region can be extracted from the saliency map more accurately. Finally, the proposed coding effect detection model measures the compression effects in the salient region, maps the compression-effect intensity values to video quality, and thereby constructs a compressed video quality evaluation model. Experimental results demonstrate the superiority of the proposed model in evaluating the quality of compressed video.
The invention specifically adopts the following technical scheme:
a video quality evaluation method based on visual saliency coding effect perception is characterized by comprising the following steps: considering the spatio-temporal distribution of video visual saliency, first introducing a visual saliency model to extract a video saliency map; then enhancing the contrast of the salient region through an image gray-level transformation technique so that the salient region can be extracted from the saliency map more accurately; and measuring the compression effect of the salient region with the proposed coding effect detection model, mapping the compression-effect intensity values to video quality, and constructing a compressed video quality evaluation model to evaluate video quality.
Further, the step of introducing the visual saliency model to extract the video saliency map specifically comprises the following steps:
step S11: given an input video {F_t}, predicting the saliency of the video using the ACLNet saliency network to obtain a saliency map;
step S12: acquiring the time characteristics of the video saliency map by using a convLSTM module of a saliency network;
step S13: combining the saliency maps of all frames into the video saliency map V_S.
Further, in step S12, the output of the convLSTM module is calculated using equation (1):

$$
\begin{aligned}
i_t &= \sigma\left(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i\right)\\
f_t &= \sigma\left(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f\right)\\
o_t &= \sigma\left(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o\right)\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right)\\
H_t &= o_t \circ \tanh(C_t)
\end{aligned}
\tag{1}
$$

where i_t, f_t and o_t denote the input gate, forget gate and output gate respectively; σ and tanh are the sigmoid activation function and the hyperbolic tangent function; * is the convolution operator and ∘ denotes the Hadamard product; all inputs X, memory cells C, hidden states H and gates i, f, o are three-dimensional tensors with the same dimensions; the weights W and biases b are adjustable parameters learned through back-propagation. The dynamic saliency map is obtained by convolving the hidden state H with a 1 × 1 kernel.
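As an illustration, one ConvLSTM update of equation (1) can be sketched in plain NumPy for a single-channel map. The kernel names (`W["xi"]`, `W["hi"]`, etc.) and the 3 × 3 kernel size are assumptions made for this sketch; ACLNet's actual convLSTM operates on multi-channel tensors.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2-D cross-correlation for a single-channel map."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(X, H_prev, C_prev, W, b):
    """One ConvLSTM update following Eq. (1): input/forget/output gates,
    memory cell C, and hidden state H (single-channel sketch)."""
    i = sigmoid(conv2d_same(X, W["xi"]) + conv2d_same(H_prev, W["hi"]) + b["i"])
    f = sigmoid(conv2d_same(X, W["xf"]) + conv2d_same(H_prev, W["hf"]) + b["f"])
    o = sigmoid(conv2d_same(X, W["xo"]) + conv2d_same(H_prev, W["ho"]) + b["o"])
    C = f * C_prev + i * np.tanh(
        conv2d_same(X, W["xc"]) + conv2d_same(H_prev, W["hc"]) + b["c"])
    H = o * np.tanh(C)  # hidden state; a 1x1 conv on H would give the saliency map
    return H, C
```

Because the output gate lies in (0, 1) and tanh in (−1, 1), every entry of H stays strictly inside (−1, 1), which is what makes the 1 × 1 convolution on H a stable read-out of the dynamic saliency map.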
Further, the contrast of the salient region of the image is enhanced by an image gray level transformation technology, and the step of measuring the compression effect of the salient region by using the proposed coding effect detection model specifically comprises the following steps:
step S21: the contrast between a salient object and the background of a saliency map of a video frame is increased by utilizing an image gray level transformation technology;
step S22: obtaining a binary image corresponding to the video frame saliency image by using a binary threshold operation;
step S23: accurately extracting a salient region from each frame of the video, cutting the salient region into 72 × 72 image blocks and grouping the image blocks;
step S24: sensing of the video coding effect is achieved using a DenseNet-PR network (from the prior art document: Liqun Lin, Shiqi Yu, Liping Zhou, Weiling Chen, Tiesong Zhao, and Zhou Wang, "PEA265: Perceptual Assessment of Video Compression Artifacts," IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), 2020, 30(11): 3898–3909), obtaining a coding effect intensity value for each image block; assuming that the coding effect intensity value of each pixel in a block equals that of the block, the coding effect intensity value of each frame is calculated as shown in equations (2) and (3):

$$
\bar{I}^{k} = \frac{(72 \times 72)\sum_{i,j} I_{ij}}{N_{pixel}}
\tag{2}
$$

$$
\tilde{I}^{k} = \frac{1}{T}\sum_{t=1}^{T} \bar{I}^{k}(t)
\tag{3}
$$

where I_ij is the coding effect intensity value of the 72 × 72 image block indexed (i, j), N_pixel is the total number of pixels in the salient region of each frame, \bar{I}^k denotes the intensity value of the k-th coding effect of each frame, and \tilde{I}^k denotes the intensity value of the k-th coding effect of each video (T frames);
in step S25, a coding effect intensity value of the video sequence is calculated from the coding effect intensity value of each frame.
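Steps S21–S25 can be sketched compactly in NumPy. The gamma exponent, the 0.5 binarization threshold, and the helper names below are illustrative assumptions (the patent does not specify them); the normalization in `frame_intensity` follows the stated per-pixel assumption of equations (2) and (3).

```python
import numpy as np

def extract_salient_patches(frame, saliency, gamma=0.5, thresh=0.5, patch=72):
    """Steps S21-S23 (sketch): gray-level (gamma) transform to raise salient
    contrast, binary thresholding, then cutting the salient region into
    patch x patch image blocks. gamma/thresh are illustrative choices."""
    s = saliency.astype(float) / max(float(saliency.max()), 1e-8)  # normalize to [0, 1]
    s = s ** gamma                      # gray-level transformation (S21)
    mask = s >= thresh                  # binary saliency map (S22)
    patches = []
    h, w = frame.shape[:2]
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if mask[i:i + patch, j:j + patch].any():  # block touches salient region
                patches.append(frame[i:i + patch, j:j + patch])
    return mask, patches

def frame_intensity(block_intensities, n_pixel, patch=72):
    """Eq. (2) sketch: each pixel inherits its block's intensity I_ij, so the
    frame value is the block sum scaled by the patch area over N_pixel."""
    return patch * patch * float(np.sum(block_intensities)) / n_pixel

def video_intensity(frame_intensities):
    """Eq. (3) sketch (step S25): average the per-frame intensities."""
    return float(np.mean(frame_intensities))
```

In this reading, a frame fully covered by two blocks of intensities 0.2 and 0.4 gets frame intensity 0.3, i.e. the pixel-weighted block average, and the video-level value is the mean over frames.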
Further, the process of performing video quality evaluation specifically includes the following steps:
step S31: matching the intensity values of the four coding effects, i.e., blocking, blurring, color bleeding and ringing, to the MOS or DMOS values to constitute a complete data set, as shown in equation (4):

$$
D = \left\{\left(\tilde{I}^{1}_{m},\ \tilde{I}^{2}_{m},\ \tilde{I}^{3}_{m},\ \tilde{I}^{4}_{m},\ MOS_m \mid DMOS_m\right)\right\}_{m=1}^{M}
\tag{4}
$$

where MOS_m | DMOS_m denotes the compressed video subjective quality score (MOS or DMOS) of the m-th video;
step S32: the data set is randomly divided into a training set and a test set in an 80:20 ratio;
step S33: inputting the four coding effect intensity values of the video sequence into a Bagging-based SVR model, and outputting a predicted quality score of the video by using the SVR model;
step S34: step S32 and step S33 are regarded as training a basic learning machine, and the steps are repeated for 10 times to obtain 10 basic learning machines in total;
step S35: calculating the PLCC and SRCC correlation coefficients between the predicted quality score of the compressed video and the subjective ground-truth score (MOS/DMOS) of the video, thereby realizing prediction of compressed video quality, as shown in equation (5):

$$
f(x) = \sum_{l=1}^{L} \omega_l\, y_l(x)
\tag{5}
$$

where f(·) denotes the weighted-sum aggregation, y_l(x) is the predicted output of the l-th base learner, L is the number of learners (10 in total), and ω_l is the weight of the l-th learner.
Further, the weights of the three learners with the highest PLCC are set to 1/3 each, and the weights of the other learners are set to 0.
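A minimal NumPy sketch of steps S32–S35 and equation (5). Ordinary least-squares regressors stand in for the Bagging-based SVR here (an assumption made to keep the sketch dependency-free); the bootstrap resampling, the 1/3 weighting of the three learners with the highest PLCC, and the PLCC/SRCC computations follow the text.

```python
import numpy as np

def plcc(a, b):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(a, b)[0, 1])

def srcc(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rank = lambda v: np.argsort(np.argsort(v))
    return plcc(rank(np.asarray(a)), rank(np.asarray(b)))

def bagging_predict(X_train, y_train, X_test, y_test, n_learners=10, seed=0):
    """Steps S32-S35 sketch: train n_learners base learners on bootstrap
    resamples (least squares standing in for SVR), give the three with the
    highest PLCC weight 1/3 each, and combine them as in Eq. (5)."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds, scores = [], []
    for _ in range(n_learners):
        idx = rng.integers(0, n, n)                      # bootstrap resample
        Xb = np.c_[X_train[idx], np.ones(n)]             # add a bias column
        w, *_ = np.linalg.lstsq(Xb, y_train[idx], rcond=None)
        p = np.c_[X_test, np.ones(len(X_test))] @ w
        preds.append(p)
        scores.append(plcc(p, y_test))
    omega = np.zeros(n_learners)
    omega[np.argsort(scores)[-3:]] = 1.0 / 3.0           # top-3 PLCC learners
    return sum(wl * pl for wl, pl in zip(omega, preds))  # f(x) = sum w_l y_l(x)
```

The top-3 selection makes the ensemble robust to an occasional poorly trained bootstrap learner, since zero-weighted learners contribute nothing to equation (5).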
A video quality evaluation system based on visual saliency coding effect perception is characterized in that: the system is based on a computer system and comprises a compressed video significance detection module, a compressed video coding effect detection module and a compressed video quality evaluation module;
the compressed video saliency detection module introduces a visual saliency model to extract a video saliency map;
the compressed video coding effect detection module enhances the contrast of the image salient region through an image gray level transformation technology and measures the compression effect of the salient region by utilizing the proposed coding effect detection model;
and the compressed video quality evaluation module is used for evaluating the quality of the compressed video according to the compressed video quality evaluation model.
Further, the operation process of the compressed video saliency detection module is as follows:
given an input video Ft}tPredicting the saliency of the video by using the saliency ACLNet network to obtain a saliency map; then, the video saliency map is subjected to time characteristic acquisition by using a convLSTM module of a saliency network; finally, combining the saliency maps of all the frames into a video saliency map VS。
Further, the operation process of the compressed video coding effect detection module is as follows:
firstly, the contrast ratio of a salient object and a background of a saliency map of a video frame is increased by utilizing an image gray level transformation technology; obtaining a binary image corresponding to the video frame saliency image by using a binary threshold operation; then accurately extracting a salient region from each frame of the video, cutting the salient region into 72 × 72 image blocks and grouping; and then, sensing the video coding effect by using a DenseNet-PR network [1] to obtain a coding effect intensity value of each image block, assuming that the coding effect intensity value of each pixel in each image block is equal to the coding effect intensity value of the image block, calculating the coding effect intensity value of each frame, and finally calculating the coding effect intensity value of the video sequence through the coding effect intensity value of each frame.
Further, the operation process of the compressed video quality evaluating module is as follows:
matching the strength values of the four coding effects with MOS and DMOS values to form a complete data set, and randomly dividing the data set into a training set and a testing set; then inputting the four coding effect intensity values of the video sequence into a Bagging-based SVR model, and outputting the predicted quality score of the video by using the SVR model; and finally, calculating PLCC and SRCC correlation coefficients between the predicted quality fraction of the compressed video and the subjective real fraction MOS/DMOS of the video, thereby realizing the prediction of the quality of the compressed video.
Compared with the prior art, the method and its preferred schemes construct a video quality evaluation model based on visual saliency coding effect perception to evaluate the quality of compressed video, and achieve clearly superior performance and objectivity in this task.
Drawings
FIG. 1 is a schematic view of the overall working process of the embodiment of the present invention;
FIG. 2 is a schematic diagram of the overall model structure according to the embodiment of the present invention.
Detailed Description
In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:
as shown in fig. 1 and fig. 2, the video quality evaluation scheme based on coding effect perception of visual saliency provided by this embodiment considers spatial-temporal distribution of visual saliency of a video, and proposes a reference-free compressed video quality evaluation model combining coding effect perception of visual saliency, which includes the following steps:
step S1, detecting the significance of the compressed video;
step S2, detecting the coding effect of the compressed video;
and step S3, evaluating the quality of the compressed video.
In an embodiment of the present invention, step S1 is implemented as follows:
step S11, given an input video {F_t}, predicting the saliency of the video using the ACLNet saliency network to obtain a saliency map;
step S12, obtaining the temporal characteristics of the video saliency map using the convLSTM module, whose output is calculated with equation (1):

$$
\begin{aligned}
i_t &= \sigma\left(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i\right)\\
f_t &= \sigma\left(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f\right)\\
o_t &= \sigma\left(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o\right)\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right)\\
H_t &= o_t \circ \tanh(C_t)
\end{aligned}
\tag{1}
$$

where i_t, f_t and o_t denote the input gate, forget gate and output gate respectively; σ and tanh are the sigmoid activation function and the hyperbolic tangent function; * is the convolution operator and ∘ denotes the Hadamard product; all inputs X, memory cells C, hidden states H and gates i, f, o are three-dimensional tensors with the same dimensions; the weights W and biases b are adjustable parameters learned through back-propagation. The dynamic saliency map is obtained by convolving the hidden state H with a 1 × 1 kernel;
step S13, combining the saliency maps of all frames into the video saliency map V_S.
In an embodiment of the present invention, step S2 is implemented as follows:
step S21, firstly, the contrast between the salient object and the background of the salient map of the video frame is increased by utilizing the image gray level transformation technology;
step S22, obtaining a binary image corresponding to the video frame saliency image by using a binary threshold operation;
step S23, extracting a salient region from the video frame accurately, cutting the salient region into image blocks of 72x72 size and grouping;
step S24, using the DenseNet-PR network to realize perception of the video coding effect, obtaining a coding effect intensity value for each image block, and, assuming that the coding effect intensity value of each pixel in a block equals that of the block, calculating a coding effect intensity value for each frame, as shown in equations (2) and (3):

$$
\bar{I}^{k} = \frac{(72 \times 72)\sum_{i,j} I_{ij}}{N_{pixel}}
\tag{2}
$$

$$
\tilde{I}^{k} = \frac{1}{T}\sum_{t=1}^{T} \bar{I}^{k}(t)
\tag{3}
$$

where I_ij is the coding effect intensity value of the 72 × 72 image block indexed (i, j), N_pixel is the total number of pixels in the salient region of each frame, \bar{I}^k denotes the intensity value of the k-th coding effect of each frame, and \tilde{I}^k denotes the intensity value of the k-th coding effect of each video (T frames);
in step S25, a coding effect intensity value of the video sequence is calculated from the coding effect intensity value of each frame.
In an embodiment of the present invention, step S3 is implemented as follows:
step S31, first, matching the intensity values of the four coding effects with the MOS (Mean Opinion Score) or DMOS (Differential Mean Opinion Score) values to form a complete data set, which can be expressed as equation (4):

$$
D = \left\{\left(\tilde{I}^{1}_{m},\ \tilde{I}^{2}_{m},\ \tilde{I}^{3}_{m},\ \tilde{I}^{4}_{m},\ MOS_m \mid DMOS_m\right)\right\}_{m=1}^{M}
\tag{4}
$$

where MOS_m | DMOS_m denotes the compressed video quality score (MOS or DMOS) of the m-th video;
step S32, then, randomly dividing the data set into a training set and a test set in an 80:20 ratio;
step S33, then, inputting the four coding effect intensity values of the video sequence into a Bagging-based SVR model, and outputting the predicted quality score of the video by using the SVR model;
step S34, regarding the steps S32 and S33 as training a basic learning machine, repeating the steps for 10 times to obtain 10 basic learning machines in total;
step S35, finally, calculating the PLCC and SRCC correlation coefficients between the predicted quality score of the compressed video and the subjective ground-truth score (MOS/DMOS) of the video, thereby realizing prediction of compressed video quality, as shown in equation (5):

$$
f(x) = \sum_{l=1}^{L} \omega_l\, y_l(x)
\tag{5}
$$

where f(·) denotes the weighted-sum aggregation, y_l(x) is the predicted output of the l-th base learner, L is the number of learners (10 in total), and ω_l is the weight of the l-th learner. Here, the weights of the three learners with the highest PLCC are set to 1/3 each, and the weights of the other learners are set to 0.
The above programming scheme provided by this embodiment can be stored in a computer readable storage medium in a coded form, and implemented in a computer program manner, and inputs basic parameter information required for calculation through computer hardware, and outputs a calculation result.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow of the flowcharts, and combinations of flows in the flowcharts, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.
The present invention is not limited to the above-mentioned preferred embodiments, and other various video quality evaluation methods and systems based on video visual saliency coding effect perception can be derived by anyone based on the teaching of the present invention.
Claims (10)
1. A video quality evaluation method based on visual saliency coding effect perception, characterized by comprising the following steps: considering the spatio-temporal distribution of video visual saliency, first introducing a visual saliency model to extract a video saliency map; then enhancing the contrast of the salient region through an image gray-level transformation technique so that the salient region can be extracted from the saliency map more accurately; and measuring the compression effect of the salient region with the proposed coding effect detection model, mapping the compression-effect intensity values to video quality, and constructing a compressed video quality evaluation model to evaluate video quality.
2. The video quality assessment method based on visual saliency coding effect perception according to claim 1, characterized by: the method for extracting the video saliency map by introducing the visual saliency model specifically comprises the following steps:
step S11: given an input video {F_t}, predicting the saliency of the video using the ACLNet saliency network to obtain a saliency map;
step S12: acquiring the time characteristics of the video saliency map by using a convLSTM module of a saliency network;
step S13: combining the saliency maps of all frames into the video saliency map V_S.
3. The video quality assessment method based on visual saliency coding effect perception according to claim 2, characterized by:
in step S12, the output of the convLSTM module is calculated using equation (1):

$$
\begin{aligned}
i_t &= \sigma\left(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i\right)\\
f_t &= \sigma\left(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f\right)\\
o_t &= \sigma\left(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o\right)\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right)\\
H_t &= o_t \circ \tanh(C_t)
\end{aligned}
\tag{1}
$$

where i_t, f_t and o_t denote the input gate, forget gate and output gate respectively; σ and tanh are the sigmoid activation function and the hyperbolic tangent function; * is the convolution operator and ∘ denotes the Hadamard product; all inputs X, memory cells C, hidden states H and gates i, f, o are three-dimensional tensors with the same dimensions; the weights W and biases b are adjustable parameters learned through back-propagation. The dynamic saliency map is obtained by convolving the hidden state H with a 1 × 1 kernel.
4. The video quality assessment method based on visual saliency coding effect perception according to claim 1, characterized by: the method for detecting the compression effect of the salient region by using the coding effect detection model comprises the following steps:
step S21: the contrast between a salient object and the background of a saliency map of a video frame is increased by utilizing an image gray level transformation technology;
step S22: obtaining a binary image corresponding to the video frame saliency image by using a binary threshold operation;
step S23: accurately extracting a salient region from each frame of the video, cutting the salient region into 72 × 72 image blocks and grouping the image blocks;
step S24: sensing the video coding effect using a DenseNet-PR network, obtaining a coding effect intensity value for each image block, and calculating a coding effect intensity value for each frame by assuming that the coding effect intensity value of each pixel in a block equals that of the block, as shown in equations (2) and (3):

$$
\bar{I}^{k} = \frac{(72 \times 72)\sum_{i,j} I_{ij}}{N_{pixel}}
\tag{2}
$$

$$
\tilde{I}^{k} = \frac{1}{T}\sum_{t=1}^{T} \bar{I}^{k}(t)
\tag{3}
$$

where I_ij is the coding effect intensity value of the 72 × 72 image block indexed (i, j), N_pixel is the total number of pixels in the salient region of each frame, \bar{I}^k denotes the intensity value of the k-th coding effect of each frame, and \tilde{I}^k denotes the intensity value of the k-th coding effect of each video (T frames);
in step S25, a coding effect intensity value of the video sequence is calculated from the coding effect intensity value of each frame.
5. The video quality assessment method based on visual saliency coding effect perception according to claim 1, characterized by: the process of performing video quality evaluation specifically comprises the following steps:
step S31: matching the intensity values of the four coding effects, i.e., the blocking, blurring, color bleeding and ringing effects, with the MOS or DMOS values to form a complete data set, as shown in equation (4):

$$
D = \left\{\left(\tilde{I}^{1}_{m},\ \tilde{I}^{2}_{m},\ \tilde{I}^{3}_{m},\ \tilde{I}^{4}_{m},\ MOS_m \mid DMOS_m\right)\right\}_{m=1}^{M}
\tag{4}
$$

where MOS_m | DMOS_m denotes the compressed video subjective quality score (MOS or DMOS) of the m-th video;
step S32: the data set is randomly divided into a training set and a test set in an 80:20 ratio;
step S33: inputting the four coding effect intensity values of the video sequence into a Bagging-based SVR model, and outputting a predicted quality score of the video by using the SVR model;
step S34: step S32 and step S33 are regarded as training a basic learning machine, and the steps are repeated for 10 times to obtain 10 basic learning machines in total;
step S35: calculating the PLCC and SRCC correlation coefficients between the predicted quality score of the compressed video and the subjective ground-truth score MOS/DMOS of the video, thereby realizing prediction of the compressed-video quality, as shown in formula (5):
where f (-) is a summation operation, yl(x) The predicted output of the first learning machine is shown, and L is the number of the learning machines, and the total number of the learning machines is 10; omegalRepresenting the weight of the ith learning machine.
6. The video quality assessment method based on visual saliency coding effect perception according to claim 5, characterized by: the three learners with the highest PLCC values are each assigned a weight of 1/3, and the remaining learners are assigned a weight of 0.
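The weighting rule of claim 6 together with the fusion of formula (5) and the correlation check of step S35 can be sketched as below; the helper name `ensemble_score` and the tie-breaking order among equal PLCC values are assumptions:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def ensemble_score(preds, mos, top_k=3):
    """Weight base learners by PLCC: the top-3 each get weight 1/3, the
    rest 0 (claim 6), then fuse predictions as a weighted sum (formula (5))
    and report PLCC/SRCC against the subjective scores (step S35)."""
    plcc = [pearsonr(p, mos)[0] for p in preds]
    top = np.argsort(plcc)[-top_k:]          # indices of the top-k learners
    weights = np.zeros(len(preds))
    weights[top] = 1.0 / top_k
    fused = np.tensordot(weights, preds, axes=1)
    return fused, pearsonr(fused, mos)[0], spearmanr(fused, mos)[0]

# Toy check: three perfectly correlated learners and one anti-correlated one;
# the anti-correlated learner should receive weight 0.
mos = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
preds = np.stack([mos, 2 * mos, -mos, mos + 1])
fused, plcc, srcc = ensemble_score(preds, mos)
```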
7. A video quality evaluation system based on visual saliency coding effect perception is characterized in that: the system is based on a computer system and comprises a compressed video significance detection module, a compressed video coding effect detection module and a compressed video quality evaluation module;
the compressed video saliency detection module introduces a visual saliency model to extract a video saliency map;
the compressed video coding effect detection module enhances the contrast of the image salient region through an image gray level transformation technology and measures the compression effect of the salient region by utilizing the proposed coding effect detection model;
and the compressed video quality evaluation module is used for evaluating the quality of the compressed video according to the compressed video quality evaluation model.
8. The video quality assessment system based on visual saliency coding effect perception according to claim 7, characterized by: the operation process of the compressed video significance detection module is as follows:
given an input video {F_t}_t, predicting the saliency of the video by using the ACLNet saliency network to obtain a saliency map; then acquiring temporal features of the video saliency map by using the convLSTM module of the saliency network; and finally combining the saliency maps of all frames into a video saliency map VS.
9. The video quality assessment system based on visual saliency coding effect perception according to claim 7, characterized by: the operation process of the compressed video coding effect detection module is as follows:
firstly, the contrast between the salient object and the background in the saliency map of each video frame is increased by using an image gray-level transformation technique; a binary image corresponding to the frame's saliency map is obtained by a binarization threshold operation; then a salient region is accurately extracted from each frame of the video and cut into 72 × 72 image blocks, which are grouped; and finally, the coding effect intensity value of the video sequence is calculated from the coding effect intensity value of each frame.
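The per-frame operations of this module can be sketched as follows. The power-law (gamma) transform standing in for the gray-level transformation, the fixed threshold, and the center-pixel block-selection rule are illustrative assumptions, since the patent does not fix these details here:

```python
import numpy as np

def extract_blocks(saliency, frame, block=72, gamma=0.5, thresh=0.5):
    """Enhance saliency contrast, binarize, and crop 72x72 salient blocks.

    A power-law gray-level transform boosts the salient/background
    contrast, a fixed threshold binarizes the map, and 72x72 patches
    whose center pixel falls in the salient region are cropped from the
    frame. gamma and thresh are illustrative, not from the patent."""
    enhanced = np.power(saliency / saliency.max(), gamma)  # gray-level transform
    mask = enhanced > thresh                               # binarization threshold
    patches = []
    h, w = frame.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if mask[y + block // 2, x + block // 2]:       # block center is salient
                patches.append(frame[y:y + block, x:x + block])
    return patches

# Toy check: only the top-left quadrant of a 144x144 frame is salient.
saliency = np.zeros((144, 144)); saliency[:72, :72] = 1.0
frame = np.ones((144, 144))
patches = extract_blocks(saliency, frame)
```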
10. The video quality assessment system based on visual saliency coding effect perception according to claim 7, characterized by: the operation process of the compressed video quality evaluating module is as follows:
matching the intensity values of the four coding effects with the MOS and DMOS values to form a complete data set, and randomly dividing the data set into a training set and a test set; then inputting the four coding effect intensity values of the video sequence into a Bagging-based SVR model, which outputs the predicted quality score of the video; and finally, calculating the PLCC and SRCC correlation coefficients between the predicted quality score of the compressed video and the subjective ground-truth score MOS/DMOS of the video, thereby realizing prediction of the compressed-video quality.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210057728.2A CN114401400A (en) | 2022-01-19 | 2022-01-19 | Video quality evaluation method and system based on visual saliency coding effect perception |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114401400A true CN114401400A (en) | 2022-04-26 |
Family
ID=81230661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210057728.2A Pending CN114401400A (en) | 2022-01-19 | 2022-01-19 | Video quality evaluation method and system based on visual saliency coding effect perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114401400A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170085892A1 (en) * | 2015-01-20 | 2017-03-23 | Beijing University Of Technology | Visual perception characteristics-combining hierarchical video coding method |
CN107040776A (en) * | 2017-03-29 | 2017-08-11 | 华南理工大学 | A kind of video quality evaluation method based on HDR |
CN108462872A (en) * | 2018-05-04 | 2018-08-28 | 南京邮电大学 | A kind of gradient similar video method for evaluating quality based on low frequency conspicuousness |
CN111711816A (en) * | 2020-07-08 | 2020-09-25 | 福州大学 | Video objective quality evaluation method based on observable coding effect intensity |
CN113327234A (en) * | 2021-05-31 | 2021-08-31 | 广西大学 | Video redirection quality evaluation method based on space-time saliency classification and fusion |
CN113810555A (en) * | 2021-09-17 | 2021-12-17 | 福建省二建建设集团有限公司 | Video quality evaluation method based on just noticeable difference and blocking effect |
Non-Patent Citations (1)
Title |
---|
LI FUSHENG; LI XIA; CHEN YU: "Saliency detection based on an improved quaternion Fourier transform and its application to video coding", APPLICATION RESEARCH OF COMPUTERS, no. 05, 31 May 2015 (2015-05-31) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108090902B (en) | Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network | |
CN108428227B (en) | No-reference image quality evaluation method based on full convolution neural network | |
CN100559881C (en) | A kind of method for evaluating video quality based on artificial neural net | |
CN103996192B (en) | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model | |
CN102209257B (en) | Stereo image quality objective evaluation method | |
CN102663747B (en) | Stereo image objectivity quality evaluation method based on visual perception | |
Yue et al. | Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry | |
CN102333233A (en) | Stereo image quality objective evaluation method based on visual perception | |
Liu et al. | A multi-metric fusion approach to visual quality assessment | |
CN105338343A (en) | No-reference stereo image quality evaluation method based on binocular perception | |
CN102595185A (en) | Stereo image quality objective evaluation method | |
CN107959848A (en) | Universal no-reference video quality evaluation algorithm based on a three-dimensional convolutional neural network | |
CN109429051B (en) | Non-reference stereo video quality objective evaluation method based on multi-view feature learning | |
CN102547368A (en) | Objective evaluation method for quality of stereo images | |
CN109859166A (en) | No-reference 3D image quality evaluation method based on multi-column convolutional neural networks | |
CN107948635A (en) | No-reference sonar image quality evaluation method based on degradation measurement | |
CN109257592B (en) | Stereoscopic video quality objective evaluation method based on deep learning | |
CN111709914A (en) | Non-reference image quality evaluation method based on HVS characteristics | |
CN108513132B (en) | Video quality evaluation method and device | |
CN114598864A (en) | Full-reference ultrahigh-definition video quality objective evaluation method based on deep learning | |
CN102737380A (en) | Stereo image quality objective evaluation method based on gradient structure tensor | |
Gaata et al. | No-reference quality metric for watermarked images based on combining of objective metrics using neural network | |
CN117237279A (en) | Blind quality evaluation method and system for non-uniform distortion panoramic image | |
CN101895787B (en) | Method and system for subjectively evaluating video coding performance | |
CN114401400A (en) | Video quality evaluation method and system based on visual saliency coding effect perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||