CN110838120A - Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information - Google Patents

Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information Download PDF

Info

Publication number
CN110838120A
CN110838120A CN201911125047.XA CN201911125047A CN110838120A CN 110838120 A CN110838120 A CN 110838120A CN 201911125047 A CN201911125047 A CN 201911125047A CN 110838120 A CN110838120 A CN 110838120A
Authority
CN
China
Prior art keywords
video
frame
view video
follows
time information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911125047.XA
Other languages
Chinese (zh)
Inventor
方玉明
李兆乾
眭相杰
鄢杰斌
左一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911125047.XA priority Critical patent/CN110838120A/en
Publication of CN110838120A publication Critical patent/CN110838120A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention provides a weighted quality evaluation method of an asymmetric distortion three-dimensional video based on space-time information, which is characterized by comprising the following steps: firstly, calculating the scores of a left view video sequence and a right view video sequence by using a two-dimensional image/video quality evaluation method; secondly, extracting the space and time information of each frame of the left and right view video sequence; then, calculating the coefficient of variation of each frame by using the mean value and the standard deviation; using the quartile of the variation coefficient to obtain a threshold value to classify each frame; then, for the classified video frames, calculating frame advantage levels of the left video sequence and the right video sequence respectively, and then calculating global advantage levels of the left video sequence and the right video sequence; and weighting the quality scores of the left video and the right video by using the obtained advantage levels to obtain the final quality score of the whole three-dimensional video. The experimental result shows that when the two-dimensional image/video quality evaluation method is used for predicting the visual quality of the asymmetric distortion three-dimensional video, the performance of the weighted evaluation method can be effectively improved.

Description

Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information
Technical Field
The invention designs a weighting evaluation method for an asymmetric distortion three-dimensional video, belongs to the technical field of multimedia, and particularly belongs to the technical field of digital image and digital video processing.
Background
With the development of digital media and social networks, the forms of information media are more and more diversified from simple texts to visual images, from images to dynamic videos, from two-dimensional videos to three-dimensional videos. However, these information media may be generated under unstable conditions or transmitted under unstable network conditions, which may destroy the integrity of the information, resulting in a reduced quality of experience. On the other hand, images/videos are finally watched by people, and the only reliable method for evaluating the visual quality of the images/videos is subjective evaluation of people. However, the cost of subjective evaluation is expensive and it is not real-time. Therefore, there is a strong need for objective evaluation methods to automatically predict the perceived quality of images/video.
Three-dimensional video is transmitted in an unstable network environment. In the three-dimensional video encoding and transmission process, asymmetric distortion conditions may occur, that is, the distortion types or levels of the left and right views are significantly different, thereby resulting in different three-dimensional video quality feelings. Relevant subjective experiments show that if the quality scores of the left and right two-dimensional videos are directly averaged to predict the quality of the three-dimensional video, strong prediction deviation can be caused, namely the prediction deviation is inconsistent with the quality experience condition subjectively sensed by people. In addition, subjective experimental studies have also found that the perceptual quality of 3D video can be improved by improving the blur level of low quality video through post-processing. Therefore, three-dimensional video quality assessment is meaningful and challenging, especially in the case of distortion asymmetry.
The purpose of weighting asymmetric three-dimensional video is proposed:
(1) the existing image/video quality evaluation algorithms all adopt a two-stage structure of firstly measuring local quality and then weighting averagely, although the weighting strategy is simple and easy to implement, the weighting strategy has poor effect and deviates from subjective judgment of people, and the weighting algorithm is favorable for optimizing the performance of the image/video quality evaluation algorithm.
(2) The algorithm performance is improved and the correlation between the three-dimensional video quality and the subjective evaluation of human eyes is enhanced through an efficient weighting strategy of the asymmetric distortion three-dimensional video.
(3) The research on the weighting strategy of the asymmetric distortion three-dimensional video is helpful for further understanding the human perception visual system, such as a binocular competition mechanism and the like, and is helpful for the development of visual science.
Therefore, the weighting strategy method for the asymmetric distortion three-dimensional video with effective and accurate prediction has great promotion effect on the development of the three-dimensional video.
Disclosure of Invention
The invention provides a weighted quality evaluation method of an asymmetric distortion three-dimensional video based on space-time information, which is characterized by comprising the following steps: firstly, calculating the scores of a left view video sequence and a right view video sequence by using a two-dimensional image/video quality evaluation method; secondly, extracting the space and time information of each frame of the left and right view video sequence; extracting spatial information, filtering each frame of a left view video sequence and a right view video sequence by using a Scharr operator, then calculating the gradient size of each frame, and calculating the mean value and the standard deviation of each frame of the left video sequence and the right video sequence; extracting time information, obtaining the frame difference of the front frame and the rear frame, calculating the mean value and the standard deviation of the frame difference of each frame of the left video sequence and the right video sequence, and then calculating the variation coefficient of each frame by using the mean value and the standard deviation; using the quartile of the variation coefficient to obtain a threshold value to classify each frame; then, for the classified video frames, calculating frame advantage levels of the left video sequence and the right video sequence respectively, and then calculating global advantage levels of the left video sequence and the right video sequence; and weighting the quality scores of the left video and the right video by using the obtained advantage levels to obtain the final quality score of the whole three-dimensional video. The experimental result shows that when the two-dimensional image/video quality evaluation method is used for predicting the visual quality of the asymmetric distortion three-dimensional video, the performance of the weighted evaluation method can be effectively improved.
A weighted quality evaluation method of asymmetric distortion three-dimensional video based on space-time information is characterized by comprising the following steps:
A. evaluating the quality score of the single-view video by adopting a two-dimensional image/video quality evaluation method;
B. extracting the space information quantity and the time information quantity of each frame of the single-view video;
C. evaluating binocular competitive advantage of the single-view video by combining spatial and temporal information;
D. and weighting the quality of the left view video and the right view video by combining the spatial information quantity and the temporal information quantity of the single-view video to evaluate the quality score of the three-dimensional video.
Further, a video quality score is calculated for the entire video sequence of the single-view video.
Further, the method comprises the following specific steps of:
A. for the amount of spatial information: firstly, filtering the brightness map of each distorted single-view video frame by using a Scharr operator, wherein the gradient size calculation formula is as follows:
Figure BDA0002276554370000031
wherein G represents the gradient size, Gx、GyScharr convolution representing the horizontal and vertical directions, respectively; then, the mean and the standard deviation of the gradient map of each Scharr filtered frame after the above operation are calculated as the spatial information of a single frame, and the spatial information calculation formula of the single frame is as follows:
Figure BDA0002276554370000032
and
Figure BDA0002276554370000039
Figure BDA0002276554370000033
and
Figure BDA00022765543700000310
wherein G isi,d,lAnd Gi,d,rGradient maps of the ith frame of the left and right two distorted videos respectively,
Figure BDA0002276554370000034
means and standard deviations of the ith frame of the left and right video representing the distortion;
B. for the amount of time information: first, the difference in pixel values of luminance maps in successive frames of a distorted video is extracted as a frame difference map, denoted as Mi,d,l(r)The calculation formula is as follows:
Mi,d,l=Ii,d,l(x,y)-Ii-1,d,l(x,y) and
Mi,d,r=Ii,d,r(x,y)-Ii-1,d,r(x,y) (4)
wherein Ii,d,l(x,y),Ii,d,r(x, y) respectively represent the pixels of the x-th row and the y-th column of the ith frame of the left and right distorted videos; then, the mean value and the standard deviation of the motion difference feature map are calculated to be used as the time information of a single frame and recorded as the time information
Figure BDA0002276554370000035
And
Figure BDA0002276554370000036
the time information amount calculation formula of a single frame is as follows:
Figure BDA0002276554370000037
and
Figure BDA0002276554370000038
Figure BDA0002276554370000041
and
Figure BDA0002276554370000042
wherein,
Figure BDA0002276554370000043
and
Figure BDA0002276554370000044
respectively representing temporal perceptual information representing two distorted views, left and right, Mi,d,lAnd Mi,d,rAnd frame difference maps respectively representing the left and right distortion views.
Further, the time information amount is used for calculating a variation coefficient, a threshold value is obtained through the variation coefficient, and whether large motion exists in continuous frames or not is judged, wherein the method specifically comprises the following steps:
A. calculating the coefficient of variation by using the mean value and standard deviation in the time information content of each frame, and recording the coefficient of variation of the distorted frames of the left and right view video sequence as CVi,d,l(r)The calculation formula is as follows:
Figure BDA0002276554370000045
B. obtaining the quartile of the left and right video sequence by using the variation coefficient of each frame of the left and right video sequenceAnd
Figure BDA0002276554370000047
obtaining a threshold value threshold by using the mean value of the quartile of the variation coefficients of the left and right video sequences;
C. classifying each frame of the left and right video sequences by using a threshold value, and calculating the frame dominance level of each frame, wherein the calculation formula is as follows:
Figure BDA0002276554370000048
wherein,
Figure BDA0002276554370000049
representing each frame of the left and right video sequence separatelyThe standard deviation of the amount of spatial information and the amount of temporal information of (a);
D. using the obtained frame dominance level g of each frame of the left and right distorted view video sequencei,l(r)Calculating the dominance level of the whole video of the left-view video and the right-view video, wherein the calculation formula is as follows:
Figure BDA0002276554370000057
further, the spatial information and the temporal information of the single-view video are used for weighting the quality of the left-view video and the right-view video, and the method specifically comprises the following steps:
A. dominance level g using left and right view videol(r)Get the weight, denoted as wl(r)The calculation formula is as follows:
Figure BDA0002276554370000058
B. calculating the quality score of the three-dimensional video, and recording as Q3DThe calculation formula is as follows:
wherein,
Figure BDA0002276554370000056
is the quality score of the left and right view video sequence obtained by two-dimensional image/video quality evaluation.
Drawings
FIG. 1 is a block diagram of the algorithm of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Wherein technical features, abbreviations/abbreviations, symbols and the like referred to herein are explained, defined/explained on the basis of the known knowledge/common understanding of a person skilled in the art.
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings.
In order to improve the prediction deviation generated when the asymmetric distorted three-dimensional video is predicted by using a direct averaging method, the invention provides a new method for a weighting strategy of the asymmetric distorted three-dimensional video, and the used visual characteristics comprise space information quantity and time information quantity.
The process of the invention is shown in figure 1, and the specific process is as follows:
step 1: evaluating the single-view video quality score by adopting the existing two-dimensional image/video quality evaluation method;
step 2: extracting the space information quantity and the time information quantity of each frame of the single-view video;
and step 3: classifying each frame of the video by using the characteristics of the space information content and the time information content of each frame of the single-view video and estimating the dominance degree, thereby obtaining the weight of the left-view video and the right-view video;
and 4, step 4: and weighting the quality of the left view video and the right view video by utilizing the space information content and the time information content of the single view video to obtain the final quality score.
The method uses three common standards to evaluate the accuracy of the algorithm for predicting the three-dimensional video quality. The first criterion is Pearson Linear Correlation Coefficient (PLCC) for estimating the accuracy of the prediction, the second criterion is Spearman Rank-order Correlation Coefficient (SRCC) for estimating the monotonicity of the prediction, and the last criterion is Root Mean Square Error (RMSE), which is a Correlation criterion that measures objective and subjective scores. In general, higher PLCC and SRCC, lower RMSE values indicate better prediction accuracy of the algorithm. To verify the performance of the algorithm proposed by the present invention, we compared the algorithm with the existing three-dimensional video quality evaluation method on the database Wateloo-IVC-3D, including Chen's method, Benoit's method, You's method, Yang's method, Silva's method, Lin's method, Wang's method. The watermark IVC 3D database contains 704 three-dimensional videos, and the distortion types include HEVC compression distortion, gaussian blur, upsampling reduced resolution, and combinations thereof.
The specific operation of each part of the invention is as follows:
(1) extracting the spatial information quantity and the time information quantity:
for the amount of spatial information: firstly, a Scharr operator is used for filtering a brightness map of each distorted single-view video frame, and particularly, a gradient size calculation formula is as follows:
Figure BDA0002276554370000071
wherein G represents the gradient size, Gx、GyScharr convolution representing the horizontal and vertical directions, respectively; then, the mean and the standard deviation of the gradient map of each Scharr filtered frame after the above operations are calculated as the spatial perception information of a single frame, and the spatial information calculation formula of the single frame is as follows:
Figure BDA0002276554370000072
and
Figure BDA0002276554370000073
Figure BDA0002276554370000074
and
Figure BDA0002276554370000075
wherein G isi,d,lAnd Gi,d,rGradient maps of the ith frame of the left and right two distorted videos respectively,
Figure BDA0002276554370000076
and
Figure BDA0002276554370000077
the amount of spatial information of the ith frame of the distorted left and right videos.
For the amount of time information: first, the difference in pixel values of luminance maps in successive frames of a distorted video is extracted as a frame difference map, denoted as Mi,d,l(r)The calculation formula is as follows:
Mi,d,l=Ii,d,l(x,y)-Ii-1,d,l(x,y) and
Mi,d,r=Ii,d,r(x,y)-Ii-1,d,r(x,y) (15)
wherein Ii,d,l(x,y),Ii,d,r(x, y) respectively represent the pixels of the x-th row and the y-th column of the ith frame of the left and right distorted videos; then, the mean and standard deviation of the frame difference image are calculated as the time information of a single frame and recorded as
Figure BDA0002276554370000078
Andthe time information amount calculation formula of a single frame is as follows:
Figure BDA00022765543700000710
and
Figure BDA00022765543700000711
Figure BDA00022765543700000712
and
Figure BDA0002276554370000081
wherein,
Figure BDA0002276554370000082
and
Figure BDA0002276554370000083
respectively representing temporal perceptual information representing two distorted views, left and right, Mi,d,lAnd Mi,d,rAnd frame difference maps respectively representing the left and right distortion views.
(2) And (3) evaluating the binocular competitive advantage of the single-view video by combining spatial and temporal information:
the space information and the time information can provide useful information for binocular competition, so that the dominance degree of the binocular competition is calculated by using two methods of the space information and the time information; dominance decreases with decreasing temporal information content when consecutive frames contain a large amount of motion. The Coefficient of Variation (CV) is therefore used to determine whether there is large motion in successive frames, and is calculated as:
obtaining the quartile of the left and right video sequence by using the variation coefficient of each frame of the left and right video sequence
Figure BDA0002276554370000085
Andand obtaining a threshold value threshold by using the mean value of the quartile of the variation coefficients of the left and right video sequences.
Classifying each frame of the left and right video sequences by using a threshold value, and calculating the frame dominance level of each frame, wherein the calculation formula is as follows:
Figure BDA0002276554370000087
using the obtained frame dominance level g of each frame of the left and right distorted view video sequencei,l(r)Calculating the dominance level of the whole video of the left-view video and the right-view video, wherein the calculation formula is as follows:
Figure BDA0002276554370000088
and
Figure BDA0002276554370000089
dominance level g with left/right view videol(r)Get the weight, denoted as wl(r)The calculation formula is as follows:
Figure BDA00022765543700000810
and
Figure BDA00022765543700000811
(3) calculating the quality score of the three-dimensional video:
in the step, the quality score of the asymmetric distortion three-dimensional video is calculated according to the weight obtained by the binocular competition model, and the quality score of the three-dimensional video is set to be Q3DThe calculation formula is as follows:
Figure BDA0002276554370000091
table 1: the invention compares the performance of the model in the Database Waterloo-IVC-3D Database with other models using different quality evaluation methods with different weighting strategies;
Figure BDA0002276554370000092
Figure BDA0002276554370000101
table 1 shows examples of comparison between different two-dimensional image/video quality evaluation methods and different weighting strategies, from which the three-dimensional video quality weighting strategy proposed by the present invention has higher correlation with subjective evaluation.
Table 2: the performance of the model is compared with that of other models with different quality evaluation methods in a Database Waterloo-IVC-3D Database;
Figure BDA0002276554370000102
table 2 shows the comparison between the evaluation method proposed by the present invention (using FSIM as a two-dimensional single video quality evaluation algorithm) and other three-dimensional video quality evaluation algorithms, and it can be seen from these comparisons that the method proposed by the present invention is most effective.
Table 3: the binocular competition in the Database Waterloo-IVC-3D Database independently uses the space information quantity and the time information quantity, and simultaneously uses the comparison of the experimental results;
Figure BDA0002276554370000111
Figure BDA0002276554370000121
table 3 shows comparison of experimental results of binocular rivalry using the amount of spatial information and the amount of temporal information alone and simultaneously, and it can be seen from the comparison that the effect is the best when the amount of spatial information and the amount of temporal information are used simultaneously.
The above-described embodiments are illustrative of the present invention and not restrictive, it being understood that various changes, modifications, substitutions and alterations can be made herein without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (5)

1. A weighted quality evaluation method of asymmetric distortion three-dimensional video based on space-time information is characterized by comprising the following steps:
A. evaluating the quality score of the single-view video by adopting a two-dimensional image/video quality evaluation method;
B. extracting the space information quantity and the time information quantity of each frame of the single-view video;
C. evaluating binocular competitive advantage of the single-view video by combining spatial and temporal information;
D. and weighting the quality of the left view video and the right view video by combining the spatial information quantity and the temporal information quantity of the single-view video to evaluate the quality score of the three-dimensional video.
2. The method of claim 1, wherein the video quality score is computed for an entire video sequence of the single-view video.
3. The method according to claim 1, wherein the spatial information amount and the temporal information amount are included, and the specific steps are as follows:
A. for the amount of spatial information: firstly, filtering the brightness map of each distorted single-view video frame by using a Scharr operator, wherein the gradient size calculation formula is as follows:
Figure FDA0002276554360000011
wherein G represents the gradient size, Gx、GyScharr convolution representing the horizontal and vertical directions, respectively; then, the mean and the standard deviation of the gradient map of each Scharr filtered frame after the above operation are calculated as the spatial information of a single frame, and the spatial information calculation formula of the single frame is as follows:
Figure FDA0002276554360000012
Figure FDA0002276554360000013
wherein G isi,d,lAnd Gi,d,rGradient maps of the ith frame of the left and right two distorted videos respectively,
Figure FDA0002276554360000021
left and right video representing distortionMean and standard deviation of i frames;
B. for the amount of time information: first, the difference in pixel values of luminance maps in successive frames of a distorted video is extracted as a frame difference map, denoted as Mi,d,l(r)The calculation formula is as follows:
Mi,d,l=Ii,d,l(x,y)-Ii-1,d,l(x,y) and
Mi,d,r=Ii,d,r(x,y)-Ii-1,d,r(x,y) (4)
wherein Ii,d,l(x,y),Ii,d,r(x, y) respectively represent the pixels of the x-th row and the y-th column of the ith frame of the left and right distorted videos; then, the mean value and the standard deviation of the motion difference feature map are calculated to be used as the time information of a single frame and recorded as the time informationAnd
Figure FDA0002276554360000023
the time information amount calculation formula of a single frame is as follows:
Figure FDA0002276554360000024
Figure FDA0002276554360000026
Figure FDA0002276554360000027
wherein,and
Figure FDA0002276554360000029
respectively representing temporal perceptual information representing two distorted views, left and right, Mi,d,lAnd Mi,d,rAnd frame difference maps respectively representing the left and right distortion views.
4. The method of claim 3, wherein the time information is used to calculate a variance factor, and a threshold is obtained from the variance factor to determine whether there is large motion in consecutive frames, and the method comprises the following steps:
A. calculating the coefficient of variation by using the mean value and standard deviation in the time information content of each frame, and recording the coefficient of variation of the distorted frames of the left and right view video sequence as CVi,d,l(r)The calculation formula is as follows:
B. obtaining the quartile of the left and right video sequence by using the variation coefficient of each frame of the left and right video sequence
Figure FDA0002276554360000032
And
Figure FDA0002276554360000033
obtaining a threshold value threshold by using the mean value of the quartile of the variation coefficients of the left and right video sequences;
C. classifying each frame of the left and right video sequences by using a threshold value, and calculating the frame dominance level of each frame, wherein the calculation formula is as follows:
Figure FDA0002276554360000034
wherein,
Figure FDA0002276554360000035
respectively representing the amount of spatial information and temporal information for each frame of the left and right video sequencesStandard deviation of the amount;
D. using the obtained frame dominance level g of each frame of the left and right distorted view video sequencei,l(r)Calculating the dominance level of the whole video of the left-view video and the right-view video, wherein the calculation formula is as follows:
Figure FDA0002276554360000036
5. the method according to claim 4, wherein the left and right view video quality is weighted by using the spatial information and the temporal information of the single view video, and the specific steps are as follows:
A. dominance level g using left and right view videol(r)Get the weight, denoted as wl(r)The calculation formula is as follows:
Figure FDA0002276554360000037
B. calculating the quality score of the three-dimensional video, and recording as Q3DThe calculation formula is as follows:
Figure FDA0002276554360000038
wherein,
Figure FDA0002276554360000039
is the quality score of the left and right view video sequence obtained by two-dimensional image/video quality evaluation.
CN201911125047.XA 2019-11-18 2019-11-18 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information Pending CN110838120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911125047.XA CN110838120A (en) 2019-11-18 2019-11-18 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911125047.XA CN110838120A (en) 2019-11-18 2019-11-18 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information

Publications (1)

Publication Number Publication Date
CN110838120A true CN110838120A (en) 2020-02-25

Family

ID=69576662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911125047.XA Pending CN110838120A (en) 2019-11-18 2019-11-18 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information

Country Status (1)

Country Link
CN (1) CN110838120A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938671A (en) * 2020-07-14 2022-01-14 北京灵汐科技有限公司 Image content analysis method and device, electronic equipment and storage medium
CN114332088A (en) * 2022-03-11 2022-04-12 电子科技大学 Motion estimation-based full-reference video quality evaluation method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090274390A1 (en) * 2008-04-30 2009-11-05 Olivier Le Meur Method for assessing the quality of a distorted version of a frame sequence
CN102523477A (en) * 2011-12-01 2012-06-27 上海大学 Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
CN103686178A (en) * 2013-12-04 2014-03-26 北京邮电大学 Method for extracting area-of-interest of video based on HVS
US20160330439A1 (en) * 2016-05-27 2016-11-10 Ningbo University Video quality objective assessment method based on spatiotemporal domain structure
CN107067333A (en) * 2017-01-16 2017-08-18 长沙矿山研究院有限责任公司 A kind of high altitudes and cold stability of the high and steep slope monitoring method
CN107635136A (en) * 2017-09-27 2018-01-26 北京理工大学 View-based access control model is perceived with binocular competition without with reference to stereo image quality evaluation method
CN109996065A (en) * 2019-04-15 2019-07-09 方玉明 A kind of quality evaluating method for asymmetric distortion 3 D video

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090274390A1 (en) * 2008-04-30 2009-11-05 Olivier Le Meur Method for assessing the quality of a distorted version of a frame sequence
CN102523477A (en) * 2011-12-01 2012-06-27 上海大学 Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
CN103686178A (en) * 2013-12-04 2014-03-26 北京邮电大学 Method for extracting area-of-interest of video based on HVS
US20160330439A1 (en) * 2016-05-27 2016-11-10 Ningbo University Video quality objective assessment method based on spatiotemporal domain structure
CN107067333A (en) * 2017-01-16 2017-08-18 长沙矿山研究院有限责任公司 A kind of high altitudes and cold stability of the high and steep slope monitoring method
CN107635136A (en) * 2017-09-27 2018-01-26 北京理工大学 View-based access control model is perceived with binocular competition without with reference to stereo image quality evaluation method
CN109996065A (en) * 2019-04-15 2019-07-09 方玉明 A kind of quality evaluating method for asymmetric distortion 3 D video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938671A (en) * 2020-07-14 2022-01-14 北京灵汐科技有限公司 Image content analysis method and device, electronic equipment and storage medium
CN114332088A (en) * 2022-03-11 2022-04-12 电子科技大学 Motion estimation-based full-reference video quality evaluation method

Similar Documents

Publication Publication Date Title
CN106028026B (en) A kind of efficient video assessment method for encoding quality based on space-time domain structure
CN107483920B (en) A kind of panoramic video appraisal procedure and system based on multi-layer quality factor
CN103763552B (en) Stereoscopic image non-reference quality evaluation method based on visual perception characteristics
Gu et al. Multiscale natural scene statistical analysis for no-reference quality evaluation of DIBR-synthesized views
CN103606132B (en) Based on the multiframe Digital Image Noise method of spatial domain and time domain combined filtering
CN104243973B (en) Video perceived quality non-reference objective evaluation method based on areas of interest
CN106341677B (en) Virtual view method for evaluating video quality
Ma et al. Reduced-reference video quality assessment of compressed video sequences
KR101761928B1 (en) Blur measurement in a block-based compressed image
CN104079925A (en) Ultrahigh definition video image quality objective evaluation method based on visual perception characteristic
EP1692876A1 (en) Method and system for video quality measurements
CN102523477A (en) Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
CN105635743A (en) Minimum noticeable distortion method and system based on saliency detection and total variation
CN109064418A (en) A kind of Images Corrupted by Non-uniform Noise denoising method based on non-local mean
CN114648482A (en) Quality evaluation method and system for three-dimensional panoramic image
Feng et al. Saliency based objective quality assessment of decoded video affected by packet losses
CN110838120A (en) Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information
CN104992419A (en) Super pixel Gaussian filtering pre-processing method based on JND factor
CN112381744A (en) Adaptive preprocessing method for AV1 synthetic film grains
CN106664404A (en) Block segmentation mode processing method in video coding and relevant apparatus
CN115131229A (en) Image noise reduction and filtering data processing method and device and computer equipment
CN107169941A (en) A kind of video denoising method
CN109801257A (en) No reference DIBR generates image quality evaluating method
CN109996065B (en) Quality evaluation method for asymmetric distortion three-dimensional video
CN111768355A (en) Method for enhancing image of refrigeration type infrared sensor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination