CN113014918B - Virtual viewpoint image quality evaluation method based on skewness and structural features - Google Patents

Virtual viewpoint image quality evaluation method based on skewness and structural features

Info

Publication number
CN113014918B
Authority
CN
China
Prior art keywords
image
virtual viewpoint
gradient
skewness
viewpoint image
Prior art date
Legal status
Active
Application number
CN202110236232.7A
Other languages
Chinese (zh)
Other versions
CN113014918A (en)
Inventor
陈芬
王晨
邹文辉
金充充
彭宗举
王培容
Current Assignee
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN202110236232.7A
Publication of CN113014918A
Application granted
Publication of CN113014918B
Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N 17/02 - Diagnosis, testing or measuring for colour television signals
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 - Image signal generators
    • H04N 13/257 - Colour aspects
    • H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a virtual viewpoint image quality evaluation method based on skewness and structural features, which comprises the following steps: acquiring a virtual viewpoint image; generating a three-channel image corresponding to the virtual viewpoint image; dividing each channel of the three-channel image into a plurality of image blocks and calculating the skewness of each block as the skewness feature of the virtual viewpoint image; extracting the structural features of the virtual viewpoint image; inputting the skewness features and the structural features into a trained evaluation model; and outputting the evaluation result of the virtual viewpoint image. By partitioning the image into blocks and extracting the skewness of each block as a geometric distortion feature, the method better reflects the local geometric distortion of the virtual viewpoint image and ensures the accuracy of the no-reference evaluation in virtual viewpoint image quality assessment. In addition, features do not need to be extracted separately for hole distortion and stretching distortion of the virtual viewpoint image, which improves the effectiveness of the evaluation.

Description

Virtual viewpoint image quality evaluation method based on skewness and structural features
Technical Field
The invention relates to the field of image quality evaluation, in particular to a virtual viewpoint image quality evaluation method based on skewness and structural characteristics.
Background
With the development of digital video technology, single-viewpoint images and videos no longer satisfy people's demand for visual experience, and free-viewpoint video has emerged accordingly. Free-viewpoint video allows a user to freely choose the viewpoint from which to watch, but directly capturing all viewpoints produces too much data for real-time coding and transmission. A free-viewpoint video system therefore adopts a multi-view-plus-depth format: limited color information, depth information and the corresponding parameter information are provided at the input, and the required viewpoint is rendered at the output by a depth-based virtual viewpoint rendering algorithm. However, because of inaccurate depth information and imperfect rendering algorithms, the rendered virtual viewpoint image suffers from geometric distortions such as holes and stretching. These geometric distortions are local distortions that differ from the distortions of ordinary images, so a quality evaluation method dedicated to virtual viewpoint images needs to be designed.
Existing virtual viewpoint image quality evaluation methods fall into two main categories: full-reference and no-reference. Full-reference quality assessment requires all the information of the original image. Although full-reference models are more effective, a distortion-free reference for a synthesized virtual viewpoint image is difficult to obtain in a real scene, so researchers have also proposed no-reference virtual viewpoint image quality evaluation methods. However, some current methods only consider global distortion and ignore the influence of local distortion on image quality, while others take both local and global distortion into account but are less effective.
In summary, how to provide a more effective no-reference virtual viewpoint image quality evaluation method that comprehensively considers distortion has become an urgent problem for those skilled in the art.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the problem actually solved by the invention is: how to provide a no-reference quality evaluation method for virtual viewpoint images that effectively improves the objective evaluation results and ensures consistency with the subjective perception of the human eye.
To solve the above technical problems, the invention adopts the following technical scheme:
a virtual viewpoint image quality evaluation method based on skewness and structural features comprises the following steps:
s1, acquiring a virtual viewpoint image;
s2, generating a three-channel image corresponding to the virtual viewpoint image;
s3, dividing the three-channel image into a plurality of image blocks respectively and calculating the skewness of each block as the skewness characteristic of the virtual viewpoint image;
s4, extracting the structural characteristics of the virtual viewpoint image;
s5, inputting skewness characteristics and structural characteristics into the trained evaluation model;
and S6, outputting the evaluation result of the virtual viewpoint image.
Preferably, in step S2, the virtual viewpoint image is converted to the H, S, V channels, resulting in the three-channel images I_H, I_S, I_V.
Preferably, in step S3, the skewness SK_s^c of the s-th image block of the c channel is calculated as:

SK_s^c = [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^3 ] / [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^2 ]^{3/2}

in the formula, μ_s^c is the mean of the pixels of the s-th image block of the c channel, N is the number of pixels per image block, and x_{s,i}^c is the pixel value of the i-th pixel in the s-th image block of the c channel.
Preferably, step S4 includes:
s401, calculating a horizontal gradient and a vertical gradient of the virtual viewpoint image;
F_x(a,b) = (F(a,b+1) - F(a,b-1)) / 2
F_y(a,b) = (F(a+1,b) - F(a-1,b)) / 2

in the formula, F_x(a,b) is the horizontal gradient of the pixel with index (a,b), and F_y(a,b) is the vertical gradient of the pixel with index (a,b);
s402, calculating the gradient magnitude based on the horizontal gradient and the vertical gradient;

G(a,b) = sqrt( F_x(a,b)^2 + F_y(a,b)^2 )

in the formula, G(a,b) is the gradient magnitude of the pixel with index (a,b);
s403, calculating a rotation-invariant uniform local binary pattern value based on the gradient magnitude;

GM(a,b) = Σ_{m=0}^{M-1} s(G_m - G(a,b)),  if U(LBP_{M,N}) ≤ 2;  otherwise GM(a,b) = M + 1

in the formula, GM(a,b) is the rotation-invariant uniform local binary pattern value of the pixel with index (a,b); it describes the relationship between pixels in the image neighborhood, and different GM values represent different local gradient patterns. M is the number of neighborhood pixels of the local gradient pattern and N is the neighborhood radius. G_m is the gradient magnitude of the m-th neighboring pixel of the pixel with index (a,b), U(LBP_{M,N}) is the uniformity measure of the local binary pattern, calculated as the number of bitwise transitions, and s(·) is a threshold function:

s(x) = 1, if x ≥ 0;  s(x) = 0, otherwise.
s404, accumulating the gradient magnitudes of the pixels having the same GM pattern to obtain a gradient-weighted GM histogram, which is used as the structural feature of the virtual viewpoint image;

H(k) = Σ_{a=1}^{A} Σ_{b=1}^{B} G(a,b) · f(GM(a,b), k)

where A is the number of image rows, B is the number of image columns, k is the k-th GM pattern, k ∈ [0, K], K is the number of GM patterns, H(k) is the gradient-weighted GM histogram, and f(GM(a,b), k) = 1 if GM(a,b) = k, and 0 otherwise.
In summary, compared with the prior art, the invention has the following technical advantages:
(1) The hole-distorted image, the stretch-distorted image and the original image exhibit different skewness characteristics. By partitioning the image into blocks and extracting the skewness of each block as a geometric distortion feature, the local geometric distortion of the virtual viewpoint image is better reflected, which ensures the accuracy of this no-reference method in virtual viewpoint image quality evaluation. In addition, features do not need to be extracted separately for hole distortion and stretching distortion of the virtual viewpoint image, which improves the effectiveness of the evaluation.
(2) Because the HSV color space is closer to human color perception than the RGB color space, extracting the skewness features of the virtual viewpoint image in the HSV color space makes the evaluation result agree better with the subjective perception of the human eye.
(3) By extracting the image gradient and accumulating the gradient magnitudes of the pixels that share the same local gradient pattern, a gradient-weighted local-gradient-pattern histogram is obtained and used as the structural feature of the virtual viewpoint image, which represents the image structure well.
Drawings
Fig. 1 and fig. 2 are flowcharts illustrating a method for evaluating the quality of a virtual viewpoint image based on skewness and structural features;
FIG. 3 shows the skewness features of an original image of the Book Arrival sequence in the IRCCyN/IVC database and of distorted images obtained with different rendering methods; since geometric distortion is local, the skewness feature clearly separates hole distortion and stretching distortion from non-geometric distortion within the local blocks, as marked by the blue and red boxes.
Fig. 4(a), fig. 4(b) and fig. 4(c) are block diagrams of a hole image, a stretched image and an original image in the IRCCyN/IVC database Book arrival sequence, respectively;
Fig. 5 is a statistical chart of the 27-dimensional skewness values of Fig. 4(a), 4(b) and 4(c). The first marked block in Fig. 4 corresponds to skewness features 1-3 in Fig. 5, and so on. The marked blocks in Fig. 4(a) and Fig. 4(b) contain obvious distortion, and the corresponding features 1-3, 10-12 and 19-21 of the hole image and the stretched image in Fig. 5 differ from those of the original image.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1 to 5, the present invention discloses a virtual viewpoint image quality evaluation method based on skewness and structural features, comprising:
s1, acquiring a virtual viewpoint image;
s2, generating a three-channel image corresponding to the virtual viewpoint image;
s3, dividing the three-channel image into a plurality of image blocks respectively and calculating the skewness of each block as the skewness characteristic of the virtual viewpoint image;
s4, extracting the structural characteristics of the virtual viewpoint image;
s5, inputting skewness characteristics and structural characteristics into the trained evaluation model;
and S6, outputting the evaluation result of the virtual viewpoint image.
The hole-distorted image, the stretch-distorted image and the original image exhibit different skewness characteristics. By partitioning the image into blocks and extracting the skewness of each block as a geometric distortion feature, the local geometric distortion of the virtual viewpoint image is better reflected, which ensures the accuracy of this no-reference method in virtual viewpoint image quality evaluation. In addition, features do not need to be extracted separately for hole distortion and stretching distortion of the virtual viewpoint image, which improves the efficiency of the evaluation.
In step S2, the virtual viewpoint image is converted to the H, S, V channels, obtaining the three-channel images I_H, I_S, I_V. The conversion of the virtual viewpoint image may be performed as follows:
R′=R/255
G′=G/255
B′=B/255
C_max = max(R′, G′, B′)
C_min = min(R′, G′, B′)
Δ = C_max - C_min

I_H = 60° × ((G′ - B′)/Δ mod 6),  if C_max = R′ and Δ ≠ 0
I_H = 60° × ((B′ - R′)/Δ + 2),   if C_max = G′ and Δ ≠ 0
I_H = 60° × ((R′ - G′)/Δ + 4),   if C_max = B′ and Δ ≠ 0
I_H = 0,                         if Δ = 0

I_S = Δ / C_max  (I_S = 0 when C_max = 0)

I_V = C_max
Because the HSV color space is closer to human color perception than the RGB color space, extracting the skewness features of the virtual viewpoint image in the HSV color space makes the evaluation result agree better with the subjective perception of the human eye.
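For illustration, the following is a minimal NumPy sketch of the above RGB-to-HSV conversion, assuming an 8-bit RGB input; the function name and the handling of the Δ = 0 and C_max = 0 cases are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def rgb_to_hsv_channels(rgb):
    """Convert an 8-bit RGB image (H x W x 3) to the I_H, I_S, I_V channels
    using the formulas above. Hue is returned in degrees [0, 360)."""
    rgb = rgb.astype(np.float64) / 255.0            # R', G', B'
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    c_max = rgb.max(axis=-1)
    c_min = rgb.min(axis=-1)
    delta = c_max - c_min

    h = np.zeros_like(c_max)
    nz = delta > 0                                   # hue set to 0 where delta == 0
    r_max = nz & (c_max == r)
    g_max = nz & (c_max == g) & ~r_max
    b_max = nz & (c_max == b) & ~r_max & ~g_max
    h[r_max] = 60.0 * (((g[r_max] - b[r_max]) / delta[r_max]) % 6)
    h[g_max] = 60.0 * ((b[g_max] - r[g_max]) / delta[g_max] + 2)
    h[b_max] = 60.0 * ((r[b_max] - g[b_max]) / delta[b_max] + 4)

    s = np.where(c_max > 0, delta / np.maximum(c_max, 1e-12), 0.0)
    v = c_max
    return h, s, v                                   # I_H, I_S, I_V
```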
In a specific implementation, in step S3, the skewness SK_s^c of the s-th image block of the c channel is calculated as:

SK_s^c = [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^3 ] / [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^2 ]^{3/2}

in the formula, μ_s^c is the mean of the pixels of the s-th image block of the c channel, N is the number of pixels per image block, and x_{s,i}^c is the pixel value of the i-th pixel in the s-th image block of the c channel.
In the present invention, I_H, I_S and I_V may each be divided into 9 equal-sized image blocks, so that a total of 27 skewness features are obtained.
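As a sketch of this step, the block skewness can be computed per channel as below; splitting each channel into a 3×3 grid of roughly equal blocks is an assumption (the patent only states 9 equal-sized blocks), and the small epsilon guarding flat blocks is an implementation choice.

```python
import numpy as np

def block_skewness_features(channel, grid=(3, 3)):
    """Split one channel into grid[0]*grid[1] roughly equal blocks and return
    the sample skewness of each block (the formula for SK_s^c above)."""
    feats = []
    rows = np.array_split(channel.astype(np.float64), grid[0], axis=0)
    for row in rows:
        for block in np.array_split(row, grid[1], axis=1):
            x = block.ravel()
            mu = x.mean()
            m2 = ((x - mu) ** 2).mean()              # second central moment
            m3 = ((x - mu) ** 3).mean()              # third central moment
            feats.append(m3 / (m2 ** 1.5 + 1e-12))   # skewness; eps guards flat blocks
    return np.array(feats)

# 27-dimensional skewness feature of a virtual viewpoint image:
# sk = np.concatenate([block_skewness_features(c) for c in (i_h, i_s, i_v)])
```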
In a specific implementation, step S4 includes:
s401, calculating a horizontal gradient and a vertical gradient of the virtual viewpoint image;
F_x(a,b) = (F(a,b+1) - F(a,b-1)) / 2
F_y(a,b) = (F(a+1,b) - F(a-1,b)) / 2

in the formula, F_x(a,b) is the horizontal gradient of the pixel with index (a,b), and F_y(a,b) is the vertical gradient of the pixel with index (a,b);
s402, calculating the gradient magnitude based on the horizontal gradient and the vertical gradient;

G(a,b) = sqrt( F_x(a,b)^2 + F_y(a,b)^2 )

in the formula, G(a,b) is the gradient magnitude of the pixel with index (a,b);
s403, calculating a rotation-invariant uniform local binary pattern value based on the gradient magnitude;

GM(a,b) = Σ_{m=0}^{M-1} s(G_m - G(a,b)),  if U(LBP_{M,N}) ≤ 2;  otherwise GM(a,b) = M + 1

in the formula, GM(a,b) is the rotation-invariant uniform local binary pattern value of the pixel with index (a,b); it describes the relationship between pixels in the image neighborhood, and different GM values represent different local gradient patterns. M is the number of neighborhood pixels of the local gradient pattern and N is the neighborhood radius. G_m is the gradient magnitude of the m-th neighboring pixel of the pixel with index (a,b), and U(LBP_{M,N}) is the uniformity measure of the local binary pattern, calculated as the number of bitwise transitions; the rotation-invariant uniform LBP therefore has M + 2 GM patterns, describing different local gradient structures. s(·) is a threshold function:

s(x) = 1, if x ≥ 0;  s(x) = 0, otherwise.
s404, accumulating the gradient magnitudes of the pixels having the same GM pattern to obtain a gradient-weighted GM histogram, which is used as the structural feature of the virtual viewpoint image;

H(k) = Σ_{a=1}^{A} Σ_{b=1}^{B} G(a,b) · f(GM(a,b), k)

where A is the number of image rows, B is the number of image columns, k is the k-th GM pattern, k ∈ [0, K], K is the number of GM patterns, H(k) is the gradient-weighted GM histogram, and f(GM(a,b), k) = 1 if GM(a,b) = k, and 0 otherwise.
the gradient of the image is extracted, and the GM histogram is calculated and used as the structural feature of the virtual viewpoint image, so that the structural feature of the image can be well represented. The neighborhood radius N may be 1, the number M of neighborhood pixels is 8, and there are 10 structural features.
Taking the division into 9 image blocks and a neighborhood radius N of 1 as an example, the method of the invention obtains 27-dimensional skewness features and 10-dimensional structural features, i.e. 37 features in total. The evaluation model can be a support vector machine (SVM), which maps the quality-aware features to subjective scores. The invention randomly divides the synthesized-view quality database into a training set and a test set 1000 times: 80% of the image samples and the corresponding subjective scores are used for training, and the remaining 20% of the samples are used for testing. Finally, the medians of the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC) and the root mean square error (RMSE) are taken as the final result.
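The evaluation protocol described above could be sketched with scikit-learn as follows; `features` and `mos` stand for the 37-dimensional feature matrix and the subjective scores of the database (placeholders, not names from the patent), and the SVR kernel and hyperparameters are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def evaluate_model(features, mos, n_trials=1000, test_size=0.2):
    """Repeat random train/test splits and report the median PLCC, SROCC and
    RMSE of an SVR trained on 80% of the samples and tested on the rest."""
    plcc, srocc, rmse = [], [], []
    for seed in range(n_trials):
        x_tr, x_te, y_tr, y_te = train_test_split(
            features, mos, test_size=test_size, random_state=seed)
        model = SVR(kernel="rbf", C=100.0, gamma="scale")   # illustrative parameters
        model.fit(x_tr, y_tr)
        pred = model.predict(x_te)
        plcc.append(pearsonr(pred, y_te)[0])
        srocc.append(spearmanr(pred, y_te)[0])
        rmse.append(np.sqrt(np.mean((pred - y_te) ** 2)))
    return np.median(plcc), np.median(srocc), np.median(rmse)
```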
To verify the effect of the disclosed technical scheme, the invention is compared with six existing virtual viewpoint image quality assessment (IQA) methods and six state-of-the-art general no-reference (NR) quality methods; the former include MW-PSNR, MP-PSNR, APT, NIQSV+, MNSS and LOGS, and the latter include BIQI, BRISQUE, DIIVINE, M3, QAC and NIQE. The test conditions are the same as those of the method of the present invention. The experimental results are summarized in Table 1. As can be seen from Table 1, the method of the present invention outperforms the six existing virtual viewpoint image IQA methods on both databases: (1) on the IRCCyN/IVC database, the PLCC and SROCC values of the method are the highest and the RMSE value is the lowest, indicating better prediction accuracy and monotonicity; (2) on the MCL-3D database, the invention still has significant advantages over the existing full-reference (FR) and NR virtual viewpoint IQA methods.
Table 1 also lists the performance comparison of the method of the invention with the general NR quality assessment methods on the IRCCyN/IVC and MCL-3D image databases. Most general NR quality evaluation methods perform worse than the virtual viewpoint image quality evaluation methods on both databases, while the method provided by the present invention has significant advantages on both databases.
TABLE 1: Performance comparison with existing virtual viewpoint IQA methods and general NR methods on the IRCCyN/IVC and MCL-3D databases (table data not reproduced)
The method randomly selects a certain number of image samples for training and tests on the remaining samples. Generally, a training set with many samples helps improve the performance and stability of the trained model, while a training set with few samples may lead to overfitting. The invention sets the ratio of the training set to 90%, 80%, 70%, 60% and 50% of the entire database. The training/testing process is likewise repeated 1000 times, and the experimental results are shown in Table 2: the performance of the invention decreases as the number of training images is reduced. For the IRCCyN/IVC database, the method is still superior to most existing NR quality evaluation methods when the training images account for only 50% of all images. For the MCL-3D database, even if only 50% of the images are used for model training, the performance of the method is still higher than 0.89, far above that of the existing virtual viewpoint and general NR IQA methods. These results show that the present invention does not depend heavily on the number of training images, i.e. a small number of training images is sufficient to obtain good performance.
TABLE 2: Performance under different training-set ratios (table data not reproduced)
The invention uses two groups of features to evaluate the synthesized-viewpoint image databases. To examine the relative contribution of the two feature groups, model training and quality prediction are performed on the MCL-3D and IRCCyN/IVC databases using the skewness features and the GM features separately. The training and testing process is repeated 1000 times, and the average is taken as the final quality score. The experimental results are shown in Table 3 and indicate that: (1) on both databases, either the skewness features or the GM features alone achieve good performance, clearly superior to most existing virtual viewpoint IQA methods and general NR quality evaluation methods, which proves the effectiveness of the proposed features; (2) on the IRCCyN/IVC database the performance of the skewness features is slightly higher than that of GM, whereas on the MCL-3D database it is slightly lower, indicating that the distortion of the virtual viewpoint images in MCL-3D is biased more toward structural distortion, so the structural feature measures that distortion better; psychological studies have shown that the human visual system (HVS) is more sensitive to image edges and structures; (3) the proposed evaluation method achieves better performance on both full databases, which further proves the effectiveness of fusing the two feature groups, skewness and structure, in virtual viewpoint image quality evaluation.
TABLE 3: Contribution of the skewness and GM features on the MCL-3D and IRCCyN/IVC databases (table data not reproduced)
The performance of the SVM regression adopted by the invention is also compared with a regression method based on random forests (RF). The experimental results are shown in Table 4: on the MCL-3D database the performance of the SVM method is comparable to that of the RF method, but on the IRCCyN/IVC database the SVM method performs significantly better than the RF method.
TABLE 4: Comparison of SVM-based and random-forest-based regression (table data not reproduced)
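The SVM-versus-RF comparison above can be reproduced along the lines of the following sketch, which fits both regressors on the same 80/20 split; the hyperparameters are again illustrative and `features`/`mos` are the same placeholders as before.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def compare_regressors(features, mos, seed=0):
    """One 80/20 split; fit SVR and RF on the same training data and return
    their test predictions alongside the ground-truth scores."""
    x_tr, x_te, y_tr, y_te = train_test_split(features, mos,
                                              test_size=0.2, random_state=seed)
    svr = SVR(kernel="rbf", C=100.0, gamma="scale").fit(x_tr, y_tr)
    rf = RandomForestRegressor(n_estimators=200, random_state=seed).fit(x_tr, y_tr)
    return svr.predict(x_te), rf.predict(x_te), y_te
```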
The above are only preferred embodiments of the present invention. It should be noted that a person skilled in the art may make several variations and modifications without departing from the technical solution of the invention, and such variations and modifications should also be considered as falling within the protection scope of the present invention.

Claims (2)

1. A virtual viewpoint image quality evaluation method based on skewness and structural features is characterized by comprising the following steps:
s1, acquiring a virtual viewpoint image;
s2, generating a three-channel image corresponding to the virtual viewpoint image;
s3, dividing the three-channel image into a plurality of image blocks respectively and calculating the skewness of each block as the skewness feature of the virtual viewpoint image; in step S3, the skewness SK_s^c of the s-th image block of the c channel is calculated as:

SK_s^c = [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^3 ] / [ (1/N) Σ_{i=1}^{N} (x_{s,i}^c - μ_s^c)^2 ]^{3/2}

in the formula, μ_s^c is the mean of the pixels of the s-th image block of the c channel, N is the number of pixels per image block, and x_{s,i}^c is the pixel value of the i-th pixel in the s-th image block of the c channel;
s4, extracting the structural characteristics of the virtual viewpoint image; step S4 includes:
s401, calculating a horizontal gradient and a vertical gradient of the virtual viewpoint image;
F_x(a,b) = (F(a,b+1) - F(a,b-1)) / 2
F_y(a,b) = (F(a+1,b) - F(a-1,b)) / 2

in the formula, F_x(a,b) is the horizontal gradient of the pixel with index (a,b), and F_y(a,b) is the vertical gradient of the pixel with index (a,b);
s402, calculating the gradient magnitude based on the horizontal gradient and the vertical gradient;

G(a,b) = sqrt( F_x(a,b)^2 + F_y(a,b)^2 )

in the formula, G(a,b) is the gradient magnitude of the pixel with index (a,b);
s403, calculating a rotation-invariant uniform local binary pattern value based on the gradient magnitude;

GM(a,b) = Σ_{m=0}^{M-1} s(G_m - G(a,b)),  if U(LBP_{M,N}) ≤ 2;  otherwise GM(a,b) = M + 1

in the formula, GM(a,b) is the rotation-invariant uniform local binary pattern value of the pixel with index (a,b), which describes the relationship between pixels in the image neighborhood, and different GM values represent different local gradient patterns; M is defined as the number of neighborhood pixels of the local gradient pattern and N as the neighborhood radius; G_m is the gradient magnitude of the m-th neighboring pixel of the pixel with index (a,b), U(LBP_{M,N}) is the uniformity measure of the local binary pattern, calculated as the number of bitwise transitions, and s(·) is a threshold function:

s(x) = 1, if x ≥ 0;  s(x) = 0, otherwise;
s404, accumulating the gradient magnitudes of the pixels having the same GM pattern to obtain a gradient-weighted GM histogram, which is used as the structural feature of the virtual viewpoint image;

H(k) = Σ_{a=1}^{A} Σ_{b=1}^{B} G(a,b) · f(GM(a,b), k)

where A is the number of image rows, B is the number of image columns, k is the k-th GM pattern, H(k) is the gradient-weighted GM histogram, and f(GM(a,b), k) = 1 if GM(a,b) = k, and 0 otherwise;
s5, inputting skewness characteristics and structural characteristics into the trained evaluation model;
and S6, outputting the evaluation result of the virtual viewpoint image.
2. The method for evaluating the quality of a virtual viewpoint image based on skewness and structural features as claimed in claim 1, wherein in step S2, the virtual viewpoint image is converted to the H, S, V channels to obtain the three-channel images I_H, I_S, I_V.
CN202110236232.7A 2021-03-03 2021-03-03 Virtual viewpoint image quality evaluation method based on skewness and structural features Active CN113014918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110236232.7A CN113014918B (en) 2021-03-03 2021-03-03 Virtual viewpoint image quality evaluation method based on skewness and structural features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110236232.7A CN113014918B (en) 2021-03-03 2021-03-03 Virtual viewpoint image quality evaluation method based on skewness and structural features

Publications (2)

Publication Number Publication Date
CN113014918A CN113014918A (en) 2021-06-22
CN113014918B true CN113014918B (en) 2022-09-02

Family

ID=76404073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110236232.7A Active CN113014918B (en) 2021-03-03 2021-03-03 Virtual viewpoint image quality evaluation method based on skewness and structural features

Country Status (1)

Country Link
CN (1) CN113014918B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341677A (en) * 2015-07-07 2017-01-18 中国科学院深圳先进技术研究院 Virtual viewpoint video quality evaluation method
CN106875383A (en) * 2017-01-24 2017-06-20 北京理工大学 The insensitive blurred picture quality evaluating method of content based on Weibull statistical nature
CN108289222A (en) * 2018-01-26 2018-07-17 嘉兴学院 A kind of non-reference picture quality appraisement method mapping dictionary learning based on structural similarity
CN110246111A (en) * 2018-12-07 2019-09-17 天津大学青岛海洋技术研究院 Based on blending image with reinforcing image without reference stereo image quality evaluation method
CN110996096A (en) * 2019-12-24 2020-04-10 嘉兴学院 Tone mapping image quality evaluation method based on structural similarity difference

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341677A (en) * 2015-07-07 2017-01-18 中国科学院深圳先进技术研究院 Virtual viewpoint video quality evaluation method
CN106875383A (en) * 2017-01-24 2017-06-20 北京理工大学 The insensitive blurred picture quality evaluating method of content based on Weibull statistical nature
CN108289222A (en) * 2018-01-26 2018-07-17 嘉兴学院 A kind of non-reference picture quality appraisement method mapping dictionary learning based on structural similarity
CN110246111A (en) * 2018-12-07 2019-09-17 天津大学青岛海洋技术研究院 Based on blending image with reinforcing image without reference stereo image quality evaluation method
CN110996096A (en) * 2019-12-24 2020-04-10 嘉兴学院 Tone mapping image quality evaluation method based on structural similarity difference

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Analysis of Probability Density Functions in Existing No-Reference Image Quality Assessment Algorithm for Contrast-Distorted Images"; Ismail Taha Ahmed et al.; 2019 IEEE 10th Control and System Graduate Research Colloquium (ICSGRC); 2019-09-16; full text *
"Quality assessment method for stereoscopic virtual viewpoint images based on 3D perception"; 汤锐彬 et al.; 光电子·激光 (Journal of Optoelectronics·Laser); 2018-12-31; Vol. 29, No. 8; 893-902 *
"Reference image quality evaluation algorithm based on gradient magnitude and gradient orientation histograms"; 王同罕 et al.; 东南大学学报(自然科学版) (Journal of Southeast University, Natural Science Edition); 2018-07-30; Vol. 48, No. 2; 276-281 *
"Quality evaluation method for virtual view images based on edge difference"; 张艳; 电子与信息学报 (Journal of Electronics & Information Technology); 2013-08-25; Vol. 35, No. 8; 1894-1900 *

Also Published As

Publication number Publication date
CN113014918A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN107767413B (en) Image depth estimation method based on convolutional neural network
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN101610425B (en) Method for evaluating stereo image quality and device
CN112950596B (en) Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN110443800A (en) The evaluation method of video image quality
CN102722888A (en) Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN116403063A (en) No-reference screen content image quality assessment method based on multi-region feature fusion
CN111641822A (en) Method for evaluating quality of repositioning stereo image
CN105488792A (en) No-reference stereo image quality evaluation method based on dictionary learning and machine learning
CN113014918B (en) Virtual viewpoint image quality evaluation method based on skewness and structural features
Yang et al. EHNQ: Subjective and objective quality evaluation of enhanced night-time images
CN117252936A (en) Infrared image colorization method and system adapting to multiple training strategies
CN108648186B (en) No-reference stereo image quality evaluation method based on primary visual perception mechanism
CN107578406A (en) Based on grid with Wei pool statistical property without with reference to stereo image quality evaluation method
CN104820988B (en) One kind is without with reference to objective evaluation method for quality of stereo images
CN113192003B (en) Spliced image quality evaluation method
CN114067006B (en) Screen content image quality evaluation method based on discrete cosine transform
CN111083468B (en) Short video quality evaluation method and system based on image gradient
CN110223268B (en) Drawn image quality evaluation method
CN112508847A (en) Image quality evaluation method based on depth feature and structure weighted LBP feature
CN113469998B (en) Full-reference image quality evaluation method based on subjective and objective feature fusion
CN112770105B (en) Repositioning stereo image quality evaluation method based on structural features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant