CN109191428B - Masking texture feature-based full-reference image quality evaluation method - Google Patents
- Publication number
- CN109191428B (application CN201810834955.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- similarity
- reference image
- formula
- color space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Color Image Communication Systems (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a full-reference image quality evaluation method based on masking texture features, belonging to the technical field of image processing and image quality evaluation. The method first performs color space conversion on the reference image and the distorted image; second, it extracts the gradient amplitude and gradient direction features of both images and calculates the image gradient information similarity; it then calculates the texture feature similarity and the color difference, and takes the mean and standard deviation of each to form a 6-D feature vector; a regression model is established with a random forest to fuse the feature vectors with the subjective MOS values, and the model is trained. Finally, the 6-D feature vector of the image under test is extracted and input into the trained regression model, completing the objective image quality evaluation. The disclosed method adopts three different similarity features and establishes the regression model with a random forest, achieving high-precision objective evaluation of full-reference image quality while maintaining high consistency with human visual characteristics.
Description
Technical Field
The invention belongs to the technical field of image processing and image quality evaluation, and relates to a full-reference image quality evaluation method based on masking texture features.
Background
With the arrival of the big data age, more and more images are shared on the network. As an important carrier for acquiring information and communicating, digital images have gradually changed people's way of life. The sharp growth in data volume brings great challenges: an image may suffer a certain degree of distortion during acquisition, storage, transmission and processing. Therefore, how to process and transmit images effectively and evaluate image quality accurately has become an urgent research problem.
In recent years, full-reference image quality evaluation algorithms and corresponding devices have been widely applied in various image processing systems for parameter optimization, so full-reference image quality evaluation has become a research hotspot. Most existing full-reference methods adopt a framework based on the human visual system (HVS). Z. Wang et al. proposed an image evaluation method (SSIM): first, three indexes of the reference image and the corresponding distorted image are extracted, namely brightness information, contrast information and structure information; second, the similarity of the three indexes is calculated, yielding the brightness similarity, contrast similarity and structural similarity; finally, the three similarity features are averaged to obtain the quality score of the distorted image, and on this theoretical basis later methods assign visual-characteristic weights according to image content. In addition, some methods extract a global feature from the whole image in the spatial domain for quality evaluation, but such methods cannot be used to evaluate color images.
At present, some research describes image structure information with frequency-domain features to further improve image quality evaluation models. However, most image quality evaluation methods based on feature-similarity computation cannot accurately reflect the visual masking effect of the human eye and ignore the influence of complex physiological and psychological factors on human vision, so the precision of their evaluation results is low.
Disclosure of Invention
The invention aims to provide a masking texture feature-based full-reference image quality evaluation method, which solves the problems that existing evaluation methods cannot accurately reflect the masking effect of human vision and neglect the influence of complex physiological and psychological factors on human vision.
The technical scheme adopted by the invention is a masking texture feature-based full-reference image quality evaluation method comprising the following steps:
step 1, converting a reference image and a distorted image in a database from the RGB color space to the Lab color space, and separating the color information and brightness information of the image;
step 2, respectively extracting the gradient amplitude and gradient direction features of the reference image and the distorted image in the L channel according to the Lab color space obtained in step 1, and calculating the gradient amplitude similarity and the gradient direction similarity;
step 3, after the step 1 is finished, Laws texture characteristics of the L channel in the reference image and the distorted image are sequentially extracted, and the texture similarity mean value and the standard deviation of the reference image and the distorted image are counted;
step 4, calculating the color differences of the reference image and the distorted image in three channels L, a and b according to the Lab color space obtained in the step 1, and counting the mean value and the standard deviation of the color differences;
step 5, after steps 2, 3 and 4 are finished, the obtained gradient amplitude similarity, gradient direction similarity, texture similarity mean and standard deviation, and color difference mean and standard deviation are fused in a regression model through a random forest; the subjective evaluation score (MOS) values are input into the regression model for training, and the trained model is used directly for accurate quality prediction of the image to be evaluated.
Yet another feature of the present invention is that,
the specific process of step 1 is as follows:
color space conversion is performed on the reference image and the distorted image in the database according to formulas 1-3, and conversion is performed from an RGB color space to a Lab color space:
wherein R, G and B respectively denote the three channels of the color image, and X, Y and Z denote the color tristimulus values; X0 = 0.9505, Y0 = 1.000 and Z0 = 1.0890 are the tristimulus values under D65 illumination; L* denotes the lightness channel after color space conversion, and a* and b* denote the chrominance channels after conversion. The Lab color space is obtained from the RGB color space through formulas 1, 2 and 3; the image size after conversion equals that before conversion, and the luminance information (L channel) is separated from the chrominance information (a and b channels).
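The conversion of step 1 can be sketched in Python as follows. This is an illustrative implementation, not the patent's own code: the text does not reproduce the RGB-to-XYZ matrix of formula 1, so the standard sRGB/D65 matrix and the standard CIE Lab nonlinearity are assumed; only the white point X0 = 0.9505, Y0 = 1.000, Z0 = 1.0890 is taken from the text.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an RGB image (floats in [0, 1]) to CIE Lab (formulas 1-3).

    Assumption: the standard sRGB/D65 RGB->XYZ matrix, since the patent
    text does not reproduce formula 1.
    """
    # Formula 1 (assumed): linear RGB -> XYZ.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb.reshape(-1, 3) @ M.T

    # Formula 2: normalise by the D65 white point given in the text.
    white = np.array([0.9505, 1.000, 1.0890])  # X0, Y0, Z0
    t = xyz / white

    # Formula 3: standard CIE nonlinearity, then L*, a*, b*.
    f = np.where(t > (6 / 29) ** 3,
                 np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[:, 1] - 16
    a = 500 * (f[:, 0] - f[:, 1])
    b = 200 * (f[:, 1] - f[:, 2])
    # Same spatial size as the input; L is separated from a and b.
    return np.stack([L, a, b], axis=1).reshape(rgb.shape)
```

A white pixel maps to L* = 100 with a* = b* = 0, which is a quick sanity check on the white-point normalisation.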
The specific process of step 2 is as follows:
step 2.1, performing convolution operation on the reference image and the distorted image respectively by using a Prewitt operator with 3 × 3 window horizontal and vertical components, and extracting the features of gradient amplitude and gradient direction:
for an image f (x), x represents the position of a pixel point, and the method of convolving the image is shown in formula 4:
in the formula, Gx(x) denotes the horizontal gradient value and Gy(x) denotes the vertical gradient value;
step 2.2, after step 2.1, calculating the gradient amplitude value gm (x) and the gradient direction value θ (x) of the reference image and the distorted image according to the formulas 5 and 6, respectively, wherein the specific calculation method is as follows:
step 2.3, after step 2.2, the gradient amplitude similarity Sgm(x) and the gradient direction similarity Sor(x) of the reference image and the distorted image are calculated according to formula 7 and formula 8, respectively; the specific calculation method is as follows:
in formula 7, m and n respectively denote the width and height of the image, x denotes the position of the pixel, and Ir(x) and Id(x) respectively denote the reference image and the distorted image; in formula 8, θr and θd respectively denote the gradient directions of the reference image and the distorted image, and C1 = 1.
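Step 2 can be sketched as follows. The Prewitt templates and the exact forms of formulas 5-8 are not reproduced in the text, so the standard Prewitt operator, the usual magnitude/direction definitions, and an SSIM-style similarity ratio with C1 = 1 are assumed here:

```python
import numpy as np

def conv3(img, k):
    # 'Same'-size 3x3 correlation with edge padding; stands in for the
    # convolution of formula 4 (kernel flips only change the gradient sign).
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(k[i, j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))

def gradient_similarity(ref, dist, c1=1.0):
    """Assumed forms of formulas 5-8 on two L-channel images."""
    # 3x3 Prewitt operator, horizontal and vertical components.
    px = np.array([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]]) / 3.0
    py = px.T

    def grad(img):
        gx, gy = conv3(img, px), conv3(img, py)
        gm = np.sqrt(gx ** 2 + gy ** 2)      # formula 5: gradient amplitude
        theta = np.arctan2(gy, gx)           # formula 6: gradient direction
        return gm, theta

    gm_r, th_r = grad(ref)
    gm_d, th_d = grad(dist)
    # Assumed SSIM-style ratios; C1 = 1 keeps the denominators nonzero.
    s_gm = (2 * gm_r * gm_d + c1) / (gm_r ** 2 + gm_d ** 2 + c1)  # formula 7
    s_or = (2 * th_r * th_d + c1) / (th_r ** 2 + th_d ** 2 + c1)  # formula 8
    return s_gm, s_or
```

When the distorted image equals the reference, both similarity maps are identically 1, consistent with a similarity measure bounded above by 1.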
The specific process of step 3 is as follows:
step 3.1, extracting texture features, namely performing convolution operation on the image by adopting four two-dimensional Laws filters, wherein the four two-dimensional Laws filters are shown as a formula 9:
for an image f (x), x represents the position of the pixel point, the convolution operation is performed on the image and the four templates in the formula 9 respectively, and the maximum value is taken, and the specific form is shown in the formula 10:
te=max(f(x)*i),i=(a),(b),(c),(d) (10)
step 3.2, after the step 3.1, calculating the texture similarity of the reference image and the distorted image, wherein the specific calculation mode is as follows:
in formula 11, ter and ted respectively denote the texture features of the reference image and the distorted image, and C2 = 100;
Step 3.3, after step 3.2, the mean μte and standard deviation σte of the texture similarity are computed; the specific statistical form is shown in formula 12:
In formula 12, μte denotes the texture similarity mean, σte denotes the texture similarity standard deviation, and n denotes the total number of pixels.
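Step 3 can be sketched as follows. Since the four templates of formula 9 are not reproduced in the text, four common 3 × 3 Laws masks built from the 1-D kernels L3, E3 and S3 are assumed, and formula 11 is assumed to be an SSIM-style ratio with C2 = 100:

```python
import numpy as np

# 1-D Laws kernels; the four 2-D templates of formula 9 are not
# reproduced in the text, so these four common masks are an assumption.
L3 = np.array([1., 2., 1.])   # level
E3 = np.array([-1., 0., 1.])  # edge
S3 = np.array([-1., 2., -1.]) # spot
MASKS = [np.outer(L3, E3), np.outer(E3, L3),
         np.outer(E3, E3), np.outer(S3, S3)]

def conv3(img, k):
    # 'Same'-size 3x3 correlation with edge padding.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(k[i, j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))

def laws_texture(img):
    # Formula 10: filter with each template, keep the pointwise maximum.
    return np.max(np.stack([conv3(img, m) for m in MASKS]), axis=0)

def texture_similarity_stats(ref, dist, c2=100.0):
    te_r, te_d = laws_texture(ref), laws_texture(dist)
    # Formula 11 (assumed ratio); C2 = 100 keeps the denominator nonzero.
    s_te = (2 * te_r * te_d + c2) / (te_r ** 2 + te_d ** 2 + c2)
    return s_te.mean(), s_te.std()  # formula 12: mean and std
```

For identical inputs the similarity map is 1 everywhere, so the mean is 1 and the standard deviation 0.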
The specific process of step 4 is as follows:
step 4.1, according to the Lab color space obtained in step 1, the color difference values Δ E of the reference image and the distorted image under the three channels L, a, and b are respectively calculated, as shown in formula 13:
in formula 13, L*, a* and b* denote the values of the three channels in the Lab color space, and the subscripts r and d respectively denote the reference image and the distorted image;
step 4.2, the mean μΔE and standard deviation σΔE of the color difference are computed, as shown in formulas 14 and 15:
in the formula, m and n respectively represent the width and the height of the color difference diagram, and (i and j) represent the position of a two-dimensional plane where a pixel point is located.
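Step 4 amounts to a per-pixel color difference in Lab space followed by its mean and standard deviation; a minimal sketch, assuming formula 13 is the classic CIE76 ΔE (Euclidean distance over the L, a, b channels):

```python
import numpy as np

def color_difference_stats(lab_ref, lab_dist):
    """Per-pixel color difference (formula 13, assumed CIE76 Delta E)
    and its mean and standard deviation (formulas 14 and 15).

    lab_ref, lab_dist: (h, w, 3) Lab images with channels L, a, b.
    """
    d = lab_ref - lab_dist
    delta_e = np.sqrt(np.sum(d ** 2, axis=-1))  # Euclidean distance in Lab
    return delta_e.mean(), delta_e.std()
```

A pair differing by (3, 4, 0) at every pixel gives ΔE = 5 everywhere, hence mean 5 and standard deviation 0.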
The specific process of step 5 is as follows:
step 5.1, the six obtained similarity features Sgm, Sor, μte, σte, μΔE and σΔE, together with the subjective mean opinion score (MOS) values of the distorted images in the database, are input into a regression model established with a random forest for training; the number of decision trees ntree in the model is set to 500, and the number of pre-selected variables per tree node mtry is set to 2;
step 5.2, using the trained regression model, similarity features are extracted from one or more distorted images to be evaluated and from their corresponding reference images according to steps 2, 3 and 4, and input into the trained random forest regression model; the output predicted quality score completes the evaluation of distorted image quality.
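Step 5 can be sketched with scikit-learn's random forest; the patent names no library, so this mapping is an assumption, with `n_estimators` and `max_features` playing the roles of ntree = 500 and mtry = 2:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_quality_model(features, mos):
    """Fit the step-5 regression model.

    features: (n_images, 6) array of the six similarity statistics;
    mos: (n_images,) subjective mean opinion scores.
    """
    model = RandomForestRegressor(n_estimators=500,  # ntree = 500
                                  max_features=2,    # mtry = 2
                                  random_state=0)
    model.fit(features, mos)
    return model

def predict_quality(model, feature_vector):
    """Step 5.2: score one 6-D feature vector with the trained model."""
    return float(model.predict(np.asarray(feature_vector).reshape(1, -1))[0])
```

Because a random forest averages training targets at its leaves, every prediction necessarily falls inside the range of the training MOS values.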
The method has the advantages that the masking texture feature-based full-reference image quality evaluation method extracts three different similarity features of the image over large public databases and computes the mean and standard deviation of each, so that the features describe the image information in a complementary manner; this addresses the low consistency between traditional features and subjective human perception. A regression model can be established with a random forest (RF) that fuses the mean and standard deviation of each similarity feature and learns and predicts in combination with the subjective MOS values, improving the robustness of the model and broadening its applicability. In use, the method greatly improves the precision of image quality prediction and maintains high consistency with the human visual system.
Drawings
Fig. 1 is a frame diagram of a masking texture feature-based full-reference image quality evaluation method according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the masking texture feature-based full-reference image quality evaluation method can be divided into two parts: building the RF model, and predicting the image quality. The RF model building part establishes a regression model with the random forest (RF); its processing objects are the reference and distorted images in an image database, from which the mean and standard deviation of the three similarity features of the method are extracted and combined with the subjective MOS values in the database to build the regression model;
the prediction part of the image quality evaluation calculates the gradient amplitude similarity, gradient direction similarity, texture similarity mean and standard deviation, and color difference mean and standard deviation of the distorted image and its corresponding reference image, combines the three similarity features into a 6-D feature vector, and inputs this vector into the RF regression model, thereby predicting the quality of the distorted image and completing the evaluation.
The specific operation process comprises the following steps:
Step 1: color space conversion is performed on the reference image and the distorted image in the database according to formulas 1-3, converting from the RGB color space to the Lab color space:
wherein R, G and B respectively denote the three channels of the color image, and X, Y and Z denote the color tristimulus values; X0 = 0.9505, Y0 = 1.000 and Z0 = 1.0890 are the tristimulus values under D65 illumination; L* denotes the lightness channel after color space conversion, and a* and b* denote the chrominance channels after conversion. The Lab color space is obtained from the RGB color space through formulas 1, 2 and 3; the image size after conversion equals that before conversion, so the luminance information (L channel) is separated from the chrominance information (a and b channels);
step 2.1, performing convolution operation on the reference image and the distorted image respectively by using a Prewitt operator with 3 × 3 window horizontal and vertical components, and extracting the features of gradient amplitude and gradient direction:
for an image f (x), x represents the position of a pixel point, and the method of convolving the image is shown in formula 4:
in the formula, Gx(x) denotes the horizontal gradient value and Gy(x) denotes the vertical gradient value;
step 2.2, after step 2.1, calculating the gradient amplitude value gm (x) and the gradient direction value θ (x) of the reference image and the distorted image according to the formulas 5 and 6, respectively, wherein the specific calculation method is as follows:
step 2.3, after step 2.2 is finished, the gradient amplitude similarity Sgm(x) and the gradient direction similarity Sor(x) of the reference image and the distorted image are calculated according to formula 7 and formula 8, respectively; the specific calculation method is as follows:
in formula 7, m and n respectively denote the width and height of the image, x denotes the position of the pixel, and Ir(x) and Id(x) respectively denote the reference image and the distorted image; in formula 8, θr and θd respectively denote the gradient directions of the reference image and the distorted image, and C1 = 1, which stabilizes formula 8 and prevents the denominator from being zero.
And 3, after the step 1 is finished, sequentially extracting Laws texture features of the L channel in the reference image and the distorted image, and counting the mean value and the standard deviation of the texture similarity of the reference image and the distorted image:
step 3.1, extracting texture features, namely performing convolution operation on the image by adopting four two-dimensional Laws filters and taking the maximum value, wherein the four two-dimensional Laws filters are shown as a formula 9:
for an image f (x), x represents the position of the pixel point, the convolution operation is performed on the image and the four templates in the formula 9 respectively, and the maximum value is taken, and the specific form is shown in the formula 10:
te=max(f(x)*i),i=(a),(b),(c),(d) (10)
step 3.2, after the step 3.1, calculating the texture similarity of the reference image and the distorted image, wherein the specific calculation mode is as follows:
in formula 11, ter and ted respectively denote the texture features of the reference image and the distorted image, and C2 = 100, which stabilizes formula 11 and prevents the denominator from being zero;
step 3.3, after step 3.2, the mean μte and standard deviation σte of the texture similarity are computed; the specific statistical form is shown in formula 12:
In formula 12, μte denotes the texture similarity mean, σte denotes the texture similarity standard deviation, and n denotes the total number of pixels.
Step 4, calculating the color differences of the reference image and the distorted image in three channels L, a and b according to the Lab color space obtained in the step 1, and counting the mean value and the standard deviation of the color differences;
step 4.1, according to the Lab color space obtained in step 1, the color difference values Δ E of the reference image and the distorted image under the three channels L, a, and b are respectively calculated, as shown in formula 13:
in formula 13, L*, a* and b* denote the values of the three channels in the Lab color space, and the subscripts r and d respectively denote the reference image and the distorted image;
step 4.2, the mean μΔE and standard deviation σΔE of the color difference are computed, as shown in formulas 14 and 15:
in the formulas 14 and 15, m and n respectively represent the width and height of the color difference graph, and (i and j) represent the position of a two-dimensional plane where a pixel point is located;
step 5, after steps 2, 3 and 4 are finished, the obtained gradient amplitude similarity, gradient direction similarity, texture similarity mean and standard deviation, and color difference mean and standard deviation are fused in a regression model through a random forest; the subjective evaluation score (MOS) values are input into the regression model for training, and the trained model is used directly for accurate quality prediction of the image to be evaluated:
step 5.1, the six obtained similarity features Sgm, Sor, μte, σte, μΔE and σΔE, together with the subjective mean opinion score (MOS) values of the distorted images in the database, are input into a regression model established with a random forest for training; the number of decision trees ntree in the model is set to 500, and the number of pre-selected variables per tree node mtry is set to 2;
step 5.2, using the trained regression model, similarity features are extracted from one or more distorted images to be evaluated and from their corresponding reference images according to steps 2, 3 and 4, and input into the trained random forest regression model; the output predicted quality score completes the evaluation of distorted image quality.
The invention relates to a masking texture feature-based full-reference image quality evaluation method. First, color space conversion is performed on the reference image and the distorted image in the database; second, the gradient amplitude and gradient direction features of both images are extracted and the gradient similarities are calculated; then the texture feature similarity and the color difference are calculated, and their means and standard deviations, together with the gradient similarities, form a 6-D feature vector; next, a regression model is established and trained with a random forest (RF) on the feature vectors and the MOS values; finally, the 6-D feature vector of the image under test is extracted and used as the input of the RF regression model, predicting its quality with high precision and completing the image quality evaluation.
The masking texture feature-based full-reference image quality evaluation method makes full use of the mean and standard deviation of three similarity features consistent with human visual characteristics; a random forest RF regression model can be established from the reference and distorted images in the database to fuse the similarity features and to train and predict, thereby evaluating image quality with high precision and maintaining high consistency with human perception.
Claims (6)
1. The method for evaluating the quality of the full-reference image based on the masking texture features is characterized by comprising the following steps of:
step 1, converting a reference image and a distorted image in a database from an RGB color space to an Lab color space, and separating color information and brightness information of the image;
step 2, respectively extracting the gradient amplitude and gradient direction characteristics of the reference image and the distorted image in the L channel according to the Lab color space obtained in the step 1, and calculating the gradient amplitude similarity and the gradient direction similarity;
step 3, after the step 1 is finished, Laws texture characteristics of the L channel in the reference image and the distorted image are sequentially extracted, and the texture similarity mean value and the standard deviation of the reference image and the distorted image are counted;
step 4, calculating the color differences of the reference image and the distorted image in three channels L, a and b according to the Lab color space obtained in the step 1, and counting the mean value and the standard deviation of the color differences;
step 5, after steps 2, 3 and 4 are finished, the obtained gradient amplitude similarity, gradient direction similarity, texture similarity mean and standard deviation, and color difference mean and standard deviation are fused in a regression model through a random forest; the subjective evaluation score (MOS) values are input into the regression model for training, and the trained model is used directly for accurate quality prediction of the image to be evaluated.
2. The method for evaluating the quality of the fully-referenced image based on the masking texture features as claimed in claim 1, wherein the specific process of the step 1 is as follows:
color space conversion is performed on the reference image and the distorted image in the database according to formulas 1-3, and conversion is performed from an RGB color space to a Lab color space:
wherein R, G and B respectively denote the three channels of the color image, and X, Y and Z denote the color tristimulus values; X0 = 0.9505, Y0 = 1.000 and Z0 = 1.0890 are the tristimulus values under D65 illumination; L* denotes the lightness channel after color space conversion, and a* and b* denote the chrominance channels after conversion. The Lab color space is obtained from the RGB color space through formulas 1, 2 and 3; the image size after conversion equals that before conversion, and the luminance information (L channel) is separated from the chrominance information (a and b channels).
3. The method for evaluating the quality of the fully-referenced image based on the masking texture features as claimed in claim 1, wherein the specific process of the step 2 is as follows:
step 2.1, performing convolution operation on the reference image and the distorted image respectively by using a Prewitt operator with 3 × 3 window horizontal and vertical components, and extracting the features of gradient amplitude and gradient direction:
for an image f (x), x represents the position of a pixel point, and the method of convolving the image is shown in formula 4:
in the formula, Gx(x) denotes the horizontal gradient value and Gy(x) denotes the vertical gradient value;
step 2.2, after step 2.1, calculating the gradient amplitude value gm (x) and the gradient direction value θ (x) of the reference image and the distorted image according to the formulas 5 and 6, respectively, wherein the specific calculation method is as follows:
step 2.3, after step 2.2, the gradient amplitude similarity Sgm(x) and the gradient direction similarity Sor(x) of the reference image and the distorted image are calculated according to formula 7 and formula 8, respectively; the specific calculation method is as follows:
in formula 7, m and n respectively denote the width and height of the image, x denotes the position of the pixel, and Ir(x) and Id(x) respectively denote the reference image and the distorted image; in formula 8, θr and θd respectively denote the gradient directions of the reference image and the distorted image, and C1 = 1.
4. The method for evaluating the quality of the fully-referenced image based on the masking texture features as claimed in claim 1, wherein the specific process of the step 3 is as follows:
step 3.1, extracting texture features, namely performing convolution operation on the image by adopting four two-dimensional Laws filters, wherein the four two-dimensional Laws filters are shown as a formula 9:
for an image f (x), x represents the position of the pixel point, the convolution operation is performed on the image and the four templates in the formula 9 respectively, and the maximum value is taken, and the specific form is shown in the formula 10:
te=max(f(x)*i),i=(a),(b),(c),(d) (10)
step 3.2, after the step 3.1, calculating the texture similarity of the reference image and the distorted image, wherein the specific calculation mode is as follows:
in formula 11, ter and ted respectively denote the texture features of the reference image and the distorted image, and C2 = 100;
Step 3.3, after step 3.2, the mean μte and standard deviation σte of the texture similarity are computed; the specific statistical form is shown in formula 12:
5. The method for evaluating the quality of the fully-referenced image based on the masking texture features as claimed in claim 1, wherein the specific process of the step 4 is as follows:
step 4.1, according to the Lab color space obtained in step 1, the color difference values Δ E of the reference image and the distorted image under the three channels L, a, and b are respectively calculated, as shown in formula 13:
in formula 13, L*, a* and b* denote the values of the three channels in the Lab color space, and the subscripts r and d respectively denote the reference image and the distorted image;
step 4.2, the mean μΔE and standard deviation σΔE of the color difference are computed, as shown in formulas 14 and 15:
in the formula, m and n respectively represent the width and the height of the color difference diagram, and (i and j) represent the position of a two-dimensional plane where a pixel point is located.
6. The method for evaluating the quality of the fully-referenced image based on the masking texture features as claimed in claim 1, wherein the specific process of the step 5 is as follows:
step 5.1, the six obtained similarity features Sgm, Sor, μte, σte, μΔE and σΔE, together with the subjective mean opinion score (MOS) values of the distorted images in the database, are input into a regression model established with a random forest for training; the number of decision trees ntree in the model is set to 500, and the number of pre-selected variables per tree node mtry is set to 2;
step 5.2, using the trained regression model, similarity features are extracted from one or more distorted images to be evaluated and from their corresponding reference images according to steps 2, 3 and 4, and input into the trained random forest regression model; the output predicted quality score completes the evaluation of distorted image quality.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810834955.5A CN109191428B (en) | 2018-07-26 | 2018-07-26 | Masking texture feature-based full-reference image quality evaluation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810834955.5A CN109191428B (en) | 2018-07-26 | 2018-07-26 | Masking texture feature-based full-reference image quality evaluation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109191428A CN109191428A (en) | 2019-01-11 |
CN109191428B true CN109191428B (en) | 2021-08-06 |
Family
ID=64937628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810834955.5A Active CN109191428B (en) | 2018-07-26 | 2018-07-26 | Masking texture feature-based full-reference image quality evaluation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109191428B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919920B (en) * | 2019-02-25 | 2021-01-26 | 厦门大学 | Method for evaluating quality of full-reference and no-reference images with unified structure |
CN112118457B (en) * | 2019-06-20 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Live broadcast data processing method and device, readable storage medium and computer equipment |
CN110838119B (en) * | 2019-11-15 | 2022-03-04 | 珠海全志科技股份有限公司 | Human face image quality evaluation method, computer device and computer readable storage medium |
CN111598837B (en) * | 2020-04-21 | 2023-05-05 | 中山大学 | Full-reference image quality evaluation method and system suitable for visualized two-dimensional code |
CN112381812A (en) * | 2020-11-20 | 2021-02-19 | 深圳市优象计算技术有限公司 | Simple and efficient image quality evaluation method and system |
CN112950597B (en) * | 2021-03-09 | 2022-03-08 | 深圳大学 | Distorted image quality evaluation method and device, computer equipment and storage medium |
CN112837319B (en) * | 2021-03-29 | 2022-11-08 | 深圳大学 | Intelligent evaluation method, device, equipment and medium for real distorted image quality |
CN115984283B (en) * | 2023-03-21 | 2023-06-23 | 山东中济鲁源机械有限公司 | Intelligent detection method for welding quality of reinforcement cage |
CN116188809B (en) * | 2023-05-04 | 2023-08-04 | 中国海洋大学 | Texture similarity judging method based on visual perception and sequencing driving |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102036098A (en) * | 2010-12-01 | 2011-04-27 | 北京航空航天大学 | Full-reference type image quality evaluation method based on visual information amount difference |
CN102750695A (en) * | 2012-06-04 | 2012-10-24 | 清华大学 | Machine learning-based stereoscopic image quality objective assessment method |
CN106780441A (en) * | 2016-11-30 | 2017-05-31 | 杭州电子科技大学 | A kind of stereo image quality objective measurement method based on dictionary learning and human-eye visual characteristic |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9378546B2 (en) * | 2012-01-12 | 2016-06-28 | Hewlett-Packard Indigo B.V. | Image defect visibility predictor |
US10410330B2 (en) * | 2015-11-12 | 2019-09-10 | University Of Virginia Patent Foundation | System and method for comparison-based image quality assessment |
Non-Patent Citations (3)
Title |
---|
"Evaluating Texture Compression Masking Effects Using Objective Image Quality Assessment Metrics"; Wesley Griffin, et al.; IEEE Transactions on Visualization and Computer Graphics; 2015-08-31; pp. 970-979 * |
"Color Image Evaluation Algorithm Based on Optimal Color Space and Visual Masking"; Xie Dehong, et al.; Packaging Engineering; 2014-11-30; Vol. 35, No. 21; pp. 86-90 * |
"Research on Objective Image Quality Evaluation Methods Based on the Visual System and Feature Extraction and Their Applications"; Liu Mingna; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2010-10-15; No. 10; pp. I138-33 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||