CN109191428A - Full-reference image quality evaluation method based on masking texture features - Google Patents

Full-reference image quality evaluating method based on masking textural characteristics Download PDF

Info

Publication number
CN109191428A
CN109191428A
Authority
CN
China
Prior art keywords
image
formula
reference image
color space
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810834955.5A
Other languages
Chinese (zh)
Other versions
CN109191428B (en)
Inventor
郑元林
王玮
唐梽森
廖开阳
于淼淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201810834955.5A priority Critical patent/CN109191428B/en
Publication of CN109191428A publication Critical patent/CN109191428A/en
Application granted granted Critical
Publication of CN109191428B publication Critical patent/CN109191428B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Abstract

The full-reference image quality evaluation method based on masking texture features disclosed by the invention belongs to the technical fields of image processing and image quality evaluation. First, color space conversion is performed on the reference and distorted images. Second, the gradient magnitude and gradient direction features of the reference and distorted images are extracted and the gradient-information similarity is computed. Then the texture-feature similarity and the color difference are computed, and the mean and standard deviation of each are collected, forming a 6-D feature vector. A regression model is built with a random forest to fuse the feature vector with the subjective MOS values, and is trained. Finally, the 6-D feature vector of the image under test is extracted and input to the trained regression model, completing the objective image quality evaluation. The disclosed method uses three different similarity features and builds a regression model with a random forest, achieving high-accuracy objective evaluation of full-reference image quality that remains highly consistent with the characteristics of human vision.

Description

Full-reference image quality evaluation method based on masking texture features
Technical field
The invention belongs to the technical fields of image processing and image quality evaluation, and relates to a full-reference image quality evaluation method based on masking texture features.
Background technique
With the arrival of the big-data era, more and more images are shared online. As the main carrier through which people obtain information and communicate, digital images are steadily changing the way people live. The sharp growth in data scale also brings great challenges: an image may be distorted to some degree during acquisition, storage, transmission, and processing. How to process and transmit images effectively, and how to evaluate image quality accurately, has therefore become an urgent research problem.
In recent years, full-reference image quality evaluation algorithms and related devices have been widely used to optimize parameters in all kinds of image processing systems, so full-reference image quality evaluation has become a research hotspot. Most existing full-reference image quality evaluation methods adopt a framework based on the human visual system (HVS). W. Zhou et al. proposed an image evaluation method: first, the luminance, contrast, and structure information of the reference image and the corresponding distorted image are extracted; second, the similarity of the three indices is computed, yielding the luminance similarity, contrast similarity, and structural similarity; finally, the three similarity features are combined by weighted averaging to obtain the quality score of the distorted image. Against this theoretical background, visual-feature weights are assigned according to image content. In addition, some methods extract a single global feature from the whole image in the spatial domain to perform quality evaluation, but such methods cannot be used to evaluate color images.
Some current studies describe image structure with frequency-domain features, further improving image quality evaluation models. However, most image quality evaluation methods based on feature-similarity computation cannot accurately reflect the masking effect of human vision, and ignore that human vision is influenced by complex physiological and psychological factors, so their evaluation results are not accurate enough.
Summary of the invention
The object of the present invention is to provide a full-reference image quality evaluation method based on masking texture features, solving the problem that existing evaluation methods cannot accurately reflect the human visual masking effect and ignore the complex physiological and psychological factors that influence human vision. The present invention builds a model from the feature similarity between the reference image and the distorted image, achieving accurate quality evaluation of distorted images.
The technical scheme adopted by the invention is a full-reference image quality evaluation method based on masking texture features, whose specific operation process includes the following steps:
Step 1. Convert the reference images and distorted images in the database from the RGB color space to the Lab color space, separating the color information of the images from the luminance information;
Step 2. From the Lab color space obtained in step 1, extract the gradient magnitude and gradient direction features of the reference image and the distorted image in the L channel, and compute the gradient-magnitude similarity and the gradient-direction similarity;
Step 3. After step 1, extract the Laws texture features of the reference image and the distorted image in the L channel, and collect the mean and standard deviation of the texture-feature similarity between the reference image and the distorted image;
Step 4. From the Lab color space obtained in step 1, compute the color difference between the reference image and the distorted image over the L, a, and b channels, and collect the mean and standard deviation of the color difference;
Step 5. After steps 2, 3, and 4 are complete, fuse the gradient-magnitude similarity, gradient-direction similarity, texture-similarity mean and standard deviation, and color-difference mean and standard deviation in a random-forest regression model, feed the subjective evaluation scores (MOS values) into the regression model for training, and use the trained model directly to accurately predict the quality of the image to be evaluated.
Other features of the invention are as follows.
The detailed process of step 1 is as follows:
Perform color space conversion on the reference images and distorted images in the database according to formulas 1-3, transforming from the RGB color space to the Lab color space:
Here R, G, and B denote the three channels of the color image; X, Y, and Z denote the tristimulus values of the color; X0 = 0.9505, Y0 = 1.000, Z0 = 1.0890 are the tristimulus values under the D65 illuminant; L* denotes the lightness channel after color space conversion, and a* and b* denote the chroma channels after conversion. The RGB color space is converted to the Lab color space through formulas 1, 2, and 3; the image size is unchanged by the conversion, and the luminance information (L channel) of the image is separated from the chroma information (a and b channels).
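Since formulas 1-3 are shown only as images in this text, the step-1 conversion can be sketched as below. This is a minimal sketch assuming the standard sRGB matrix and companding curve together with the D65 white point (X0 = 0.9505, Y0 = 1.000, Z0 = 1.0890) stated above; the patent's exact matrices may differ.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an (H, W, 3) RGB image with values in [0, 1] to CIE Lab.

    Assumed pipeline: sRGB -> linear RGB -> XYZ (D65) -> Lab.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # Inverse sRGB companding (gamma removal)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ, D65 white point (standard sRGB matrix)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # Normalize by the reference white (X0, Y0, Z0 from the text)
    xyz = xyz / np.array([0.9505, 1.000, 1.0890])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16            # lightness channel L*
    a = 500 * (f[..., 0] - f[..., 1])   # chroma channel a*
    b = 200 * (f[..., 1] - f[..., 2])   # chroma channel b*
    return np.stack([L, a, b], axis=-1)

# A pure-white pixel should map to L* = 100, a* = b* = 0
lab_white = rgb_to_lab(np.ones((1, 1, 3)))
```

The output has the same height and width as the input, with the L channel separated from the a and b chroma channels, as the step requires.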
The detailed process of step 2 is as follows:
Step 2.1. Convolve the reference image and the distorted image with the horizontal and vertical components of the 3×3 Prewitt operator to extract the gradient magnitude and gradient direction features:
For an image f(x), where x is the coordinate of a pixel, the convolution is given by formula 4:
where Gx(x) denotes the horizontal gradient magnitude and Gy(x) the vertical gradient magnitude;
Step 2.2. After step 2.1, compute the gradient magnitude GM(x) and the gradient direction θ(x) of the reference image and the distorted image according to formulas 5 and 6:
Step 2.3. After step 2.2, compute the gradient-magnitude similarity and the gradient-direction similarity S_or(x) of the reference image and the distorted image according to formulas 7 and 8:
In formula 7, m and n denote the width and height of the image, and x denotes the pixel position; I_r(x) and I_d(x) denote the reference image and the distorted image. In formula 8, θ_r and θ_d denote the gradient directions of the reference and distorted images, and C1 = 1.
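Steps 2.1-2.3 can be sketched as follows. Formulas 4-8 are not printed in this text, so this sketch assumes the usual 3×3 Prewitt kernels and a similarity of the common form (2xy + C)/(x^2 + y^2 + C) with C1 = 1, averaged over pixels; the patent's exact similarity expressions may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Prewitt kernels: horizontal and vertical components
PX = np.array([[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]], dtype=np.float64)
PY = PX.T

def gradient_features(img):
    """Per-pixel gradient magnitude GM(x) and direction theta(x)."""
    img = np.asarray(img, dtype=np.float64)
    gx = convolve(img, PX)       # horizontal gradient Gx(x)
    gy = convolve(img, PY)       # vertical gradient Gy(x)
    gm = np.hypot(gx, gy)        # GM(x) = sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)   # theta(x)
    return gm, theta

def gradient_similarity(ref, dist, c1=1.0):
    """Assumed (2xy + C1)/(x^2 + y^2 + C1) similarity, mean-pooled."""
    gm_r, th_r = gradient_features(ref)
    gm_d, th_d = gradient_features(dist)
    s_gm = np.mean((2 * gm_r * gm_d + c1) / (gm_r**2 + gm_d**2 + c1))
    s_or = np.mean((2 * th_r * th_d + c1) / (th_r**2 + th_d**2 + c1))
    return s_gm, s_or
```

For identical reference and distorted images both similarities equal 1, which is the expected upper bound of this similarity form.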
The detailed process of step 3 is as follows:
Step 3.1. Texture features are extracted by convolving the image with four two-dimensional Laws filters, shown in formula 9:
For an image f(x), where x is the coordinate of a pixel, the image is convolved with each of the four templates in formula 9 and the maximum is taken, as shown in formula 10:
Te = max(f(x) * i), i = (a), (b), (c), (d) (10)
Step 3.2. After step 3.1, compute the texture-feature similarity between the reference image and the distorted image as follows:
In formula 11, te_r and te_d denote the texture features of the reference image and the distorted image, and C2 = 100;
Step 3.3. After step 3.2, collect the mean and standard deviation of the convolution results as follows:
In formula 12, the two statistics are the mean and the standard deviation of the texture-feature similarity, and n denotes the total number of pixels.
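A sketch of steps 3.1-3.3. The four 2-D Laws templates (a)-(d) of formula 9 are shown only as images here, so this sketch assumes four common Laws masks built as outer products of the 1-D L5/E5/S5 vectors, and reuses the (2ab + C2)/(a^2 + b^2 + C2) similarity form with C2 = 100; the patent's actual templates may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# 1-D Laws vectors (level, edge, spot); the four 2-D templates below
# are assumed stand-ins for the patent's masks (a)-(d).
L5 = np.array([1, 4, 6, 4, 1], dtype=np.float64)
E5 = np.array([-1, -2, 0, 2, 1], dtype=np.float64)
S5 = np.array([-1, 0, 2, 0, -1], dtype=np.float64)

MASKS = [np.outer(L5, E5), np.outer(E5, L5),
         np.outer(L5, S5), np.outer(S5, L5)]

def laws_texture(img):
    """Formula 10: Te = per-pixel max over the four filter responses."""
    img = np.asarray(img, dtype=np.float64)
    responses = [convolve(img, m) for m in MASKS]
    return np.max(responses, axis=0)

def texture_similarity(ref, dist, c2=100.0):
    """Per-pixel texture similarity map, then its mean and std
    (formulas 11-12; the similarity form is an assumption)."""
    te_r, te_d = laws_texture(ref), laws_texture(dist)
    sim = (2 * te_r * te_d + c2) / (te_r**2 + te_d**2 + c2)
    return sim.mean(), sim.std()
```

For identical images the similarity map is identically 1, so the statistics are mean 1 and standard deviation 0.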
The detailed process of step 4 is as follows:
Step 4.1. From the Lab color space obtained in step 1, compute the color-difference value ΔE between the reference image and the distorted image over the L, a, and b channels, as shown in formula 13:
In formula 13, the terms denote the values of the three channels in the Lab color space, where subscripts r and d denote the reference image and the distorted image respectively;
Step 4.2. Collect the mean and standard deviation of the color difference, as shown in formulas 14 and 15:
In the formulas, m and n denote the width and height of the color-difference map, and (i, j) denotes the position of a pixel in the two-dimensional plane.
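Formula 13 reads as the Euclidean distance over the L, a, b channels, i.e. the CIE76 color difference; formulas 14-15 are its mean and standard deviation. A minimal sketch under that assumption:

```python
import numpy as np

def color_difference_stats(lab_ref, lab_dist):
    """CIE76 Delta-E per pixel (assumed formula 13) and its
    mean and standard deviation (formulas 14-15)."""
    diff = np.asarray(lab_ref, dtype=np.float64) - np.asarray(lab_dist, dtype=np.float64)
    delta_e = np.sqrt(np.sum(diff**2, axis=-1))  # per-pixel Delta-E map
    return delta_e.mean(), delta_e.std()
```

For example, a constant channel difference of (3, 4, 0) gives ΔE = 5 at every pixel, so the mean is 5 and the standard deviation is 0.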
The detailed process of step 5 is as follows:
Step 5.1. Feed the six similarity features obtained above (gradient-magnitude similarity, gradient-direction similarity, texture-similarity mean and standard deviation, and color-difference mean and standard deviation), together with the subjective mean opinion scores (MOS values) of the distorted images in the database, into the regression model built with a random forest for training, setting the number of decision trees ntree = 500 and the number of candidate split variables per node mtry = 2;
Step 5.2. Using the trained regression model, extract the similarity features of one or more distorted images to be tested and the corresponding reference images according to steps 2, 3, and 4, then input the similarity features into the trained random-forest regression model; the predicted quality score that is output completes the evaluation of distorted-image quality.
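Steps 5.1-5.2 can be sketched with scikit-learn's random forest, assuming ntree maps to `n_estimators` and mtry to `max_features`. The feature table and MOS scores below are random placeholders standing in for the 6-D vectors and subjective scores of a real database, not real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder training set: one 6-D vector per distorted image, columns
# [S_gm, S_or, texture-sim mean, texture-sim std, Delta-E mean, Delta-E std]
rng = np.random.default_rng(0)
X = rng.random((120, 6))
mos = rng.random(120) * 5  # placeholder subjective MOS scores in [0, 5)

# Step 5.1: ntree = 500 trees, mtry = 2 candidate variables per split
model = RandomForestRegressor(n_estimators=500, max_features=2, random_state=0)
model.fit(X, mos)

# Step 5.2: predict the quality score of an unseen 6-D feature vector
pred = model.predict(X[:1])
```

Since a random-forest prediction averages training targets, the predicted score stays within the range of the MOS values it was trained on.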
The invention has the advantage that the full-reference image quality evaluation method based on masking texture features extracts three different similarity features from images in large-scale public databases, and collects the mean and standard deviation of these similarity features as complementary descriptions of image information, solving the problem that traditional features agree poorly with subjective human perception. A regression model built with a random forest (RF) fuses the mean and variance of each similarity feature, and learns and predicts in combination with the subjective MOS values, improving the robustness of the model and broadening its applicability. In use, the accuracy of image quality prediction is greatly improved, with high consistency with the human visual system.
Brief description of the drawings
Fig. 1 is the framework diagram of the full-reference image quality evaluation method based on masking texture features of the invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the full-reference image quality evaluation method based on masking texture features of the invention can be divided into two parts: building the RF model, and predicting image quality. The model-building part processes the reference images and distorted images in an image database, extracts the mean and variance of the three similarity features of the invention, and, combined with the subjective MOS values in the database, builds a regression model using the random forest (RF);
The prediction part computes the gradient-magnitude similarity, gradient-direction similarity, texture-similarity mean, texture-similarity standard deviation, color-difference mean, and color-difference standard deviation of the distorted image and the corresponding reference image; these three kinds of similarity features are assembled into a 6-D feature vector and input to the RF regression model, which predicts the quality of the distorted image and completes the image quality evaluation.
The specific operation process includes the following steps:
Step 1. Convert the reference images and distorted images in the database from the RGB color space to the Lab color space, separating the color information of the images from the luminance information:
Perform color space conversion on the reference images and distorted images in the database according to formulas 1-3, transforming from the RGB color space to the Lab color space:
Here R, G, and B denote the three channels of the color image; X, Y, and Z denote the tristimulus values of the color; X0 = 0.9505, Y0 = 1.000, Z0 = 1.0890 are the tristimulus values under the D65 illuminant; L* denotes the lightness channel after color space conversion, and a* and b* denote the chroma channels after conversion. The RGB color space is converted to the Lab color space through formulas 1, 2, and 3; the image size is unchanged by the conversion, and the luminance information (L channel) of the image is separated from the chroma information (a and b channels);
Step 2. From the Lab color space obtained in step 1, extract the gradient magnitude and gradient direction features of the reference image and the distorted image in the L channel, and compute the gradient-magnitude similarity and the gradient-direction similarity:
Step 2.1. Convolve the reference image and the distorted image with the horizontal and vertical components of the 3×3 Prewitt operator to extract the gradient magnitude and gradient direction features:
For an image f(x), where x is the coordinate of a pixel, the convolution is given by formula 4:
where Gx(x) denotes the horizontal gradient magnitude and Gy(x) the vertical gradient magnitude;
Step 2.2. After step 2.1, compute the gradient magnitude GM(x) and the gradient direction θ(x) of the reference image and the distorted image according to formulas 5 and 6:
Step 2.3. After step 2.2, compute the gradient-magnitude similarity and the gradient-direction similarity S_or(x) of the reference image and the distorted image according to formulas 7 and 8:
In formula 7, m and n denote the width and height of the image, and x denotes the pixel position; I_r(x) and I_d(x) denote the reference image and the distorted image. In formula 8, θ_r and θ_d denote the gradient directions of the reference and distorted images, and C1 = 1, which stabilizes formula 8 and prevents the denominator from becoming zero.
Step 3. After step 1, extract the Laws texture features of the reference image and the distorted image in the L channel, and collect the mean and standard deviation of the texture-feature similarity between the reference image and the distorted image:
Step 3.1. Texture features are extracted by convolving the image with four two-dimensional Laws filters and taking the maximum response; the four filters are shown in formula 9:
For an image f(x), where x is the coordinate of a pixel, the image is convolved with each of the four templates in formula 9 and the maximum is taken, as shown in formula 10:
Te = max(f(x) * i), i = (a), (b), (c), (d) (10)
Step 3.2. After step 3.1, compute the texture-feature similarity between the reference image and the distorted image as follows:
In formula 11, te_r and te_d denote the texture features of the reference image and the distorted image, and C2 = 100, which stabilizes formula 11 and prevents the denominator from becoming zero;
Step 3.3. After step 3.2, collect the mean and standard deviation of the convolution results as follows:
In formula 12, the two statistics are the mean and the standard deviation of the texture-feature similarity, and n denotes the total number of pixels.
Step 4. From the Lab color space obtained in step 1, compute the color difference between the reference image and the distorted image over the L, a, and b channels, and collect the mean and standard deviation of the color difference:
Step 4.1. From the Lab color space obtained in step 1, compute the color-difference value ΔE between the reference image and the distorted image over the L, a, and b channels, as shown in formula 13:
In formula 13, the terms denote the values of the three channels in the Lab color space, where subscripts r and d denote the reference image and the distorted image respectively;
Step 4.2. Collect the mean and standard deviation of the color difference, as shown in formulas 14 and 15:
In formulas 14 and 15, m and n denote the width and height of the color-difference map, and (i, j) denotes the position of a pixel in the two-dimensional plane;
Step 5. After steps 2, 3, and 4 are complete, fuse the gradient-magnitude similarity, gradient-direction similarity, texture-similarity mean and standard deviation, and color-difference mean and standard deviation in a random-forest regression model, feed the subjective evaluation scores (MOS values) into the regression model for training, and use the trained model directly to accurately predict the quality of the image to be evaluated:
Step 5.1. Feed the six similarity features obtained above, together with the subjective mean opinion scores (MOS values) of the distorted images in the database, into the regression model built with a random forest for training, setting the number of decision trees ntree = 500 and the number of candidate split variables per node mtry = 2;
Step 5.2. Using the trained regression model, extract the similarity features of one or more distorted images to be tested and the corresponding reference images according to steps 2, 3, and 4, then input the similarity features into the trained random-forest regression model; the predicted quality score that is output completes the evaluation of distorted-image quality.
In the full-reference image quality evaluation method based on masking texture features of the invention, color space conversion is first performed on the reference and distorted images in the database; next, the gradient features of the reference and distorted images are extracted and the gradient similarity is computed; then the texture-feature similarity and the color-feature similarity are computed, and together these similarities form a 6-D feature vector; next, a regression model is established by training a random forest (RF) on the feature vectors and the MOS values; finally, the 6-D feature vector of the image under test is extracted and used as the input of the random-forest RF regression model, predicting the quality of the image under test with high accuracy and thereby evaluating image quality.
The full-reference image quality evaluation method based on masking texture features of the invention makes full use of the mean and variance of three similarity features consistent with the characteristics of human vision. A random-forest (RF) regression model can be built from the reference images and distorted images in a database to fuse the similarity features, and then trained and used for prediction, so that image quality evaluation is predicted with high accuracy and remains highly consistent with human visual perception.

Claims (6)

1. A full-reference image quality evaluation method based on masking texture features, characterized in that the specific operation process includes the following steps:
Step 1. Convert the reference images and distorted images in the database from the RGB color space to the Lab color space, separating the color information of the images from the luminance information;
Step 2. From the Lab color space obtained in step 1, extract the gradient magnitude and gradient direction features of the reference image and the distorted image in the L channel, and compute the gradient-magnitude similarity and the gradient-direction similarity;
Step 3. After step 1, extract the Laws texture features of the reference image and the distorted image in the L channel, and collect the mean and standard deviation of the texture-feature similarity between the reference image and the distorted image;
Step 4. From the Lab color space obtained in step 1, compute the color difference between the reference image and the distorted image over the L, a, and b channels, and collect the mean and standard deviation of the color difference;
Step 5. After steps 2, 3, and 4 are complete, fuse the gradient-magnitude similarity, gradient-direction similarity, texture-similarity mean and standard deviation, and color-difference mean and standard deviation in a random-forest regression model, feed the subjective evaluation scores (MOS values) into the regression model for training, and use the trained model directly to accurately predict the quality of the image to be evaluated.
2. The full-reference image quality evaluation method based on masking texture features according to claim 1, characterized in that the detailed process of step 1 is as follows:
Perform color space conversion on the reference images and distorted images in the database according to formulas 1-3, transforming from the RGB color space to the Lab color space:
Here R, G, and B denote the three channels of the color image; X, Y, and Z denote the tristimulus values of the color; X0 = 0.9505, Y0 = 1.000, Z0 = 1.0890 are the tristimulus values under the D65 illuminant; L* denotes the lightness channel after color space conversion, and a* and b* denote the chroma channels after conversion. The RGB color space is converted to the Lab color space through formulas 1, 2, and 3; the image size is unchanged by the conversion, and the luminance information (L channel) of the image is separated from the chroma information (a and b channels).
3. The full-reference image quality evaluation method based on masking texture features according to claim 1, characterized in that the detailed process of step 2 is as follows:
Step 2.1. Convolve the reference image and the distorted image with the horizontal and vertical components of the 3×3 Prewitt operator to extract the gradient magnitude and gradient direction features:
For an image f(x), where x is the coordinate of a pixel, the convolution is given by formula 4:
where Gx(x) denotes the horizontal gradient magnitude and Gy(x) the vertical gradient magnitude;
Step 2.2. After step 2.1, compute the gradient magnitude GM(x) and the gradient direction θ(x) of the reference image and the distorted image according to formulas 5 and 6:
Step 2.3. After step 2.2, compute the gradient-magnitude similarity and the gradient-direction similarity S_or(x) of the reference image and the distorted image according to formulas 7 and 8:
In formula 7, m and n denote the width and height of the image, and x denotes the pixel position; I_r(x) and I_d(x) denote the reference image and the distorted image. In formula 8, θ_r and θ_d denote the gradient directions of the reference and distorted images, and C1 = 1.
4. The full-reference image quality evaluation method based on masking texture features according to claim 1, characterized in that the detailed process of step 3 is as follows:
Step 3.1. Texture features are extracted by convolving the image with four two-dimensional Laws filters, shown in formula 9:
For an image f(x), where x is the coordinate of a pixel, the image is convolved with each of the four templates in formula 9 and the maximum is taken, as shown in formula 10:
Te = max(f(x) * i), i = (a), (b), (c), (d) (10)
Step 3.2. After step 3.1, compute the texture-feature similarity between the reference image and the distorted image as follows:
In formula 11, te_r and te_d denote the texture features of the reference image and the distorted image, and C2 = 100;
Step 3.3. After step 3.2, collect the mean and standard deviation of the convolution results as follows:
In formula 12, the two statistics are the mean and the standard deviation of the texture-feature similarity, and n denotes the total number of pixels.
5. The full-reference image quality evaluation method based on masking texture features according to claim 1, characterized in that the detailed process of step 4 is as follows:
Step 4.1. From the Lab color space obtained in step 1, compute the color-difference value ΔE between the reference image and the distorted image over the L, a, and b channels, as shown in formula 13:
In formula 13, the terms denote the values of the three channels in the Lab color space, where subscripts r and d denote the reference image and the distorted image respectively;
Step 4.2. Collect the mean and standard deviation of the color difference, as shown in formulas 14 and 15:
In the formulas, m and n denote the width and height of the color-difference map, and (i, j) denotes the position of a pixel in the two-dimensional plane.
6. The full-reference image quality evaluation method based on masking texture features according to claim 1, characterized in that the detailed process of step 5 is as follows:
Step 5.1. Feed the six similarity features obtained above, together with the subjective mean opinion scores (MOS values) of the distorted images in the database, into the regression model built with a random forest for training, setting the number of decision trees ntree = 500 and the number of candidate split variables per node mtry = 2;
Step 5.2. Using the trained regression model, extract the similarity features of one or more distorted images to be tested and the corresponding reference images according to steps 2, 3, and 4, then input the similarity features into the trained random-forest regression model; the predicted quality score that is output completes the evaluation of distorted-image quality.
CN201810834955.5A 2018-07-26 2018-07-26 Masking texture feature-based full-reference image quality evaluation method Active CN109191428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810834955.5A CN109191428B (en) 2018-07-26 2018-07-26 Masking texture feature-based full-reference image quality evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810834955.5A CN109191428B (en) 2018-07-26 2018-07-26 Masking texture feature-based full-reference image quality evaluation method

Publications (2)

Publication Number Publication Date
CN109191428A true CN109191428A (en) 2019-01-11
CN109191428B CN109191428B (en) 2021-08-06

Family

ID=64937628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810834955.5A Active CN109191428B (en) 2018-07-26 2018-07-26 Masking texture feature-based full-reference image quality evaluation method

Country Status (1)

Country Link
CN (1) CN109191428B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102036098A (en) * 2010-12-01 2011-04-27 北京航空航天大学 Full-reference type image quality evaluation method based on visual information amount difference
CN102750695A (en) * 2012-06-04 2012-10-24 清华大学 Machine learning-based stereoscopic image quality objective assessment method
US20130182972A1 (en) * 2012-01-12 2013-07-18 Xiaochen JING Image defect visibility predictor
US20170140518A1 (en) * 2015-11-12 2017-05-18 University Of Virginia Patent Foundation D/B/A/ University Of Virginia Licensing & Ventures Group System and method for comparison-based image quality assessment
CN106780441A (en) * 2016-11-30 2017-05-31 杭州电子科技大学 A kind of stereo image quality objective measurement method based on dictionary learning and human-eye visual characteristic


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WESLEY GRIFFIN, ET AL: "Evaluating Texture Compression Masking Effects Using Objective Image Quality Assessment Metrics", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS *
LIU MINGNA: "Research on Objective Image Quality Evaluation Methods Based on the Visual System and Feature Extraction and Their Applications", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
XIE DEHONG, ET AL: "A Color Image Evaluation Algorithm Based on Optimal Color Space and Visual Masking", Packaging Engineering *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919920A (en) * 2019-02-25 2019-06-21 厦门大学 The full reference of unified structure and non-reference picture quality appraisement method
CN112118457B (en) * 2019-06-20 2022-09-09 腾讯科技(深圳)有限公司 Live broadcast data processing method and device, readable storage medium and computer equipment
CN112118457A (en) * 2019-06-20 2020-12-22 腾讯科技(深圳)有限公司 Live broadcast data processing method and device, readable storage medium and computer equipment
CN110838119B (en) * 2019-11-15 2022-03-04 珠海全志科技股份有限公司 Human face image quality evaluation method, computer device and computer readable storage medium
CN110838119A (en) * 2019-11-15 2020-02-25 珠海全志科技股份有限公司 Human face image quality evaluation method, computer device and computer readable storage medium
CN111598837B (en) * 2020-04-21 2023-05-05 中山大学 Full-reference image quality evaluation method and system suitable for visualized two-dimensional code
CN111598837A (en) * 2020-04-21 2020-08-28 中山大学 Full-reference image quality evaluation method and system suitable for visual two-dimensional code
CN112381812A (en) * 2020-11-20 2021-02-19 深圳市优象计算技术有限公司 Simple and efficient image quality evaluation method and system
CN112950597A (en) * 2021-03-09 2021-06-11 深圳大学 Distorted image quality evaluation method and device, computer equipment and storage medium
CN112950597B (en) * 2021-03-09 2022-03-08 深圳大学 Distorted image quality evaluation method and device, computer equipment and storage medium
CN112837319A (en) * 2021-03-29 2021-05-25 深圳大学 Intelligent evaluation method, device, equipment and medium for real distorted image quality
CN112837319B (en) * 2021-03-29 2022-11-08 深圳大学 Intelligent evaluation method, device, equipment and medium for real distorted image quality
CN115984283A (en) * 2023-03-21 2023-04-18 山东中济鲁源机械有限公司 Intelligent detection method for welding quality of reinforcement cage
CN116188809A (en) * 2023-05-04 2023-05-30 中国海洋大学 Texture similarity judging method based on visual perception and sequencing driving
CN116188809B (en) * 2023-05-04 2023-08-04 中国海洋大学 Texture similarity judging method based on visual perception and sequencing driving

Also Published As

Publication number Publication date
CN109191428B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN109191428A (en) Full-reference image quality evaluating method based on masking textural characteristics
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN105719318B (en) Magic square color identification method based on HSV in a kind of Educational toy external member
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
WO2017092431A1 (en) Human hand detection method and device based on skin colour
CN102663745B (en) Color fusion image quality evaluation method based on vision task.
CN107123088B (en) A kind of method of automatic replacement photo background color
CN104243973B (en) Video perceived quality non-reference objective evaluation method based on areas of interest
CN101562675B (en) No-reference image quality evaluation method based on Contourlet transform
CN105741328A (en) Shot image quality evaluation method based on visual perception
CN104361574B (en) No-reference color image quality assessment method on basis of sparse representation
CN107396095B (en) A kind of no reference three-dimensional image quality evaluation method
CN108830823A (en) The full-reference image quality evaluating method of frequency-domain analysis is combined based on airspace
CN109191460B (en) Quality evaluation method for tone mapping image
CN104268590B (en) The blind image quality evaluating method returned based on complementary combination feature and multiphase
CN102800111B (en) Color harmony based color fusion image color quality evaluation method
CN101146226A (en) A highly-clear video image quality evaluation method and device based on self-adapted ST area
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN104143077B (en) Pedestrian target search method and system based on image
CN108961227A (en) A kind of image quality evaluating method based on airspace and transform domain multiple features fusion
CN110120034A (en) A kind of image quality evaluating method relevant to visual perception
CN109741285B (en) Method and system for constructing underwater image data set
CN107154058A (en) A kind of method for guiding user to reduce magic square
CN106127234A (en) The non-reference picture quality appraisement method of feature based dictionary
CN109345552A (en) Stereo image quality evaluation method based on region weight

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant