CN105005990B - Blind image quality evaluation method based on discriminative sparse representation - Google Patents

Blind image quality evaluation method based on discriminative sparse representation

Info

Publication number
CN105005990B
CN105005990B (application CN201510381379.XA)
Authority
CN
China
Prior art keywords
dictionary
sub-dictionary
noise
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510381379.XA
Other languages
Chinese (zh)
Other versions
CN105005990A (en)
Inventor
陈阳
石路遥
罗立民
李松毅
鲍旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XINGAOYI MEDICAL EQUIPMENT Co.,Ltd.
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201510381379.XA priority Critical patent/CN105005990B/en
Publication of CN105005990A publication Critical patent/CN105005990A/en
Application granted granted Critical
Publication of CN105005990B publication Critical patent/CN105005990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a blind image quality evaluation method based on discriminative sparse representation, comprising: first, a feature sub-dictionary and a noise sub-dictionary are trained from noise-free natural images and from noise image samples respectively, and the two sub-dictionaries are merged into a discriminative dictionary; then, the image to be evaluated is represented with the discriminative dictionary, yielding the sparse coefficients corresponding to the two sub-dictionaries; finally, the image quality score is obtained from the ratio of the two groups of sparse coefficients. Compared with other existing blind quality evaluation methods, the method of the invention fits human subjective evaluation closely, is simple to implement, and requires no manually scored training samples.

Description

Blind image quality evaluation method based on discriminative sparse representation
Technical field
The present invention relates to a blind image quality evaluation method, belongs to image processing technology, and in particular to perceptual visual signal processing.
Background art
Image quality evaluation is a fundamental problem in image processing, with important applications in image compression, restoration, reconstruction, enhancement, recognition and classification. Image quality evaluation can be divided into subjective evaluation and objective evaluation. Subjective quality evaluation usually asks a group of experts to visually inspect images and assign subjective scores. In many application fields the images are ultimately viewed by people, and in such cases subjective evaluation is the only accurate and reliable mode of quality evaluation. In practice, however, subjective evaluation is time-consuming, expensive and inconvenient, and is therefore difficult to apply.
Compared with subjective quality evaluation, objective quality evaluation is fast, cheap and easy to operate, and has become the focus of image quality evaluation research. Objective evaluation falls into two categories: one requires an undistorted reference image (full-reference image quality assessment), while the other requires only information from the distorted image itself (blind quality assessment). After decades of development, full-reference image quality assessment is relatively mature and simple to implement; classical full-reference methods include PSNR and SSIM. However, such methods require access to the undistorted original image, a requirement that cannot be met in many application environments.
Compared with full-reference assessment, blind quality assessment cannot use a reference image and therefore faces greater challenges. Existing blind quality assessment methods mainly consist of two steps: feature extraction and regression against human scores. They usually require a large number of manually labelled images for training, and their performance is very sensitive to the training samples; in addition, such algorithms typically require heavy computation and complicated parameter estimation and training, all of which significantly limit the application of blind quality assessment in practice.
Summary of the invention
The technical problem to be solved by the invention is to provide a blind image quality evaluation method that is simple to implement, requires no manually labelled images for training, and fits human evaluation closely.
To solve the above technical problem, the present invention adopts the following technical scheme: a blind image quality evaluation method based on discriminative sparse representation, comprising the following steps:
(1) training a feature sub-dictionary and a noise sub-dictionary from noise-free natural images and from noise image samples respectively, and merging the two sub-dictionaries into a discriminative dictionary;
(2) representing the image to be evaluated with the discriminative dictionary to obtain the sparse coefficients corresponding to the two sub-dictionaries;
(3) obtaining the final image quality score from the ratio of the two groups of sparse coefficients.
Step (1) specifically includes:
(1.1) extracting patches from noise-free natural images as the feature training set, and extracting patches from noise images as the noise training set;
(1.2) training the feature sub-dictionary D+ and the noise sub-dictionary D- from the two training sets of step (1.1), and constructing the discriminative dictionary Dd as the concatenation of the feature sub-dictionary D+ and the noise sub-dictionary D-.
Step (2) specifically includes:
(2.1) decomposing the image to be evaluated into the sub-images of the three color channels red, green and blue;
(2.2) for the sub-image of each color channel, splitting the sub-image into patches of the same size as the patches in the training sets and representing the sub-image with the discriminative dictionary Dd to obtain the sparse coefficients of each sub-image; the sparse coefficients comprise the sparse coefficients with which the feature sub-dictionary D+ represents the image and the sparse coefficients with which the noise sub-dictionary D- represents the image.
Step (3) specifically includes:
(3.1) for the sub-image of each color channel, computing the weighted ratio of the sparse coefficients corresponding to the noise sub-dictionary to the sparse coefficients corresponding to the feature sub-dictionary to obtain the score of that sub-image;
(3.2) obtaining the final image quality score Rd as the weighted sum of the scores of the sub-images of the three color channels; the smaller Rd is, the better the image quality is considered to be.
In step (2.2), the sub-image is represented with the discriminative dictionary Dd and the sparse coefficients of each sub-image are obtained as follows: the following problem is solved with the orthogonal matching pursuit algorithm to obtain the sparse coefficients α:
\min_{\alpha} \| R_{ij} y - D_d \alpha_{ij} \|_2^2 \quad \text{s.t.} \quad \| \alpha_{ij} \|_0 \le L \quad \forall i,j
where R is the operator that extracts patches from the sub-image y to be evaluated, the subscripts i, j are the position coordinates of the top-left corner of a patch in the sub-image, and ||α_ij||_0 is the zero norm, which limits the number of non-zero elements of α_ij to at most L. The sparse coefficients consist of two parts, α = [α+, α-], where α+ are the sparse coefficients with which the feature sub-dictionary D+ represents the image and α- are the sparse coefficients with which the noise sub-dictionary D- represents the image.
The score R_d^c of the sub-image of each color channel in step (3.1) is calculated as:
R_d^c = \frac{\sum_{k=1}^{N} \| \alpha_k^{c-} \|_1 \, w_k}{\sum_{k=1}^{N} \| \alpha_k^{c+} \|_1}
where N is the number of patches the sub-image decomposes into, α_k^{c+} and α_k^{c-} are respectively the sparse coefficients with which the feature sub-dictionary and the noise sub-dictionary of color channel c represent the k-th patch, ||·||_1 is the one-norm, and w_k is the weight coefficient of the k-th patch. w_k is defined as w_k = σ_k + M, where M is a constant and σ_k is the standard deviation of the k-th patch.
In step (3.2), Rd is obtained as the weighted sum of the scores of the sub-images of the three color channels, the weight of each sub-image score being the corresponding coefficient of the RGB-to-YIQ conversion formula for the Y channel.
Beneficial effects: the blind image quality evaluation method based on discriminative sparse representation of the present invention is named Feature Quantification via Discriminative Sparse Representation (FQ-DSR). The method is inspired by the sparse-representation-like processing that the primary visual cortex applies to image features when judging image quality, and proposes to evaluate quality by quantitatively analysing image features. Here, image features include both normal image structures and degraded structures of the image (noise). The invention therefore designs a discriminative dictionary that contains both a "positive" sub-dictionary suited to representing normal image structure (the feature sub-dictionary) and a "negative" sub-dictionary suited to representing image noise (the noise sub-dictionary). When an image is sparsely represented with this discriminative dictionary, image quality can be evaluated by quantitatively counting how the representation coefficients are distributed between the atoms of the feature sub-dictionary and those of the noise sub-dictionary. Compared with representative existing blind quality evaluation methods, the method of the present invention is simple to implement, requires no manually labelled images for training, and agrees more closely with human scoring.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 (a)-(g) are the 7 undistorted natural images used to train the feature sub-dictionary.
Fig. 2 (h) is the simulated Gaussian noise image used to train the noise sub-dictionary.
Fig. 3 (a) is the feature sub-dictionary trained with the FDDL method.
Fig. 3 (b) is the noise sub-dictionary trained with the FDDL method.
Embodiment
The present invention is further elucidated below with reference to a specific embodiment. It should be understood that this embodiment is merely illustrative of the present invention and does not limit its scope; after reading the present invention, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the claims appended to this application.
As shown in Fig. 1, the blind image quality evaluation method based on discriminative sparse representation disclosed in this embodiment of the present invention mainly consists of three steps: constructing the discriminative dictionary, dictionary representation, and overall scoring. First, a sub-dictionary is trained from noise-free (undistorted) natural images and another from noise image samples, and the two sub-dictionaries are merged into a discriminative dictionary. Then, the image to be evaluated is represented with this discriminative dictionary, and the sparse coefficients corresponding to the two sub-dictionaries are obtained. Finally, the final image score is obtained from the ratio of the two groups of sparse coefficients. The detailed steps are as follows:
Step 1: construct the discriminative dictionary. The specific steps are as follows:
Step 1.1: extract a large number of patches from several noise-free natural images as the feature training set, and extract a large number of patches from a simulated Gaussian noise image as the noise training set. The simulated noise image may use Gaussian noise or other types of noise. In this experiment the images to be evaluated are polluted by Gaussian noise, so to obtain the best evaluation effect the noise dictionary is trained here on a simulated image of the corresponding Gaussian noise. The 7 undistorted natural images shown in Fig. 2 (a)-(g) and the simulated Gaussian noise image shown in Fig. 2 (h) are selected as samples, and the patch size is 16 × 16. In a concrete application scenario the sample images, the number of samples and the patch size can be chosen according to the situation.
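For illustration only (not part of the patent text), the patch extraction of step 1.1 can be sketched in Python as follows; the function and variable names are assumptions, and grayscale 2-D arrays are assumed for the training images.

```python
import numpy as np

def extract_patches(image, patch_size=16, stride=16):
    """Extract non-overlapping patches (stride == patch_size) from a 2-D image
    and return them as columns: one vectorised 16x16 patch per column."""
    h, w = image.shape
    cols = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            cols.append(image[i:i + patch_size, j:j + patch_size].reshape(-1))
    return np.stack(cols, axis=1)              # shape: (256, number_of_patches)

# Hypothetical usage: clean_images is a list of noise-free natural images and
# noise_image is a simulated Gaussian noise image (all 2-D float arrays).
# feature_set = np.hstack([extract_patches(img) for img in clean_images])
# noise_set   = extract_patches(noise_image)
```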
Step 1.2: from the two training sets obtained in step 1.1, train two sub-dictionaries with class-specific representation ability using the Fisher discrimination dictionary learning (FDDL) method: the feature sub-dictionary D+ and the noise sub-dictionary D-, each of size 256 × 512 (see Fig. 3). FDDL is used so that the trained feature sub-dictionary represents the structural features of natural images well but represents noise poorly, while the trained noise sub-dictionary represents noise well but represents the structural features of images poorly.
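As a hedged sketch of step 1.2: FDDL itself is not reproduced here. The snippet below substitutes a generic dictionary learner (scikit-learn's MiniBatchDictionaryLearning) merely to produce two sub-dictionaries of the stated 256 × 512 size; it does not enforce the Fisher discrimination criterion of the patent. The names feature_set and noise_set refer to the hypothetical training matrices from the previous sketch.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_sub_dictionary(patches, n_atoms=512, seed=0):
    """Learn one sub-dictionary (atoms as columns) from a (256, n_patches) matrix.
    Stand-in for FDDL: a plain sparse dictionary learner without the Fisher
    discrimination term described in the patent."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=seed)
    learner.fit(patches.T)                     # sklearn expects (n_samples, n_features)
    return learner.components_.T               # shape: (256, n_atoms)

# D_plus  = train_sub_dictionary(feature_set)  # feature sub-dictionary, 256 x 512
# D_minus = train_sub_dictionary(noise_set)    # noise sub-dictionary,   256 x 512
# D_d     = np.hstack([D_plus, D_minus])       # discriminative dictionary, 256 x 1024 (step 1.3)
```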
Step 1.3: construct the discriminative dictionary Dd as the concatenation of the feature sub-dictionary and the noise sub-dictionary, Dd = [D+, D-]. The size of Dd is 256 × 1024.
Step 2: dictionary representation. The specific steps are as follows:
Step 2.1: decompose the image into the sub-images of the red, green and blue channels, y_r, y_g, y_b.
Step 2.2: represent the sub-image y_c to be evaluated of each channel (c can be r, g or b, denoting the red, green and blue channels respectively) with the discriminative dictionary Dd. Specifically, the following problem is solved with the orthogonal matching pursuit (OMP) algorithm:
\min_{\alpha} \| R_{ij} y - D_d \alpha_{ij} \|_2^2 \quad \text{s.t.} \quad \| \alpha_{ij} \|_0 \le L \quad \forall i,j
to obtain the sparse coefficients α. Here R is the operator that extracts patches from the sub-image y to be evaluated, and the patch size is again 16 × 16. The subscripts i, j are the position coordinates of the top-left corner of a patch in the image. The patch stride is 16, so the patches neither overlap nor leave gaps between them. ||α_ij||_0 is the zero norm, which limits the number of non-zero elements of α_ij to at most L; L is set to 25. The sparse coefficients consist of two parts, α = [α+, α-], where α+ are the sparse coefficients with which the feature sub-dictionary D+ represents the image and α- are the sparse coefficients with which the noise sub-dictionary D- represents the image.
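A minimal sketch of step 2.2 (not the patent's reference implementation) follows; it reuses the hypothetical extract_patches helper from the earlier sketch and scikit-learn's orthogonal matching pursuit solver.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sparse_code_channel(sub_image, D_d, patch_size=16, L=25):
    """Sparse-code all non-overlapping patches of one color-channel sub-image
    over the 256 x 1024 discriminative dictionary D_d, using at most L atoms
    per patch. Rows 0..511 of the coefficient matrix belong to the feature
    sub-dictionary D+, rows 512..1023 to the noise sub-dictionary D-."""
    patches = extract_patches(sub_image, patch_size, patch_size)   # (256, N)
    alpha = orthogonal_mp(D_d, patches, n_nonzero_coefs=L)         # (1024, N)
    return alpha[:512, :], alpha[512:, :], patches
```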
Step 3: overall scoring. The specific steps are as follows:
Step 3.1: using the weighted ratio of the sparse coefficients corresponding to the two sub-dictionaries in a sub-image, define the score R_d^c of the sub-image of a color channel as
R_d^c = \frac{\sum_{k=1}^{N} \| \alpha_k^{c-} \|_1 \, w_k}{\sum_{k=1}^{N} \| \alpha_k^{c+} \|_1}
where N is the number of patches the sub-image to be evaluated decomposes into; α_k^{c+} and α_k^{c-} are respectively the sparse coefficients with which the feature sub-dictionary and the noise sub-dictionary of color channel c represent the k-th patch; ||·||_1 is the one-norm; and w_k is the weight coefficient of the k-th patch. w_k is defined as w_k = σ_k + M, where M is a constant that can be adjusted for the concrete application scenario (setting it to 20 in this experiment gives good evaluation results) and σ_k is the standard deviation of the k-th patch.
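A short sketch of the channel score of step 3.1, under the same assumptions as the previous snippets (alpha_plus, alpha_minus and patches as returned by sparse_code_channel):

```python
import numpy as np

def channel_score(alpha_plus, alpha_minus, patches, M=20.0):
    """R_d^c: weighted L1 mass of the noise coefficients divided by the L1 mass
    of the feature coefficients; smaller values indicate better quality."""
    w = patches.std(axis=0) + M                                   # w_k = sigma_k + M
    noise_mass = float((np.abs(alpha_minus).sum(axis=0) * w).sum())
    feature_mass = float(np.abs(alpha_plus).sum(axis=0).sum())
    return noise_mass / feature_mass
```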
Step 3.2: define the final image quality score as the weighted sum of the scores of the three color channels:
R_d = 0.2989\, R_d^r + 0.5870\, R_d^g + 0.1140\, R_d^b
where R_d^r, R_d^g and R_d^b are respectively the scores of the sub-images of the red, green and blue channels. The weight of each channel score is the corresponding coefficient of the RGB-to-Y (luminance) channel conversion of the YIQ space in the NTSC (National Television System Committee) standard, Y = 0.2989 R + 0.5870 G + 0.1140 B.
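The combination of the three channel scores can be sketched as follows, using the NTSC luminance coefficients quoted above:

```python
def final_score(R_r, R_g, R_b):
    """Final FQ-DSR score: NTSC luminance-weighted sum of the channel scores."""
    return 0.2989 * R_r + 0.5870 * R_g + 0.1140 * R_b
```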
When Rd is small, the feature sub-dictionary accounts for a large proportion of the representation and the noise sub-dictionary for a small proportion, so the image quality is considered better; conversely, when Rd is large, the image is considered more severely polluted by noise and the image quality is worse. The weight w_k is set to the sum of the standard deviation σ_k of a patch and a constant M, and reflects the noise strength in that patch. When the noise in a patch is strong, both the one-norm of the patch's noise sub-dictionary coefficients, ||α_k^{c-}||_1, and the weight w_k are large, and their product further increases the proportion of that patch's noise coefficients in the overall score; when the noise in a patch is weak, both ||α_k^{c-}||_1 and w_k are small, and their product further reduces the proportion of that patch's noise coefficients in the overall score.
In this part the FQ-DSR method proposed by the present invention is compared experimentally with other techniques. The other blind image quality evaluation methods involved in the experiments are BIQI, LBIQ, DIIVINE, BLIINDS-II, BRISQUE and CORNIA. In addition, FQ-DSR is also compared with several mainstream full-reference image evaluation methods: SSIM, PSNR, IFC and VIF.
In the experiments the above methods are each tested on the mainstream image quality evaluation database LIVE IQA. The LIVE IQA database contains 5 image degradation types: JPEG2000, JPEG, white Gaussian noise, Gaussian blur and fast fading. Only Gaussian noise is considered in this experiment; the Gaussian-noise part of the database contains 29 undistorted reference images and 145 images polluted by noise of different strengths. Each image in the database has a corresponding human subjective score (DMOS). In the experiments the images are first scored with the methods mentioned above, and the correlation between the computed scores and the DMOS scores is then used to assess the quality of each algorithm. The correlation criterion is the Spearman rank-order correlation coefficient (SROCC); the closer the coefficient is to 1, the closer the algorithm's assessment is to the human scores.
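The SROCC criterion can be computed with SciPy; a minimal sketch, where predicted and dmos are hypothetical 1-D arrays of algorithm scores and subjective DMOS values for one test set:

```python
import numpy as np
from scipy.stats import spearmanr

def srocc(predicted, dmos):
    """Spearman rank-order correlation between algorithm scores and DMOS."""
    rho, _ = spearmanr(predicted, dmos)
    return rho

# As in the experiment below, repeat over many random test sets and report the median:
# result = float(np.median([srocc(p, d) for p, d in test_sets]))
```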
Because some mainstream blind image quality evaluation algorithms need manually scored labelled images for training, the authors of the corresponding papers divide each group of test images into two parts: 80% of the images are used for training, and the remaining 20% are used for testing. The method proposed by the present invention needs no manually labelled images for training; the dictionary only needs to be trained on a few natural images. Nevertheless, in the spirit of a fair comparison, in each test group we also select only 20% of the images (6 reference images and the corresponding 30 noise-polluted images of different strengths) for testing and ignore the other 80%. The experiment comprises 1000 test sets, each of which randomly selects 6 reference images and the 30 noise-polluted images corresponding to them. For each test set every image is scored with each algorithm and the SROCC coefficient is computed; after 1000 repetitions the median of the 1000 SROCC values is taken as the final evaluation result. The experimental results are shown in Table 1.
Table 1
As can be seen from the table, the FQ-DSR score proposed by the present invention not only exceeds the mainstream blind image quality evaluation methods but also exceeds the mainstream full-reference image quality evaluation methods. The experimental results show that, for images polluted by Gaussian noise, the proposed method can obtain quality scores very close to human evaluation without any reference image.

Claims (6)

1. A blind image quality evaluation method based on discriminative sparse representation, characterised by comprising the following steps:
(1) training a feature sub-dictionary and a noise sub-dictionary from noise-free natural images and from noise image samples respectively, and merging the two sub-dictionaries into a discriminative dictionary; including:
(1.1) extracting patches from noise-free natural images as the feature training set, and extracting patches from noise images as the noise training set;
(1.2) training the feature sub-dictionary D+ and the noise sub-dictionary D- from the two training sets of step (1.1), and constructing the discriminative dictionary Dd as the concatenation of the feature sub-dictionary D+ and the noise sub-dictionary D-;
(2) representing the image to be evaluated with the discriminative dictionary to obtain the sparse coefficients corresponding to the two sub-dictionaries; including:
(2.1) decomposing the image to be evaluated into the sub-images of the three color channels red, green and blue;
(2.2) for the sub-image of each color channel, splitting the sub-image into patches of the same size as the patches in the training sets and representing the sub-image with the discriminative dictionary Dd to obtain the sparse coefficients of each sub-image, the sparse coefficients comprising the sparse coefficients with which the feature sub-dictionary D+ represents the image and the sparse coefficients with which the noise sub-dictionary D- represents the image;
(3) obtaining the final image quality score from the ratio of the two groups of sparse coefficients; including:
(3.1) for the sub-image of each color channel, computing the weighted ratio of the sparse coefficients corresponding to the noise sub-dictionary to the sparse coefficients corresponding to the feature sub-dictionary to obtain the score of the corresponding sub-image;
(3.2) obtaining the final image quality score Rd as the weighted sum of the scores of the sub-images of the three color channels, the weight of each sub-image score being the corresponding coefficient of the RGB-to-YIQ conversion formula for the Y channel; the smaller Rd is, the better the image quality is considered to be.
2. The blind image quality evaluation method based on discriminative sparse representation according to claim 1, characterised in that in step (2.2) the sub-image is represented with the discriminative dictionary Dd and the sparse coefficients of each sub-image are obtained by solving the following problem with the orthogonal matching pursuit algorithm to obtain the sparse coefficients α:
\min_{\alpha} \| R_{ij} y - D_d \alpha_{ij} \|_2^2 \quad \text{s.t.} \quad \| \alpha_{ij} \|_0 \le L \quad \forall i,j
where R is the operator that extracts patches from the sub-image y to be evaluated, the subscripts i, j are the position coordinates of the top-left corner of a patch in the sub-image, and ||α_ij||_0 is the zero norm, which limits the number of non-zero elements of α_ij to at most L; the sparse coefficients consist of two parts, α = [α+, α-], where α+ are the sparse coefficients with which the feature sub-dictionary D+ represents the image and α- are the sparse coefficients with which the noise sub-dictionary D- represents the image.
3. The blind image quality evaluation method based on discriminative sparse representation according to claim 1, characterised in that the score R_d^c of the sub-image of each color channel in step (3.1) is calculated as:
R_d^c = \frac{\sum_{k=1}^{N} \| \alpha_k^{c-} \|_1 \, w_k}{\sum_{k=1}^{N} \| \alpha_k^{c+} \|_1}
where N is the number of patches the sub-image decomposes into, α_k^{c+} and α_k^{c-} are respectively the sparse coefficients with which the feature sub-dictionary and the noise sub-dictionary of color channel c represent the k-th patch, ||·||_1 is the one-norm, and w_k is the weight coefficient of the k-th patch.
4. The blind image quality evaluation method based on discriminative sparse representation according to claim 3, characterised in that w_k is defined as w_k = σ_k + M, where M is a constant and σ_k is the standard deviation of the k-th patch.
5. The blind image quality evaluation method based on discriminative sparse representation according to claim 1, characterised in that the noise image in step (1) is a simulated Gaussian noise image.
6. The blind image quality evaluation method based on discriminative sparse representation according to claim 1, characterised in that step (1) trains the feature sub-dictionary and the noise sub-dictionary using the Fisher discrimination dictionary learning method.
CN201510381379.XA 2015-07-02 2015-07-02 Blind image quality evaluation method based on discriminative sparse representation Active CN105005990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510381379.XA CN105005990B (en) 2015-07-02 2015-07-02 Blind image quality evaluation method based on discriminative sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510381379.XA CN105005990B (en) 2015-07-02 2015-07-02 Blind image quality evaluation method based on discriminative sparse representation

Publications (2)

Publication Number Publication Date
CN105005990A CN105005990A (en) 2015-10-28
CN105005990B true CN105005990B (en) 2017-11-28

Family

ID=54378647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510381379.XA Active CN105005990B (en) 2015-07-02 2015-07-02 Blind image quality evaluation method based on discriminative sparse representation

Country Status (1)

Country Link
CN (1) CN105005990B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997585A (en) * 2016-01-22 2017-08-01 同方威视技术股份有限公司 Imaging system and image quality evaluating method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN103473745A (en) * 2013-09-16 2013-12-25 东南大学 Low-dosage CT image processing method based on distinctive dictionaries
CN104376565A (en) * 2014-11-26 2015-02-25 西安电子科技大学 Non-reference image quality evaluation method based on discrete cosine transform and sparse representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sparse Representation-based Image Quality Assessment; Tanaya Guha et al.; Signal Processing: Image Communication; 2014-11-30; Vol. 29, No. 10; pp. 1138-1148 *
Review of No-Reference Image Quality Assessment (无参考图像质量评价综述); Wang Zhiming (王志明); Acta Automatica Sinica (自动化学报); 2015-06-15; Vol. 41, No. 6; pp. 1062-1079 *

Also Published As

Publication number Publication date
CN105005990A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN105447884B (en) A kind of method for objectively evaluating image quality based on manifold characteristic similarity
CN106778676B (en) Attention assessment method based on face recognition and image processing
CN104134204B (en) Image definition evaluation method and image definition evaluation device based on sparse representation
CN104751456B (en) Blind image quality evaluating method based on conditional histograms code book
CN108052980B (en) Image-based air quality grade detection method
CN103325122B (en) Based on the pedestrian retrieval method of Bidirectional sort
CN101976444B (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN108289222A (en) A kind of non-reference picture quality appraisement method mapping dictionary learning based on structural similarity
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN104021545A (en) Full-reference color image quality evaluation method based on visual saliency
CN104361574A (en) No-reference color image quality assessment method on basis of sparse representation
CN103745466A (en) Image quality evaluation method based on independent component analysis
CN107066972A (en) Natural scene Method for text detection based on multichannel extremal region
CN103745231B (en) Teleutospore image identification method for Tillctia Controversa Kahn (TCK) and allied variety TCT (Tilletia caries (DC.) Tul.) of TCK
CN106023214B (en) Image quality evaluating method and system based on central fovea view gradient-structure similitude
CN105894507B (en) Image quality evaluating method based on amount of image information natural scene statistical nature
CN109886945A (en) Based on contrast enhancing without reference contrast distorted image quality evaluating method
CN106169174A (en) A kind of image magnification method
CN109741285A (en) A kind of construction method and system of underwater picture data set
CN107767367A (en) It is a kind of for HDR figures without reference mass method for objectively evaluating
CN105005990B (en) A kind of blind image quality evaluating method based on distinctiveness rarefaction representation
CN107018410A (en) A kind of non-reference picture quality appraisement method based on pre- attention mechanism and spatial dependence
CN108010023B (en) High dynamic range image quality evaluation method based on tensor domain curvature analysis
CN104835172A (en) No-reference image quality evaluation method based on phase consistency and frequency domain entropy

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210528

Address after: 555 Yeshan Road, Yuyao City, Ningbo City, Zhejiang Province

Patentee after: XINGAOYI MEDICAL EQUIPMENT Co.,Ltd.

Address before: 210096 No. 2 Sipailou, Nanjing, Jiangsu

Patentee before: SOUTHEAST University