CN112330657A - Image quality evaluation method and system based on gray level characteristics - Google Patents

Image quality evaluation method and system based on gray level characteristics

Info

Publication number
CN112330657A
Authority
CN
China
Prior art keywords
sub
image
block
feature
block image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011314950.3A
Other languages
Chinese (zh)
Other versions
CN112330657B (en)
Inventor
罗文峰 (Luo Wenfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Upixels Technology Co ltd
Original Assignee
Hunan Upixels Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Upixels Technology Co ltd filed Critical Hunan Upixels Technology Co ltd
Priority to CN202011314950.3A priority Critical patent/CN112330657B/en
Publication of CN112330657A publication Critical patent/CN112330657A/en
Application granted granted Critical
Publication of CN112330657B publication Critical patent/CN112330657B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0002 (Image analysis): Inspection of images, e.g. flaw detection
    • G06F 18/22 (Pattern recognition): Matching criteria, e.g. proximity measures
    • G06F 18/24 (Pattern recognition): Classification techniques
    • G06T 5/90 (Image enhancement or restoration): Dynamic range modification of images or parts thereof
    • G06T 7/90 (Image analysis): Determination of colour characteristics
    • G06T 2207/20021 (Special algorithmic details): Dividing image into blocks, subimages or windows
    • G06T 2207/30168 (Subject of image; context of image processing): Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

An image quality evaluation method and system based on gray-scale characteristics comprises the following steps. Step S1: block processing is performed on a reference image and an image to be evaluated, dividing them respectively into first sub-block images and second sub-block images of a preset size. Step S2: a gray characteristic index of each first sub-block image and each second sub-block image is calculated. Step S3: the first sub-block images and the second sub-block images are divided into a first category and a second category according to the gray characteristic index. Step S4: a first feature of each first sub-block image and a second feature of each second sub-block image in the first category are respectively extracted. Step S5: a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category are respectively extracted. The method can accurately evaluate the quality of an image, uses a simple algorithm, fully considers the correlation between pixels, and has high practical value.

Description

Image quality evaluation method and system based on gray level characteristics
Technical Field
The invention relates to the technical field of image processing, in particular to an image quality evaluation method and system based on gray scale characteristics.
Background
With the rapid development of multimedia technology, digital images have become widely popular because they are intuitive, realistic and rich in information. During the processing of a digital image, factors such as the imaging system, the storage device, the transmission medium and the processing mechanism at the terminal inevitably introduce distortion, and the degree of distortion directly reflects the performance of the multimedia transmission system and its quality of service. As an objective criterion of image quality, an image quality evaluation algorithm is therefore an important index for evaluating the performance of a multimedia transmission system.
According to how much reference information is available, quality evaluation methods can be divided into three types: full-reference, semi-reference (reduced-reference) and no-reference quality evaluation. A full-reference image quality evaluation algorithm uses the original image as the reference for the distorted image; a semi-reference algorithm uses only part of the information of the reference image; and a no-reference algorithm uses no information from a reference image as prior data.
The most common approach at present is full-reference image quality evaluation. Traditional full-reference objective algorithms include the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR), which are widely used because they are simple to compute and have a clear physical meaning; however, these algorithms analyze images only in a statistical sense and do not consider the correlation between pixels.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The invention aims to provide an image quality evaluation method and system based on gray-scale characteristics that can accurately evaluate image quality with a simple algorithm, fully consider the correlation between pixels, and therefore have high practical value.
The invention provides an image quality evaluation method based on gray-scale characteristics, which comprises the following steps. Step S1: the reference image and the image to be evaluated are subjected to block processing and are respectively divided into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking. Step S2: a gray characteristic index of each first sub-block image and each second sub-block image is calculated. Step S3: the first sub-block images and the second sub-block images are divided into a first category and a second category according to the gray characteristic index. Step S4: a first feature of each first sub-block image and a second feature of each second sub-block image in the first category are respectively extracted. Step S5: a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category are respectively extracted. Step S6: a first similarity index of the first category is calculated according to the first feature and the second feature. Step S7: a second similarity index of the second category is calculated according to the third feature and the fourth feature. Step S8: the final similarity between the reference image and the image to be evaluated is calculated according to the first similarity index and the second similarity index.
Further, for any selected first sub-block image A_{n1}(x, y), step S2 includes: step S21: randomly select 10 points on A_{n1}(x, y), calculate the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculate the difference degree E of the 10 gray-level means (the specific formula for E, and the mean of the α_i that it uses, are given only as images in the original publication); step S22: when the difference degree E < 15, set the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) to 1; otherwise, set label = 2.
Further, in step S3, the first category is specifically: the first sub-block image set {A1_n(x, y) | n = 1, …, N1} whose gray characteristic index is label = 1 and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1; the second category is specifically: the first sub-block image set {A2_n(x, y) | n = 1, …, N2} whose gray characteristic index is label = 2 and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2.
Further, for any selected first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1}, step S4 includes: step S41: divide A1_{n1}(x, y) into sixteen equal parts and count the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}; step S42: calculate the mean and variance of the first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}; step S43: concatenate v1_{n1} and v2_{n1} to obtain the first feature of the first sub-block image A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}. For any selected second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1}, the second feature is calculated in the same way as in steps S41, S42 and S43, and the resulting second feature is denoted VB1_{n1}.
Further, for any selected first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2}, step S5 includes: step S51: use the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y); step S52: use G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y); step S53: calculate the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y). (The specific formulas of steps S51 to S53 are given only as images in the original publication, where * denotes the convolution operation.) For any selected second sub-block image B2_{n2}(x, y) in {B2_n(x, y) | n = 1, …, N2}, the fourth feature is calculated in the same way as in steps S51, S52 and S53, finally obtaining a second gradient magnitude GB2_{n2}(x, y) and the fourth feature PB2_{n2}(x, y).
Further, step S6 includes: step S61: take any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, and obtain the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation; step S62: sum all the E1_{n1} to obtain the first similarity index E1. (The formulas for E1_{n1} and E1 are given only as images in the original publication.)
Further, step S7 includes: step S71: take any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, and obtain the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation; step S72: sum all the E2_{n2} to obtain the second similarity index E2. (The formulas for E2_{n2} and E2 are given only as images in the original publication.)
Further, step S8 is specifically: the similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
The invention also provides an image quality evaluation system based on gray-scale characteristics, which comprises a separation module, a classification module, an extraction module and a calculation module. The separation module is used to perform block processing on the reference image and the image to be evaluated, dividing them respectively into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking. The calculation module is used to calculate the gray characteristic index of each first sub-block image and each second sub-block image. The classification module is used to divide the first sub-block images and the second sub-block images into a first category and a second category according to the gray characteristic index. The extraction module is used to extract a first feature of each first sub-block image and a second feature of each second sub-block image in the first category, and to extract a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category. The calculation module is further used to calculate a first similarity index of the first category according to the first feature and the second feature, to calculate a second similarity index of the second category according to the third feature and the fourth feature, and to calculate the final similarity between the reference image and the image to be evaluated according to the first similarity index and the second similarity index.
Further, for any selected first sub-block image A_{n1}(x, y), the gray characteristic index is obtained as follows: randomly select 10 points on A_{n1}(x, y), calculate the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculate the difference degree E of the 10 gray-level means (the specific formula is given only as an image in the original publication); when the difference degree E < 15, the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) is set to 1; otherwise, label = 2. The first category is specifically: the first sub-block image set {A1_n(x, y) | n = 1, …, N1} whose gray characteristic index is label = 1 and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1; the second category is specifically: the first sub-block image set {A2_n(x, y) | n = 1, …, N2} whose gray characteristic index is label = 2 and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2. For any selected first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1}, the first feature is obtained as follows: divide A1_{n1}(x, y) into sixteen equal parts and count the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}; calculate the mean and variance of the first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}; concatenate v1_{n1} and v2_{n1} to obtain the first feature of A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}. For any selected second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1}, the second feature, denoted VB1_{n1}, is obtained in the same way as the first feature. For any selected first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2}, the third feature is obtained as follows: use the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y); use G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y); and calculate the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y). (The corresponding formulas are given only as images in the original publication, where * denotes the convolution operation.) For any selected second sub-block image B2_{n2}(x, y) in {B2_n(x, y) | n = 1, …, N2}, the fourth feature is obtained in the same way as the third feature, finally yielding a second gradient magnitude GB2_{n2}(x, y) and the fourth feature PB2_{n2}(x, y). The first similarity index is obtained as follows: take any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, obtain the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation, and sum all the E1_{n1} to obtain the first similarity index E1. The second similarity index is obtained as follows: take any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, obtain the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation, and sum all the E2_{n2} to obtain the second similarity index E2. (The formulas for E1_{n1}, E1, E2_{n2} and E2 are given only as images in the original publication.) The final similarity between the reference image and the image to be evaluated is obtained as follows: the similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
By calculating the final similarity between the reference image and the image to be evaluated, the image quality evaluation method and system based on gray-scale characteristics can accurately evaluate image quality; the algorithm is simple, fully considers the correlation between pixels, and has high practical value.
Drawings
Fig. 1 is a first flowchart of an image quality evaluation method based on gray scale characteristics according to an embodiment of the present invention.
Fig. 2 is a second flowchart of the image quality evaluation method based on the gray scale characteristics shown in fig. 1.
Fig. 3 is a specific flowchart of calculating the gray characteristic index in the image quality evaluation method based on the gray characteristic shown in fig. 1.
Fig. 4 is a specific flowchart of extracting a first feature in the image quality evaluation method based on the grayscale characteristics shown in fig. 1.
Fig. 5 is a specific flowchart of extracting a third feature in the image quality evaluation method based on the grayscale characteristics shown in fig. 1.
Fig. 6 is a specific flowchart of calculating the first similarity index in the image quality assessment method based on the gray scale characteristics shown in fig. 1.
Fig. 7 is a specific flowchart of calculating a second similarity index in the image quality evaluation method based on gray scale characteristics shown in fig. 1.
Fig. 8 is a schematic structural diagram of an image quality evaluation system based on gray scale characteristics according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
As shown in fig. 1 to 7, in the present embodiment, an image quality evaluation method based on gray scale characteristics is provided, which includes the following steps:
Step S1: the reference image and the image to be evaluated are subjected to block processing and are respectively divided into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking.
In this embodiment, the reference image and the image to be evaluated are both RGB images (three primary colors: R for red, G for green, B for blue), which is the optimal color mode. In this embodiment, the reference image and the image to be evaluated are divided into first sub-block images and second sub-block images with a preset size of 100 × 100 mm. In other embodiments, the preset size may be 50 × 50 mm, 80 × 80 mm, or other values.
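As an illustration of the blocking of step S1, a minimal Python/NumPy sketch is given below. It assumes the RGB inputs have already been converted to a single gray channel and treats the preset block size as a size in pixels; the function name and the dropping of partial border blocks are assumptions not taken from the patent.

    import numpy as np

    def split_into_blocks(image, block=100):
        """Split a 2-D gray image into non-overlapping block x block sub-images."""
        h, w = image.shape
        blocks = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                # trailing rows/columns that do not fill a whole block are dropped
                blocks.append(image[y:y + block, x:x + block])
        return blocks  # plays the role of {A_n} for the reference image, {B_n} for the test image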
Step S2: calculate the gray characteristic index of each first sub-block image and each second sub-block image.
Specifically, the gray characteristic index is calculated on the basis of the gray-scale characteristics of human vision. In a specific application example, an arbitrary first sub-block image A_{n1}(x, y) is taken as the example; fig. 3 shows the specific flow of calculating the gray characteristic index in the gray-characteristic-based image quality evaluation method. In other embodiments, an arbitrary second sub-block image B_{n1}(x, y) may be used instead. The detailed flow of step S2 of the present invention includes:
Step S21: randomly select 10 points on A_{n1}(x, y), calculate the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculate the difference degree E of the 10 gray-level means. (The specific formula for E, and the mean of the α_i that it uses, are given only as images in the original publication.)
Step S22: when the difference degree E < 15, set the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) to 1; otherwise, set label = 2.
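A rough sketch of steps S21 and S22 follows. Because the difference-degree formula is only given as an image, the sum of absolute deviations of the ten local means from their average is used here as a stand-in, the neighborhood diameter of 7 is read as 7 pixels, and the function name is made up; all of these are assumptions.

    import numpy as np

    def gray_label(block, n_points=10, radius=3, threshold=15.0, seed=None):
        """Gray characteristic index (label) of one sub-block, per steps S21-S22."""
        rng = np.random.default_rng(seed)
        h, w = block.shape
        means = []
        for _ in range(n_points):
            # sample a point and average the gray levels in the 7x7 window around it
            cy = int(rng.integers(radius, h - radius))
            cx = int(rng.integers(radius, w - radius))
            window = block[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
            means.append(window.mean())
        means = np.asarray(means)
        e = float(np.abs(means - means.mean()).sum())  # difference degree E (assumed form)
        return 1 if e < threshold else 2               # label = 1 if E < 15, else label = 2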
Step S3: the first sub-block images and the second sub-block images are classified into a first category and a second category according to the gray characteristic index.
The first category is specifically the sub-block set whose gray characteristic index is label = 1, that is, the first sub-block image set {A1_n(x, y) | n = 1, …, N1} and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1.
The second category is specifically the sub-block set whose gray characteristic index is label = 2, that is, the first sub-block image set {A2_n(x, y) | n = 1, …, N2} and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2.
Step S4: respectively extract a first feature of each first sub-block image and a second feature of each second sub-block image in the first category.
First, the first feature of each first sub-block image in the first category is extracted; fig. 4 shows the specific flow of extracting the first feature in the gray-characteristic-based image quality evaluation method. In a specific application example, an arbitrary first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1} is taken as the example.
The detailed flow of step S4 of the present invention includes:
Step S41: divide A1_{n1}(x, y) into sixteen equal parts and count the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}.
Step S42: calculate the mean and variance of the first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}.
Step S43: concatenate v1_{n1} and v2_{n1} to obtain the first feature of the first sub-block image A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}.
Next, the second feature of each second sub-block image in the first category is extracted. In a specific application example, an arbitrary second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1} is taken as the example. The second feature is calculated in the same way as in steps S41, S42 and S43 above and is not described in detail here; the resulting second feature is denoted VB1_{n1}.
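The 18-dimensional feature of steps S41 to S43 can be sketched as follows; reading the "sixteen equal parts" as sixteen equal gray-level bins and leaving the histogram unnormalized are assumptions, as is the function name.

    import numpy as np

    def first_feature(block):
        """First (or second) feature of a category-1 sub-block, per steps S41-S43."""
        v1, _ = np.histogram(block, bins=16, range=(0, 256))  # 16-bin gray-level histogram -> v1
        v2 = np.array([block.mean(), block.var()])            # mean and variance -> v2
        return np.concatenate([v1.astype(np.float64), v2])    # 18-dim vector VA1 (or VB1)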
Step S5: respectively extract a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category.
First, the third feature of each first sub-block image in the second category is extracted; fig. 5 shows the specific flow of extracting the third feature in the gray-characteristic-based image quality evaluation method. In a specific application example, an arbitrary first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2} is taken as the example. The detailed flow of step S5 of the present invention includes:
Step S51: use the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y). (The Sobel convolution formula and kernels are given only as images in the original publication, where * denotes the convolution operation.)
Step S52: use G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y). (The specific formulas are given only as images in the original publication.)
Step S53: calculate the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y). (The specific formula is given only as an image in the original publication.)
next, a fourth feature of each second sub-block image in the second category is extracted. In a specific application example, B2 is selectednAny one of the second sub-block images B2 of (x, y) | N ═ 1, …, N2}n2(x, y) will be described. The calculation method of the fourth feature is the same as the calculation method of the above steps S51, S52, and S53, and will not be described in detail here. Finally, the second gradient amplitude GB2 is obtainedn2(x, y) and the fourth feature is PB2n2(x,y)。
Step S6: calculate a first similarity index of the first category according to the first feature and the second feature.
In this embodiment, the similarity index of the sub-block set whose gray characteristic index is label = 1 is calculated; fig. 6 shows the specific flow of calculating the first similarity index in the gray-characteristic-based image quality evaluation method. The detailed flow of step S6 of the present invention includes:
Step S61: take any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, and obtain the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation. (The formula for E1_{n1} is given only as an image in the original publication.)
Step S62: sum all the E1_{n1} to obtain the first similarity index E1, that is, sum the degrees of similarity of all the sub-block pairs whose gray characteristic index is label = 1.
Step S7: calculate a second similarity index of the second category according to the third feature and the fourth feature.
In this embodiment, the similarity index of the sub-block set whose gray characteristic index is label = 2 is calculated; fig. 7 shows the specific flow of calculating the second similarity index in the gray-characteristic-based image quality evaluation method. The detailed flow of step S7 of the present invention includes:
Step S71: take any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, and obtain the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation. (The formula for E2_{n2} is given only as an image in the original publication.)
Step S72: sum all the E2_{n2} to obtain the second similarity index E2, that is, sum the degrees of similarity of all the sub-block pairs whose gray characteristic index is label = 2.
Step S8: calculate the final similarity between the reference image and the image to be evaluated according to the first similarity index and the second similarity index.
Step S8 is specifically: the similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
With the image quality evaluation method and system described above, the final similarity between the reference image and the image to be evaluated is calculated, so that image quality can be accurately evaluated; the algorithm is simple, the correlation between pixels is fully considered, and the method and system have high practical value.
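Putting the pieces together, a minimal end-to-end sketch of steps S1 to S8 is shown below. It reuses the helper sketches from the preceding steps (split_into_blocks, gray_label, first_feature, gradient_features, pair_similarity, second_pair_similarity), so the absolute scale of the resulting score is only illustrative; the patent's own image-only formulas would give different numbers.

    def evaluate(reference, distorted, block=100):
        """Sketch of steps S1-S8 for two equally sized 2-D gray images."""
        blocks_a = split_into_blocks(reference, block)        # S1: {A_n}
        blocks_b = split_into_blocks(distorted, block)        # S1: {B_n}

        e1 = e2 = 0.0
        for a, b in zip(blocks_a, blocks_b):
            if gray_label(a) == 1:                            # S2/S3: category 1 (label = 1)
                e1 += pair_similarity(first_feature(a), first_feature(b))   # S4, S6
            else:                                             # S2/S3: category 2 (label = 2)
                ga, pa = gradient_features(a)                 # S5: GA2, direction standing in for PA2
                gb, pb = gradient_features(b)                 # S5: GB2, direction standing in for PB2
                e2 += second_pair_similarity(ga, pa, gb, pb)                # S7
        return e1 + e2                                        # S8: E = E1 + E2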
As shown in fig. 8, the present invention further provides an image quality evaluation system based on gray scale characteristics, which includes a separation module 80, a classification module 81, an extraction module 82, and a calculation module 83.
The separation module 80 is configured to perform block processing on the reference image and the image to be evaluated and to divide them respectively into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking.
In this embodiment, the predetermined size is 100 × 100 mm. In other embodiments, the predetermined size may be 50 × 50mm, 80 × 80mm, or other values.
The calculating module 83 is configured to calculate a gray characteristic index of each of the first sub-block image and the second sub-block image.
Specifically, an arbitrary first sub-block image A_{n1}(x, y) is selected and its gray characteristic index is calculated; the gray characteristic index is obtained as follows:
Randomly select 10 points on A_{n1}(x, y), calculate the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculate the difference degree E of the 10 gray-level means. (The specific formula is given only as an image in the original publication.)
When the difference degree E < 15, set the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) to 1; otherwise, set label = 2.
The classification module 81 is configured to classify the first sub-block image and the second sub-block image into a first category and a second category according to the grayscale characteristic index.
Specifically, the first category is the first sub-block image set {A1_n(x, y) | n = 1, …, N1} whose gray characteristic index is label = 1 and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1.
Specifically, the second category is the first sub-block image set {A2_n(x, y) | n = 1, …, N2} whose gray characteristic index is label = 2 and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2.
the extracting module 82 is configured to extract a first feature of each first sub-block image in the first category and a second feature of each second sub-block image, respectively, and extract a third feature of each first sub-block image in the second category and a fourth feature of each second sub-block image, respectively.
The first feature, the second feature, the third feature and the fourth feature are specifically obtained as follows:
For any selected first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1}, the first feature is obtained as follows:
Divide A1_{n1}(x, y) into sixteen equal parts and count the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}; calculate the mean and variance of the first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}; concatenate v1_{n1} and v2_{n1} to obtain the first feature of A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}.
For any selected second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1}, the second feature is obtained in the same way as the first feature and is denoted VB1_{n1}.
For any selected first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2}, the third feature is obtained as follows:
Use the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y); use G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y); and calculate the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y). (The corresponding formulas are given only as images in the original publication, where * denotes the convolution operation.)
For any selected second sub-block image B2_{n2}(x, y) in {B2_n(x, y) | n = 1, …, N2}, the fourth feature is obtained in the same way as the third feature, finally yielding a second gradient magnitude GB2_{n2}(x, y) and the fourth feature PB2_{n2}(x, y).
The calculating module 83 is further configured to calculate a first similar indicator of the first category according to the first feature and the second feature, calculate a second similar indicator of the second category according to the third feature and the fourth feature, and calculate a final similarity between the reference image and the image to be evaluated according to the first similar indicator and the second similar indicator.
The first similarity index is obtained as follows:
Take any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, and obtain the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation; then sum all the E1_{n1} to obtain the first similarity index E1.
The second similarity index is obtained as follows:
Take any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, and obtain the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation; then sum all the E2_{n2} to obtain the second similarity index E2.
(The formulas for E1_{n1}, E1, E2_{n2} and E2 are given only as images in the original publication.)
The final similarity between the reference image and the image to be evaluated is obtained as follows:
The similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
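As a short usage illustration of the module structure of fig. 8, the synthetic arrays below stand in for a real reference/distorted image pair and drive the evaluate() sketch defined above; under the assumed stand-in formulas, a larger E means the sub-blocks of the two images are more similar.

    import numpy as np

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.integers(0, 256, size=(400, 400)).astype(np.float64)
        distorted = np.clip(reference + rng.normal(0.0, 10.0, reference.shape), 0, 255)

        score = evaluate(reference, distorted, block=100)  # final similarity E = E1 + E2
        print(f"similarity score E = {score:.3f}")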
In this document, the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", "vertical", "horizontal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for the purpose of clarity and convenience of description of the technical solutions, and thus, should not be construed as limiting the present invention.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An image quality evaluation method based on gray-scale characteristics, characterized by comprising the following steps: step S1: performing block processing on a reference image and an image to be evaluated, dividing them respectively into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking; step S2: calculating a gray characteristic index of each first sub-block image and each second sub-block image; step S3: dividing the first sub-block images and the second sub-block images into a first category and a second category according to the gray characteristic index; step S4: respectively extracting a first feature of each first sub-block image and a second feature of each second sub-block image in the first category; step S5: respectively extracting a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category; step S6: calculating a first similarity index of the first category according to the first feature and the second feature; step S7: calculating a second similarity index of the second category according to the third feature and the fourth feature; and step S8: calculating the final similarity between the reference image and the image to be evaluated according to the first similarity index and the second similarity index.
2. The image quality evaluation method based on gray-scale characteristics according to claim 1, wherein, for any selected first sub-block image A_{n1}(x, y), step S2 comprises: step S21: randomly selecting 10 points on A_{n1}(x, y), calculating the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculating the difference degree E of the 10 gray-level means (formula given only as an image in the original publication); and step S22: when the difference degree E < 15, setting the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) to 1; otherwise, setting label = 2.
3. The image quality evaluation method based on gray-scale characteristics according to claim 2, wherein, in step S3, the first category specifically consists of the first sub-block image set {A1_n(x, y) | n = 1, …, N1} whose gray characteristic index is label = 1 and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1; and the second category specifically consists of the first sub-block image set {A2_n(x, y) | n = 1, …, N2} whose gray characteristic index is label = 2 and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2.
4. The image quality evaluation method based on gray-scale characteristics according to claim 3, wherein, for any selected first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1}, step S4 comprises: step S41: dividing A1_{n1}(x, y) into sixteen equal parts and counting the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}; step S42: calculating the mean and variance of the first sub-block image and storing them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}; and step S43: concatenating v1_{n1} and v2_{n1} to obtain the first feature of the first sub-block image A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}; and wherein, for any selected second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1}, the second feature is calculated in the same way as in steps S41, S42 and S43 and the resulting second feature is denoted VB1_{n1}.
5. The image quality evaluation method based on gray-scale characteristics according to claim 4, wherein, for any selected first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2}, step S5 comprises: step S51: using the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y); step S52: using G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y); and step S53: calculating the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y) (the formulas of steps S51 to S53 are given only as images in the original publication, where * denotes the convolution operation); and wherein, for any selected second sub-block image B2_{n2}(x, y) in {B2_n(x, y) | n = 1, …, N2}, the fourth feature is calculated in the same way as in steps S51, S52 and S53, finally obtaining a second gradient magnitude GB2_{n2}(x, y) and the fourth feature PB2_{n2}(x, y).
6. The image quality evaluation method based on gray-scale characteristics according to claim 5, wherein step S6 comprises: step S61: taking any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, and obtaining the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation; and step S62: summing all the E1_{n1} to obtain the first similarity index E1 (the formulas are given only as images in the original publication).
7. The image quality evaluation method based on gray-scale characteristics according to claim 6, wherein step S7 comprises: step S71: taking any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, and obtaining the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation; and step S72: summing all the E2_{n2} to obtain the second similarity index E2 (the formulas are given only as images in the original publication).
8. The image quality evaluation method based on gray-scale characteristics according to claim 7, wherein step S8 is specifically: the similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
9. An image quality evaluation system based on gray-scale characteristics, characterized by comprising a separation module, a classification module, an extraction module and a calculation module, wherein the separation module is used to perform block processing on a reference image and an image to be evaluated, dividing them respectively into first sub-block images and second sub-block images of a preset size, denoted {A_n(x, y) | n = 1, …, N} and {B_n(x, y) | n = 1, …, N}, where N denotes the number of all sub-blocks after blocking; the calculation module is used to calculate a gray characteristic index of each first sub-block image and each second sub-block image; the classification module is used to divide the first sub-block images and the second sub-block images into a first category and a second category according to the gray characteristic index; the extraction module is used to extract a first feature of each first sub-block image and a second feature of each second sub-block image in the first category, and to extract a third feature of each first sub-block image and a fourth feature of each second sub-block image in the second category; and the calculation module is further used to calculate a first similarity index of the first category according to the first feature and the second feature, to calculate a second similarity index of the second category according to the third feature and the fourth feature, and to calculate the final similarity between the reference image and the image to be evaluated according to the first similarity index and the second similarity index.
10. The image quality evaluation system according to claim 9, wherein, for any selected first sub-block image A_{n1}(x, y), the gray characteristic index is obtained as follows: randomly select 10 points on A_{n1}(x, y), calculate the gray-level mean within the neighborhood of diameter 7 mm centered on each of the 10 points to obtain 10 gray-level means {α_i | i = 1, …, 10}, and calculate the difference degree E of the 10 gray-level means (formula given only as an image in the original publication); when the difference degree E < 15, the gray characteristic index label of the first sub-block image A_{n1}(x, y) and the corresponding second sub-block image B_{n1}(x, y) is set to 1; otherwise, label = 2; the first category specifically consists of the first sub-block image set {A1_n(x, y) | n = 1, …, N1} whose gray characteristic index is label = 1 and the corresponding second sub-block image set {B1_n(x, y) | n = 1, …, N1}, where N1 denotes the number of sub-blocks with label = 1; the second category specifically consists of the first sub-block image set {A2_n(x, y) | n = 1, …, N2} whose gray characteristic index is label = 2 and the corresponding second sub-block image set {B2_n(x, y) | n = 1, …, N2}, where N2 denotes the number of sub-blocks with label = 2; for any selected first sub-block image A1_{n1}(x, y) in {A1_n(x, y) | n = 1, …, N1}, the first feature is obtained as follows: divide A1_{n1}(x, y) into sixteen equal parts and count the gray-level histogram of the first sub-block image to obtain a sixteen-dimensional vector denoted v1_{n1}; calculate the mean and variance of the first sub-block image and store them as features, so that each first sub-block image yields a two-dimensional vector denoted v2_{n1}; and concatenate v1_{n1} and v2_{n1} to obtain the first feature of A1_{n1}(x, y), an eighteen-dimensional vector denoted VA1_{n1}; for any selected second sub-block image B1_{n1}(x, y) in {B1_n(x, y) | n = 1, …, N1}, the second feature, denoted VB1_{n1}, is obtained in the same way as the first feature; for any selected first sub-block image A2_{n2}(x, y) in {A2_n(x, y) | n = 1, …, N2}, the third feature is obtained as follows: use the Sobel operator to calculate the horizontal gradient information G1 and the vertical gradient information G2 of A2_{n2}(x, y); use G1 and G2 to calculate a first gradient magnitude GA2_{n2}(x, y) and a first gradient direction QA2_{n2}(x, y); and calculate the third feature PA2_{n2}(x, y) from the first gradient direction QA2_{n2}(x, y) (the corresponding formulas are given only as images in the original publication, where * denotes the convolution operation); for any selected second sub-block image B2_{n2}(x, y) in {B2_n(x, y) | n = 1, …, N2}, the fourth feature is obtained in the same way as the third feature, finally yielding a second gradient magnitude GB2_{n2}(x, y) and the fourth feature PB2_{n2}(x, y); the first similarity index is obtained as follows: take any A1_{n1}(x, y) and the B1_{n1}(x, y) corresponding to it, obtain the degree of similarity E1_{n1} of each pair of sub-block images, where sum denotes a summation operation, and sum all the E1_{n1} to obtain the first similarity index E1; the second similarity index is obtained as follows: take any A2_{n2}(x, y) and the B2_{n2}(x, y) corresponding to it, obtain the degree of similarity E2_{n2} of each pair of sub-block images, where sum denotes a summation operation and .* denotes a dot-product operation, and sum all the E2_{n2} to obtain the second similarity index E2 (the formulas for E1_{n1}, E1, E2_{n2} and E2 are given only as images in the original publication); and the final similarity between the reference image and the image to be evaluated is obtained as follows: the similarity E is the sum of the first similarity index and the second similarity index, the specific formula being E = E1 + E2.
CN202011314950.3A 2020-11-20 2020-11-20 Image quality evaluation method and system based on gray scale characteristics Active CN112330657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011314950.3A CN112330657B (en) 2020-11-20 2020-11-20 Image quality evaluation method and system based on gray scale characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011314950.3A CN112330657B (en) 2020-11-20 2020-11-20 Image quality evaluation method and system based on gray scale characteristics

Publications (2)

Publication Number Publication Date
CN112330657A true CN112330657A (en) 2021-02-05
CN112330657B CN112330657B (en) 2024-06-07

Family

ID=74322035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011314950.3A Active CN112330657B (en) 2020-11-20 2020-11-20 Image quality evaluation method and system based on gray scale characteristics

Country Status (1)

Country Link
CN (1) CN112330657B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030113021A1 (en) * 2001-11-16 2003-06-19 Hiroyuki Shiotani Image-quality determination method, Image-quality determination apparatus, Image-quality determination program
CN106709958A (en) * 2016-12-03 2017-05-24 浙江大学 Gray scale gradient and color histogram-based image quality evaluation method
CN108053393A (en) * 2017-12-08 2018-05-18 广东工业大学 A kind of gradient similarity graph image quality evaluation method and device
CN109325550A (en) * 2018-11-02 2019-02-12 武汉大学 Non-reference picture quality appraisement method based on image entropy
CN111598837A (en) * 2020-04-21 2020-08-28 中山大学 Full-reference image quality evaluation method and system suitable for visual two-dimensional code
CN111507426A (en) * 2020-04-30 2020-08-07 中国电子科技集团公司第三十八研究所 No-reference image quality grading evaluation method and device based on visual fusion characteristics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOHAMMED AHMED HASSAN, MAZEN SHEIKH BASHRAHEEL: "Color-based structural similarity image quality assessment", 2017 8th International Conference on Information Technology (ICIT), 17 May 2017, pages 691-696, XP033231421, DOI: 10.1109/ICITECH.2017.8079929 *
WANG Shengchun (王盛春): "No-reference image quality assessment based on visual perception", China Master's Theses Full-text Database, Information Science and Technology, no. 01, 15 January 2018, pages 1-70 *
DENG Jiehang (邓杰航) et al.: "Image quality assessment method of structural similarity based on extended gradient operators", Science Technology and Engineering, vol. 18, no. 27, 28 September 2018, pages 42-47 *

Also Published As

Publication number Publication date
CN112330657B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN105339951B (en) Method for detecting document boundaries
CN108052980B (en) Image-based air quality grade detection method
CN108109147B (en) No-reference quality evaluation method for blurred image
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN110059728B (en) RGB-D image visual saliency detection method based on attention model
CN110443800B (en) Video image quality evaluation method
CN112950596B (en) Tone mapping omnidirectional image quality evaluation method based on multiple areas and multiple levels
CN112184672A (en) No-reference image quality evaluation method and system
CN113343822B (en) Light field saliency target detection method based on 3D convolution
CN110706196B (en) Clustering perception-based no-reference tone mapping image quality evaluation algorithm
CN116205886A (en) Point cloud quality assessment method based on relative entropy
CN110910347B (en) Tone mapping image non-reference quality evaluation method based on image segmentation
CN108682005B (en) Semi-reference 3D synthetic image quality evaluation method based on covariance matrix characteristics
CN111047618A (en) Multi-scale-based non-reference screen content image quality evaluation method
CN112215266B (en) X-ray image contraband detection method based on small sample learning
Vora et al. Analysis of compressed image quality assessments, m
CN112070714B (en) Method for detecting flip image based on local ternary counting feature
CN112330657A (en) Image quality evaluation method and system based on gray level characteristics
CN111738099A (en) Face automatic detection method based on video image scene understanding
CN106558047A (en) Color image quality evaluation method based on complementary colours small echo
CN111444825A (en) Method for judging image scene by utilizing histogram
CN112950592B (en) Non-reference light field image quality evaluation method based on high-dimensional discrete cosine transform
CN111402189B (en) Video image color cast detection device and method
CN113936200A (en) Ammeter box image quality detection method based on mobile terminal
CN115795370B (en) Electronic digital information evidence obtaining method and system based on resampling trace

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant