CN111310774B - PM2.5 concentration measurement method based on image quality - Google Patents


Info

Publication number
CN111310774B
CN111310774B (application CN202010252858.2A)
Authority
CN
China
Prior art keywords: image, saturation, function, extracting, color
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number: CN202010252858.2A
Other languages: Chinese (zh)
Other versions: CN111310774A
Inventor
汤丽娟
孙克争
黄帅凤
韩燕
娄彩荣
Current Assignee
Nantong Daguo Intelligent Technology Co.,Ltd.
Original Assignee
Jiangsu Vocational College of Business
Application filed by Jiangsu Vocational College of Business
Priority to CN202010252858.2A
Publication of CN111310774A
Application granted
Publication of CN111310774B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00 Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
    • G01N15/06 Investigating concentration of particle suspensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G01N15/075

Abstract

The invention provides a PM2.5 concentration measuring method based on image quality, which uses the natural scene statistics of an image to measure its degree of distortion in three respects: the color, the contrast and the structure of the PM2.5 image. First, the hue, saturation and dark-channel features of the image are extracted to measure color distortion; second, because contrast markedly affects a PM2.5 image, contrast energy is extracted; finally, structural features are extracted from the natural scene statistics of both the local structure and the global structure. After these features are extracted, a model is trained with a random forest machine learning tool, and the corresponding PM2.5 concentration value is obtained for an input PM2.5 image. The method is simple to implement, efficient, and widely applicable to PM2.5 concentration detection in different settings.

Description

PM2.5 concentration measurement method based on image quality
Technical field:
The invention relates to the field of image quality evaluation, and in particular to a PM2.5 concentration measurement method based on image quality.
Background art:
In recent decades, rapid industrialization has brought convenience to people but has also caused many negative effects, such as environmental pollution, resource shortage and ecological destruction. Among these, pollution caused by humans or nature can exceed the self-purification capacity of the environment, allowing extremely dangerous substances to enter the living environment. Soil, water and atmospheric pollution are the three typical pollution problems. Compared with the former two, atmospheric pollution is the most dangerous and the most likely to alarm society as a whole, since the atmosphere is ubiquitous and polluted air damages human health.
The main causes of air pollution include harmful gases and particles emitted by vehicles, human activities such as steel making, oil refining and pharmaceutical manufacturing, and the combustion emissions of petroleum, coal, natural gas and the like. Six common air pollutants are NO2, SO2, O3, CO, PM2.5 (fine particulate matter) and PM10 (respirable particulate matter). The first four are gaseous pollutants which, above certain concentrations, readily cause respiratory inflammation and nervous system disorders. The remaining two are particles of small aerodynamic diameter, at most 2.5 μm and 10 μm respectively; PM2.5 is thus a constituent of PM10, with a smaller size index. Such fine particles easily invade the human lung and are difficult to clear, and long-term exposure to high concentrations of PM2.5 sharply increases public morbidity and mortality.
At present, methods for monitoring fine particulate matter PM2.5 mainly include physical methods such as a gravimetric method, a micro-oscillation balance method, a beta-ray method and the like, so that the method is only suitable for certain specific occasions and is difficult to be widely used in reality. Therefore, monitoring the concentration of PM2.5 and developing relevant detection methods are increasingly gaining importance.
Summary of the invention:
In order to solve the above problems, the invention provides a PM2.5 concentration measuring method based on image quality. Without any weather information, the color, contrast and structure features of a PM2.5 image are extracted based on natural scene statistics, quality evaluation is performed with a random forest, and the PM2.5 concentration value is obtained. The predicted value for a PM2.5 image is highly consistent with the actual PM2.5 value, so the PM2.5 concentration can be detected accurately.
In order to achieve the above object, the present invention provides a PM2.5 concentration measuring method based on image quality, comprising the steps of:
A. extracting features of the image to be measured in three respects, namely hue, saturation and the dark channel characteristic, by utilizing natural scene statistics, to measure the color distortion of the image;
B. extracting contrast energy of three color channels in an image to be measured to measure distortion of the image in the aspect of contrast;
C. extracting the local structure statistical characteristics of the image to be measured by utilizing the free energy and the linear relation between the free energy and the structural degradation model;
D. extracting the global structure statistical characteristics of the image to be measured by utilizing the generalized Gaussian distribution function, and measuring the structural distortion of the image in combination with the local structure statistical characteristics of step C;
E. performing regression training with a random forest machine learning tool on the relevant feature parameters extracted in steps A to D, and obtaining the PM2.5 concentration value of the image to be measured from the trained model.
Preferably, the method for extracting the hue features in step A includes:
converting the image to be measured from the RGB space to the opponent color space, wherein the red-green channel RG is obtained as:
RG = (R - G) / \sqrt{2}
and the yellow-blue channel as:
YB = (R + G - 2B) / \sqrt{6}
so that the hue of the dominant wavelength of the color signal is:
H = \arctan(YB / RG)
wherein R, G, B are the color values of the three color channels;
based on the difference between the content and the color in each image, the statistical characteristics of the image hue are described by the relative hue of the spatial domain, obtained from the angular difference of adjacent pixel hues:
\Delta Hue(i, j) = H(i, j) \ominus H(i, j + 1)
wherein \ominus is an angle-difference operator with value range [-\pi, \pi], and (i, j) represents a location in the image;
fitting the relative hue \Delta Hue of the image with a Cauchy distribution model, the probability density of the relative hue is obtained as:
p(\gamma_h) = \frac{1}{\pi \xi_h \left[ 1 + ((\gamma_h - \mu_h)/\xi_h)^2 \right]}
wherein \gamma_h represents the random variable, \mu_h represents the location parameter and \xi_h the scale parameter, both obtained by fitting the Cauchy distribution model;
simultaneously calculating the annular peak k_h of the input angle:
[formula image not reproduced]
wherein \theta_h is an angular random variable, and \eta is defined as:
[formula image not reproduced]
using the three features \mu_h, \xi_h and k_h in both the horizontal and the vertical direction yields six hue-based features, labeled f1, f2, f3, f4, f5, f6, respectively.
Preferably, the method for extracting the saturation feature in step A includes:
converting the image to be measured from the RGB space to the HSV color space, the saturation is calculated as:
S(m, n) = \frac{X(m, n) - Y(m, n)}{X(m, n)}
wherein X(m, n) is the maximum and Y(m, n) the minimum of the three channels R(m, n), G(m, n), B(m, n), and m, n represent the horizontal and vertical pixel indices, respectively;
calculating, based on the saturation S, the mean M(S) = mean(S) and the information entropy
E(S) = -\sum_{i, j} P(i, j) \log_2 P(i, j)
wherein mean is the mean function and P(i, j) is the saturation probability distribution;
and marking the extracted saturation mean value M (S) and the saturation information entropy E (S) as features { f7, f8 }.
Preferably, the dark channel characteristic in step A is the dark channel of saturation:
I_{dark}(S) = \min_{(x, y) \in \Omega(m, n)} S(x, y)
wherein min() is the minimum operator and \Omega(m, n) is a local neighborhood centered at (m, n);
the saturation dark channel I_{dark}(S) is labeled as feature f9.
Preferably, the contrast energy of the three color channels in step B is calculated as:
CE_f = \frac{a \cdot Y(I_f)}{Y(I_f) + a \cdot b} - \phi_f
wherein a is the maximum of Y(I_f), b controls the contrast gain, \phi_f is a threshold controlling the contrast noise, Y(I_f) = ((I_{k'} * f_h)^2 + (I_{k'} * f_v)^2)^{1/2}, I denotes the image signal, I_{k'} the image signal filtered in the k' direction, f_h and f_v the horizontal and vertical second derivatives of the Gaussian function, and f \in {GR, YB, RG} are the three channels of image I, with GR = 0.299R + 0.587G + 0.114B, YB = 0.5(R + G) - B and RG = R - G;
the three contrast-energy features C_{GR}, C_{YB}, C_{RG} are thus obtained, labeled f10, f11, f12, respectively.
Preferably, the local structural feature extraction in the step C includes free energy extraction and degradation model feature extraction;
the free energy is defined by the formula:
F(V) = -\int q(s|V) \log \frac{p(V, s)}{q(s|V)} \, ds
wherein V represents the visual signal, s is a parameter vector, and q(s|V) represents the posterior probability distribution;
for the image to be measured, the free energy represents the minimum of the energy and is therefore defined as
E(V) = \min F(V)
the change of a distorted image in the spatial frequency domain is described by a degradation model to capture the similarity between the distorted image and the original image; based on the linear relation between the degradation model and the free energy, the structural similarity is calculated through a two-dimensional circularly symmetric Gaussian weighting window w = { w(k, l) | k = -K, ..., K; l = -L, ..., L }, with (K, L) taking the values (1, 1), (3, 3) and (5, 5); three feature values are calculated from these three window sizes and, combined with the free energy E(V), form four local structural features, labeled f13, f14, f15 and f16.
Preferably, the extracting of the global structural feature in step D includes:
capturing the image distortion deviation by using a generalized Gaussian distribution function:
g(x; \mu, \alpha, \beta) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\!\left(-\left(\frac{|x - \mu|}{\beta}\right)^{\alpha}\right)
wherein \mu represents the mean, \alpha represents the shape parameter controlling the distribution of the Gaussian function, and \beta represents the scale parameter
\beta = \sigma \sqrt{\Gamma(1/\alpha) / \Gamma(3/\alpha)}
with the gamma function
\Gamma(t) = \int_0^{\infty} x^{t - 1} e^{-x} \, dx
and \sigma^2 the variance;
the zero-mean generalized Gaussian distribution function is:
g(x; \alpha, \beta) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\!\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right)
for the image to be measured, the generalized Gaussian function is fitted to the normalized luminance coefficients, and the fitted pair of values (\alpha, \sigma^2) represents the global structural characteristics, labeled f17, f18.
Preferably, in step E, the 18 extracted features, collected into the vector f = {f1, f2, f3, f4, ..., f18}, are trained with a random forest toolbox as the regression training tool; the objective function of the t-th decision tree at the i-th node during training is defined as:
[formula image not reproduced]
wherein T_i is a random number controlling the training of the i-th node, and G_i is defined as
[formula image not reproduced]
where P_i is the number of training samples of training node i, P_i^L and P_i^R represent the left and right diversity, respectively, and \eta_s is the conditional covariance matrix of a probabilistic linear fit;
the predicted PM2.5 concentration value \hat{y} is obtained by averaging the outputs of the T regression trees:
\hat{y} = \frac{1}{T} \sum_{t=1}^{T} y_t(f)
The beneficial effects of the invention are: compared with current mainstream quality evaluation algorithms, the method extracts color, contrast and structure features based on natural scene statistics tailored to the characteristics of PM2.5 images, matches the real influence of PM2.5 concentration on image quality, and is more effective than conventional quality evaluation algorithms; compared with traditional physical PM2.5 detection methods it is simple and efficient, and can be widely applied to PM2.5 concentration detection in different settings.
Description of the drawings:
FIG. 1 is a schematic block diagram of a PM2.5 concentration measurement method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for measuring PM2.5 concentration in an embodiment of the present invention;
FIG. 3 is a saturation probability distribution graph of a PM2.5 concentration image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of training of a random forest toolbox according to an embodiment of the present invention.
The specific implementation mode is as follows:
the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1-2, a PM2.5 concentration measuring method based on image quality includes the following steps:
A. extracting features of the image to be measured in three respects, namely hue, saturation and the dark channel characteristic, by utilizing natural scene statistics, to measure the color distortion of the image;
B. extracting contrast energy of three color channels in an image to be measured to measure distortion of the image in the aspect of contrast;
C. extracting the local structure statistical characteristics of the image to be measured by utilizing the free energy and the linear relation between the free energy and the structural degradation model, describing the statistical distortion of the local structure of the image;
D. extracting the global structure statistical characteristics of the image to be detected by utilizing the generalized Gaussian distribution function, and describing the distortion of the global natural structure of the image;
E. performing regression training with a random forest toolbox on the relevant feature parameters extracted in steps A to D, and obtaining the predicted PM2.5 concentration value of the image to be measured from the trained model.
The color of an image comprises three parts: hue, saturation and the dark channel. Hue plays a very important role in image quality evaluation, because hue distortion has a strong effect on perceived visual quality. Observing the joint probability distribution map of adjacent pixels shows that the joint density of high-fidelity natural images is concentrated on the diagonal axis, indicating that the hue values of two adjacent pixels are highly correlated. When an image is distorted, this joint distribution is altered, so the change in image quality can be measured in terms of hue.
To calculate hue, the opponent color space is used to decorrelate the color channels of the RGB color space. The image is converted from the RGB space to the opponent color space, the red-green channel being:
RG = (R - G) / \sqrt{2}
and the yellow-blue channel:
YB = (R + G - 2B) / \sqrt{6}
so that the hue of the dominant wavelength of the color signal is:
H = \arctan(YB / RG)
wherein R, G, B are the color values of the three color channels;
because the content and the color differ from image to image, the relative hue of the spatial domain is used to describe the statistical properties of the image hue; it is obtained from the angular difference of adjacent pixel hues:
\Delta Hue(i, j) = H(i, j) \ominus H(i, j + 1)
wherein (i, j) represents a position in the image and \ominus is an angle-difference operator with value range [-\pi, \pi];
Based on experiments, the relative hue of a natural image follows a unimodal circular distribution, so a Cauchy distribution model is used to fit the relative hue \Delta Hue of the image to be measured, and the probability density of the relative hue is obtained as:
p(\gamma_h) = \frac{1}{\pi \xi_h \left[ 1 + ((\gamma_h - \mu_h)/\xi_h)^2 \right]}
wherein \gamma_h represents the random variable (a known quantity), \mu_h represents the location parameter and \xi_h the scale parameter, both obtained by fitting the Cauchy distribution model;
simultaneously calculating the annular peak k_h of the input angle:
[formula image not reproduced]
wherein \theta_h is an angular random variable, and \eta is defined as:
[formula image not reproduced]
The three features \mu_h, \xi_h and k_h measure the color distortion of the image; computing them in both the horizontal and the vertical direction yields six hue-based features \mu_{h1}, \xi_{h1}, k_{h1}, \mu_{h2}, \xi_{h2}, k_{h2}, labeled f1, f2, f3, f4, f5, f6, respectively.
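The hue statistics above can be sketched in a few lines of Python. This is an illustrative approximation, not the patented implementation: the opponent-space transform is the standard RG = (R - G)/√2, YB = (R + G - 2B)/√6 form, the Cauchy location and scale parameters are replaced by robust proxies (median and half-interquartile range), and the annular peak is replaced by a circular-concentration proxy, since the patent's own formulas are given only as images.

```python
import numpy as np

def hue_features(rgb):
    """Six hue-based features (f1-f6): location, scale and peakedness of the
    relative-hue distribution, in the horizontal and vertical directions."""
    R, G, B = (rgb[..., c].astype(float) for c in range(3))
    RG = (R - G) / np.sqrt(2.0)            # red-green opponent channel (assumed form)
    YB = (R + G - 2.0 * B) / np.sqrt(6.0)  # yellow-blue opponent channel (assumed form)
    H = np.arctan2(YB, RG)                 # hue angle of the dominant wavelength

    feats = []
    for diff in (np.diff(H, axis=1), np.diff(H, axis=0)):  # horizontal, vertical
        d = np.angle(np.exp(1j * diff)).ravel()  # angle difference wrapped to [-pi, pi]
        mu = np.median(d)                        # proxy for the Cauchy location mu_h
        xi = 0.5 * np.subtract(*np.percentile(d, [75, 25]))  # proxy for the scale xi_h
        k = np.abs(np.mean(np.exp(1j * d)))      # circular concentration, proxy for k_h
        feats += [mu, xi, k]
    return feats  # [f1, f2, f3, f4, f5, f6]
```

The intuition in the text is that a clean natural image has a tightly peaked relative-hue distribution (concentration near 1), which distortion spreads out.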
For the extraction of the saturation feature, experiments show that the HSV color space reflects the change of PM2.5 image saturation with the PM2.5 concentration value more effectively than the RGB color space (as shown in fig. 3). The image to be measured is converted from the RGB space to the HSV color space, and the saturation is calculated as:
S(m, n) = \frac{X(m, n) - Y(m, n)}{X(m, n)}
wherein X(m, n) is the maximum and Y(m, n) the minimum of the three channels R(m, n), G(m, n), B(m, n), and m, n represent the horizontal and vertical pixel indices, respectively.
Then the mean and the information entropy of the saturation are calculated, wherein:
the mean of the saturation S is M(S) = mean(S);
the information entropy of the saturation S is
E(S) = -\sum_{i, j} P(i, j) \log_2 P(i, j)
wherein mean is the mean function and P(i, j) is the saturation probability distribution;
the extracted saturation mean M(S) and saturation information entropy E(S) are labeled as features {f7, f8}.
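A minimal sketch of the two saturation features, assuming the standard HSV saturation S = (max - min)/max and a 64-bin histogram for the probability distribution P (the text does not state a bin count, so that choice is illustrative):

```python
import numpy as np

def saturation_features(rgb, bins=64):
    """Saturation mean M(S) (feature f7) and information entropy E(S) (f8)."""
    rgb = rgb.astype(float)
    X = rgb.max(axis=2)   # X(m, n): per-pixel maximum of R, G, B
    Y = rgb.min(axis=2)   # Y(m, n): per-pixel minimum of R, G, B
    S = np.where(X > 0, (X - Y) / np.where(X > 0, X, 1.0), 0.0)  # HSV saturation
    mean_s = S.mean()     # M(S)
    p, _ = np.histogram(S, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()       # empirical saturation probability distribution P
    p = p[p > 0]
    entropy_s = -(p * np.log2(p)).sum()  # E(S)
    return mean_s, entropy_s
```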
In image processing, a high-quality image exhibits a distinct dark channel. Since the dark-channel property of an image is affected by the PM2.5 concentration, the dark channel of saturation is defined as:
I_{dark}(S) = \min_{(x, y) \in \Omega(m, n)} S(x, y)
where min() is the minimum operator and \Omega(m, n) is a local neighborhood centered at (m, n); the saturation dark channel I_{dark}(S) is labeled as feature f9.
Among the contrast characteristics, contrast plays a very important role in the human eye's perception of PM2.5 images, and the invention describes PM2.5 images by extracting contrast energy. The image is decomposed with Gaussian second-derivative filters, the filter responses are rectified and adjusted, and divisive normalization, the nonlinear contrast-gain control of the visual cortex, is applied.
The contrast energy of the three color channels is calculated as:
CE_f = \frac{a \cdot Y(I_f)}{Y(I_f) + a \cdot b} - \phi_f
wherein a is the maximum of Y(I_f), b controls the contrast gain, \phi_f is a threshold controlling the contrast noise, Y(I_f) = ((I_{k'} * f_h)^2 + (I_{k'} * f_v)^2)^{1/2}, I denotes the image signal, I_{k'} the image signal filtered in the k' direction, f_h and f_v the horizontal and vertical second derivatives of the Gaussian function, and f \in {GR, YB, RG} are the three channels of image I, with GR = 0.299R + 0.587G + 0.114B, YB = 0.5(R + G) - B and RG = R - G;
the three contrast-energy features C_{GR}, C_{YB}, C_{RG} are thus obtained, labeled f10, f11, f12, respectively.
The structural characteristics of the image comprise local and global structural characteristics; the local structural characteristics are extracted on the basis of the linear relation existing between the free energy and the degradation model.
First, a parametric internal generative model is used to infer the input signal through parameter adjustment, with the joint distribution function defined as:
-\log p(V) = -\log \int p(V, s) \, ds
where V is the given visual signal and s is the parameter vector. For simplicity of calculation, an auxiliary term is introduced into the numerator and denominator on the right of the formula, and the joint probability distribution function is rewritten as:
-\log p(V) = -\log \int q(s|V) \frac{p(V, s)}{q(s|V)} \, ds
wherein q(s|V) represents the posterior probability distribution. Applying the Jensen inequality to this formula yields:
-\log p(V) \le -\int q(s|V) \log \frac{p(V, s)}{q(s|V)} \, ds
The right side of the inequality is defined as the free energy, i.e.:
F(V) = -\int q(s|V) \log \frac{p(V, s)}{q(s|V)} \, ds
For the image to be measured, the free energy represents the minimum of the energy and is therefore defined as
E(V) = \min F(V)
The change of a distorted image in the spatial frequency domain is described by a degradation model to capture the similarity between the distorted image and the original image; based on the linear relation between the degradation model and the free energy, the structural similarity is calculated through a two-dimensional circularly symmetric Gaussian weighting window w = { w(k, l) | k = -K, ..., K; l = -L, ..., L }, with (K, L) taking the values (1, 1), (3, 3) and (5, 5); three feature values are calculated from these three window sizes and, combined with the free energy E(V), form four local structural features, labeled f13, f14, f15 and f16.
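The two-dimensional circularly symmetric Gaussian weighting window can be sketched as follows. Whether (K, L) = (1, 1) denotes the half-width (giving a 3x3 window, as in SSIM-style structural comparisons) and which sigma is used are not recoverable from the formula image, so both choices below are assumptions:

```python
import numpy as np

def gaussian_window(K, sigma=None):
    """Unit-normalized circularly symmetric Gaussian weights w(k, l),
    k, l = -K, ..., K. sigma defaults to K/2 (an assumption)."""
    if sigma is None:
        sigma = max(K / 2.0, 0.5)
    ax = np.arange(-K, K + 1)
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

# the three window sizes used for the local structural features
windows = [gaussian_window(K) for K in (1, 3, 5)]
```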
As for the global structure characteristics: based on the homogeneous natural-scene statistics of natural images, the normalized luminance coefficients (MSCN) of a high-quality image follow a Gaussian distribution; image distortion destroys this statistical regularity, and the deviation can be captured by a generalized Gaussian distribution function:
g(x; \mu, \alpha, \beta) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\!\left(-\left(\frac{|x - \mu|}{\beta}\right)^{\alpha}\right)
wherein \mu represents the mean, \alpha represents the shape parameter controlling the distribution of the Gaussian function, and \beta represents the scale parameter
\beta = \sigma \sqrt{\Gamma(1/\alpha) / \Gamma(3/\alpha)}
with the gamma function
\Gamma(t) = \int_0^{\infty} x^{t - 1} e^{-x} \, dx
and \sigma^2 the variance;
the zero-mean generalized Gaussian distribution function is:
g(x; \alpha, \beta) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\!\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right)
For the image to be measured, the zero-mean generalized Gaussian function is fitted to the normalized luminance coefficients, and the fitted pair of values (\alpha, \sigma^2) represents the global structural characteristics, labeled f17, f18.
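The global-structure fit can be sketched with the standard BRISQUE-style pipeline: compute mean-subtracted contrast-normalized (MSCN) coefficients, then estimate the GGD shape alpha by moment matching. The Gaussian weighting sigma = 7/6 and the moment-matching estimator are conventional choices assumed here, not taken from the patent:

```python
import numpy as np
from scipy import ndimage
from scipy.special import gamma

def ggd_features(img, sigma=7.0 / 6.0):
    """Global structural features (f17, f18): shape alpha and variance sigma^2
    of a zero-mean generalized Gaussian fitted to the MSCN coefficients."""
    img = img.astype(float)
    mu = ndimage.gaussian_filter(img, sigma)                  # local mean
    var = ndimage.gaussian_filter(img ** 2, sigma) - mu ** 2  # local variance
    mscn = (img - mu) / (np.sqrt(np.clip(var, 0.0, None)) + 1.0)

    x = mscn.ravel()
    sigma2 = x.var()                                  # feature f18
    rho = sigma2 / (np.mean(np.abs(x)) ** 2 + 1e-12)  # generalized Gaussian ratio
    alphas = np.arange(0.2, 10.0, 0.001)              # candidate shape parameters
    r = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]         # feature f17 (moment matching)
    return alpha, sigma2
```

For a truly Gaussian field the fitted shape parameter comes out near 2, matching the statement that high-quality images have Gaussian-distributed MSCN coefficients.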
As shown in fig. 4, after the 18 image features have been obtained, the extracted feature vector f = {f1, f2, f3, f4, ..., f18} is trained with a random forest toolbox as the regression training tool; the objective function of the t-th decision tree at the i-th node during training is defined as:
[formula image not reproduced]
wherein T_i is a random number controlling the training of the i-th node, and G_i is defined as
[formula image not reproduced]
where P_i is the number of training samples of training node i, P_i^L and P_i^R represent the left and right diversity, respectively, and \eta_s is the conditional covariance matrix of a probabilistic linear fit;
the predicted PM2.5 concentration value \hat{y} is obtained by averaging the outputs of the T regression trees:
\hat{y} = \frac{1}{T} \sum_{t=1}^{T} y_t(f)
In order to verify the effectiveness of the method in detecting PM2.5 concentration values, the method was tested on a PM2.5 image database against general-purpose image quality evaluation algorithms, contrast-oriented quality evaluation algorithms and sharpness-oriented quality evaluation algorithms. The performance indices of the evaluation are: 1) the Pearson linear correlation coefficient (PLCC), quantifying the accuracy of an evaluation algorithm; 2) the root mean square error (RMSE), the standard deviation after nonlinear regression, quantifying the consistency of an evaluation algorithm; 3) the Spearman rank correlation coefficient (SRCC), measuring the monotonicity of an evaluation algorithm; and 4) the Kendall rank correlation coefficient (KRCC), also measuring monotonicity. A smaller RMSE indicates better performance, while larger PLCC, SRCC and KRCC values indicate better performance.
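The four performance indices can be computed directly with SciPy; a minimal sketch (the nonlinear regression usually applied before PLCC/RMSE in quality-assessment studies is omitted here):

```python
import numpy as np
from scipy import stats

def evaluation_metrics(predicted, actual):
    """PLCC, SRCC, KRCC and RMSE between predicted and actual PM2.5 values."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    plcc = stats.pearsonr(predicted, actual)[0]    # accuracy
    srcc = stats.spearmanr(predicted, actual)[0]   # monotonicity
    krcc = stats.kendalltau(predicted, actual)[0]  # monotonicity
    rmse = float(np.sqrt(np.mean((predicted - actual) ** 2)))  # consistency
    return plcc, srcc, krcc, rmse
```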
The experiments used the AQID image database, which contains 750 images with different PM2.5 concentrations; the PM2.5 concentration values of the images come from the monitoring data of the U.S. Embassy in Beijing, and the concentration values across the database range from 1 to 423 μg/m³, higher values representing poorer air quality.
The following are the specific comparative details.
The image quality is evaluated using the natural scene statistics of natural high-quality images in the spatial domain, a method denoted NIQE; 23 visual features are extracted using the free energy and the human visual system to evaluate image quality, denoted NFERM; the image quality is evaluated using natural scene statistics and local sharpness, denoted BQIC. The following experiment compares the method of the invention with these three general image quality evaluation methods on the AQID image database, with the results shown in Table 1:
TABLE 1. Experimental results of the method of the invention and general image quality evaluation algorithms on the AQID image database

Evaluation index | NIQE    | NFERM   | BQIC    | Method of the invention
PLCC             | 0.0773  | 0.2020  | 0.5248  | 0.8082
SRCC             | 0.0382  | 0.1726  | 0.5037  | 0.8177
KRCC             | 0.0251  | 0.1147  | 0.3474  | 0.6115
RMSE             | 87.5930 | 86.8364 | 74.2096 | 51.5973
The image sharpness is evaluated with discrete Chebyshev moments, a method denoted BIBLE; the wavelet sub-band energies are computed with the discrete wavelet transform and weighted to obtain a quality score, denoted FISH; the image sharpness is evaluated from the energy and contrast difference of local autoregressive coefficients, denoted ARISM. The method of the invention was compared with these three general sharpness-oriented quality evaluation methods on the AQID image database, with the results shown in Table 2:

TABLE 2. Experimental results of the method of the invention and sharpness quality evaluation algorithms on the AQID image database

Evaluation index | BIBLE   | FISH    | ARISM   | Method of the invention
PLCC             | 0.1250  | 0.4687  | 0.2990  | 0.8082
SRCC             | 0.0802  | 0.4106  | 0.2192  | 0.8177
KRCC             | 0.0537  | 0.2784  | 0.1472  | 0.6115
RMSE             | 87.1671 | 77.6077 | 83.8364 | 51.5973
Evaluating image contrast using features such as mean, variance, information entropy, peak value and skewness, denoted CDIQA; evaluating image quality using features such as sharpness, brightness, color and naturalness, denoted BIQME; evaluating image contrast based on maximum information content, denoted NIQMC. The following experiment compares the method of the present invention with these three mainstream contrast evaluation methods on the AQID image database; the results are shown in Table 3:
table 3 experimental results of the method of the present invention and the image contrast quality evaluation algorithm in AQID image database
(The values of Table 3 are published only as an image in the original document and are not reproduced here.)
As can be seen from Tables 1, 2 and 3, the present invention performs best on the AQID image database against the mainstream sharpness and contrast evaluation methods as well as the general-purpose image quality evaluation methods: its RMSE is lower than that of every comparison algorithm, and its PLCC, SRCC and KRCC values are significantly higher, indicating very high accuracy and stability in evaluating image quality.
Detecting the PM2.5 concentration value by extracting the gradient similarity of the image and the distribution shape of the saturation map, denoted Yue; detecting the PM2.5 value by analyzing the probability distribution of saturation in the non-salient regions of the PM2.5 image, denoted IPPS. The following experiment compares the method of the present invention with these two mainstream PM2.5 concentration detection methods on the AQID image database; the results are shown in Table 4:
Table 4 experimental results of the method of the present invention and the PM2.5 quality evaluation algorithm in AQID image database
Evaluation index Yue method IPPS method The method of the invention
PLCC - 0.8011 0.8082
SRCC 0.7823 - 0.8177
KRCC 0.5809 0.6102 0.6115
RMSE 57.6900 52.200 51.5973
As can be seen from Table 4, the method of the present invention is superior to the algorithms specifically designed for PM2.5 prediction in both accuracy and consistency.
According to the recommendation of the international Video Quality Experts Group, objective evaluation scores and subjective scores exhibit a nonlinear relationship, so the invention applies a five-parameter nonlinear regression to map the predicted scores to the true PM2.5 values:

f(s) = β1 · (1/2 − 1/(1 + exp(β2 · (s − β3)))) + β4 · s + β5

where s represents the predicted PM2.5 concentration value, and the optimal parameters β1, β2, β3, β4 and β5 are selected so that the error between f(s) and the actual PM2.5 concentration value is minimized.
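The five-parameter nonlinear regression can be sketched as follows, assuming the standard VQEG-style logistic form; the initial parameter guesses and function names are illustrative assumptions:

```python
# Sketch: five-parameter logistic regression mapping objective scores to
# PM2.5 values, fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic5(s, b1, b2, b3, b4, b5):
    # VQEG-style five-parameter nonlinear mapping.
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (s - b3)))) + b4 * s + b5

def fit_logistic5(scores, pm25):
    """Fit the five parameters so f(scores) approximates pm25."""
    scores = np.asarray(scores, dtype=float)
    pm25 = np.asarray(pm25, dtype=float)
    # Generic initial guesses (an assumption; the patent does not specify them).
    p0 = [np.ptp(pm25), 1.0, np.mean(scores), 0.1, np.mean(pm25)]
    params, _ = curve_fit(logistic5, scores, pm25, p0=p0, maxfev=20000)
    return params
```

After fitting, PLCC and RMSE are conventionally computed between f(s) and the true values, which matches how the tables above are usually produced.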
To verify the contribution of the three types of feature information used in the algorithm of the present invention, the performance of each type of feature on the AQID image database was tested separately, together with the performance of the combined three-feature set; see Table 5:
table 5 experimental results in AQID image database of three features constituting the method of the invention
Evaluation index Color information Contrast information Structural information Method of the invention
PLCC 0.7922 0.6100 0.4750 0.8082
SRCC 0.7957 0.5843 0.4420 0.8177
KRCC 0.5912 0.4085 0.3041 0.6115
RMSE 52.9947 69.2386 76.7039 51.5973
As can be seen from Table 5, combining the three types of feature information is necessary: the combined method outperforms any single type of feature on every evaluation index.
The following compares the machine learning method used in the present invention (random forest) with other machine learning methods (support vector regression and random subspace); the results are shown in Table 6:
table 6 experimental results of AQID image database trained using different machine learning methods
Evaluation index Support vector regression Random subspace Random forest
PLCC 0.7873 0.7762 0.8033
SRCC 0.7870 0.7983 0.8134
KRCC 0.5876 0.5852 0.6085
RMSE 53.4563 74.3541 52.3189
As can be seen from Table 6, the differences among the machine learning tools on AQID justify the choice of random forest in the method of the present invention, which achieves the highest accuracy and consistency.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A PM2.5 concentration measurement method based on image quality is characterized by comprising the following steps:
A. extracting hue, saturation and dark-channel features of the image to be measured using natural-scene statistical characteristics, so as to measure the color distortion of the image; the method for extracting the hue features comprises the following steps:
converting the image to be measured from RGB space to an opponent color space, wherein the red-green channel is computed as:

RG = R − G

and the yellow-blue channel as:

YB = 0.5(R + G) − B
therefore, the hue of the dominant wavelength of the color signal is:

H = arctan(YB / RG)
wherein R, G, B are color values of three color channels;
based on the difference between content and color in each image, the statistical characteristics of image hue are described by the relative hue in the spatial domain, obtained from the angular difference of hues at adjacent pixels:

ΔHue(i, j) = H(i, j) ⊖ H(i, j+1)

wherein ⊖ is the angular difference operator with value range [−π, π], and (i, j) represents a location in the image;
fitting the relative hue ΔHue of the image with a Cauchy distribution model, the probability density of the relative hue being:

p(γ_h) = (1/π) · ξ_h / ((γ_h − μ_h)² + ξ_h²)

wherein γ_h is the random variable, μ_h is the location parameter and ξ_h is the scale parameter, both obtained by fitting the Cauchy distribution model;
simultaneously calculating the circular kurtosis k_h of the input angles:

k_h = E[cos 2(θ_h − η)]

wherein θ_h is the angular random variable and η is the mean direction, defined as:

η = arctan(E[sin θ_h] / E[cos θ_h])

using μ_h, ξ_h and k_h in both the horizontal and vertical directions yields six hue-based features, labeled f1, f2, f3, f4, f5, f6;
B. extracting contrast energy of three color channels in an image to be measured to measure distortion of the image in the aspect of contrast;
C. extracting local structural statistical features of the image to be measured using the free energy and its linear relationship with the structural degradation model;
D. extracting the global structural statistical features of the image to be measured using a generalized Gaussian distribution function, and measuring the structural distortion of the image in combination with the local structural statistical features of step C;
E. performing regression training with a random forest machine learning tool on the feature parameters extracted in steps A to D, and obtaining the PM2.5 concentration value of the image to be measured from the trained model.
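The hue feature extraction of step A can be sketched as follows, assuming the opponent channels RG = R − G and YB = 0.5(R + G) − B defined in the later claims; the helper names, the horizontal-only relative hue, and the use of SciPy's built-in Cauchy fit are illustrative assumptions, not the patent's implementation:

```python
# Sketch: hue map, relative hue and Cauchy-fit parameters (mu_h, xi_h)
# corresponding to the hue features of step A.
import numpy as np
from scipy import stats

def hue_map(rgb):
    """Hue of the dominant wavelength from opponent colour channels."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    RG = R - G
    YB = 0.5 * (R + G) - B
    return np.arctan2(YB, RG)            # angle in [-pi, pi]

def relative_hue(H):
    """Angular difference of horizontally adjacent pixels, wrapped to [-pi, pi]."""
    d = H[:, 1:] - H[:, :-1]
    return np.arctan2(np.sin(d), np.cos(d))

def cauchy_features(delta_hue):
    """Location mu_h and scale xi_h from a Cauchy (Lorentz) fit."""
    mu_h, xi_h = stats.cauchy.fit(np.ravel(delta_hue))
    return mu_h, xi_h
```

A vertical relative hue (differencing along the other axis) would be computed analogously, giving the second triple of features.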
2. An image quality-based PM2.5 concentration measurement method according to claim 1, characterized in that: the method for extracting the saturation characteristic in the step A comprises the following steps:
converting the image to be measured from RGB space to the HSV color space, the saturation being computed as:

S(m, n) = (X(m, n) − Y(m, n)) / X(m, n)

wherein X(m, n) is the maximum of the three channels R(m, n), G(m, n), B(m, n); Y(m, n) is the minimum of the three channels; and m, n denote the horizontal and vertical pixel indices, respectively;
calculating the mean M(S) = mean(S) and the information entropy E(S) based on the saturation S:

E(S) = −Σ P(i, j) · log₂ P(i, j)

wherein mean is the averaging function and P(i, j) is the saturation probability distribution;
the extracted saturation mean M(S) and saturation information entropy E(S) are labeled as features f7, f8.
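A minimal sketch of the saturation mean and entropy features f7 and f8 of claim 2; the 256-bin histogram used to estimate the saturation probability distribution is an assumed implementation detail:

```python
# Sketch: HSV-style saturation map, its mean (f7) and its entropy (f8).
import numpy as np

def saturation_map(rgb):
    """Saturation S = (max - min) / max per pixel, 0 where max is 0."""
    X = rgb.max(axis=-1)
    Y = rgb.min(axis=-1)
    return np.where(X > 0, (X - Y) / np.where(X > 0, X, 1.0), 0.0)

def saturation_features(S, bins=256):
    m = S.mean()                                  # f7: saturation mean
    hist, _ = np.histogram(S, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                         # empirical distribution
    p = p[p > 0]
    e = -np.sum(p * np.log2(p))                   # f8: saturation entropy
    return m, e
```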
3. An image quality-based PM2.5 concentration measurement method according to claim 2, characterized in that: the dark channel characteristic in the step A is a saturation dark channel, and the formula is as follows:
I_dark(S) = min_{(m,n)∈Ω} S(m, n)

wherein min() is the minimum operator, taken over a local window Ω;
the saturation dark channel I_dark(S) is labeled as feature f9.
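The saturation dark channel of claim 3 can be sketched as a local-minimum filtering of the saturation map; the window size below is an assumption (a common choice for dark-channel computation; the claim does not state one):

```python
# Sketch: saturation dark channel via local-minimum filtering (feature f9).
import numpy as np
from scipy.ndimage import minimum_filter

def saturation_dark_channel(S, window=15):
    """Minimum of the saturation map S over a local square window."""
    return minimum_filter(S, size=window, mode='nearest')
```

A scalar feature could then be taken from this map, e.g. its mean; how the patent pools the dark channel into f9 is not specified here.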
4. An image quality-based PM2.5 concentration measurement method according to claim 3, characterized in that: the contrast energy calculation formula of the three color channels in the step B is as follows:
CE_f = (a · Y(I_f)) / (Y(I_f) + a · b) − φ_f

wherein a is the maximum value of Y(I_f), b controls the contrast gain, φ_f is a threshold controlling contrast noise, and Y(I_f) = ((I_{k′} ∗ f_h)² + (I_{k′} ∗ f_v)²)^{1/2}, where I denotes the image signal, I_{k′} is the image signal filtered in the k′ direction, and f_h and f_v are the horizontal and vertical second derivatives of the Gaussian function, respectively; f ∈ {GR, YB, RG} are the three channels of image I, with GR = 0.299R + 0.587G + 0.114B, YB = 0.5(R + G) − B and RG = R − G;
the three contrast-energy features C_GR, C_YB and C_RG are thus obtained, labeled f10, f11 and f12, respectively.
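The contrast energy of claim 4 can be sketched for one opponent channel as follows; the Gaussian scale sigma, the constants b and phi, and pooling by the mean are illustrative values and choices, not taken from the patent:

```python
# Sketch: contrast energy of one colour-opponent channel using Gaussian
# second derivatives, with a Naka-Rushton style gain saturation.
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_energy(channel, sigma=3.0, b=0.1, phi=0.01):
    ih = gaussian_filter(channel, sigma, order=(0, 2))  # horizontal 2nd deriv.
    iv = gaussian_filter(channel, sigma, order=(2, 0))  # vertical 2nd deriv.
    y = np.sqrt(ih ** 2 + iv ** 2)
    a = y.max()                        # gain saturation constant
    ce = (a * y) / (y + a * b) - phi   # saturating contrast response
    return ce.mean()

def opponent_channels(rgb):
    """GR, YB, RG channels as defined in the claim."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gr = 0.299 * R + 0.587 * G + 0.114 * B
    yb = 0.5 * (R + G) - B
    rg = R - G
    return gr, yb, rg
```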
5. An image quality-based PM2.5 concentration measurement method according to claim 4, characterized in that: c, extracting local structural features, including free energy extraction and degradation model feature extraction;
the free energy is defined as:

F(V) = −∫ q(s|V) · log [p(s, V) / q(s|V)] ds

where V represents the visual signal, s is a parameter vector, and q(s|V) is the posterior probability distribution; for the image to be measured, the free energy is taken at its minimum, so the feature is defined as E(V) = min F(V);
describing the change of the distorted image in the spatial frequency domain with a degradation model to capture its similarity to the original image; based on the linear relationship between the degradation model and the free energy, the structural similarity is computed with a two-dimensional circularly symmetric Gaussian weighting function W = {W(k, l) | k = −K, …, K; l = −L, …, L}, where (K, L) takes the values (1, 1), (3, 3) and (5, 5); the three resulting values, combined with the free energy E(V), form four local structural features, labeled f13, f14, f15 and f16.
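The two-dimensional circularly symmetric Gaussian weighting function of claim 5 can be sketched as follows; normalising the weights to sum to one and the choice of sigma are assumptions, since the claim gives only the window extents:

```python
# Sketch: (2K+1) x (2K+1) circularly symmetric Gaussian weights W(k, l)
# for the three window extents (K, L) = (1,1), (3,3), (5,5) in the claim.
import numpy as np

def gaussian_window(K, sigma=None):
    if sigma is None:
        sigma = K / 2.0 if K > 0 else 0.5   # assumed scale, tied to extent
    k = np.arange(-K, K + 1)
    kk, ll = np.meshgrid(k, k, indexing='ij')
    W = np.exp(-(kk ** 2 + ll ** 2) / (2.0 * sigma ** 2))
    return W / W.sum()                      # normalise weights to sum to 1

windows = [gaussian_window(K) for K in (1, 3, 5)]
```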
6. An image quality-based PM2.5 concentration measurement method according to claim 5, characterized in that: the step D of extracting the global structural features comprises the following steps:
capturing the deviation of image distortion with a generalized Gaussian distribution function:

g(x; μ, α, β) = [α / (2β · Γ(1/α))] · exp(−(|x − μ| / β)^α)

wherein μ is the mean, α is the shape parameter controlling the shape of the distribution, and β is the scale parameter,

β = σ · (Γ(1/α) / Γ(3/α))^{1/2}

where Γ(·) is the gamma function and σ² is the variance;
the zero-mean generalized Gaussian distribution function is obtained by setting μ = 0:

g(x; α, β) = [α / (2β · Γ(1/α))] · exp(−(|x| / β)^α)
for the image to be measured, the generalized Gaussian function is fitted to the normalized luminance coefficients, and the fitted pair of values (α, σ²) represents the global structural features, labeled f17, f18.
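Fitting the zero-mean generalized Gaussian of claim 6 is commonly done by moment matching rather than direct optimisation; the sketch below uses that standard approach (the search grid for alpha is an implementation choice, not from the patent):

```python
# Sketch: estimating (alpha, sigma^2) of a zero-mean generalized Gaussian
# by matching the moment ratio E[x^2] / (E|x|)^2 against its closed form.
import numpy as np
from scipy.special import gamma

def fit_ggd(x):
    """Return (alpha, sigma2) for a zero-mean generalized Gaussian fit."""
    x = np.asarray(x, dtype=float).ravel()
    sigma2 = np.mean(x ** 2)
    rho = sigma2 / (np.mean(np.abs(x)) ** 2)   # empirical moment ratio
    alphas = np.arange(0.2, 10.0, 0.001)
    # Theoretical ratio Gamma(1/a) * Gamma(3/a) / Gamma(2/a)^2 for each alpha.
    rho_alpha = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((rho_alpha - rho) ** 2)]
    return alpha, sigma2
```

For a Gaussian input the estimate should recover alpha close to 2, which is a convenient sanity check.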
7. An image quality-based PM2.5 concentration measurement method according to claim 6, characterized in that: in step E, based on a random forest toolbox as the regression training tool, the extracted 18-feature vector f = {f1, f2, f3, f4 … f18} is trained, wherein the objective function of the t-th decision tree at the i-th node is defined as:

θ_i^t = argmax_{θ ∈ T_i} G_i

wherein T_i is the random parameter set controlling the training of the i-th node, and the information gain G_i is defined as:

G_i = log|η_s(P_i)| − Σ_{d ∈ {L, R}} (P_i^d / P_i) · log|η_s(P_i^d)|

wherein P_i is the number of training samples at node i, P_i^L and P_i^R represent the left and right splits respectively, and η_s is the conditional covariance matrix of a probabilistic linear fit;
the predicted PM2.5 concentration value ŷ is obtained by averaging the outputs of the T regression trees:

ŷ = (1/T) · Σ_{t=1}^{T} y_t(f).
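The regression training of claim 7 can be sketched with scikit-learn's RandomForestRegressor standing in for the patent's random forest toolbox; the synthetic 18-dimensional features, the target, and the tree count are purely illustrative assumptions:

```python
# Sketch: random forest regression on 18-dimensional feature vectors,
# predicting a PM2.5-like value by averaging the T regression trees.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((400, 18))        # 400 images x 18 features f1..f18 (synthetic)
y = 423.0 * X[:, 0]              # synthetic PM2.5-like target, for illustration

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:300], y[:300])      # regression training on the feature vectors
pred = model.predict(X[300:])    # prediction = mean over the regression trees
```

In the patent's setting, X would hold the features f1..f18 extracted in steps A to D and y the ground-truth PM2.5 concentrations of the training images.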
CN202010252858.2A 2020-04-01 2020-04-01 PM2.5 concentration measurement method based on image quality Active CN111310774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252858.2A CN111310774B (en) 2020-04-01 2020-04-01 PM2.5 concentration measurement method based on image quality


Publications (2)

Publication Number Publication Date
CN111310774A CN111310774A (en) 2020-06-19
CN111310774B true CN111310774B (en) 2021-03-12

Family

ID=71146135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010252858.2A Active CN111310774B (en) 2020-04-01 2020-04-01 PM2.5 concentration measurement method based on image quality

Country Status (1)

Country Link
CN (1) CN111310774B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643262A (en) * 2021-08-18 2021-11-12 上海大学 No-reference panoramic image quality evaluation method, system, equipment and medium
CN114022747B (en) * 2022-01-07 2022-03-15 中国空气动力研究与发展中心低速空气动力研究所 Salient object extraction method based on feature perception

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825500B (en) * 2016-03-10 2018-07-27 江苏商贸职业学院 A kind of evaluation method and device to camera image quality
CN106447647A (en) * 2016-07-12 2017-02-22 中国矿业大学 No-reference quality evaluation method of compression perception recovery images
CN108376396B (en) * 2018-01-05 2022-07-05 北京工业大学 High-efficiency PM2.5 concentration prediction method based on image
TWI662422B (en) * 2018-04-23 2019-06-11 國家中山科學研究院 Air quality prediction method based on machine learning model
CN109087277B (en) * 2018-06-11 2021-02-26 北京工业大学 Method for measuring PM2.5 of fine air particles
CN109191460B (en) * 2018-10-15 2021-10-26 方玉明 Quality evaluation method for tone mapping image
CN109978834A (en) * 2019-03-05 2019-07-05 方玉明 A kind of screen picture quality evaluating method based on color and textural characteristics


Similar Documents

Publication Publication Date Title
CN108052980B (en) Image-based air quality grade detection method
CN110689531A (en) Automatic power transmission line machine inspection image defect identification method based on yolo
CN111310774B (en) PM2.5 concentration measurement method based on image quality
CN109087277B (en) Method for measuring PM2.5 of fine air particles
CN106447646A (en) Quality blind evaluation method for unmanned aerial vehicle image
CN103034838A (en) Special vehicle instrument type identification and calibration method based on image characteristics
US20090167850A1 (en) Method for identifying guignardia citricarpa
CN109816646B (en) Non-reference image quality evaluation method based on degradation decision logic
CN104517126A (en) Air quality assessment method based on image analysis
CN110400293A (en) A kind of non-reference picture quality appraisement method based on depth forest classified
CN105894507B (en) Image quality evaluating method based on amount of image information natural scene statistical nature
Sun et al. A deep learning-based pm2. 5 concentration estimator
CN110766658B (en) Non-reference laser interference image quality evaluation method
Utaminingrum et al. Alphabet Sign Language Recognition Using K-Nearest Neighbor Optimization.
CN111325158B (en) CNN and RFC-based integrated learning polarized SAR image classification method
CN111080651B (en) Automatic monitoring method for petroleum drilling polluted gas based on water flow segmentation
Gaata et al. No-reference quality metric for watermarked images based on combining of objective metrics using neural network
CN116519710A (en) Method and system for detecting surface pollution state of composite insulator
Priya et al. No-reference image quality assessment using statistics of sparse representations
Sun et al. A photo‐based quality assessment model for the estimation of PM2. 5 concentrations
CN114708190A (en) Road crack detection and evaluation algorithm based on deep learning
Zhang et al. No-reference image quality assessment based on multi-order gradients statistics
Mou et al. Reduced reference image quality assessment via sub-image similarity based redundancy measurement
CN104915959A (en) Aerial photography image quality evaluation method and system
CN112950630A (en) PM2.5 concentration measurement method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Tang Lijuan

Inventor after: Sun Kezheng

Inventor after: Huang Shuaifeng

Inventor after: Han Yan

Inventor after: Lou Cairong

Inventor before: Tang Lijuan

Inventor before: Sun Kezheng

Inventor before: Huang Shuaifeng

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231123

Address after: Room 106, Building 30, Jiangjingyuan, No. 52 Yaogang Road, Chongchuan District, Nantong City, Jiangsu Province, 226000

Patentee after: Nantong Daguo Intelligent Technology Co.,Ltd.

Address before: No.48 jiangtongdao Road, Gangzha District, Nantong City, Jiangsu Province, 226000

Patentee before: JIANGSU VOCATIONAL College OF BUSINESS