CN107044947B - Method for identifying a PM2.5 pollution index based on image features - Google Patents


Info

Publication number: CN107044947B
Application number: CN201710301867.4A
Authority: CN (China)
Prior art keywords: image, value, pixel, weighted average, queue
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN107044947A (application publication)
Inventors: 白鹤翔, 李艳红, 李德玉
Current assignee: BEIJING JIAHUA ZHILIAN TECHNOLOGY CO.,LTD. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shanxi University
Events: application filed by Shanxi University; priority to CN201710301867.4A; publication of application CN107044947A; application granted; publication of grant CN107044947B; anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N 15/06 Investigating concentration of particle suspensions
    • G01N 15/075 Investigating concentration of particle suspensions by optical means
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Immunology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Dispersion Chemistry (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the cross-disciplinary field of image processing and meteorology, and in particular relates to a method for identifying a PM2.5 pollution index based on image features. It mainly addresses the technical problems of existing PM2.5 measuring methods, namely complicated operation and inaccurate measurement results. In the invention, history images labelled with measured PM2.5 values are uniformly scaled to X_S × Y_S pixels and converted to grayscale; image features are calculated for each grayscale image; a PM2.5 prediction model is obtained by a suitable regression method with the image features as input; a target image is likewise scaled to X_S × Y_S pixels and converted to grayscale; its features are calculated; and the PM2.5 value of the target image is computed by feeding its image features into the trained prediction model. By analysing the features of a single image, the invention estimates the PM2.5 value of the scene the image depicts, so it can serve human activities such as daily travel, health care and weather forecasting, and can be used for rapid measurement of the PM2.5 pollution index over large areas.

Description

Method for identifying a PM2.5 pollution index based on image features
Technical field
The invention belongs to the cross-disciplinary field of image processing and meteorology, and in particular relates to a method for identifying the PM2.5 pollution index based on image features.
Background art
Particulate matter suspended in air varies in size, and PM2.5 is the relatively fine fraction. Measuring the concentration of PM2.5 generally involves two steps: (1) separating the PM2.5 from larger particles, a step almost all measuring methods require; and (2) weighing the separated PM2.5. At present, environmental authorities worldwide widely use three PM2.5 measuring methods: the gravimetric method, β-ray attenuation, and the tapered element oscillating microbalance (TEOM) method.
The manual PM2.5 monitoring method, also known as the standard gravimetric (filter-membrane weighing) method, is mainly used for research or for calibrating automatic analysers. The method is relatively simple, requiring only a PM2.5 cutter head, a pump, a film holder and filter membranes. After a 24-hour sample is collected, the filter membrane is removed and weighed; typically 3 samples are collected in parallel and re-weighed after conditioning at constant temperature and humidity.
The advantages of the filter-membrane gravimetric method are its low economic cost and ease of implementation. Its disadvantages: (1) as air flows through the sampling filter over a long time, volatile and semi-volatile substances already collected on the membrane are lost as the airflow and temperature vary; (2) some ultrafine particles pass through the membrane, biasing the result low; (3) gaseous substances may be adsorbed by the membrane, biasing the result high.
The stations of automatic monitoring methods are unattended, with data retrieved directly over the network; the methods include β-ray attenuation and the tapered element oscillating microbalance. The basic principle of β-ray attenuation is to detect the change in the mass of airborne particulate matter from the change in attenuation of β rays emitted by a C14 source as they pass through particulate matter deposited on a quartz filter. Ambient air is drawn by the sampling pump through the cutter into the sampling tube, passes through the filter membrane and is exhausted. Particulate matter deposits on the strip quartz filter; when the β rays pass through the loaded membrane, their intensity is attenuated, and the particulate concentration is calculated from the measured attenuation. The tapered element oscillating microbalance (TEOM) method, also known as the micro quartz oscillating deposition balance method, uses a hollow tapered quartz tube in a mass sensor, with a replaceable filter membrane mounted at the vibrating end; the oscillation frequency depends on the characteristics of the quartz tube and the mass it carries. As the sampling airflow passes through the membrane, the particulate matter in it deposits on the membrane; the change in membrane mass changes the oscillation frequency, and the mass of particulate matter deposited on the membrane is calculated from the measured frequency change. The standard-state mass concentration of particulate matter for the period is then calculated from the sampling flow and the ambient temperature and barometric pressure at the sampling site.
The advantage of the oscillating microbalance method is a clear quantitative relationship. It has two disadvantages: (1) heating the sample causes loss of volatile and semi-volatile substances, biasing the measurement low; (2) a Filter Dynamic Measurement System (FDMS) must be added to calibrate the low results.
The β-ray method rests on two assumptions: that the instrument's quartz sampling tape is uniform, and that the physical characteristics of the collected PM2.5 particles are uniform, with identical β-attenuation rates. Under real conditions these two assumptions are generally hard to satisfy, so the measurement data are acknowledged to carry deviations. A further shortcoming of the method concerns reliability: its failure rate is low in relatively clean and dry regions but very high in humid, high-temperature regions.
Summary of the invention
The invention aims to solve the technical problems of existing PM2.5 measuring methods, namely complicated operation and inaccurate measurement results, by providing a method for identifying the PM2.5 pollution index based on image features.
To solve the above technical problems, the technical solution adopted by the invention is as follows:
A method for identifying a PM2.5 pollution index based on image features, comprising the following steps:
Step 1. Collect history images labelled with measured PM2.5 values, and uniformly scale all collected history images to X_S × Y_S pixels; if an image is in colour, convert it to grayscale.
Step 2. For each scaled, grayscale-converted image, calculate its image features.
Step 3. Using the features of each image as input, obtain a PM2.5 prediction model by a regression method.
Step 4. Acquire a target image and, by the same method as in step 1, scale it to X_S × Y_S pixels; if it is in colour, convert it to grayscale.
Step 5. For the scaled target image, calculate by the same method as in step 2 the image features of the scaled, grayscale-converted image.
Step 6. Using the image features of the target image as input to the prediction model trained in step 3, calculate the PM2.5 value of the target image.
After the image features of each scaled, grayscale-converted image are calculated in step 2, the features of each image and the PM2.5 values also need to be normalised; likewise, after the features of the scaled target image are calculated in step 5, the image features of the target image must be normalised by the same normalisation method as in step 2. In machine learning, the data must be preprocessed with exactly the methods used during training, that is, converting the image to grayscale, computing the image features and normalising them in the same way, so that the previously trained prediction model yields the best prediction result at inference time.
In step 1 or step 4, a colour image is converted to a grayscale image with the formula GRAY = 0.299 × R + 0.587 × G + 0.114 × B, where R, G and B denote the red, green and blue bands of the colour image.
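The conversion formula above can be sketched as a small helper; `rgb_to_gray` is an illustrative name, not a function from the patent:

```cpp
#include <cmath>

// Minimal sketch of the quoted luminance formula
// GRAY = 0.299*R + 0.587*G + 0.114*B (the ITU-R BT.601 weights),
// rounded to the nearest integer gray level.
inline int rgb_to_gray(int r, int g, int b) {
    return static_cast<int>(std::lround(0.299 * r + 0.587 * g + 0.114 * b));
}
```

Applying it per pixel to the red, green and blue bands of the scaled colour image yields the grayscale image used in the following steps.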
The image features in step 2 or step 5 are: the average pixel value of the image; the number of local extrema N_Max; the mean of the local variance μ_LVAR; the weighted averages of the horizontal and vertical second differences of the image; the weighted average of the second-order gradient sum over the horizontal and vertical directions; the weighted averages of the polyline differences of Q_x, Q_y and Q_g; and the number of spots in the image. These features were selected because experiments showed that they reflect the level of PM2.5 in actual photographs well.
The average pixel value of the image is the sum of all pixel values divided by X_S × Y_S.
The number of local extrema N_Max and the mean of the local variance μ_LVAR are calculated as follows:
(1) Set the initial values of N_Max and LVAR to 0.
(2) For each non-boundary pixel of the image, find its 8 neighbouring pixels and denote the set of these 8 pixels PIX_A. If the current pixel coordinate is (i, j), where i and j are the vertical and horizontal pixel offsets of the pixel from the top-left pixel of the image, the coordinates of its 8 neighbours are (i+1, j), (i, j+1), (i-1, j), (i, j-1), (i-1, j-1), (i-1, j+1), (i+1, j-1) and (i+1, j+1). A non-boundary pixel here means a pixel for which all 8 such neighbours can be found in the image and none of the neighbouring pixel values is empty.
(3) If the current pixel value is greater than all 8 surrounding pixels, add 1 to N_Max; and accumulate the local variance of the 3×3 window,
LVAR = LVAR + (1/9) × Σ_{q ∈ PIX_A ∪ {p}} (I(q) − Ī)²,
where p denotes the current pixel, I(p) denotes the gray value of pixel p, and Ī denotes the mean pixel value of the 8 pixels in PIX_A together with the current pixel.
(4) After all pixels have been processed, μ_LVAR = LVAR / ((X_S − 2) × (Y_S − 2)).
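The steps above can be sketched as follows. The accumulation formula for LVAR was not transcribed in the source, so this sketch assumes the natural reading consistent with the surrounding text: the population variance of the 3×3 window around each non-boundary pixel.

```cpp
#include <utility>
#include <vector>

// Sketch of N_Max and mu_LVAR: count strict 8-neighbourhood maxima and
// average the 3x3 local variance over the (h-2)*(w-2) non-boundary pixels.
std::pair<int, double> extrema_and_local_variance(
        const std::vector<std::vector<double>>& img) {
    int h = static_cast<int>(img.size());
    int w = static_cast<int>(img[0].size());
    int n_max = 0;
    double lvar = 0.0;
    for (int i = 1; i < h - 1; ++i) {
        for (int j = 1; j < w - 1; ++j) {
            double centre = img[i][j], sum = 0.0;
            bool is_max = true;
            for (int di = -1; di <= 1; ++di)
                for (int dj = -1; dj <= 1; ++dj) {
                    double v = img[i + di][j + dj];
                    sum += v;
                    // strict maximum over the 8 neighbours
                    if ((di != 0 || dj != 0) && v >= centre) is_max = false;
                }
            if (is_max) ++n_max;
            double mean = sum / 9.0, var = 0.0;
            for (int di = -1; di <= 1; ++di)
                for (int dj = -1; dj <= 1; ++dj) {
                    double d = img[i + di][j + dj] - mean;
                    var += d * d;
                }
            lvar += var / 9.0;  // assumed population variance of the window
        }
    }
    return {n_max, lvar / ((h - 2) * (w - 2))};
}
```

On a 640 × 320 image the divisor is (640 − 2) × (320 − 2), matching step (4).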
The weighted averages of the horizontal and vertical second differences of the image, and the weighted average of the second-order gradient sum over the horizontal and vertical directions, are calculated as follows:
(1) For each pixel p of the image, calculate the second differences in the horizontal direction x and the vertical direction y, denoted Δ²_x I(p) and Δ²_y I(p) respectively. Sort the second differences of each direction in ascending order, removing repeated values during sorting and filtering out all difference values below the horizontal threshold α_x or the vertical threshold α_y. This yields a horizontal queue Q_x of size n_x and a vertical queue Q_y of size n_y, whose i-th elements Q_x(i) and Q_y(i) are the second-difference values ranked i-th in the respective queue.
(2) Calculate for each pixel p the second-order gradient sum grad(p) = Δ²_x I(p) + Δ²_y I(p), sort the values in ascending order, remove repeated values during sorting, and filter out all values below the threshold α_g, yielding a second-order gradient-sum queue Q_g of size n_g whose i-th element Q_g(i) = grad(p_i) is the second-order gradient-sum value ranked i-th in the queue.
(3) For each of the three queues Q_x, Q_y and Q_g, calculate the corresponding total polyline lengths L_x, L_y and L_g and the overall weighted variations V_x, V_y and V_g.
(4) Calculate the weighted averages of the three queues, WVL_x = V_x/L_x, WVL_y = V_y/L_y and WVL_g = V_g/L_g; these three values are, respectively, the weighted average of the horizontal second differences, the weighted average of the vertical second differences, and the weighted average of the second-order gradient sum.
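The queue construction and the weighted average WVL = V/L can be sketched as below. The exact formulas for the polyline length L and the overall weighted variation V appear only as untranscribed equations in the source; this sketch assumes a natural reading in which the queue is treated as a polyline through the points (i, Q(i)), with L the sum of segment lengths √(1 + ΔQ²) and V the sum of absolute increments |ΔQ|.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Step (1)/(2): sort ascending, deduplicate, drop values below the threshold.
std::vector<double> build_queue(std::vector<double> vals, double threshold) {
    std::sort(vals.begin(), vals.end());
    vals.erase(std::unique(vals.begin(), vals.end()), vals.end());
    std::vector<double> q;
    for (double v : vals)
        if (v >= threshold) q.push_back(v);  // filter out values below threshold
    return q;
}

// Steps (3)-(4) under the assumed definitions of L and V.
double weighted_average_wvl(const std::vector<double>& q) {
    double L = 0.0, V = 0.0;
    for (std::size_t i = 0; i + 1 < q.size(); ++i) {
        double d = q[i + 1] - q[i];
        L += std::sqrt(1.0 + d * d);  // polyline segment length
        V += std::fabs(d);            // assumed weighted variation term
    }
    return L > 0.0 ? V / L : 0.0;
}
```

The same two helpers apply unchanged to Q_x, Q_y and Q_g with their respective thresholds α_x, α_y and α_g.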
The number of spots in the image is calculated as follows:
(1) Given a series of increasing scale values σ_1, σ_2, …, σ_s, s ≥ 3, and a chosen radius r, generate a convolution mask for each scale according to the normalised Laplacian-of-Gaussian operator. The centre coordinate of each convolution mask is (0, 0), and every grid cell within r pixels of the mask centre horizontally and vertically belongs to the mask; the coordinate set of all cells in the mask is {(t_x, t_y) | t_x = −r, −r+1, …, r−1, r ∧ t_y = −r, −r+1, …, r−1, r}. For scale σ ∈ {σ_1, σ_2, …, σ_s}, the value of the mask at each cell (t_x, t_y) is calculated with the scale-normalised Laplacian of Gaussian:
LoG_σ(t_x, t_y) = ((t_x² + t_y² − 2σ²) / (2πσ⁴)) × exp(−(t_x² + t_y²) / (2σ²)).
For each scale σ ∈ {σ_1, σ_2, …, σ_s}, convolve the image with the corresponding mask, obtaining a series of convolved images I_1, I_2, …, I_s.
(2) For each convolved image I_i, i = 2, …, s−1, and each pixel p with image coordinates (u, v), the coordinates of its 8 nearest neighbours are (u+1, v), (u, v+1), (u−1, v), (u, v−1), (u−1, v−1), (u−1, v+1), (u+1, v−1) and (u+1, v+1). If I_i(p) is the smallest among all these neighbours, and I_i(p) is also smaller than the values of I_{i−1} and I_{i+1} at all nine coordinates (u, v), (u+1, v), (u, v+1), (u−1, v), (u, v−1), (u−1, v−1), (u−1, v+1), (u+1, v−1) and (u+1, v+1), then the pixel corresponds to a spot; likewise, if I_i(p) is the largest among all neighbours, and I_i(p) is also larger than the values of I_{i−1} and I_{i+1} at all nine of these coordinates, the pixel also corresponds to a spot. The number of such extrema found over all convolved images I_i, i = 2, …, s−1, is the number of spots in the image.
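Generating one such mask can be sketched as follows. The patent's own formula was not transcribed, so this sketch substitutes the standard scale-normalised Laplacian of Gaussian, which matches the "normalised Laplacian Gaussian operator" named in step (1).

```cpp
#include <cmath>
#include <vector>

// Sketch of step (1): build a (2r+1) x (2r+1) convolution mask for scale
// sigma using the standard scale-normalised Laplacian of Gaussian,
//   LoG_sigma(tx, ty) = ((tx^2 + ty^2 - 2*sigma^2) / (2*pi*sigma^4))
//                       * exp(-(tx^2 + ty^2) / (2*sigma^2)).
std::vector<std::vector<double>> log_mask(int r, double sigma) {
    const double pi = 3.14159265358979323846;
    std::vector<std::vector<double>> m(2 * r + 1,
                                       std::vector<double>(2 * r + 1));
    for (int tx = -r; tx <= r; ++tx)
        for (int ty = -r; ty <= r; ++ty) {
            double s2 = static_cast<double>(tx * tx + ty * ty);
            m[tx + r][ty + r] = (s2 - 2.0 * sigma * sigma)
                                / (2.0 * pi * std::pow(sigma, 4))
                                * std::exp(-s2 / (2.0 * sigma * sigma));
        }
    return m;
}
```

Convolving the grayscale image with one mask per scale σ_1, …, σ_s produces the scale-space images I_1, …, I_s in which step (2) searches for 26-neighbourhood extrema.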
The weighted averages of the polyline differences of Q_x, Q_y and Q_g are calculated as follows:
(1) First, for the polylines of Q_x, Q_y and Q_g, calculate the differences between each pair of adjacent elements, and arrange these differences in ascending order into one queue each, denoted QQ_x, QQ_y and QQ_g, with i-th elements QQ_x(i), QQ_y(i) and QQ_g(i).
(2) For each of the three queues QQ_x, QQ_y and QQ_g, calculate the corresponding total polyline lengths LQ_x, LQ_y and LQ_g and the overall weighted variations VQ_x, VQ_y and VQ_g.
(3) Calculate the weighted averages of the three queues, WVLQ_x = VQ_x/LQ_x, WVLQ_y = VQ_y/LQ_y and WVLQ_g = VQ_g/LQ_g; these three values are, respectively, the weighted averages of the polyline differences of Q_x, Q_y and Q_g.
In steps 2 and 5, the image features and the PM2.5 values are normalised according to the formula
F_norm(I) = (F(I) − min_F) / (max_F − min_F),
where F denotes a feature, F(I) denotes the value of feature F on a scaled, grayscale-converted history or target image I, min_F denotes the minimum of feature F over all scaled, grayscale-converted history images, max_F denotes the corresponding maximum, and F_norm(I) denotes the value of I on feature F after normalisation. When a scaled, grayscale-converted target image is normalised, if F_norm(I) > 1 then F_norm(I) is set to 1, and if F_norm(I) < 0 then F_norm(I) is set to 0; I_new denotes a newly shot image to be normalised. This normalisation method is simple and efficient to implement, which is why it was selected.
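The normalisation rule above, including the clamping applied to target images, can be sketched as a one-line helper; `normalize_feature` is an illustrative name, with min_F and max_F taken from the history images:

```cpp
#include <algorithm>

// Min-max normalisation against the training-set extremes, clamped to
// [0, 1] for target-image values outside the history range.
double normalize_feature(double value, double min_f, double max_f) {
    double norm = (value - min_f) / (max_f - min_f);
    return std::min(1.0, std::max(0.0, norm));
}
```

For example, with a feature minimum of 49.6862 and maximum of 147.323 over the history images, a target value of 200 clamps to 1 and a value of 0 clamps to 0.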
The regression method used in step 3 to obtain the PM2.5 prediction model may be a feedforward neural network, a Gaussian process, ordinary least squares, or a similar method.
The concrete steps for obtaining the PM2.5 prediction model with the feedforward neural network method are:
(1) Set the activation function of the feedforward neural network; the logistic function or the hyperbolic tangent function can be used in this step.
(2) Set the number of layers of the neural network and the number of neurons per layer.
(3) Train the neural network with the backpropagation algorithm.
(4) Save the learned network; the saved data must include which activation function was used, the number of layers of the neural network, the number of neurons per layer, and the weight of each neuron.
When the PM2.5 prediction model obtained with the feedforward neural network method is used in step 6, the saved parameters of the feedforward-network prediction model must be read in, including the activation function, the number of layers of the neural network, the number of neurons per layer and the weight of each neuron. Then, by the feedforward propagation of the network, the normalised image features are taken as input to calculate the normalised PM2.5 value PM2.5_norm(I_new) corresponding to I_new, which is finally de-normalised according to the following formula to obtain the PM2.5 value PM2.5(I_new) of the newly shot image:
PM2.5(I_new) = PM2.5_norm(I_new) × (max_F − min_F) + min_F,
where F here is the PM2.5 value itself, normalised over the history images together with the image features.
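Step 6 can be sketched end to end as below. This is a minimal one-hidden-layer logistic network evaluated in feedforward fashion followed by the de-normalisation formula above; the weights and layer sizes are illustrative placeholders, not trained values from the patent.

```cpp
#include <cmath>
#include <vector>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Sketch of step 6: feedforward pass over normalised features, then
// de-normalise with PM2.5 = pm_norm * (max_pm - min_pm) + min_pm.
double predict_pm25(const std::vector<double>& features,
                    const std::vector<std::vector<double>>& w_hidden,
                    const std::vector<double>& w_out,
                    double min_pm, double max_pm) {
    std::vector<double> hidden;
    for (const auto& w : w_hidden) {  // one weight row per hidden neuron
        double s = 0.0;
        for (std::size_t i = 0; i < features.size(); ++i)
            s += w[i] * features[i];
        hidden.push_back(sigmoid(s));
    }
    double out = 0.0;
    for (std::size_t i = 0; i < hidden.size(); ++i)
        out += w_out[i] * hidden[i];
    double pm_norm = sigmoid(out);              // normalised prediction in (0, 1)
    return pm_norm * (max_pm - min_pm) + min_pm;  // de-normalised PM2.5 value
}
```

A real deployment would load the saved activation function, layer sizes and weights instead of the placeholders.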
The invention makes full use of historical measured PM2.5 values and the corresponding real-scene photographs, combining image processing and data mining techniques, so that the PM2.5 index of a target area can be estimated quickly from a single scenery photograph. It overcomes the shortcoming that traditional methods can usually only measure at a single point, and provides a technical means of quickly estimating PM2.5 over a large area. Compared with the prior art, the above technical solution has the following advantages:
(1) No special instruments or chemical reagents need to be made or used; the PM2.5 index can be identified using only a photograph of the target area.
(2) Even when only a photograph of the target scene is available and the environmental conditions of the target area cannot be reconstructed or re-observed, the method can still identify its PM2.5 index.
(3) It can directly identify the average PM2.5 value of a large area, rather than a single-station observation.
Description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Figs. 2-5 are four input images used in the example of the invention.
Specific embodiment
The method for identifying a PM2.5 pollution index based on image features in this embodiment comprises the following steps:
Step 1. Select a region, shoot colour photographs at different times with a camera and record the PM2.5 value at each time, thereby collecting history images labelled with measured PM2.5 values. All photographs are divided by season and time into two broad classes, daytime and evening; the calculations below all take daytime as the example. All collected history images are uniformly scaled to X_S × Y_S pixels; here the convert command of the imagemagick software, "convert -resize 640x320", scales all collected history images to 640 × 320 pixels. The images are converted to grayscale here by using only the red band of the colour photograph; alternatively the formula GRAY = 0.299 × R + 0.587 × G + 0.114 × B can be used, where R, G and B denote the red, green and blue bands of the colour image.
Step 2. For each scaled, grayscale-converted image, calculate the average pixel value, the number of local extrema N_Max, the mean of the local variance μ_LVAR, the weighted averages of the horizontal and vertical second differences, the weighted average of the second-order gradient sum over the horizontal and vertical directions, the weighted averages of the polyline differences of Q_x, Q_y and Q_g, and the number of spots in the image.
The average pixel value of the image is the sum of all pixel values divided by X_S × Y_S.
The number of local extrema N_Max and the mean of the local variance μ_LVAR are calculated as follows:
(1) Set the initial values of N_Max and LVAR to 0.
(2) For each non-boundary pixel of the image, find its 8 neighbouring pixels and denote the set of these 8 pixels PIX_A. If the current pixel coordinate is (i, j), where i and j are the vertical and horizontal pixel offsets of the pixel from the top-left pixel of the image, the coordinates of its 8 neighbours are (i+1, j), (i, j+1), (i-1, j), (i, j-1), (i-1, j-1), (i-1, j+1), (i+1, j-1) and (i+1, j+1). A non-boundary pixel here means a pixel for which all 8 such neighbours can be found in the image and none of the neighbouring pixel values is empty.
(3) If the current pixel value is greater than all 8 surrounding pixels, add 1 to N_Max; and accumulate the local variance of the 3×3 window,
LVAR = LVAR + (1/9) × Σ_{q ∈ PIX_A ∪ {p}} (I(q) − Ī)²,
where p denotes the current pixel, I(p) denotes the gray value of pixel p, and Ī denotes the mean pixel value of the 8 pixels in PIX_A together with the current pixel.
(4) After all pixels have been processed, μ_LVAR = LVAR / ((X_S − 2) × (Y_S − 2)).
The weighted averages of the horizontal and vertical second differences of the image, and the weighted average of the second-order gradient sum over the horizontal and vertical directions, are calculated as follows:
(1) For each pixel p of the image, calculate the second differences in the horizontal direction x and the vertical direction y, denoted Δ²_x I(p) and Δ²_y I(p) respectively. Sort the second differences of each direction in ascending order, removing repeated values during sorting and filtering out all difference values below the horizontal threshold α_x or the vertical threshold α_y. This yields a horizontal queue Q_x of size n_x and a vertical queue Q_y of size n_y, whose i-th elements Q_x(i) and Q_y(i) are the second-difference values ranked i-th in the respective queue.
(2) Calculate for each pixel p the second-order gradient sum grad(p) = Δ²_x I(p) + Δ²_y I(p), sort the values in ascending order, remove repeated values during sorting, and filter out all values below the threshold α_g, yielding a second-order gradient-sum queue Q_g of size n_g whose i-th element Q_g(i) = grad(p_i) is the second-order gradient-sum value ranked i-th in the queue.
(3) For each of the three queues Q_x, Q_y and Q_g, calculate the corresponding total polyline lengths L_x, L_y and L_g and the overall weighted variations V_x, V_y and V_g.
(4) Calculate the weighted averages of the three queues, WVL_x = V_x/L_x, WVL_y = V_y/L_y and WVL_g = V_g/L_g; these three values are, respectively, the weighted average of the horizontal second differences, the weighted average of the vertical second differences, and the weighted average of the second-order gradient sum.
The weighted averages of the polyline differences of Q_x, Q_y and Q_g are calculated as follows:
(1) First, for the polylines of Q_x, Q_y and Q_g, calculate the differences between each pair of adjacent elements, and arrange these differences in ascending order into one queue each, denoted QQ_x, QQ_y and QQ_g, with i-th elements QQ_x(i), QQ_y(i) and QQ_g(i).
(2) For each of the three queues QQ_x, QQ_y and QQ_g, calculate the corresponding total polyline lengths LQ_x, LQ_y and LQ_g and the overall weighted variations VQ_x, VQ_y and VQ_g.
(3) Calculate the weighted averages of the three queues, WVLQ_x = VQ_x/LQ_x, WVLQ_y = VQ_y/LQ_y and WVLQ_g = VQ_g/LQ_g; these three values are, respectively, the weighted averages of the polyline differences of Q_x, Q_y and Q_g.
The number of spots in the image is calculated as follows:
(1) Given a series of increasing scale values σ_1, σ_2, …, σ_s, s ≥ 3, and a chosen radius r, generate a convolution mask for each scale according to the normalised Laplacian-of-Gaussian operator. The centre coordinate of each convolution mask is (0, 0), and every grid cell within r pixels of the mask centre horizontally and vertically belongs to the mask; the coordinate set of all cells in the mask is {(t_x, t_y) | t_x = −r, −r+1, …, r−1, r ∧ t_y = −r, −r+1, …, r−1, r}. For scale σ ∈ {σ_1, σ_2, …, σ_s}, the value of the mask at each cell (t_x, t_y) is calculated with the scale-normalised Laplacian of Gaussian:
LoG_σ(t_x, t_y) = ((t_x² + t_y² − 2σ²) / (2πσ⁴)) × exp(−(t_x² + t_y²) / (2σ²)).
For each scale σ ∈ {σ_1, σ_2, …, σ_s}, convolve the image with the corresponding mask, obtaining a series of convolved images I_1, I_2, …, I_s.
(2) For each convolved image I_i, i = 2, …, s−1, and each pixel p with image coordinates (u, v), the coordinates of its 8 nearest neighbours are (u+1, v), (u, v+1), (u−1, v), (u, v−1), (u−1, v−1), (u−1, v+1), (u+1, v−1) and (u+1, v+1). If I_i(p) is the smallest among all these neighbours, and I_i(p) is also smaller than the values of I_{i−1} and I_{i+1} at all nine coordinates (u, v), (u+1, v), (u, v+1), (u−1, v), (u, v−1), (u−1, v−1), (u−1, v+1), (u+1, v−1) and (u+1, v+1), then the pixel corresponds to a spot; likewise, if I_i(p) is the largest among all neighbours, and I_i(p) is also larger than the values of I_{i−1} and I_{i+1} at all nine of these coordinates, the pixel also corresponds to a spot. The number of such extrema found over all convolved images I_i, i = 2, …, s−1, is the number of spots in the image.
This example computes the number of local extrema N_Max, the mean of the local variance μ_LVAR, the average pixel value, the weighted average of the second-order gradient sum based on the Laplace operator and the weighted average of the polyline differences of Q_g, the weighted average of the polyline differences of Q_y based on the Sobel operator, and the number of spots in the image based on the Laplacian of Gaussian. Some of the computed image features are shown in the following table:
The OpenCV library is used when counting image spots with the Laplacian; part of the code is as follows:
Mat image = img;
vector<KeyPoint> keypoints;
SimpleBlobDetector::Params params;
// thresholds and filters for the blob (spot) detector
params.minThreshold = 10;
params.maxThreshold = 100;
params.thresholdStep = 10;
params.minArea = 10;
params.minConvexity = 0.3;
params.minInertiaRatio = 0.01;
params.maxArea = 8000;
params.maxConvexity = 10;
params.filterByColor = false;
params.filterByCircularity = false;
// construct the detector and count the detected spots
SimpleBlobDetector blobDetector(params);
blobDetector.detect(image, keypoints);
drawKeypoints(image, keypoints, image, Scalar(255, 0, 0));
cout << keypoints.size() << " ";
The OpenCV library is likewise used when calculating the weighted average of the second-order gradient sum based on the Laplace operator and the weighted average of the polyline differences of Q_g. Here the threshold α_g is set to 50.
Each image feature and the PM2.5 value of each image are then normalised according to the formula
F_norm(I) = (F(I) − min_F) / (max_F − min_F),
where F denotes a feature, F(I) denotes the value of feature F on a scaled, grayscale-converted history or target image I, min_F denotes the minimum of feature F over all scaled, grayscale-converted history images, max_F denotes the corresponding maximum, and F_norm(I) denotes the value of I on feature F after normalisation. When a scaled, grayscale-converted target image is normalised, if F_norm(I) > 1 then F_norm(I) is set to 1, and if F_norm(I) < 0 then F_norm(I) is set to 0; I_new denotes a newly shot image to be normalised. Taking the average pixel value as an example: in the training data the maximum of this feature is 147.323 and the minimum is 49.6862; therefore, when the value is 113.21, the value after normalisation is (113.21 − 49.6862) / (147.323 − 49.6862) = 0.6506. The following table shows each feature and the PM2.5 value after normalisation:
Step 3. Using the features of each image as input, obtain the prediction model of PM2.5 by the feedforward neural network method. The concrete operation steps are as follows:
(1) Set the activation function in the feedforward neural network. Either the logistic function or the hyperbolic tangent function can be used in this step; here the activation function is set to the logistic function, also referred to as the sigmoid function;
(2) Set the number of layers of the neural network and the number of neurons in each layer. Here the number of layers is set to 3, where the input layer has 7 neurons, the output layer has 1 neuron, and the middle layer has 1000 neurons;
(3) Learn the neural network with the back-propagation algorithm. The minimum learning rate used here is 0.001, the maximum learning rate is 0.01, the error tolerance is [0.001, 0.02], and the number of iterations is 10000;
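As an illustrative sketch of the chosen activation and of one fully connected layer of such a network (names, shapes and the bias term are ours, not taken from the patent or from lwneuralnetplus):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Logistic (sigmoid) activation selected in step (1).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Minimal forward pass of one fully connected layer:
// out[i] = sigmoid(sum_j W[i][j] * in[j] + b[i]).
std::vector<double> forwardLayer(const std::vector<std::vector<double>>& W,
                                 const std::vector<double>& b,
                                 const std::vector<double>& in) {
    std::vector<double> out(W.size());
    for (std::size_t i = 0; i < W.size(); ++i) {
        double s = b[i];
        for (std::size_t j = 0; j < in.size(); ++j) s += W[i][j] * in[j];
        out[i] = sigmoid(s);
    }
    return out;
}
```

A 3-layer network as configured above would chain two such layers: a 7-to-1000 layer followed by a 1000-to-1 layer.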
The neural network uses version 0.998 of the lwneuralnetplus library; part of the training code is as follows:
iomanager = new iomanagelwnnfann();
iomanager->info_from_file(argv[1], &npatterns, &ninput, &noutput);
net = new network(network::LOGISTIC, 3, ninput, 1000, noutput);
net->set_momentum(0);
net->set_learning_rate(0.001);
net->jolt(0.03, 0.22);
train = new trainer(net, "", "");
train->set_iomanager(iomanager);
cout << "Loading training data..." << endl;
train->load_training(argv[1]);
train_go(argv, train, net);
The partial code of train_go is as follows:
(4) Save the learned network; the saved data must include which activation function was used, the number of layers of the neural network, the number of neurons in each layer, and the weight of each neuron;
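The items step (4) requires can be serialized in many ways; the following sketch writes them to a plain text stream. The format and the function name are hypothetical illustrations of ours, not lwneuralnetplus's own save format:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical serialization of a trained network: the activation function
// name, the number of layers, each layer's neuron count, and the flattened
// weights, one group per line.
std::string saveNetwork(const std::string& activation,
                        const std::vector<int>& layerSizes,
                        const std::vector<double>& weights) {
    std::ostringstream os;
    os << activation << '\n' << layerSizes.size() << '\n';
    for (int n : layerSizes) os << n << ' ';
    os << '\n';
    for (double w : weights) os << w << ' ';
    os << '\n';
    return os.str();
}
```

Loading in step 6 would parse the same fields back in the same order.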
Step 4. Acquire the target image and scale it to X_S×Y_S pixels by the same method as in step 1; if the image is colored, convert it into a grayscale image;
Step 5. For the scaled target image, calculate by the same method as in step 2 the average pixel value, the number of local extrema N_Max, the local variance mean μ_LVAR, the weighted averages of the second differences of the image in the horizontal and vertical directions, the weighted average of the second-order gradient sum of the horizontal and vertical directions, the weighted averages of the Q_x, Q_y and Q_g broken-line differences, and the number of blobs of the image after scaling and conversion to a grayscale image, and normalize each image feature of the target image by the same normalization method as in step 2;
Step 6. Use the image features of the target image as input to the prediction model trained in step 3: read in the saved prediction model parameters based on the feedforward neural network, including the activation function, the number of layers of the neural network, the number of neurons in each layer and the weight of each neuron; then, according to the feedforward propagation method of the feedforward neural network, take the normalized image features as input and calculate the normalized PM2.5 value PM2.5_norm(I_new) corresponding to I_new; finally renormalize it according to the following formula as the PM2.5 value PM2.5(I_new) of the newly shot image:
PM2.5(I_new) = PM2.5_norm(I_new) × (max_F − min_F) + min_F
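The renormalization formula above is the inverse of the min-max normalization; as an illustrative sketch (the function name is ours):

```cpp
#include <cassert>
#include <cmath>

// Maps a normalized network output in [0, 1] back to the original scale
// using the training-set extremes minV and maxV for the quantity being
// predicted (here, PM2.5).
double denormalize(double norm, double minV, double maxV) {
    return norm * (maxV - minV) + minV;
}
```

Applying it to a value previously normalized with the same extremes recovers the original value.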
We select the four images below as newly shot images for recognition. The features of each image are first normalized, then fed into the previously trained neural network to compute the normalized PM2.5 value, which is finally renormalized. The recognition results are shown in the following table:
Image     Measured value   Predicted value
Image 1   37               49
Image 2   223              247
Image 3   323              306
Image 4   117              103
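The accuracy in the table above can be summarized by the mean absolute error; a small self-contained helper (the function name is ours, this metric is not claimed by the patent):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean absolute error over paired (measured, predicted) values,
// e.g. the four test images tabulated above.
double meanAbsoluteError(const std::vector<double>& measured,
                         const std::vector<double>& predicted) {
    double s = 0.0;
    for (std::size_t i = 0; i < measured.size(); ++i)
        s += std::abs(measured[i] - predicted[i]);
    return s / measured.size();
}
```

For the four images above the absolute errors are 12, 24, 17 and 14, giving a mean absolute error of (12 + 24 + 17 + 14)/4 = 16.75.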
No matter which normalization method is used in step 2 and step 5, as long as it maps feature values or PM2.5 values into the interval [0,1] or (0,1), it falls within the scope of the patent claims;
The optional methods for building the prediction model in step 3 include neural networks, Gaussian processes, ordinary least squares, and the like; no matter which method is used, as long as one or more of neural networks, Gaussian processes, ordinary least squares and similar methods are used in the prediction process, it falls within the scope of the claims of this patent;
In step 2 and step 5, a variety of methods can be used to calculate the weighted averages of the horizontal and vertical second differences and of the second-order gradient sum, and the calculation can be repeated to obtain multiple features, for example by computing these three features with the Sobel operator and the Laplace operator respectively;
The normalization in step 2 and step 5 is an optional step; if normalization is not used in steps 2 and 5, the PM2.5 value predicted in step 6 does not need the renormalization operation.
The prediction target in this patent, the PM2.5 value, may also be replaced by other real-valued prediction targets such as visibility; no matter which prediction target is used, as long as the weighted averages of the horizontal and vertical second differences, the weighted average of the second-order gradient sum, and the weighted averages of the Q_x, Q_y and Q_g broken-line differences are used as features, it falls within the scope of the claims of this patent.
The foregoing is only a preferred embodiment of the present invention and is not intended to restrict the invention; for those skilled in the art, the invention may be variously modified and varied. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of the present claims.

Claims (6)

1. A recognition method of the PM2.5 pollution index based on image features, characterized by comprising the following steps:
Step 1. Collect history images labelled with measured PM2.5 values, and uniformly scale all collected history images to X_S×Y_S pixels; if an image is colored, convert it into a grayscale image;
Step 2. Calculate the image features of each image after scaling and conversion to a grayscale image;
Step 3. Using the features of each image as input, obtain the prediction model of PM2.5 by a regression method;
Step 4. Acquire the target image and scale it to X_S×Y_S pixels by the same method as in step 1; if the image is colored, convert it into a grayscale image;
Step 5. For the scaled target image, calculate by the same method as in step 2 the image features of the image after scaling and conversion to a grayscale image;
Step 6. Using the image features of the target image as input to the prediction model trained in step 3, calculate the PM2.5 value of the target image;
In step 2, after calculating the image features of each image scaled and converted to grayscale, the image features and the PM2.5 value of each image also need to be normalized; in step 5, after calculating the image features of the scaled target image converted to grayscale, the image features of the target image also need to be normalized by the same normalization method as in step 2;
In step 1 or step 4, a color image is converted into a grayscale image using the formula GRAY = 0.299×R + 0.587×G + 0.114×B, where R, G and B respectively denote the red, green and blue bands of the color image;
The image features in step 2 or step 5 are the average pixel value of the image, the number of local extrema N_Max, the local variance mean μ_LVAR, the weighted averages of the second differences of the image in the horizontal and vertical directions, the weighted average of the second-order gradient sum of the horizontal and vertical directions, the weighted averages of the Q_x, Q_y and Q_g broken-line differences, and the number of blobs in the image, where Q_x, Q_y and Q_g are as defined in steps a-b below;
The average pixel value of the image is the sum of the pixel values of all pixels divided by X_S×Y_S;
The number of local extrema N_Max of the image and the local variance mean μ_LVAR are calculated as follows:
(1) Set the initial values of N_Max and LVAR to 0;
(2) For each non-boundary pixel of the image, find its 8 neighbouring pixels, and denote the set of these 8 pixels by PIX_A; if the current pixel coordinate is (i, j), where i and j respectively denote the number of pixels between the current pixel and the top-left pixel of the image in the vertical and horizontal directions, then the coordinates of its 8 neighbours are (i+1, j), (i, j+1), (i−1, j), (i, j−1), (i−1, j−1), (i−1, j+1), (i+1, j−1) and (i+1, j+1); here a non-boundary pixel refers to a pixel for which all 8 of the above neighbours can be found in the image and no neighbouring pixel value is empty;
(3) If the current pixel value is greater than those of the 8 surrounding pixels, add 1 to N_Max; and accumulate the local variance of the current pixel into LVAR,
where p denotes the current pixel, I(p) denotes the gray value of pixel p, and Ī denotes the mean of the pixel values of the 8 pixels in PIX_A and the current pixel;
(4) After all pixels have been processed, μ_LVAR = LVAR/((X_S−2)×(Y_S−2));
The weighted averages of the second differences of the image in the horizontal and vertical directions and the weighted average of the second-order gradient sum of the horizontal and vertical directions are calculated as follows:
a. Calculate for each pixel p the second differences of the image in the horizontal direction x and the vertical direction y; sort the second differences of each direction in ascending order, removing duplicate values during sorting, and filter out all difference values smaller than the horizontal-direction threshold α_x or the vertical-direction threshold α_y; this yields two queues, a horizontal queue Q_x of size n_x and a vertical queue Q_y of size n_y, whose i-th elements Q_x(i) and Q_y(i) are the second-difference values corresponding to the pixel ranked i in the respective queue;
b. Calculate for each pixel p the second-order gradient sum grad(p), sort these values in ascending order, removing duplicate values during sorting, and filter out all values smaller than the threshold α_g; this yields a second-order gradient sum queue Q_g of size n_g whose i-th element Q_g(i) = grad(p_i) is the second-order gradient sum corresponding to the pixel ranked i in the queue;
c. For the three queues Q_x, Q_y and Q_g, calculate the corresponding broken-line total lengths L_x, L_y and L_g and the overall weighted change degrees V_x, V_y and V_g;
d. Calculate the weighted averages of the three queues: WVL_x = V_x/L_x, WVL_y = V_y/L_y and WVL_g = V_g/L_g; these three values are respectively the horizontal second-difference weighted average, the vertical second-difference weighted average and the weighted average of the second-order gradient sum;
The number of blobs in the image is calculated as follows:
(1) Given a series of ordered scale values σ_1, σ_2, …, σ_s, s ≥ 3, and a selected radius r, generate the convolution template corresponding to each scale according to the normalized Laplacian-of-Gaussian operator, where the center coordinate of each template is (0, 0) and all grid cells whose horizontal or vertical distance from the template center is at most r pixels belong to the template; the coordinate set of all cells in the template is {(t_x, t_y) | t_x = −r, −r+1, …, r−1, r ∧ t_y = −r, −r+1, …, r−1, r}, and the value of the template for scale σ ∈ {σ_1, σ_2, …, σ_s} at each cell (t_x, t_y) is calculated by the following formula:
For each scale σ ∈ {σ_1, σ_2, …, σ_s}, convolve the image with the corresponding template, obtaining a series of convolved images I_1, I_2, …, I_s;
(2) For each convolved image I_i, i = 2, …, s−1, and each pixel p with image coordinates (i, j), the coordinates of its 8 nearest neighbours are (i+1, j), (i, j+1), (i−1, j), (i, j−1), (i−1, j−1), (i−1, j+1), (i+1, j−1) and (i+1, j+1); if I_i(p) is the smallest among all neighbours, and I_i(p) is also smaller than the values of the pixels of I_{i−1} and I_{i+1} at all the coordinates (i, j), (i+1, j), (i, j+1), (i−1, j), (i, j−1), (i−1, j−1), (i−1, j+1), (i+1, j−1) and (i+1, j+1), then that pixel corresponds to a blob; if I_i(p) is the largest among all neighbours, and I_i(p) is also larger than the values of the pixels of I_{i−1} and I_{i+1} at all those coordinates, then that pixel likewise corresponds to a blob; the number of extrema found over all convolved images I_i, i = 2, …, s−1, is the number of blobs in the image.
2. The recognition method of the PM2.5 pollution index based on image features according to claim 1, characterized in that the weighted averages of the Q_x, Q_y and Q_g broken-line differences are calculated as follows:
(1) First, for the broken lines of Q_x, Q_y and Q_g, calculate the differences between each pair of adjacent elements, sort these differences in ascending order, and form a queue for each, denoted QQ_x, QQ_y and QQ_g respectively, with i-th elements QQ_x(i), QQ_y(i) and QQ_g(i);
(2) For the three queues QQ_x, QQ_y and QQ_g, calculate the corresponding broken-line total lengths LQ_x, LQ_y and LQ_g and the overall weighted change degrees VQ_x, VQ_y and VQ_g;
(3) Calculate the weighted averages of the three queues: WVLQ_x = VQ_x/LQ_x, WVLQ_y = VQ_y/LQ_y and WVLQ_g = VQ_g/LQ_g; these three values are respectively the weighted averages of the Q_x, Q_y and Q_g broken-line differences.
3. The recognition method of the PM2.5 pollution index based on image features according to claim 2, characterized in that:
In step 2 and step 5, the image features and the PM2.5 value are normalized according to the following formula:
F_norm(I) = (F(I) − min_F)/(max_F − min_F)
where F denotes some feature, F(I) denotes the value of feature F on a history image or target image I after scaling and conversion to a grayscale image, min_F denotes the minimum value of feature F over all history images after scaling and conversion to grayscale, max_F denotes the maximum value of feature F over those history images, and F_norm(I) denotes the value of I on feature F after normalization; if, when a target image converted to grayscale is normalized, F_norm(I) > 1, then F_norm(I) is set to 1; if F_norm(I) < 0, then F_norm(I) is set to 0; I_new denotes the new input to be normalized.
4. The recognition method of the PM2.5 pollution index based on image features according to claim 3, characterized in that the method for obtaining the prediction model of PM2.5 by a regression method in step 3 is a feedforward neural network, a Gaussian process or ordinary least squares.
5. The recognition method of the PM2.5 pollution index based on image features according to claim 4, characterized in that the method for obtaining the prediction model of PM2.5 by a regression method is a feedforward neural network, with concrete operation steps as follows:
(1) Set the activation function in the feedforward neural network; either the logistic function or the hyperbolic tangent function can be used in this step;
(2) Set the number of layers of the neural network and the number of neurons in each layer;
(3) Learn the neural network with the back-propagation algorithm;
(4) Save the learned network; the saved data must include which activation function was used, the number of layers of the neural network, the number of neurons in each layer, and the weight of each neuron.
6. The recognition method of the PM2.5 pollution index based on image features according to claim 5, characterized in that: when obtaining the prediction model of PM2.5 by the feedforward neural network method, the saved prediction model parameters based on the feedforward neural network need to be read in, including the activation function, the number of layers of the neural network, the number of neurons in each layer and the weight of each neuron; then, according to the feedforward propagation method of the feedforward neural network, the normalized image features are used as input to calculate the normalized PM2.5 value PM2.5_norm(I_new) corresponding to I_new; finally it is renormalized according to the following formula as the PM2.5 value PM2.5(I_new) of the newly shot image:
PM2.5(I_new) = PM2.5_norm(I_new) × (max_F − min_F) + min_F
CN201710301867.4A 2017-05-02 2017-05-02 A kind of recognition methods of the PM2.5 pollution index based on characteristics of image Active CN107044947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710301867.4A CN107044947B (en) 2017-05-02 2017-05-02 A kind of recognition methods of the PM2.5 pollution index based on characteristics of image

Publications (2)

Publication Number Publication Date
CN107044947A CN107044947A (en) 2017-08-15
CN107044947B true CN107044947B (en) 2019-11-19

Family

ID=59546232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710301867.4A Active CN107044947B (en) 2017-05-02 2017-05-02 A kind of recognition methods of the PM2.5 pollution index based on characteristics of image

Country Status (1)

Country Link
CN (1) CN107044947B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087277B (en) * 2018-06-11 2021-02-26 北京工业大学 Method for measuring PM2.5 of fine air particles
CN111488804B (en) * 2020-03-19 2022-11-11 山西大学 Labor insurance product wearing condition detection and identity identification method based on deep learning
CN111912755B (en) * 2020-08-07 2021-08-10 山东中煤工矿物资集团有限公司 Mining dust concentration sensor, sensor system and method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6310346B1 (en) * 1997-05-30 2001-10-30 University Of Central Florida Wavelength-tunable coupled antenna uncooled infrared (IR) sensor
CN103903273A (en) * 2014-04-17 2014-07-02 北京邮电大学 PM2.5 grade fast-evaluating system based on mobile phone terminal
CN103954542A (en) * 2014-05-12 2014-07-30 中国计量学院 PM2.5 (Particulate Matter2.5) concentration detector based on definition evaluation without reference image
CN104462778A (en) * 2014-11-06 2015-03-25 华北电力大学 PM2.5 pollutant measurement method based on deep learning
CN103593660B (en) * 2013-11-27 2016-08-17 青岛大学 The palm grain identification method that gradient of intersecting under a kind of invariant feature image encodes
CN106295516A (en) * 2016-07-25 2017-01-04 天津大学 Haze PM2.5 value method of estimation based on image
CN106401359A (en) * 2016-08-31 2017-02-15 余姚市泗门印刷厂 Infrared-photographing-based window control platform


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211222

Address after: 101100 4th floor, building 6, No.10 yard, Jiachuang Road, Tongzhou District, Beijing

Patentee after: BEIJING JIAHUA ZHILIAN TECHNOLOGY CO.,LTD.

Address before: 030006 No. 92, Hollywood Road, Taiyuan, Shanxi

Patentee before: SHANXI University
