CN110705619A - Fog concentration grade judging method and device - Google Patents

Fog concentration grade judging method and device

Info

Publication number
CN110705619A
CN110705619A
Authority
CN
China
Prior art keywords
image
gray level
fog concentration
vector
following formula
Prior art date
Legal status
Granted
Application number
CN201910911479.7A
Other languages
Chinese (zh)
Other versions
CN110705619B (en)
Inventor
田治仁
张贵峰
李锐海
廖永力
张巍
龚博
王俊锞
黄增浩
吴新桥
朱登杰
何锦强
Current Assignee
Research Institute of Southern Power Grid Co Ltd
Original Assignee
Research Institute of Southern Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Research Institute of Southern Power Grid Co Ltd filed Critical Research Institute of Southern Power Grid Co Ltd
Priority to CN201910911479.7A priority Critical patent/CN110705619B/en
Publication of CN110705619A publication Critical patent/CN110705619A/en
Application granted granted Critical
Publication of CN110705619B publication Critical patent/CN110705619B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a fog concentration grade discrimination method, comprising the following steps: establishing an image sample set from the acquired images, and dividing the images in the sample set into a plurality of grades according to fog concentration; extracting contrast and Fourier feature vectors from each image in the sample set to obtain a feature vector set; training on the feature vector set with a support vector machine to obtain a fog concentration discrimination model; and inputting an image to be discriminated into the fog concentration discrimination model to obtain its fog concentration grade. The invention grades fog concentration accurately and meets each substation's need to pre-judge meter fog concentration. The invention also provides a corresponding fog concentration grade discrimination device.

Description

Fog concentration grade judging method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a fog concentration grade distinguishing method and device.
Background
In recent years, with the construction of power engineering, the number of substations has increased rapidly, and research on substation meter-reading algorithms has received growing attention. Meter reading requires a clear dial, but weather and other factors easily fog the air on or near the meter surface, which greatly hinders reading. To meet the reading requirement, the meter must therefore be defogged before reading.
In actual processing, however, different fog concentrations affect meter reading differently. When the fog concentration is low, the reading is almost the same as without fog and can be taken accurately. At a moderate fog concentration, readings can still be taken but are sometimes inaccurate. When the fog concentration is high, the meter can hardly be read at all. Existing meter-reading and defogging algorithms offer no solution to this problem, so an effective algorithm is needed to pre-judge the fog concentration before meter reading or defogging.
In the prior art, visibility (visible distance) is generally used as an index of fog density. At present, visibility monitoring is mainly divided into visual inspection of personnel, instrument and equipment detection and monitoring video detection.
The inventors found the following problems in the prior art when implementing the invention:
visual inspection by personnel has extremely poor timeliness, and the estimated data lacks normativity and sustainable traceability; instrument-based detection mainly uses visibility detectors, generally based on infrared or laser light, which can measure extinction coefficients or visibility values but are expensive and unsuitable for wide-area deployment; visibility detection based on monitoring video has low precision and cannot grade fog concentration accurately.
Disclosure of Invention
The embodiment of the invention provides a fog concentration grade discrimination method, which can be used for accurately grading fog concentration and meeting the requirement of each transformer substation for pre-judging meter fog concentration.
The embodiment of the invention provides a fog concentration grade discrimination method, which comprises the following steps:
establishing an image sample set for the obtained image, and dividing the image in the image sample set into a plurality of grades according to the fog concentration;
respectively extracting the feature vectors of contrast and Fourier for each image in the image sample set to obtain a feature vector set of the image sample set;
training the feature vector set through a support vector machine to obtain a fog concentration discrimination model;
and inputting the image to be distinguished into the fog concentration distinguishing model to obtain a fog concentration grade division result of the image to be distinguished.
As an improvement of the above scheme, the acquired images include a plurality of substation meter images of different kinds; the substation meter images in each category include substation meter images of different fog concentration levels.
As an improvement of the above scheme, the extracting the feature vectors of the contrast and the fourier for each image in the image sample set to obtain the feature vector set of the image sample set specifically includes:
calculating an image histogram of the image, and performing normalization processing on the image histogram;
dividing the normalized image histogram vector into k equal-length cells, and adding the gray level pixel value distribution probabilities within each cell to obtain the contrast feature vector;
and carrying out Fourier transform on the image to obtain the Fourier transform characteristic of the image.
As an improvement of the above scheme, the method further comprises the following steps: calculating the gray level pixel value distribution probability of the image through an image histogram;
the calculating the distribution probability of the gray level pixel values of the image through the image histogram specifically includes:
acquiring the occurrence frequency of each gray level in the image; if the image f(x, y) has L gray levels, the gray values of the image are distributed in [0, L-1];
the image histogram is a row vector with a dimension of L, the coordinate of the vector is the gray level of the image, and the value corresponding to the coordinate is the total number of pixels of the gray level;
counting the total number of pixels of the image according to the image histogram, and calculating the distribution probability of each gray level from it, as shown in the following formula (1):

p(x_i) = n_i / n,  i = 0, 1, ..., L-1    (1)

wherein n is the total number of image pixels, n_i is the number of pixels with gray value i, and p(x_i) is the distribution probability of gray level i.
As an improvement of the above solution, when the dimension of the normalized histogram vector of the image is 256, k = 32, and the gray level distribution probabilities within each cell are added to obtain a 32-dimensional contrast feature vector, calculated as shown in the following formula (2):

C_j = \sum_{i=8j}^{8j+7} p(x_i),  j = 0, 1, ..., 31    (2)
as an improvement of the above scheme, the performing fourier transform on the image to obtain the fourier transform characteristic of the image specifically includes:
decomposing the image into sine and cosine components, wherein the two-dimensional Fourier transform of the image is shown in the following formula (3):

F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) e^{-j2\pi(ux/M + vy/N)}    (3)

where u = 0, 1, ..., M-1 and v = 0, 1, ..., N-1, f(x, y) denotes the image matrix of size M × N, and F(u, v) is the Fourier transform of the image.
As an improvement of the above scheme, the method further comprises the following steps: the structural information of the image is represented by a fourier spectrum obtained from a real image and an imaginary image by the following formula (4):
|F(u,v)| = [R^2(u,v) + I^2(u,v)]^{1/2}    (4)

where R(u, v) is the real image, I(u, v) is the imaginary image, and |F(u,v)| is the spectrum of the Fourier transform.
Further comprising: performing frequency domain center translation on the image;
the performing frequency domain center translation on the image specifically includes:
moving the low-frequency components, originally at the corners of the spectrum, to the center, and the high-frequency components to the periphery;
multiplying the image by (-1)^{x+y}; since e^{j\pi} = -1, we have (-1)^{x+y} = e^{j\pi(x+y)}, and substituting into formula (3) yields the following formula (5):

f(x,y)(-1)^{x+y} \Leftrightarrow F(u - M/2, v - N/2)    (5)
as can be seen from equation (5), the original coordinate (0,0) position of the image is moved to the (M/2, N/2) position, and the spectrum centering is completed.
As an improvement of the above scheme, the training of the feature vector set by the support vector machine to obtain the mist concentration discrimination model specifically includes:
learning and training the feature vector set to obtain an optimal decision surface, wherein an equation of the optimal decision surface is shown as the following formula (6);
w^T x + b = 0    (6)
wherein x is an input vector, i.e. a vector in the set of feature vectors; w is an adjustable weight vector; b is the offset of the hyperplane relative to the origin;
using classes +1 and -1 to represent the expected responses of the feature vectors, we obtain the following formula (7):

w^T x_i + b \geq +1  for  y_i = +1
w^T x_i + b \leq -1  for  y_i = -1    (7)
classifying samples above the upper interval boundary as the positive class and samples below the lower interval boundary as the negative class; the distance between the two interval boundaries is 2/||w||.
Maximizing the separation margin between the two classes is equivalent to minimizing the Euclidean norm of the weight vector w, i.e. to minimizing \frac{1}{2}||w||^2, giving the constrained extremum problem shown in the following formula (8):

\min_{w,b} \frac{1}{2}||w||^2  subject to  y_i(w^T x_i + b) \geq 1,  i = 1, ..., m    (8)
the Lagrange equation shown in the following formula (9) is constructed:

L(w, b, a) = \frac{1}{2}||w||^2 - \sum_{i=1}^{m} a_i [y_i(w^T x_i + b) - 1]    (9)
setting the partial derivatives of the Lagrange equation (9) to zero yields the following formula (10):

w = \sum_{i=1}^{m} a_i y_i x_i,  \sum_{i=1}^{m} a_i y_i = 0    (10)
solving for the multipliers a_i with the SMO algorithm, and then calculating w;
the intercept b is calculated by the following formula (11):
b = y_j - \sum_{i=1}^{m} a_i y_i x_i^T x_j  (for any support vector x_j)    (11)
and (3) solving the optimal decision surface equation according to the formula (11), namely finishing SVM training to obtain the fog concentration discrimination model.
As an improvement of the above scheme, the method further comprises the following steps: evaluating the performance of the fog concentration discrimination model using the classification error rate and accuracy;
the evaluating the performance of the fog concentration discrimination model using the classification error rate and accuracy specifically includes:
assuming that the sample set is D, the classification error rate is shown as the following formula (12):
E(f; D) = \frac{1}{m} \sum_{i=1}^{m} I(f(x_i) \neq y_i)    (12)
the accuracy is as shown in the following equation (13):
acc(f; D) = \frac{1}{m} \sum_{i=1}^{m} I(f(x_i) = y_i) = 1 - E(f; D)    (13)
in the formulas, I(·) is the indicator function, taking the value 1 when its argument is true and 0 when it is false; m is the total number of samples; f(x_i) is the predicted value and y_i is the true sample value.
Correspondingly, an embodiment of the invention provides a fog concentration grade discrimination device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; when executing the computer program, the processor implements the fog concentration grade discrimination method of the first embodiment of the invention.
Compared with the prior art, the fog concentration grade discrimination method provided by the embodiment of the invention has the following beneficial effects:
selecting several different types of substation meter images, with images of each meter type at different fog concentration grades, as the sample set improves sample diversity and coverage; normalizing the image histogram avoids the problem that an over-large image makes the per-gray-level pixel counts too large for training; image contrast effectively reflects the fog concentration; Fourier-transform feature extraction removes the influence of illumination and noise, which benefits meter identification and reading; finally, meter fog concentration grades are classified accurately, meeting the substations' need to pre-judge meter fog concentration.
Drawings
Fig. 1 is a schematic flow chart of a mist concentration level determination method according to an embodiment of the present invention.
Fig. 2 is a flowchart of an algorithm of a mist concentration level determination method according to an embodiment of the present invention.
Fig. 3 is a histogram of an image of a fog density level determination method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a contrast characteristic of an image of a fog density level determination method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of fourier characteristics of a mist concentration level determination method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating a result obtained by discriminating an image of a test set by a fog density discrimination method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for determining a mist concentration level, including:
s101, establishing an image sample set for the acquired image, and dividing the image in the image sample set into a plurality of grades according to the fog concentration;
s102, respectively extracting the contrast and Fourier characteristic vectors of each image in the image sample set to obtain a characteristic vector set of the image sample set;
s103, training the feature vector set through a support vector machine to obtain a fog concentration discrimination model;
and S104, inputting the image to be distinguished into a fog density distinguishing model to obtain a fog density grade division result of the image to be distinguished.
Further, the acquired images comprise a plurality of transformer substation meter images of different types; the substation meter images in each category include substation meter images of different fog concentration levels.
Further, the feature vector extraction of contrast and fourier is performed on each image in the image sample set, so as to obtain a feature vector set of the image sample set, and the method specifically includes:
calculating an image histogram of the image, and performing normalization processing on the image histogram;
dividing the normalized image histogram vector into k equal-length cells, and adding the gray level pixel value distribution probabilities within each cell to obtain the contrast feature vector;
and carrying out Fourier transform on the image to obtain the Fourier transform characteristic of the image.
Further, the method also comprises the following steps: calculating the gray level pixel value distribution probability of the image through an image histogram;
calculating the gray level pixel value distribution probability of the image through the image histogram, which specifically comprises the following steps:
acquiring the occurrence frequency of each gray level in the image; if the image f(x, y) has L gray levels, the gray values of the image are distributed in [0, L-1];
the image histogram is a row vector with a dimension of L, the coordinate of the vector is the gray level of the image, and the value corresponding to the coordinate is the total number of pixels of the gray level;
counting the total number of pixels of the image according to the image histogram, and calculating the distribution probability of each gray level from it, as shown in the following formula (1):

p(x_i) = n_i / n,  i = 0, 1, ..., L-1    (1)

wherein n is the total number of image pixels, n_i is the number of pixels with gray value i, and p(x_i) is the distribution probability of gray level i; the gray level distribution probability is a statistic that reflects the overall characteristics of the image.
Preferably, when an image is large, the pixel count in each gray level of the statistical histogram is also large, which makes the extracted features unfavorable for training; the features are therefore normalized to the (0, 1) interval.
Further, when the dimension of the normalized histogram vector of the image is 256, k = 32, and the gray level distribution probabilities within each cell are added to obtain a 32-dimensional contrast feature vector, calculated as shown in the following formula (2):

C_j = \sum_{i=8j}^{8j+7} p(x_i),  j = 0, 1, ..., 31    (2)
preferably, the image contrast refers to the measurement of different gray levels of bright and dark regions in an image, i.e. the gray contrast of an image. The fog is mainly composed of water vapor or fine particles in the air, so when light passes through the fog in the air, scattering, refraction, absorption and the like can occur, and the contrast of the acquired image is generally reduced. The contrast of the image is therefore an effective feature to reflect the fog density.
Further, performing fourier transform on the image to obtain fourier transform characteristics of the image, specifically including:
decomposing the image into sine and cosine components, wherein the two-dimensional Fourier transform formula of the image is shown as the following formula (3):
F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) e^{-j2\pi(ux/M + vy/N)}    (3)

where u = 0, 1, ..., M-1 and v = 0, 1, ..., N-1, f(x, y) denotes the image matrix of size M × N, and F(u, v) is the Fourier transform of the image.
Preferably, since e^{jx} = cos(x) + j sin(x), the Fourier transform of the image can also be expressed with trigonometric functions.
Preferably, the original image is a set of sampling points in real space and describes the gray-scale distribution of the image. The Fourier transform maps the image from the spatial domain to the frequency domain and describes its frequency-domain characteristics. In the frequency domain, high and low frequencies reflect how sharply the gray scale varies in the spatial domain: the higher the frequency, the more rapid the variation, and vice versa.
All spatial-domain signals can be represented as a sum of infinitely many sine and cosine functions; the Fourier transform builds on this idea, decomposing an image into sine and cosine components, i.e. transforming it from the spatial domain to the frequency domain. The direct-current component after the Fourier transform represents the average brightness of the image, so filtering it out removes the influence of illumination to a certain extent. For meter identification and reading, the algorithm focuses on the shape information of the image, i.e. the meter contour, which generally corresponds to the low-order components after the Fourier transform. Image noise is mostly concentrated in the high-frequency components, so filtering them out also achieves a certain denoising effect. The algorithm therefore Fourier-transforms the image, extracts the low-order components to form the feature vector, and filters out the direct-current and high-frequency components, removing the influence of illumination and noise and making the features more favorable for meter identification and reading.
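The transform of formula (3) and spectrum of formula (4) can be written out directly, without any FFT library, as a naive pure-Python sketch (O((MN)^2), suitable for a toy matrix only; names are illustrative). Note that F(0,0), the direct-current component, equals the sum of all pixels, i.e. MN times the average brightness, which is why filtering it removes the illumination term:

```python
import cmath

def dft2(f):
    """Formula (3): direct 2-D discrete Fourier transform of an M x N matrix."""
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)]
            for u in range(M)]

def spectrum(F):
    """Formula (4): |F(u,v)| from the real and imaginary parts."""
    return [[(Fuv.real ** 2 + Fuv.imag ** 2) ** 0.5 for Fuv in row] for row in F]

img = [[10, 20],
       [30, 40]]
F = dft2(img)        # F[0][0] is the DC component (sum of all pixels)
S = spectrum(F)
```

A production implementation would use an FFT, but the sketch matches formula (3) term by term.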
Further, the method also comprises the following steps: the structural information of the image is represented by a fourier spectrum obtained from a real image and an imaginary image by the following formula (4):
|F(u,v)| = [R^2(u,v) + I^2(u,v)]^{1/2}    (4)

where R(u, v) is the real image, I(u, v) is the imaginary image, and |F(u,v)| is the spectrum of the Fourier transform;
further comprising: in order to facilitate frequency domain analysis and frequency domain filtering, the center of the frequency domain of the image is translated;
performing frequency domain center translation on the image, specifically comprising:
moving the low-frequency components, originally at the corners of the spectrum, to the center, and the high-frequency components to the periphery;
multiplying the image by (-1)^{x+y}; since e^{j\pi} = -1, we have (-1)^{x+y} = e^{j\pi(x+y)}, and substituting into formula (3) yields the following formula (5):

f(x,y)(-1)^{x+y} \Leftrightarrow F(u - M/2, v - N/2)    (5)
as can be seen from equation (5), the original coordinate (0,0) position of the image is moved to the (M/2, N/2) position, and the spectrum centering is completed.
Preferably, after the Fourier spectrogram is obtained, it is divided into four regions at M/2 and N/2, and then the upper-left region is exchanged with the lower-right region and the upper-right region with the lower-left region, completing the frequency-domain center translation.
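Both centering routes described above — pre-multiplying the spatial image by (-1)^(x+y) before the transform, or swapping the four spectrum quadrants afterwards — can be sketched in pure Python (illustrative names; the quadrant swap assumes even M and N):

```python
def center_by_sign(f):
    """Multiply f(x,y) by (-1)^(x+y); by the shift property this moves the
    DC term of the subsequent transform from (0,0) to (M/2, N/2)."""
    return [[f[x][y] * ((-1) ** (x + y)) for y in range(len(f[0]))]
            for x in range(len(f))]

def center_by_swap(F):
    """Split the spectrum into four regions at (M/2, N/2) and exchange
    upper-left with lower-right and upper-right with lower-left."""
    M, N = len(F), len(F[0])
    return [[F[(x + M // 2) % M][(y + N // 2) % N] for y in range(N)]
            for x in range(M)]

F = [[0, 1, 2, 3],
     [4, 5, 6, 7],
     [8, 9, 10, 11],
     [12, 13, 14, 15]]
G = center_by_swap(F)                      # the (0,0) entry moves to (M/2, N/2)
signed = center_by_sign([[1, 1], [1, 1]])  # alternating sign pattern
```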
Further, training the feature vector set through a support vector machine to obtain a fog concentration discrimination model, which specifically comprises:
learning and training on the feature vector set to obtain the optimal decision surface, whose equation is shown in the following formula (6):

w^T x + b = 0    (6)

in the formula, x is an input vector, i.e. a vector in the feature vector set; w is an adjustable weight vector; b is the offset of the hyperplane relative to the origin;
using classes +1 and -1 to represent the expected responses of the feature vectors, we obtain the following formula (7):

w^T x_i + b \geq +1  for  y_i = +1
w^T x_i + b \leq -1  for  y_i = -1    (7)
classifying samples above the upper interval boundary as the positive class and samples below the lower interval boundary as the negative class; the distance between the two interval boundaries is 2/||w||.
Maximizing the separation margin between the two classes is equivalent to minimizing the Euclidean norm of the weight vector w, i.e. to minimizing \frac{1}{2}||w||^2, giving the constrained extremum problem shown in the following formula (8):

\min_{w,b} \frac{1}{2}||w||^2  subject to  y_i(w^T x_i + b) \geq 1,  i = 1, ..., m    (8)
the Lagrange equation shown in the following formula (9) is constructed:

L(w, b, a) = \frac{1}{2}||w||^2 - \sum_{i=1}^{m} a_i [y_i(w^T x_i + b) - 1]    (9)

setting the partial derivatives of the Lagrange equation (9) to zero yields the following formula (10):

w = \sum_{i=1}^{m} a_i y_i x_i,  \sum_{i=1}^{m} a_i y_i = 0    (10)
solving for the multipliers a_i with the SMO algorithm, and then calculating w;
the intercept b is calculated by the following formula (11):
b = y_j - \sum_{i=1}^{m} a_i y_i x_i^T x_j  (for any support vector x_j)    (11)
and (4) solving an optimal decision surface equation according to the formula (11), namely finishing SVM training to obtain a fog concentration discrimination model.
Further, the method also comprises the following steps: evaluating the performance of the fog concentration discrimination model using the classification error rate and accuracy;
the evaluating the performance of the fog concentration discrimination model using the classification error rate and accuracy specifically includes:
assuming that the sample set is D, the classification error rate is shown in the following equation (12):
E(f; D) = \frac{1}{m} \sum_{i=1}^{m} I(f(x_i) \neq y_i)    (12)
the accuracy is shown in the following equation (13):
acc(f; D) = \frac{1}{m} \sum_{i=1}^{m} I(f(x_i) = y_i) = 1 - E(f; D)    (13)
in the formulas, I(·) is the indicator function, taking the value 1 when its argument is true and 0 when it is false; m is the total number of samples; f(x_i) is the predicted value and y_i is the true sample value.
Preferably, error rate and accuracy are the two most commonly used performance measures in classification tasks, applicable to both binary and multi-class classification. The error rate is the proportion of misclassified samples among all samples, while the accuracy is the proportion of correctly classified samples.
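Formulas (12) and (13) reduce to a few lines of Python (the predicted and true grade lists below are purely illustrative toy data):

```python
def error_rate(preds, labels):
    """Formula (12): fraction of samples whose prediction differs from the label."""
    return sum(1 for f, y in zip(preds, labels) if f != y) / len(labels)

def accuracy(preds, labels):
    """Formula (13): fraction of correctly classified samples (= 1 - error rate)."""
    return sum(1 for f, y in zip(preds, labels) if f == y) / len(labels)

preds  = [1, 2, 3, 3, 5]   # predicted fog grades (illustrative)
labels = [1, 2, 3, 4, 5]   # true fog grades
e = error_rate(preds, labels)
acc = accuracy(preds, labels)
```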
In a specific embodiment, fig. 2 shows the algorithm flowchart of the fog concentration grade discrimination method according to the embodiment of the invention. In fig. 3, images a and c are meter images at two fog concentrations, and b and d are the histograms of a and c respectively. Fig. 4 shows the contrast features: a and c are meter images at two fog concentrations, and b and d are the contrast feature plots of a and c respectively. Fig. 5 shows the Fourier features: a and c are meter images at two fog concentrations, and b and d are the Fourier feature plots of a and c respectively.
The 1000 sample images, spanning the whole range of fog concentration, were divided equally into five sample sets, and each sample set was graded into five levels from high to low fog concentration. Each sample set was used in turn as the test set with the remaining four as the training set. Fig. 6 shows the results of discriminating the test-set images with the fog concentration discrimination method of the embodiment.
Cross-validation was performed on the proposed fog concentration grade discrimination algorithm, and the resulting accuracy and error rate were statistically averaged to verify the stability of the algorithm.
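The five-fold protocol above amounts to partitioning the sample indices and rotating the test fold; the sketch below assumes 1000 samples indexed 0-999 and omits the per-grade stratification the embodiment would also apply (names are illustrative):

```python
def k_fold_splits(n_samples=1000, k=5):
    """Partition indices into k equal folds; each fold serves once as the
    test set while the remaining k - 1 folds form the training set."""
    fold_size = n_samples // k
    folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]
    return [([idx for j, f in enumerate(folds) if j != i for idx in f], folds[i])
            for i in range(k)]

splits = k_fold_splits()       # five (train, test) index pairs
train0, test0 = splits[0]
```

Averaging the accuracy and error rate over the five test folds gives the statistics reported in Table 1.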
The contrast feature alone, the DFT feature alone, and the joint contrast-and-DFT feature were each extracted and tested, and the experimental results are compared in Table 1 below:
TABLE 1 Comparison of classification results for different features

Feature extraction                 Average error rate    Average accuracy
Contrast features                  3.3%                  96.7%
DFT features                       2.4%                  97.6%
Contrast and DFT joint features    0.5%                  99.5%
As can be seen from the experimental results in table 1, the final average accuracy of the mist concentration level discrimination method provided by the embodiment reaches 99.5%, the classification precision is high, and the practical requirement of mist concentration level discrimination of the meter of the transformer substation is met.
Compared with the prior art, the fog concentration grade discrimination method provided by the embodiment of the invention has the following beneficial effects:
a plurality of different types of substation meter images, with images of each meter type at different fog concentration levels, are selected as the sample set, improving sample diversity and coverage; the image histogram is normalized, avoiding the problem that an over-large image size makes the pixel count of each gray level too large for training; the image contrast effectively reflects the characteristics of fog concentration; feature extraction by Fourier transform removes the influence of illumination and noise, facilitating meter identification and reading; finally, the meter fog concentration levels are accurately classified, meeting the requirement for pre-judging the fog concentration at substation meters.
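The feature-extraction steps summarized above can be sketched end to end. This is a minimal NumPy illustration assuming 2-D uint8 grayscale inputs; the function name and the spectrum normalization are our choices, not the patent's:

```python
import numpy as np

def extract_features(img, k=32):
    """Contrast (k-dim) + Fourier-magnitude features for one grayscale image."""
    # Normalized histogram: p_i = n_i / n, so image size no longer matters
    p = np.bincount(img.ravel(), minlength=256) / img.size
    # Contrast vector: sum the probabilities inside k equal-length cells
    contrast = p.reshape(k, 256 // k).sum(axis=1)
    # Centered magnitude spectrum; magnitudes discard phase, which suppresses
    # the illumination effects mentioned above
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    dft_feat = spectrum.ravel() / spectrum.max()  # crude scaling (our choice)
    return np.concatenate([contrast, dft_feat])
```

The concatenated vector is what a classifier such as an SVM would be trained on.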
The fog concentration level discrimination device correspondingly provided by the embodiment of the invention comprises a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor; the fog concentration level discrimination method is implemented when the processor executes the computer program. The device may be computing equipment such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server, and may include, but is not limited to, a processor and a memory.
The Processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the fog concentration level discrimination device, connecting the parts of the whole device through various interfaces and lines.
The memory can be used for storing computer programs and/or modules, and the processor realizes the various functions of the fog concentration level discrimination device by running or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
Wherein, if the modules/units integrated in the fog concentration level discrimination device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer readable storage medium. Based on this understanding, all or part of the flow of the method of the embodiments of the present invention may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components illustrated as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A method for discriminating a mist density level, comprising:
establishing an image sample set for the obtained image, and dividing the image in the image sample set into a plurality of grades according to the fog concentration;
respectively extracting the feature vectors of contrast and Fourier for each image in the image sample set to obtain a feature vector set of the image sample set;
training the feature vector set through a support vector machine to obtain a fog concentration discrimination model;
and inputting the image to be distinguished into the fog concentration distinguishing model to obtain a fog concentration grade division result of the image to be distinguished.
2. The fog concentration level discrimination method according to claim 1, wherein the acquired images include a plurality of substation meter images of different types; the substation meter images in each category include substation meter images of different fog concentration levels.
3. The method for discriminating a mist density level according to claim 1, wherein the extracting of the feature vectors of the contrast and the fourier for each image in the image sample set to obtain the feature vector set of the image sample set specifically includes:
calculating an image histogram of the image, and performing normalization processing on the image histogram;
dividing the normalized image histogram vector into k equal-length cells, and adding the gray level pixel value distribution probabilities within each cell to obtain the contrast feature vector;
and carrying out Fourier transform on the image to obtain the Fourier transform characteristic of the image.
4. The mist concentration level discrimination method according to claim 3, further comprising: calculating the gray level pixel value distribution probability of the image through an image histogram;
the calculating the distribution probability of the gray level pixel values of the image through the image histogram specifically includes:
acquiring the number of occurrences of each gray level in the image; if the number of gray levels of the image f(x, y) is L, the gray values of the image are distributed in [0, L-1];
the image histogram is a row vector with a dimension of L, the coordinate of the vector is the gray level of the image, and the value corresponding to the coordinate is the total number of pixels of the gray level;
counting the total number of pixels of the image according to the histogram of the image, and calculating the gray level pixel value distribution probability of each gray level according to the total number of pixels of the image, wherein the gray level pixel value distribution probability of each gray level is shown as the following formula (1):
p(x_i) = n_i / n, i = 0, 1, ..., L-1 (1)
wherein n is the total number of image pixels, n_i is the number of pixels with gray value i, and p(x_i) is the gray level pixel value distribution probability of gray level i.
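A numeric check of formula (1), assuming NumPy and a hypothetical 2 x 2 image with L = 4 gray levels:

```python
import numpy as np

# Numeric check of formula (1): p(x_i) = n_i / n on a tiny 2x2 image
img = np.array([[0, 0],
                [1, 3]], dtype=np.uint8)
L = 4                                          # gray levels 0..L-1
n = img.size                                   # total number of pixels, n = 4
n_i = np.bincount(img.ravel(), minlength=L)    # occurrences of each gray level
p = n_i / n                                    # [0.5, 0.25, 0.0, 0.25]
assert p.sum() == 1.0                          # probabilities sum to one
```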
5. The fog concentration level discrimination method according to claim 3, wherein, when the dimension of the normalized histogram vector of the image is 256 and k is 32, the gray level pixel value distribution probabilities in each cell are added to obtain a 32-dimensional contrast feature vector, as shown in the following formula (2):
c_j = Σ_{i=8j}^{8j+7} p(x_i), j = 0, 1, ..., 31 (2)
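A minimal sketch of the binning behind formula (2), assuming cell j covers the 8 consecutive gray levels 8j through 8j+7 (the claim fixes only k = 32; this exact indexing is our assumption):

```python
import numpy as np

def contrast_vector(p, k=32):
    """Fold 256 per-gray-level probabilities into k equal-length cells by summing."""
    p = np.asarray(p, dtype=float)
    # reshape groups 256/k consecutive probabilities per row; sum each row
    return p.reshape(k, p.size // k).sum(axis=1)
```

For a uniform histogram each of the 32 cells receives probability 1/32.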
6. the method for discriminating a mist density level according to claim 3, wherein the fourier transform of the image to obtain the fourier transform characteristic of the image specifically includes:
decomposing the image into sine and cosine components, wherein the two-dimensional Fourier transform formula of the image is shown as the following formula (3):
F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(ux/M + vy/N)} (3)
in the formula, u = 0, 1, ..., M-1, v = 0, 1, ..., N-1, f(x, y) represents an image matrix of size M × N, and F(u, v) is the Fourier transform feature of the image.
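Formula (3) can be evaluated directly and checked against a fast FFT implementation; an illustrative NumPy sketch (the function name is ours):

```python
import numpy as np

def dft2_naive(f):
    """Direct evaluation of formula (3): F(u,v) = sum_x sum_y f(x,y) e^{-j2pi(ux/M + vy/N)}."""
    M, N = f.shape
    # Wm[u, x] = e^{-j2pi ux/M}; Wn[y, v] = e^{-j2pi vy/N}
    Wm = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
    Wn = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
    # (Wm @ f @ Wn)[u, v] performs both sums of the separable 2-D transform
    return Wm @ f @ Wn
```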
7. The mist concentration level discrimination method according to claim 3, further comprising: the structural information of the image is represented by a fourier spectrum obtained from a real image and an imaginary image by the following formula (4):
|F(u, v)| = [R^2(u, v) + I^2(u, v)]^{1/2} (4)
in the formula, R(u, v) is the real part image, I(u, v) is the imaginary part image, and |F(u, v)| is the Fourier transform spectrum;
further comprising: performing frequency domain center translation on the image;
the performing frequency domain center translation on the image specifically includes:
concentrating the low-frequency components at the corners of the spectrum to the central position, and moving the high-frequency components to the periphery;
multiplying the image by (-1)^(x+y); since e^{jπ} = -1, it follows that (-1)^(x+y) = e^{jπ(x+y)}, and substituting this into the Fourier transform of formula (3) gives the following formula (5):
F[f(x, y)(-1)^(x+y)] = F(u - M/2, v - N/2) (5)
As can be seen from formula (5), the original (0, 0) position of the spectrum is moved to the (M/2, N/2) position, completing the spectrum centering.
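The centering property of formula (5) can be verified numerically: pre-multiplying by (-1)^(x+y) yields the same result as quadrant-shifting the plain spectrum. An illustrative sketch, assuming even image dimensions:

```python
import numpy as np

def centered_dft(f):
    """DFT of f(x,y) * (-1)^(x+y), which moves the spectrum origin to (M/2, N/2)."""
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return np.fft.fft2(f * (-1.0) ** (x + y))
```

For even M and N this equals `np.fft.fftshift(np.fft.fft2(f))`, the usual way of centering a spectrum.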
8. The method for discriminating a mist concentration level according to claim 1, wherein the training of the feature vector set by a support vector machine to obtain a mist concentration discrimination model specifically comprises:
learning and training the feature vector set to obtain an optimal decision surface, wherein an equation of the optimal decision surface is shown as the following formula (6);
w^T x + b = 0 (6)
x = (x_1, x_2, ..., x_d)^T
w = (w_1, w_2, ..., w_d)^T
wherein x is an input vector, i.e. a vector in the set of feature vectors; w is an adjustable weight vector; b is the offset of the hyperplane relative to the origin;
using class +1 and class -1 to represent the expected outputs for the feature vector set, we obtain the following formula (7):
w^T x_i + b ≥ +1 for y_i = +1; w^T x_i + b ≤ -1 for y_i = -1 (7)
classifying samples above the upper margin boundary as the positive class, and samples below the lower margin boundary as the negative class; the distance between the two margin boundaries is 2/||w||.
Maximizing the separation margin between the two classes is equivalent to minimizing the Euclidean norm of the weight vector w, i.e. to minimizing ||w||^2/2, which gives the conditional extremum problem shown in the following formula (8):
min_{w,b} (1/2)||w||^2, subject to y_i(w^T x_i + b) ≥ 1, i = 1, 2, ..., m (8)
a lagrange equation shown in the following formula (9) is constructed:
L(w, b, a) = (1/2)||w||^2 - Σ_{i=1}^{m} a_i [y_i(w^T x_i + b) - 1] (9)
setting the partial derivatives of the Lagrangian of formula (9) with respect to w and b to zero gives the following formula (10):
w = Σ_{i=1}^{m} a_i y_i x_i, Σ_{i=1}^{m} a_i y_i = 0 (10)
solving for a_i by the SMO algorithm, and then calculating w;
the intercept b is calculated by the following formula (11):
b = y_j - Σ_{i=1}^{m} a_i y_i x_i^T x_j (11), where (x_j, y_j) is any support vector;
the optimal decision surface equation is then obtained from formulas (10) and (11), which completes the SVM training and yields the fog concentration discrimination model.
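A tiny worked instance of formulas (6), (10), and (11), using a two-point linearly separable set whose dual solution a_1 = a_2 = 0.5 is known in closed form (the general case would obtain the a_i by SMO, as the claim states):

```python
import numpy as np

# Two support vectors: x1 = (1, 0) with y1 = +1 and x2 = (-1, 0) with y2 = -1.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
a = np.array([0.5, 0.5])   # dual coefficients, known analytically here

# Formula (10): w = sum_i a_i y_i x_i
w = (a * y) @ X            # -> [1, 0]

# Formula (11): b = y_j - sum_i a_i y_i x_i^T x_j, for any support vector j
j = 0
b = y[j] - np.sum(a * y * (X @ X[j]))   # -> 0

def decide(x):
    """Sign of the optimal decision surface w^T x + b, formula (6)."""
    return 1.0 if w @ x + b >= 0 else -1.0
```

Both training points satisfy y_i(w^T x_i + b) = 1, i.e. they lie exactly on the margin boundaries, and the margin width is 2/||w|| = 2.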
9. The fog concentration level discrimination method according to claim 1, further comprising: evaluating the performance of the fog concentration discrimination model by error rate and accuracy;
the evaluating the performance of the fog concentration discrimination model by classification error rate and accuracy specifically includes:
assuming that the sample set is D, the classification error rate is shown as the following formula (12):
E(f; D) = (1/m) Σ_{i=1}^{m} I(f(x_i) ≠ y_i) (12)
the accuracy is as shown in the following equation (13):
acc(f; D) = (1/m) Σ_{i=1}^{m} I(f(x_i) = y_i) = 1 - E(f; D) (13)
in the formula, I (·) is an indication function, and takes 1 when · is true and takes 0 when · is false; m is the total number of samples; f (x)i) To predict value, yiAre true sample values.
10. A mist concentration level discrimination method apparatus comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the mist concentration level discrimination method according to any one of claims 1 to 9 when executing the computer program.
CN201910911479.7A 2019-09-25 2019-09-25 Mist concentration grade discriminating method and device Active CN110705619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910911479.7A CN110705619B (en) 2019-09-25 2019-09-25 Mist concentration grade discriminating method and device


Publications (2)

Publication Number Publication Date
CN110705619A true CN110705619A (en) 2020-01-17
CN110705619B CN110705619B (en) 2023-06-06

Family

ID=69196352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910911479.7A Active CN110705619B (en) 2019-09-25 2019-09-25 Mist concentration grade discriminating method and device

Country Status (1)

Country Link
CN (1) CN110705619B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103442209A (en) * 2013-08-20 2013-12-11 北京工业大学 Video monitoring method of electric transmission line
CN103903008A (en) * 2014-03-26 2014-07-02 国家电网公司 Power transmission line fog level recognition method and system based on images
CN105512623A (en) * 2015-12-02 2016-04-20 吉林大学 Foggy-day driving visual enhancement and visibility early warning system and method based on multiple sensors
CN105678735A (en) * 2015-10-13 2016-06-15 中国人民解放军陆军军官学院 Target salience detection method for fog images
CN206223609U (en) * 2016-11-14 2017-06-06 上海腾盛智能安全科技股份有限公司 A kind of smoke prewarning device based on SVM
CN107256017A (en) * 2017-04-28 2017-10-17 中国农业大学 route planning method and system
CN107610114A (en) * 2017-09-15 2018-01-19 武汉大学 Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN109961070A (en) * 2019-03-22 2019-07-02 国网河北省电力有限公司电力科学研究院 The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xia Chuangwen: "Research on Several Key Technologies for Expressway Network Operation Monitoring", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *
Yang Fan et al.: "Digital Image Processing and Analysis (3rd Edition)", 31 May 2015 *
Han Jiuqiang et al.: "Digital Image Processing Based on XAVIS Configuration Software", 31 March 2019 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738064A (en) * 2020-05-11 2020-10-02 南京邮电大学 Haze concentration identification method for haze image
WO2021228088A1 (en) * 2020-05-11 2021-11-18 南京邮电大学 Method for recognizing haze concentration in haze image
US20220076168A1 (en) * 2020-05-11 2022-03-10 Nanjing University Of Posts And Telecommunications Method for recognizing fog concentration of hazy image
US11775875B2 (en) * 2020-05-11 2023-10-03 Nanjing University Of Posts And Telecommunications Method for recognizing fog concentration of hazy image
CN112686105A (en) * 2020-12-18 2021-04-20 云南省交通规划设计研究院有限公司 Fog concentration grade identification method based on video image multi-feature fusion
CN112686105B (en) * 2020-12-18 2021-11-02 云南省交通规划设计研究院有限公司 Fog concentration grade identification method based on video image multi-feature fusion
CN113436283A (en) * 2021-06-24 2021-09-24 长安大学 Group fog detection method, system, device, storage medium and front-end device
CN113436283B (en) * 2021-06-24 2024-01-30 长安大学 Method, system and device for detecting mist, storage medium and front-end device

Also Published As

Publication number Publication date
CN110705619B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN110705619B (en) Mist concentration grade discriminating method and device
CN109272016B (en) Target detection method, device, terminal equipment and computer readable storage medium
CN103034838B (en) A kind of special vehicle instrument type identification based on characteristics of image and scaling method
CN104809452A (en) Fingerprint identification method
CN109977191A (en) Problem map detection method, device, electronic equipment and medium
CN104978578A (en) Mobile phone photo taking text image quality evaluation method
CN109461133B (en) Bridge bolt falling detection method and terminal equipment
CN113269257A (en) Image classification method and device, terminal equipment and storage medium
TWI765442B (en) Method for defect level determination and computer readable storage medium thereof
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN109741322A (en) A kind of visibility measurement method based on machine learning
CN109766818A (en) Pupil center's localization method and system, computer equipment and readable storage medium storing program for executing
CN109448307A (en) A kind of recognition methods of fire disaster target and device
CN104038792A (en) Video content analysis method and device for IPTV (Internet Protocol Television) supervision
CN111242899A (en) Image-based flaw detection method and computer-readable storage medium
CN111539910B (en) Rust area detection method and terminal equipment
CN112085721A (en) Damage assessment method, device and equipment for flooded vehicle based on artificial intelligence and storage medium
CN108764253A (en) Pointer instrument digitizing solution
CN110348516B (en) Data processing method, data processing device, storage medium and electronic equipment
CN114418970A (en) Haze distribution and aerosol optical thickness detection method and device based on satellite remote sensing
Akbar et al. Tumor localization in tissue microarrays using rotation invariant superpixel pyramids
CN111199240A (en) Training method of bank card identification model, and bank card identification method and device
CN113269752A (en) Image detection method, device terminal equipment and storage medium
CN108985350B (en) Method and device for recognizing blurred image based on gradient amplitude sparse characteristic information, computing equipment and storage medium
CN115239947A (en) Wheat stripe rust severity evaluation method and device based on unsupervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant