CN110610190A - Convolutional neural network rainfall intensity classification method for rainy pictures - Google Patents

Convolutional neural network rainfall intensity classification method for rainy pictures

Info

Publication number
CN110610190A
CN110610190A CN201910701424.3A CN201910701424A
Authority
CN
China
Prior art keywords
rainfall
pictures
neural network
convolutional neural
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910701424.3A
Other languages
Chinese (zh)
Inventor
郑飞飞
尹航
陶若凌
申永刚
张清周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910701424.3A priority Critical patent/CN110610190A/en
Publication of CN110610190A publication Critical patent/CN110610190A/en
Priority to PCT/CN2020/072281 priority patent/WO2021017445A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a convolutional neural network rainfall intensity classification method for rainy pictures, which comprises the following steps: (1) synthesizing rainfall pictures with image processing software to obtain a synthetic data set; (2) building a convolutional neural network and pre-training it with the synthetic data set from step (1); (3) collecting actual rainfall pictures to obtain a real data set; (4) fine-tuning the pre-trained model with the real data set from step (3) to obtain a trained model; (5) using the trained model from step (4) for real-time rainfall intensity classification. The method classifies the rainfall intensity of both real and synthesized rainfall pictures with good accuracy and a low error rate, and can greatly improve the spatial accuracy of real-time weather information.

Description

Convolutional neural network rainfall intensity classification method for rainy pictures
Technical Field
The invention belongs to the field of municipal engineering rainwater real-time measurement, and particularly relates to a convolutional neural network rainfall intensity classification method for rainy pictures.
Background
At present, urban waterlogging occurs frequently in China, causing huge economic and property losses and even casualties. Rainstorms are markedly nonuniform in space, so the degree of disaster differs greatly between different regions of a city. Accurately obtaining the real-time rainfall level of each region is therefore of fundamental importance for the monitoring, prevention, control, and emergency response to urban waterlogging. Current weather forecasts cannot reflect the spatial nonuniformity of rainfall and are not accurate enough to support real-time scheduling. Existing rain intensity measuring tools such as rain gauges can measure rain intensity accurately, but they are expensive, difficult to connect for real-time data transmission, and unable to reflect the spatial nonuniformity of rainfall.
Convolutional neural networks have characteristics such as sparse connectivity and weight sharing, which effectively reduce the number of model parameters; however, training a convolutional neural network still requires a large amount of data, real rainfall pictures are difficult to obtain on a large scale, and existing public data sets rarely contain comparable data. This greatly hinders the application of convolutional neural networks to rainfall intensity classification of rainy pictures.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a convolutional neural network rainfall intensity classification method for rainy pictures.
In order to achieve the purpose, the invention adopts the following technical scheme:
a convolutional neural network rainfall intensity classification method for rainy pictures comprises the following steps:
(1) synthesizing a rainfall picture through image processing software to obtain a synthetic data set;
(2) building a convolutional neural network, and pre-training the convolutional neural network by using the synthetic data set in the step (1);
(3) acquiring an actual rainfall picture to obtain a real data set;
(4) fine-tuning the pre-trained model by using the real data set in the step (3) to obtain a trained model;
(5) using the trained model from step (4) for real-time rainfall intensity classification.
Further, in the step (1), different rainfall intensities are respectively added to the original image through image processing software to obtain a synthesized rainfall picture.
Further, in the step (1), proper image processing software such as Photoshop is selected, and different numbers and sizes of rain marks are added to the original image so as to simulate rainfall pictures under different rainfall intensities; because the rainfall intensity is only related to the two parameters of the number and the size of the raindrops, other parameters such as the angle, the distribution, the contrast and the like of the raindrops are randomly set in the process of synthesizing the rainfall picture so as to enhance the robustness of the model.
Specifically, the parameters to be selected in the image processing software, taking Photoshop as an example, include rain mark density, relative size, distribution, angle, and contrast. Relative size refers to the size of the rain layer (noise layer) relative to the base image; angle refers to the acute angle between the rain marks and the horizontal; contrast refers to the measure of the brightness difference between the brightest white and the darkest black in an image, and the larger the difference range, the larger the contrast. The magnitude of rainfall intensity, however, is related only to the number and size of rain marks, reflected in the Photoshop parameters of rain mark density and relative size. The synthetic data set comprises six classes of synthetic rainfall pictures: light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm. 80% of the synthetic data set is randomly extracted as a training set, 10% as a verification set, and 10% as a test set. In some preferred modes, the numbers of rainfall pictures of each class are approximately equal in the training set, in the verification set, and in the test set.
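Step (1) can also be automated in code rather than performed manually in Photoshop. The following is a minimal, non-limiting sketch of adding synthetic rain marks to a picture, assuming OpenCV (cv2) and NumPy are available; the function name and parameter values are illustrative and do not correspond to the Photoshop settings described above.

```python
import cv2
import numpy as np

def add_rain(image_bgr, density=0.01, streak_length=20, angle_deg=65, brightness=0.8):
    """Blend synthetic rain marks into an image.

    density       -- fraction of pixels seeded with raindrops (controls the number of marks)
    streak_length -- length of the blur kernel in pixels (controls the size of marks)
    angle_deg     -- acute angle between the rain marks and the horizontal
    brightness    -- intensity of the rain layer when blended with the base image
    """
    h, w = image_bgr.shape[:2]
    # Sparse noise layer: each seeded pixel becomes one rain mark.
    noise = (np.random.rand(h, w) < density).astype(np.float32)
    # Line-shaped kernel rotated to the requested angle; blurring the noise
    # along this line stretches each seed into a streak.
    kernel = np.zeros((streak_length, streak_length), np.float32)
    kernel[streak_length // 2, :] = 1.0
    center = ((streak_length - 1) / 2.0, (streak_length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (streak_length, streak_length))
    rain = np.clip(cv2.filter2D(noise, -1, kernel), 0.0, 1.0)
    rain_u8 = (rain * 255.0 * brightness).astype(np.uint8)
    rain_bgr = cv2.merge([rain_u8, rain_u8, rain_u8])
    # Additive blend so the rain layer brightens the underlying pixels.
    return cv2.add(image_bgr, rain_bgr)

rainy = add_rain(cv2.imread("original.jpg"), density=0.02)
cv2.imwrite("synthetic_rain.jpg", rainy)
```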
In some preferred modes, the specific process of step (2) is as follows: after the synthetic data set is obtained, a model is pre-trained on the synthetic data set using a convolutional neural network, which includes building the convolutional neural network model and selecting its hyper-parameters, such as the number of layers and the network structure. Classification of pictures by a convolutional neural network can be divided into two parts: feature extraction, and classification and identification. The feature extraction part efficiently extracts picture features through multiple convolution and downsampling (subsampling) operations, and the extracted features are stored in feature maps. Classification and identification flattens the feature maps into a fully connected network and calculates the probability of each class; the class with the highest probability is the predicted classification. The convolutional neural network is pre-trained on the larger, synthesized data set. Specifically, a ResNet50 network with 50 convolutional layers can be used, composed of five different convolutional modules with shortcut (residual) connections.
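As a concrete illustration of this pre-training step, the sketch below builds a ResNet50 classifier with six output classes and trains it on the synthetic data set. PyTorch and torchvision (0.13+ API) are assumed, and the folder layout, batch size, learning rate, and epoch count are illustrative choices rather than values prescribed by the invention.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # light rain ... extra heavy rainstorm

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("synthetic_rain/train", transform=transform)  # one sub-folder per class
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=None)                      # 50-layer network built from residual modules
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # classification head for the six grades

criterion = nn.CrossEntropyLoss()                          # applies softmax over the class scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                                    # pre-training on the synthetic data set
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "pretrained_synthetic.pth")
```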
Further, the specific process of step (3) is as follows: establish an image acquisition network, collect pictures of different places under different rainfall conditions, arrange rain gauges in the different areas, and record the rainfall; classify the collected pictures according to the rain gauge data and use the classes as labels. The label classes are light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm. Specifically, when the total precipitation in 24 hours is 0.1-9.9 mm, the rainfall intensity grade is light rain; 10.0-24.9 mm, medium rain; 25.0-49.9 mm, heavy rain; 50.0-99.9 mm, rainstorm; 100.0-249.9 mm, heavy rainstorm; and 250.0 mm or more, extra heavy rainstorm. The image acquisition network is established by arranging a rain gauge in each area and selecting monitoring cameras at different places in each area to collect real rainfall pictures. In some preferred modes, the rain gauge is arranged in an unobstructed place, preferably on a roof.
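The rain-gauge thresholds above translate directly into a labelling rule; the helper below is a small sketch of that rule, with class names rendered in English as used in this document.

```python
def rainfall_label(total_24h_mm: float) -> str:
    """Map 24-hour total precipitation (mm) to a rainfall intensity grade."""
    if total_24h_mm < 0.1:
        raise ValueError("below the light-rain threshold")
    if total_24h_mm <= 9.9:
        return "light rain"
    if total_24h_mm <= 24.9:
        return "medium rain"
    if total_24h_mm <= 49.9:
        return "heavy rain"
    if total_24h_mm <= 99.9:
        return "rainstorm"
    if total_24h_mm <= 249.9:
        return "heavy rainstorm"
    return "extra heavy rainstorm"

print(rainfall_label(18.2))  # -> medium rain
```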
Further, the specific process of step (4) is as follows: fine-tune the pre-trained model from step (2) with the real data set, which includes fixing the parameters of the convolution and pooling layers of the feature extraction part and training only the parameters of the fully connected layer of the classification and identification part, using the real data set collected in step (3). If the real data set is large enough, training can also be carried out directly on the real data set to further improve the classification accuracy of the model.
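A minimal sketch of this fine-tuning step, assuming PyTorch/torchvision and the checkpoint produced by the pre-training sketch above; file names and hyper-parameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 6)
model.load_state_dict(torch.load("pretrained_synthetic.pth"))   # model pre-trained on synthetic pictures

for param in model.parameters():        # fix the convolution and pooling parameters
    param.requires_grad = False
for param in model.fc.parameters():     # train only the fully connected layer
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

real_set = datasets.ImageFolder(
    "real_rain/train",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
for images, labels in torch.utils.data.DataLoader(real_set, batch_size=16, shuffle=True):
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

torch.save(model.state_dict(), "finetuned_real.pth")
```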
Further, the specific process of step (5) is as follows: load the model trained in step (4) and perform real-time online classification of real rainfall pictures acquired in real time.
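A sketch of classifying one newly acquired picture with the fine-tuned model; the class order and file names are illustrative assumptions.

```python
import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["light rain", "medium rain", "heavy rain",
           "rainstorm", "heavy rainstorm", "extra heavy rainstorm"]

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("finetuned_real.pth"))
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def classify(path: str) -> str:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return CLASSES[int(logits.argmax(dim=1))]

print(classify("camera_frame.jpg"))
```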
The invention has the beneficial effects that:
(1) the classification method is applied to an actual rainfall picture, can qualitatively acquire rainfall intensity information of local areas, and has fundamental significance for application fields such as real-time weather information acquisition, urban waterlogging monitoring and emergency response.
(2) The invention introduces the convolutional neural network from deep learning into rainfall intensity classification of rainy pictures for the first time, and can greatly improve the spatial accuracy of real-time weather information. The method removes the dependence on weather forecasts for obtaining rainfall intensity information; the convolutional neural network has excellent image feature extraction performance, can extract the rainfall information in a picture, and can effectively filter out the influence of the background.
(3) With image processing software, a large number of rainfall pictures can be synthesized quickly, so the convolutional neural network can be trained to a high level, and fine-tuning it with a real data set allows real rainfall pictures to be classified well. In the future, as real rainfall picture data accumulate, training can be carried out directly on the real data set to further improve classification performance on real rainfall pictures. The trained model runs very fast in actual use and can be used directly to classify rainfall pictures collected in real time.
(4) The classification method provided by the invention achieves good accuracy and a low error rate in classifying the rainfall intensity of both real and synthesized rainfall pictures.
(5) The method can more efficiently and accurately acquire the rainfall intensity of the local area, and is more favorable for popularization of the convolutional neural network in the aspect of real-time rainfall intensity classification.
Drawings
Fig. 1 is a flowchart of a convolutional neural network rainfall intensity classification method for rainy pictures.
Fig. 2 shows an original and a composite picture according to the present invention.
Fig. 3 is a typical convolutional neural network classification model in embodiment 1 of the present invention.
Fig. 4 is an example of a convolution module of a ResNet50 network in embodiment 1 of the present invention.
Fig. 5 is a real rainfall picture collecting device in the invention.
Fig. 6 is a picture of the real rainfall collected in the present invention.
Fig. 7 is a flowchart of the convolutional neural network rainfall intensity online quantification method for rainy pictures in embodiment 2.
Fig. 8 is an example of a typical convolutional neural network regression model in example 2.
Fig. 9 shows the specific verification result in example 2.
Detailed Description
The present invention will be described in detail below with reference to the attached drawings, and it should be noted that the specific embodiments described herein are only for explaining the present invention and are not to be construed as limiting the present invention.
Example 1
The invention extracts rainy-picture features with a convolutional neural network and trains the network in two steps, first on the synthetic data set and then on the real data set, so that the rainfall information in a picture can be extracted effectively while interference factors such as background, brightness, rain mark angle, and distribution are ignored, giving high classification accuracy.
Specifically, a convolutional neural network rainfall intensity classification method for rainy pictures, as shown in fig. 1, includes the following steps:
(1) synthesizing a rainfall picture through image processing software to obtain a synthetic data set;
(2) building a Convolutional Neural Network (CNN), and pre-training the convolutional neural network by using the synthetic data set in the step (1);
(3) acquiring an actual rainfall picture to obtain a real data set;
(4) fine-tuning (fine-tune) the pre-trained model by using the real data set in the step (3) to obtain a trained model;
(5) using the trained model from step (4) for real-time rainfall intensity classification.
In some preferred modes, the specific process of step (1) is as follows: respectively adding six different rainfall intensities to the original image through image processing software to obtain a synthesized rainfall picture;
in some preferred modes, in step (1), suitable image processing software is selected, where suitable image processing software means software that can process image layers, specifically software that can add a rain layer to the original image, for example Photoshop, Photo Pos Pro, GIMP, Hornil StylePix, Krita, or other layer-capable software; in this embodiment, the selected software is Photoshop, and rain marks of different numbers and sizes are added to the original image to simulate rainfall pictures under different rainfall intensities. Rainfall intensity is the average amount of rainfall falling within a certain period of time, expressed as rainfall depth per unit time; reflected in an image, with the exposure time held constant (about 1/200 s), the rainfall intensity is related only to the density and size of the rain marks in the image.
Because the rainfall intensity is related only to the two parameters of rain mark number and size, rainfall pictures of different rainfall intensity grades can be synthesized by changing the values of these two parameters. The rainfall intensity grades comprise light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm, so six classes of rainfall pictures need to be synthesized. In some preferred modes, when synthesizing pictures of the same class, the two parameter values of rain mark number and size are kept fixed, which is convenient for synthesis; when synthesizing pictures of different classes, these two parameters are changed, and other parameters such as rain mark angle, distribution, and contrast are set randomly to enhance the robustness of the model. In other preferred modes, when synthesizing pictures of the same class, the two parameter values of rain mark number and size are varied within a certain range, and the other parameters such as rain mark angle, distribution, and contrast are again set randomly.
Specifically, the parameters to be selected in the image processing software, taking Photoshop (PS) as an example, include rain mark density, relative size, distribution, angle, and contrast. Relative size refers to the size of the rain layer (noise layer) relative to the base image; angle refers to the acute angle between the rain marks and the horizontal; contrast refers to the measure of the brightness difference between the brightest white and the darkest black in an image, and the larger the difference range, the larger the contrast. The magnitude of rainfall intensity, however, is related only to the number and size of rain marks, reflected in the Photoshop parameters of rain mark density and relative size. Taking fig. 2 as an example, fig. 2(a) is the original picture and fig. 2(b) is a synthesized rainfall picture with a rain mark density of 18 and a relative size of 400%; the other parameters can be chosen randomly, and in fig. 2(b) the distribution is Gaussian and the angle between the rain marks and the horizontal is 65 degrees. In this embodiment, 100000 synthetic rainfall pictures are synthesized across the six classes of light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm, of which 80% are randomly extracted as a training set, 10% as a verification set, and 10% as a test set; the three together form the synthetic data set. In this embodiment, the rain mark density and relative size are chosen roughly according to the visual effect of the synthesized picture: for light rain, the rain mark density (named noise ratio in PS) is set to 20 and the relative size to 200%; for medium rain, 25 and 220%; for heavy rain, 30 and 240%; for rainstorm, 35 and 260%; for heavy rainstorm, 40 and 280%; and for extra heavy rainstorm, 45 and 300%.
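For reference, the per-class synthesis settings of this embodiment can be written as a small configuration table; the dictionary below merely restates the values listed above.

```python
SYNTHESIS_PARAMS = {  # rain mark density ("noise ratio" in PS) and relative size per class
    "light rain":            {"density": 20, "relative_size_pct": 200},
    "medium rain":           {"density": 25, "relative_size_pct": 220},
    "heavy rain":            {"density": 30, "relative_size_pct": 240},
    "rainstorm":             {"density": 35, "relative_size_pct": 260},
    "heavy rainstorm":       {"density": 40, "relative_size_pct": 280},
    "extra heavy rainstorm": {"density": 45, "relative_size_pct": 300},
}
```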
In some preferred modes, the number of the rainfall pictures in the training set is approximately equal, the number of the rainfall pictures in the verification set is approximately equal, and the number of the rainfall pictures in the test set is also approximately equal.
In some preferred modes, the specific process of step (2) is as follows: after the synthetic data set is obtained, a model is pre-trained on the synthetic data set with a convolutional neural network, which includes building the convolutional neural network model and selecting its hyper-parameters, such as the number of layers and the network structure; the convolutional neural network is a feedforward neural network with a deep structure that contains convolution calculations, and building it amounts to designing its structure. A typical convolutional neural network is shown in fig. 3. Classification of pictures by a convolutional neural network can be divided into two parts: feature extraction, and classification and identification. The feature extraction part efficiently extracts picture features through multiple convolution and downsampling (subsampling) operations, and the extracted features are stored in feature maps; classification and identification flattens the feature maps into a fully connected network and calculates the probability of each class, and the class with the highest probability is the predicted classification. Here the calculation uses the softmax function

σ(z)_j = exp(z_j) / Σ_{k=1}^{K} exp(z_k),  j = 1, …, K

where σ(z)_j denotes the relative probability of class j, z_j is the probability weight of class j (z_j may be negative, so base e is used to compute relative probabilities, making the relative probability of every class greater than 0 and their sum equal to 1), and K is the total number of classes.
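The softmax calculation above can be written directly in a few lines; the NumPy sketch below subtracts max(z) before exponentiation, a standard numerical-stability step that does not change the result.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())        # exp(z_j), shifted for numerical stability
    return e / e.sum()             # relative probabilities, all > 0 and summing to 1

probs = softmax([2.0, 1.0, -0.5, 0.3, 0.0, -1.2])
print(probs, int(probs.argmax()))  # the largest entry is the predicted class
```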
In this embodiment, a ResNet50 network is used; the convolutional neural network has 50 layers, and its architecture parameters are shown in Table 1. The network uses five different convolutional modules with shortcut connections; taking module conv2_x as an example, fig. 4 shows the specific structure of this module. The network is pre-trained on the synthetic data set; on the 10000 rainfall pictures of the test set, the classification accuracy is 98.63%.
TABLE 1 ResNet50 architecture parameters
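The module structure referred to in fig. 4 is a residual (bottleneck) block. The PyTorch sketch below shows one such block with its shortcut connection, using the channel sizes of the conv2_x stage of ResNet50 as an illustrative assumption.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """One residual module: three convolutions plus a shortcut connection."""
    def __init__(self, in_ch=64, mid_ch=64, out_ch=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # 1x1 convolution on the shortcut when the channel counts differ
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, bias=False) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))   # add the shortcut to the module output
```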
In some preferred modes, the specific process of step (3) is as follows: acquiring actual rainfall pictures comprises establishing an image acquisition network, collecting rainfall pictures of different places under different rainfall conditions, arranging rain gauges in the different areas, and recording the rainfall; the collected pictures are classified according to the rain gauge data and the classes are used as labels, the label classes being light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm. The image acquisition network is established by arranging a rain gauge in each area and selecting monitoring cameras at different places in each area to collect real rainfall pictures. In some preferred modes, the rain gauge is arranged in an unobstructed place, preferably on a roof.
In this embodiment, a rain gauge is arranged in each of the six campuses of Zhejiang University, namely the Zijingang, Yuquan, Huajiachi, Zhijiang, Xixi, and Haining campuses, in cooperation with the security department of Zhejiang University, and four monitoring cameras at different locations are selected in each campus to collect real rainfall pictures. Fig. 5 shows the real rainfall picture collecting equipment: fig. 5(a) is the rain gauge arranged at the Yuquan campus of Zhejiang University used in this embodiment, and fig. 5(b) is the monitoring equipment at one location of the Zijingang campus of Zhejiang University used in this embodiment. The collected real rainfall pictures are classified according to the rain gauge data and the classes are used as labels. Table 2 gives the specific rainfall intensity grading standard.
TABLE 2 Rainfall intensity grading standard
Rainfall intensity grade    Total precipitation in 24 hours (mm)
Light rain                  0.1-9.9
Medium rain                 10.0-24.9
Heavy rain                  25.0-49.9
Rainstorm                   50.0-99.9
Heavy rainstorm             100.0-249.9
Extra heavy rainstorm       ≥250.0
In this embodiment, 132 rainfall events were collected between 1 January 2016 and 31 December 2018, yielding 3168 real rainfall pictures, of which 80% are used as a training set, 10% as a verification set, and 10% as a test set. An example of a real rainfall picture is shown in fig. 6; the rainfall intensity grade in fig. 6 is medium rain.
In some preferred modes, the specific process of step (4) is as follows: fine-tune the pre-trained model from step (2) with the real data set, which includes fixing the parameters of the convolution and pooling layers of the feature extraction part and training only the parameters of the fully connected layer of the classification and identification part, using the real data set collected in step (3). If the number of real pictures is large enough, training can also be carried out directly on the real data set to further improve the classification accuracy of the model. In this embodiment, after the real data set is obtained, the convolutional neural network is fine-tuned on the real rainfall picture set, and the classification accuracy on the test set is 83.28%.
In some preferred modes, the specific process of step (5) is as follows: load the model trained in step (4) and perform real-time online classification of real rainfall pictures acquired in real time.
In some preferred modes, after the model is trained, it is applied in a city real-time control system (or city brain, etc.). The monitoring cameras of the city are connected to the city real-time control system. When rainfall occurs, a monitoring camera collects picture data in real time at its location; the picture data are converted into electrical signals, converted into digital image signals by an A/D converter, processed by a digital signal processing (DSP) chip, and transmitted to the city real-time control system. The control system receives the picture signals, converts them into a picture format suitable for the model, and inputs the pictures into the trained model, which classifies the rainfall intensity of each picture and returns a real-time rainfall intensity classification result, so that real-time scheduling can be carried out when a rainstorm occurs and the losses caused by urban waterlogging are reduced.
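How such a camera-to-model pipeline might look in code is sketched below, assuming OpenCV for frame capture and the fine-tuned PyTorch model from the earlier sketches; the stream URL, file name, and one-minute polling interval are illustrative assumptions, and forwarding to the control system is represented by a simple print.

```python
import time
import cv2
import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["light rain", "medium rain", "heavy rain",
           "rainstorm", "heavy rainstorm", "extra heavy rainstorm"]
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("finetuned_real.pth"))
model.eval()
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

cap = cv2.VideoCapture("rtsp://camera-address/stream")       # monitoring camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = preprocess(Image.fromarray(rgb)).unsqueeze(0)
    with torch.no_grad():
        label = CLASSES[int(model(tensor).argmax(dim=1))]
    print(time.strftime("%H:%M:%S"), label)                   # result passed on to the control system
    time.sleep(60)                                            # e.g. one classification per minute
cap.release()
```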
In this embodiment, 10 rainfall events between 1 January 2019 and 30 April 2019 are used for practical verification. All pictures were taken at the Zijingang campus of Zhejiang University; places one to four are four different shooting locations on that campus, giving 40 real rainfall pictures in total. The weather conditions are taken from the rain gauge data, the accuracy of the classification results is 85.0%, and Table 3 gives the specific verification results.
TABLE 3 Real-time verification results of the 10 rainfall events
As shown in Table 3, the method of the present invention can accurately classify the real rainfall pictures, and the error rate is low.
Example 2
As shown in fig. 7, the convolutional neural network rainfall intensity online quantification method for rainy pictures provided by the present invention comprises the following steps:
(1) synthesizing a rainfall picture through image processing software to obtain a synthetic data set;
(2) building and modifying the structure of a convolutional neural network (CNN), and pre-training the convolutional neural network with the synthetic data set from step (1);
(3) acquiring an actual rainfall picture to obtain a real data set;
(4) fine-tuning (fine-tune) the pre-trained model by using the real data set in the step (3) to obtain a trained model;
(5) applying the trained model from step (4) to real-time online rainfall intensity quantification.
In some preferred modes, the specific process of step (1) is as follows: adding different rainfall intensities to the original image through image processing software to obtain a synthetic rainfall picture;
in some preferred modes, in step (1), suitable image processing software is selected, where suitable image processing software means software that can process image layers, specifically software that can add a rain layer to the original image, for example Photoshop, Photo Pos Pro, GIMP, Hornil StylePix, Krita, or other layer-capable software; in this embodiment, the selected software is Photoshop, and rain marks of different numbers and sizes are added to the original image to simulate rainfall pictures under different rainfall intensities. Rainfall intensity is the average amount of rainfall falling within a certain period of time, expressed as rainfall depth per unit time; reflected in an image, with the exposure time held constant (about 1/200 s), the rainfall intensity is related only to the density and size of the rain marks in the image.
Because the rainfall intensity is only related to two parameters of the number and the size of the rain marks, the rainfall intensity of the synthesized picture is determined according to the values of the two parameters in the process of synthesizing the rainfall picture; other parameters, such as rain drop angle, distribution, contrast, etc. are set randomly to enhance the robustness of the model.
Specifically, the parameters to be selected in the image processing software, taking Photoshop as an example, include rain mark density, relative size, distribution, angle, and contrast. Relative size refers to the size of the rain layer (noise layer) relative to the base image; angle refers to the acute angle between the rain marks and the horizontal; contrast refers to the measure of the brightness difference between the brightest white and the darkest black in an image, and the larger the difference range, the larger the contrast. The rainfall intensity is related only to the number and size of rain marks, reflected in the Photoshop parameters of rain mark density and relative size. Let the rain mark density be x, the relative size be y, and the rainfall intensity be D, and assume that the relationship between the rainfall intensity and the rain mark density and relative size is
D = k·x·y²
where k is a constant, taken here to be 1. This assumed relationship is used only to quantify the rainfall intensity value of a synthetic rainfall picture, and the resulting value serves as the rainfall intensity label of that picture; the formula is an assumed mapping between the rain mark density x, the relative size y, and the rainfall intensity D, and D is a dimensionless numerical value used only as a label for the synthetic rainfall pictures. The model is then fine-tuned with the real data set, which rebuilds the mapping between rainfall picture features and rainfall intensity; therefore, whether the formula truly reflects the relationship between rainfall intensity and the number and relative size of rain marks has little influence on the final prediction, since the formula is used only for pre-training the model and developing the convolutional neural network's ability to extract rainfall picture features.
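Attaching the numerical pre-training label to each synthetic picture is then a one-line calculation; the sketch below follows the assumed relation D = k·x·y² with k = 1, expressing the relative size as a ratio (400% = 4.0).

```python
def synthetic_intensity_label(density: float, relative_size: float, k: float = 1.0) -> float:
    """Dimensionless pre-training label D = k * x * y**2 for a synthetic picture."""
    return k * density * relative_size ** 2

print(synthetic_intensity_label(18, 4.0))   # 288.0, matching the example of fig. 2(b)
```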
In some preferred modes, the synthetic data set comprises six classes of synthetic rainfall pictures: light rain, medium rain, heavy rain, rainstorm, heavy rainstorm, and extra heavy rainstorm; 80% of the synthetic data set is randomly extracted as a training set, 10% as a verification set, and 10% as a test set;
in some preferred modes, the number of the rainfall pictures in the training set is approximately equal, the number of the rainfall pictures in the verification set is approximately equal, and the number of the rainfall pictures in the test set is also approximately equal.
Taking fig. 2 as an example, fig. 2(a) is the original picture and fig. 2(b) is a synthesized rainfall picture with a rain mark density of 18 and a relative size of 400%, so the rainfall intensity value of this synthesized picture is 288 (D = 1 × 18 × 4²). The other parameters can be chosen randomly; in fig. 2(b) the distribution is Gaussian and the angle between the rain marks and the horizontal is 65 degrees. In this embodiment, 100000 pictures with different rainfall intensities are synthesized, of which 80% are randomly extracted as a training set, 10% as a verification set, and 10% as a test set; the three together form the synthetic data set.
In some preferred modes, the specific process of step (2) is as follows: after the synthetic data set is obtained, a model is pre-trained on the synthetic data set with a convolutional neural network, which includes building and modifying the convolutional neural network model and selecting its hyper-parameters, such as the number of layers and the network structure; the convolutional neural network is a feedforward neural network with a deep structure that contains convolution calculations, and building it amounts to designing its structure. The classification model of the convolutional neural network is changed into a regression model so as to quantify the rainfall intensity and obtain an estimated value of the rainfall intensity; specifically, a linear regression model is selected to obtain a specific estimated value of the rainfall intensity. The modified model structure is shown in fig. 8. Quantification of a picture by the convolutional neural network can be divided into two parts: feature extraction and linear regression. The feature extraction part efficiently extracts picture features through multiple convolution and downsampling (subsampling) operations, and the extracted features are stored in feature maps; the linear regression part flattens the feature maps into a fully connected network and estimates the rainfall intensity value. The linear regression formula is as follows:
ŷ = WᵀX + b

where ŷ is the estimated rainfall intensity value, W is the parameter matrix of the last fully connected layer, X is the input variable, namely the output of the second-to-last layer, b is a learnable parameter, and T denotes the matrix transpose; W and X are both matrices.
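A minimal sketch of this modification, assuming PyTorch/torchvision: the fully connected layer is replaced by a single-output linear regression head so the network predicts one rainfall intensity value per picture. The L1 loss and the placeholder batch are illustrative assumptions, not choices specified by the invention.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)     # regression head: y_hat = W^T X + b

criterion = nn.L1Loss()                           # illustrative regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)              # placeholder batch of synthetic pictures
targets = torch.full((8, 1), 288.0)               # labels D = k * x * y**2

prediction = model(images)                        # shape (8, 1): estimated intensity values
loss = criterion(prediction, targets)
loss.backward()
optimizer.step()
```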
The convolutional neural network is pre-trained on the larger, synthesized data set. In this embodiment, a ResNet50 network with 50 convolutional layers is used, and a linear regression layer is added after the convolutional neural network so as to output a specific rainfall intensity value. The network is pre-trained on the synthetic data set, whose test set contains 10000 rainfall pictures, and the mean absolute percentage error (MAPE) is used to evaluate the prediction accuracy as follows:
in the formula yiThe label of the ith photo is collected for testing, wherein the label is the value of rainfall intensity;the predicted value of the ith picture in the test set is shown, and n is the number of the synthesized rainfall pictures in the test set; in this example, the predicted average absolute percentage error of the synthetic data test set is 4.91%.
In some preferred modes, the specific process of step (3) is as follows: the method comprises the steps of establishing an image acquisition network, acquiring images of different places under different rainfall conditions, and recording instantaneous rainfall intensity data by using a rain gauge as a label, wherein the unit of the rainfall intensity is mm/h. Specifically, when the total precipitation amount in 24 hours is 0.1-9.9mm, the rainfall intensity level is light rain; when the total precipitation amount in 24 hours is 10.0-24.9mm, the rainfall intensity grade is medium rain; when the total precipitation amount in 24 hours is 25.0-49.9mm, the rainfall intensity grade is heavy rain; when the total precipitation amount in 24 hours is 50.0-99.9mm, the rainfall intensity grade is rainstorm; when the total precipitation amount in 24 hours is 100.0-249.9mm, the rainfall intensity grade is heavy rainstorm; when the total precipitation in 24 hours is more than 250.0mm, the rainfall intensity grade is extra heavy rainstorm. The image acquisition network is established by respectively arranging rain gauges in different areas and selecting monitoring cameras in different places for acquiring real rainfall pictures in each area; the rain gauge can be arranged in a place without shielding, and is preferably arranged on the roof.
In this embodiment, a rain gauge is arranged in each of the six campuses of Zhejiang University, namely the Zijingang, Yuquan, Huajiachi, Zhijiang, Xixi, and Haining campuses; in cooperation with the security department of Zhejiang University, four monitoring cameras at different locations are selected in each campus to collect real rainfall pictures. Fig. 5 shows the real rainfall picture collecting equipment: fig. 5(a) is the rain gauge arranged at the Yuquan campus of Zhejiang University used in this embodiment, and fig. 5(b) is the monitoring equipment at one location of the Yuquan campus of Zhejiang University used in this embodiment. For each collected real rainfall picture, the instantaneous rainfall intensity recorded by the rain gauge is used as the label, with the rainfall intensity expressed in mm/h.
In this embodiment, 132 rainfall events were collected between 1 January 2016 and 31 December 2018, yielding 3168 real rainfall pictures, of which 80% are used as a training set, 10% as a verification set, and 10% as a test set. An example of a real rainfall picture is shown in fig. 6.
In some preferred modes, the specific process of step (4) is as follows: fine-tune the pre-trained model with the real data set, which includes fixing the parameters of the convolution and pooling layers of the feature extraction part and training only the parameters of the fully connected layer, using the real data set collected in step (3). When the real data set is large enough, training can also be carried out directly on the real data set to further improve the accuracy of the model.
In this embodiment, after the real data set is obtained, the convolutional neural network is fine-tuned on the real rainfall picture set, and the mean absolute percentage error on the real data test set is 15.63%.
In some preferred modes, the specific process of step (5) is as follows: load the model trained in step (4) and perform real-time online quantification of real rainfall pictures acquired in real time.
In some preferred modes, after the model is trained, it is applied in a city real-time control system (or city brain, etc.). The monitoring cameras of the city are connected to the city real-time control system. When a rainstorm occurs, a monitoring camera collects picture data in real time at its location; the picture data are converted into electrical signals, converted into digital image signals by an A/D converter, processed by a digital signal processing (DSP) chip, and transmitted to the city real-time control system. The control system receives the picture data signals, converts them into a picture format suitable for the model, and inputs the pictures into the trained model, which quantitatively estimates the rainfall intensity of each picture and returns a predicted real-time rainfall intensity value, so that real-time scheduling can be carried out when a rainstorm occurs and the losses caused by urban waterlogging are reduced.
In this embodiment, 10 rainfall events between 1 January 2019 and 30 April 2019 are used for practical verification. All pictures were taken at the Zijingang campus of Zhejiang University; places one to four are four different shooting locations on that campus, giving 40 real rainfall pictures in total. The instantaneous rainfall intensity is recorded by the rain gauge, and the final mean absolute percentage error is 14.67%; the specific verification results are shown in fig. 9. As can be seen from fig. 9, the method of the present invention can accurately make quantitative predictions for real rainfall pictures, with a low error rate.
It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Claims (8)

1. A convolutional neural network rainfall intensity classification method for rainy pictures is characterized by comprising the following steps:
(1) synthesizing a rainfall picture through image processing software to obtain a synthetic data set;
(2) building a convolutional neural network, and pre-training the convolutional neural network by using the synthetic data set in the step (1);
(3) acquiring an actual rainfall picture to obtain a real data set;
(4) fine-tuning the pre-trained model by using the real data set in the step (3) to obtain a trained model;
(5) using the trained model from step (4) for real-time rainfall intensity classification.
2. The convolutional neural network rainfall intensity classification method for rainy pictures according to claim 1, wherein in the step (1), different rainfall intensities are respectively added to the original image through image processing software to obtain a composite rainfall picture.
3. The method for classifying rainfall intensity of convolutional neural network aiming at rainy day pictures as claimed in claim 2, wherein in step (1), proper image processing software such as Photoshop is selected to add different numbers and sizes of rain marks to the original pictures so as to simulate rainfall pictures under different rainfall intensities.
4. The convolutional neural network rainfall classification method for rainy pictures according to claim 1, wherein in the step (2), the convolutional neural network classifies the pictures into two parts: extracting features and classifying and identifying; the feature extraction part extracts the features of the picture through a plurality of times of convolution and downsampling operations, and the extracted picture features are stored in a feature map; the classification identification is to spread the feature graph into a fully-connected network, and then calculate the probability of belonging to each class, and the item with the highest probability is the predicted classification.
5. The method for classifying rainfall intensity of convolutional neural network for rainy day pictures as claimed in claim 1, wherein the convolutional neural network, such as ResNet50 network, is used in step (2) to pre-train the network on the synthetic data set.
6. The convolutional neural network rainfall classification method for rainy pictures according to claim 1, wherein the specific process of the step (3) is as follows: establishing an image acquisition network, acquiring pictures of different places under different rainfall working conditions to obtain a real data set, classifying the acquired pictures according to rain gauge data, and using the classes as labels, the pictures being classified into light rain, medium rain, heavy rain, rainstorm, heavy rainstorm and extra heavy rainstorm.
7. The convolutional neural network rainfall classification method for rainy pictures according to claim 4, wherein the specific process of the step (4) is as follows: and (4) fine-tuning the pre-training model by using the real data set, wherein the fine-tuning comprises the steps of fixing parameters of each convolution and pooling layer of the feature extraction part, only training parameters of the full connection layer of the classification and identification part, and training by using the real data set in the step (3).
8. The convolutional neural network rainfall classification method for rainy pictures according to claim 1, wherein the specific process of the step (5) is as follows: and (4) loading the model trained in the step (4), and classifying the real rainfall pictures collected in real time.
CN201910701424.3A 2019-07-31 2019-07-31 Convolutional neural network rainfall intensity classification method for rainy pictures Withdrawn CN110610190A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910701424.3A CN110610190A (en) 2019-07-31 2019-07-31 Convolutional neural network rainfall intensity classification method for rainy pictures
PCT/CN2020/072281 WO2021017445A1 (en) 2019-07-31 2020-01-15 Convolutional neural network rainfall intensity classification method and quantification method aimed at rainy pictures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910701424.3A CN110610190A (en) 2019-07-31 2019-07-31 Convolutional neural network rainfall intensity classification method for rainy pictures

Publications (1)

Publication Number Publication Date
CN110610190A true CN110610190A (en) 2019-12-24

Family

ID=68890317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910701424.3A Withdrawn CN110610190A (en) 2019-07-31 2019-07-31 Convolutional neural network rainfall intensity classification method for rainy pictures

Country Status (1)

Country Link
CN (1) CN110610190A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050081A (en) * 2019-12-27 2020-04-21 维沃移动通信有限公司 Shooting method and electronic equipment
CN111178439A (en) * 2019-12-31 2020-05-19 杭州电子科技大学 SAR image classification method based on convolutional neural network and fine adjustment
CN111914933A (en) * 2020-07-31 2020-11-10 中国民用航空华东地区空中交通管理局 Snowfall detection method and device, computer equipment and readable storage medium
WO2021017445A1 (en) * 2019-07-31 2021-02-04 浙江大学 Convolutional neural network rainfall intensity classification method and quantification method aimed at rainy pictures
CN112689078A (en) * 2021-01-22 2021-04-20 国网河南省电力公司信息通信公司 Rainwater identification management system based on artificial intelligence video analysis
CN112883969A (en) * 2021-03-01 2021-06-01 河海大学 Rainfall intensity detection method based on convolutional neural network
CN113534296A (en) * 2021-07-13 2021-10-22 象辑知源(武汉)科技有限公司 Method and device for measuring and calculating sand-dust weather forecast intensity error based on neural network
CN113552656A (en) * 2021-07-26 2021-10-26 福建农林大学 Rainfall intensity monitoring method and system based on outdoor image multi-space-time fusion
WO2021243898A1 (en) * 2020-06-05 2021-12-09 北京旷视科技有限公司 Data analysis method and apparatus, and electronic device, and storage medium
CN114296152A (en) * 2021-12-16 2022-04-08 中汽创智科技有限公司 Rainfall determination method, rainfall determination device, rainfall determination equipment and storage medium
CN117008219A (en) * 2023-10-07 2023-11-07 武汉大水云科技有限公司 Rainfall measurement method, device, equipment and storage medium based on artificial intelligence
CN117129390A (en) * 2023-10-26 2023-11-28 北京中科技达科技有限公司 Rainfall particle real-time monitoring system and method based on linear array camera shooting

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465987B1 (en) * 2015-03-17 2016-10-11 Exelis, Inc. Monitoring and detecting weather conditions based on images acquired from image sensor aboard mobile platforms
CN107506823A (en) * 2017-08-22 2017-12-22 南京大学 A kind of construction method for being used to talk with the hybrid production style of generation
CN107703564A (en) * 2017-10-13 2018-02-16 中国科学院深圳先进技术研究院 A kind of precipitation predicting method, system and electronic equipment
CN108389220A (en) * 2018-02-05 2018-08-10 湖南航升卫星科技有限公司 Remote sensing video image motion target real-time intelligent cognitive method and its device
CN110009580A (en) * 2019-03-18 2019-07-12 华东师范大学 The two-way rain removing method of single picture based on picture block raindrop closeness
CN110049216A (en) * 2019-04-18 2019-07-23 安徽易睿众联科技有限公司 A kind of web camera that can identify type of precipitation in real time

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465987B1 (en) * 2015-03-17 2016-10-11 Exelis, Inc. Monitoring and detecting weather conditions based on images acquired from image sensor aboard mobile platforms
CN107506823A (en) * 2017-08-22 2017-12-22 南京大学 A kind of construction method for being used to talk with the hybrid production style of generation
CN107703564A (en) * 2017-10-13 2018-02-16 中国科学院深圳先进技术研究院 A kind of precipitation predicting method, system and electronic equipment
CN108389220A (en) * 2018-02-05 2018-08-10 湖南航升卫星科技有限公司 Remote sensing video image motion target real-time intelligent cognitive method and its device
CN110009580A (en) * 2019-03-18 2019-07-12 华东师范大学 The two-way rain removing method of single picture based on picture block raindrop closeness
CN110049216A (en) * 2019-04-18 2019-07-23 安徽易睿众联科技有限公司 A kind of web camera that can identify type of precipitation in real time

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021017445A1 (en) * 2019-07-31 2021-02-04 浙江大学 Convolutional neural network rainfall intensity classification method and quantification method aimed at rainy pictures
CN111050081B (en) * 2019-12-27 2021-06-11 维沃移动通信有限公司 Shooting method and electronic equipment
CN111050081A (en) * 2019-12-27 2020-04-21 维沃移动通信有限公司 Shooting method and electronic equipment
CN111178439A (en) * 2019-12-31 2020-05-19 杭州电子科技大学 SAR image classification method based on convolutional neural network and fine adjustment
WO2021243898A1 (en) * 2020-06-05 2021-12-09 北京旷视科技有限公司 Data analysis method and apparatus, and electronic device, and storage medium
CN111914933A (en) * 2020-07-31 2020-11-10 中国民用航空华东地区空中交通管理局 Snowfall detection method and device, computer equipment and readable storage medium
CN112689078A (en) * 2021-01-22 2021-04-20 国网河南省电力公司信息通信公司 Rainwater identification management system based on artificial intelligence video analysis
CN112883969B (en) * 2021-03-01 2022-08-26 河海大学 Rainfall intensity detection method based on convolutional neural network
CN112883969A (en) * 2021-03-01 2021-06-01 河海大学 Rainfall intensity detection method based on convolutional neural network
CN113534296A (en) * 2021-07-13 2021-10-22 象辑知源(武汉)科技有限公司 Method and device for measuring and calculating sand-dust weather forecast intensity error based on neural network
CN113552656A (en) * 2021-07-26 2021-10-26 福建农林大学 Rainfall intensity monitoring method and system based on outdoor image multi-space-time fusion
CN114296152A (en) * 2021-12-16 2022-04-08 中汽创智科技有限公司 Rainfall determination method, rainfall determination device, rainfall determination equipment and storage medium
CN114296152B (en) * 2021-12-16 2023-09-15 中汽创智科技有限公司 Rainfall determining method, rainfall determining device, rainfall determining equipment and storage medium
CN117008219A (en) * 2023-10-07 2023-11-07 武汉大水云科技有限公司 Rainfall measurement method, device, equipment and storage medium based on artificial intelligence
CN117008219B (en) * 2023-10-07 2024-01-16 武汉大水云科技有限公司 Rainfall measurement method, device, equipment and storage medium based on artificial intelligence
CN117129390A (en) * 2023-10-26 2023-11-28 北京中科技达科技有限公司 Rainfall particle real-time monitoring system and method based on linear array camera shooting
CN117129390B (en) * 2023-10-26 2024-01-23 北京中科技达科技有限公司 Rainfall particle real-time monitoring system and method based on linear array camera shooting

Similar Documents

Publication Publication Date Title
CN110610190A (en) Convolutional neural network rainfall intensity classification method for rainy pictures
CN110633626A (en) Convolutional neural network rainfall intensity online quantification method for rainy pictures
CN113065578B (en) Image visual semantic segmentation method based on double-path region attention coding and decoding
CN112749654A (en) Deep neural network model construction method, system and device for video fog monitoring
CN113869162A (en) Violation identification method and system based on artificial intelligence
CN111462218A (en) Urban waterlogging area monitoring method based on deep learning technology
CN108509980B (en) Water level monitoring method based on dictionary learning
CN112396635B (en) Multi-target detection method based on multiple devices in complex environment
CN111983732A (en) Rainfall intensity estimation method based on deep learning
CN112801227B (en) Typhoon identification model generation method, device, equipment and storage medium
WO2021017445A1 (en) Convolutional neural network rainfall intensity classification method and quantification method aimed at rainy pictures
CN112287018A (en) Method and system for evaluating damage risk of 10kV tower under typhoon disaster
CN115690632A (en) Water environment monitoring method for inland river water body
CN115691049A (en) Convection birth early warning method based on deep learning
CN113469097B (en) Multi-camera real-time detection method for water surface floaters based on SSD network
CN110765900B (en) Automatic detection illegal building method and system based on DSSD
CN115830302A (en) Multi-scale feature extraction and fusion power distribution network equipment positioning identification method
CN114266980A (en) Urban well lid damage detection method and system
CN111929680B (en) Rapid flood inundation degree evaluation method based on SAR image
CN111627018A (en) Steel plate surface defect classification method based on double-flow neural network model
Zheng et al. Toward Improved Real‐Time Rainfall Intensity Estimation Using Video Surveillance Cameras
CN116778696B (en) Visual-based intelligent urban waterlogging early warning method and system
CN113506441B (en) Municipal bridge traffic early warning control method
CN117830722A (en) Foggy image visibility level classification method based on passive fog density segmentation
GUTIÉRREZ et al. A methodology to study beach morphodynamics based on self-organizing maps and digital images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191224