CN112149536B - Squall line wind speed prediction method - Google Patents

Squall line wind speed prediction method

Info

Publication number
CN112149536B
CN112149536B (application CN202010954611.5A)
Authority
CN
China
Prior art keywords
squall
training
squall line
image
discrimination model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010954611.5A
Other languages
Chinese (zh)
Other versions
CN112149536A (en
Inventor
周象贤
郑昕
王少华
王振国
姜文东
刘岩
段静鑫
邵先军
李特
周路遥
李乃一
曹俊平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd filed Critical Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202010954611.5A priority Critical patent/CN112149536B/en
Publication of CN112149536A publication Critical patent/CN112149536A/en
Application granted granted Critical
Publication of CN112149536B publication Critical patent/CN112149536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a squall line wind speed prediction method. The technical scheme adopted by the invention is as follows: S1) collect historical squall line samples and perform data preprocessing and data augmentation; S2) learn and train a CNN-based squall line discrimination model: a radar reflectivity picture is passed in, the image is first read as a grayscale map and the reflectivity factors in it are extracted, the image is then preprocessed so that important features are extracted in advance for the finally trained discrimination model, improving its accuracy, and the model is trained; S3) use the squall line discrimination model to judge whether a squall line exists; if so, output a result file, otherwise return to step S3 and wait for the next radar reflectivity picture. The invention uses a convolutional neural network for identification and can recognize rapidly changing real-time radar images, thereby obtaining the real-time squall line wind speed stably and reliably.

Description

Squall line wind speed prediction method
Technical Field
The invention relates to a natural disaster prediction method, in particular to a squall line wind speed prediction method.
Background
A squall line is a long, narrow thunderstorm rain belt formed by many thunderstorm cells arranged in a line, and belongs to the category of severe convective weather. As a squall line passes a location, the wind direction changes abruptly and the wind speed increases sharply, accompanied by disastrous weather such as thunderstorms, gales, hail, and tornadoes. Squall lines are highly sudden and extremely destructive, and are often nearly impossible to defend against: a sky of white clouds can, in moments, roll over into dark clouds pressing overhead, followed by lightning, torrential rain, and fierce winds carrying hail.
Existing squall line wind speed prediction methods are often unable to predict real-time squall line wind speeds quickly and efficiently.
Disclosure of Invention
In view of this, the present invention provides a squall line wind speed prediction method that uses a convolutional neural network to identify rapidly changing real-time radar images and derive the squall line wind speed in real time.
Therefore, the invention adopts the following technical scheme: a squall line wind speed prediction method, comprising:
s1: collecting historical squall line samples, and performing data preprocessing and data augmentation;
s2: learning and training a CNN-based squall line discrimination model through a convolutional neural network, specifically: a radar reflectivity picture is passed in and first read as a grayscale image, and the reflectivity factors in the image are extracted; the following preprocessing is then performed: 1) the image is randomly flipped with a probability of 0.2; 2) moderate dilation/erosion and distortion operations are applied to the image to augment the training data; 3) the image is thresholded: pixels with reflectivity below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; 4) in the thresholded image, connected regions whose pixel area is less than 0.06% of the total image area are removed, so as to keep the main information. Through these operations, important features are extracted in advance for the finally trained discrimination model, improving its accuracy, and the squall line discrimination model is obtained by training. The random flipping with probability 0.2 is used only in the training stage to expand the data set; it is not needed in the test stage;
s3: after training of the squall line discrimination model is completed, a radar reflectivity picture to be predicted is input. The picture is first thresholded: pixels below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; next, connected regions whose pixel area is less than 0.06% of the total image area are removed to retain the main information; the resulting image is then flipped horizontally, vertically, and both horizontally and vertically, and these three flipped images together with the processed image, 4 images in total, are input into the squall line discrimination model. The model judges whether a squall line exists, yielding four results whose weighted average gives the final result; if a squall line exists, a result file is output, otherwise the method returns to step S3 to wait for the next radar reflectivity picture.
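The thresholding and small-region removal described in steps S2 and S3 can be sketched as follows. The function name, the pure-Python flood fill, and the 4-connectivity choice are illustrative assumptions, not specified by the patent:

```python
from collections import deque
import numpy as np

def preprocess_reflectivity(img, threshold=40.0, min_area_frac=0.0006):
    """Threshold a reflectivity image (dBZ) and remove small connected regions.

    Pixels below `threshold` are zeroed, then any 4-connected region of the
    remaining pixels whose area is below `min_area_frac` of the whole image
    (0.06% in the patent) is erased, keeping only the main echo structure.
    """
    out = np.where(img >= threshold, img, 0.0)
    h, w = out.shape
    min_area = min_area_frac * h * w
    seen = np.zeros((h, w), dtype=bool)

    for sr in range(h):
        for sc in range(w):
            if out[sr, sc] == 0 or seen[sr, sc]:
                continue
            # Flood-fill one connected region of surviving pixels.
            region, queue = [], deque([(sr, sc)])
            seen[sr, sc] = True
            while queue:
                r, c = queue.popleft()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < h and 0 <= nc < w
                            and out[nr, nc] > 0 and not seen[nr, nc]):
                        seen[nr, nc] = True
                        queue.append((nr, nc))
            # Erase regions that are too small to be part of the squall line.
            if len(region) < min_area:
                for r, c in region:
                    out[r, c] = 0.0
    return out
```

On real radar mosaics an optimized connected-component routine would normally replace the explicit flood fill, but the logic is the same.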
The convolutional neural network is one of the most representative neural networks in the field of deep learning and has produced many breakthroughs in image analysis and processing; on ImageNet, the standard image annotation data set widely used in academia, many achievements, including image feature extraction and classification and scene recognition, are based on convolutional neural networks. One advantage of the convolutional neural network over conventional image processing algorithms is that it avoids complex manual preprocessing of the image: the original image can be fed directly into the network. Convolutional neural networks are therefore now widely used in image-related applications.
Further, the specific content of step S1 is as follows: the selected training data are radar data from the periods around all squall line cases recorded by the radar monitoring station over many years; after extraction, squall line samples and non-squall line samples are obtained from the data. After labeling, the samples are stored in an 8-bit format for convenient reading. During training, 90% of the data is used as the training set and the remaining 10% as the validation set.
Further, in step S2, the convolutional neural network is trained with a mini-batch gradient descent method with momentum; the batch size is set to 64, the momentum to 0.9, the learning rate to 0.0001, and the number of iterations to 50.
Further, a convolutional neural network based on a LeNet structure is used in step S2.
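A minimal PyTorch sketch of such a LeNet-style network with the stated hyperparameters (batch 64, momentum 0.9, learning rate 0.0001). The 64×64 grayscale input size and the layer widths are assumptions, since the patent only states that a LeNet-based structure outputting two values is used:

```python
import torch
import torch.nn as nn

class LeNetSquall(nn.Module):
    """LeNet-style binary classifier: two conv+pool stages, then fully
    connected layers ending in two logits (squall line / non-squall line)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120), nn.ReLU(),  # 13x13 assumes 64x64 input
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 2),  # the two output values described in the patent
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetSquall()
# Hyperparameters stated in the patent: momentum 0.9, learning rate 0.0001.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()  # applies softmax + cross entropy together
```

With this setup each training step would draw a batch of 64 preprocessed grayscale images, compute `criterion(model(batch), labels)`, and back-propagate.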
Further, in step S2, a radar image from the training data passes through the convolutional neural network, which outputs two values; these are mapped by a softmax function into the probabilities of squall line/non-squall line, and the class with the larger probability is the final determination result. The error between the network output and the label is computed with a cross entropy loss function, given by formula (1):
Loss = -[y·log ŷ + (1 - y)·log(1 - ŷ)]    (1)

where y is the label (0 or 1) and ŷ is the predicted probability. After each round of training, the error is back-propagated to each parameter and the parameters are updated; the squall line discrimination model is finally obtained by training, and the squall line wind speed is then calculated and determined from the model.
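The softmax mapping of the two network outputs and the cross entropy loss of formula (1) can be illustrated numerically; the logit values below are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # Formula (1): Loss = -[y*log(y_hat) + (1-y)*log(1-y_hat)]
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

logits = np.array([2.0, 0.5])   # hypothetical two output values of the CNN
p = softmax(logits)             # p[0]: squall line, p[1]: non-squall line
loss = binary_cross_entropy(1.0, p[0])  # loss against a "squall line" label
```

The class with the larger probability (`p.argmax()`) is the final determination result described above.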
The invention has the following beneficial effects: a squall line discrimination model is obtained by learning and training a convolutional neural network on radar image data, and the wind speed grade is then discriminated from the radar echo map using this model.
Drawings
FIG. 1 is a flow chart in an embodiment of the present invention;
FIG. 2 is a graph of raw radar reflectivity data in accordance with an embodiment of the present invention;
FIG. 3 is a graphical visualization of radar reflectivity data after preprocessing in accordance with an embodiment of the present invention;
FIG. 4 is an output diagram of a squall line wind discrimination model in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to the drawings and specific embodiments, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art based on the embodiments of the present invention without any inventive work are within the scope of the present invention.
In this embodiment, as shown in fig. 1, the squall line wind speed prediction method according to the present invention includes the following steps:
s1: firstly, selecting training data as radar data of a time period near all squall line cases of a radar monitoring station for 20 years, and after extracting the data, obtaining 1073 squall line samples and 3127 non-squall line samples from the data as shown in FIG. 2; after the marking is finished, the mark is stored in a prescription 8 format, so that the mark is convenient to read; in the training process, 90% of data is used as a training set, and the remaining 10% of data is used as a verification set. A typical squall line process may be collected 8 times based on recent two years of squall line related literature, obtaining samples for a squall line process of approximately 1000 and nearby moments.
S2: learning and training a CNN-based squall line wind discrimination model through a convolutional neural network, specifically comprising:
a radar reflectivity picture is passed in and first read as a grayscale image, and the reflectivity factors in the image are extracted; the following preprocessing is then performed: 1) the image is randomly flipped with a probability of 0.2; 2) moderate dilation/erosion and distortion operations are applied to the image; 3) the image is thresholded: pixels with reflectivity below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; 4) in the thresholded image, connected regions whose pixel area is less than 0.06% of the total image area are removed, leaving the main information. Through these operations, important features are extracted in advance for the finally trained discrimination model, improving its accuracy, and the squall line discrimination model is obtained by training. The random flipping with probability 0.2 is used only in the training stage to augment the data set; it is not needed in the test stage.
S3: after training of the squall line discrimination model is completed, a radar reflectivity picture to be predicted is input. The picture is first thresholded: pixels below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; next, connected regions whose pixel area is less than 0.06% of the total image area are removed, leaving the main information; the resulting image is then flipped horizontally, vertically, and both horizontally and vertically, and these three flipped images together with the processed image, 4 images in total, are input into the squall line discrimination model. The model judges whether a squall line exists, yielding four results whose weighted average gives the final result; if a squall line exists, a result file is output, otherwise the method returns to step S3 to process the next radar reflectivity picture.
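The four-image flipping and weighted averaging of step S3 is a form of test-time augmentation. A minimal sketch, where `predict_fn` stands in for the trained discrimination model and equal weights are assumed (the patent does not state the weights):

```python
import numpy as np

def predict_with_flips(predict_fn, img, weights=(0.25, 0.25, 0.25, 0.25)):
    """Average model outputs over the preprocessed image and its three flips.

    `predict_fn` maps one image to a squall-line probability; equal weights
    give a plain average of the four results, as in step S3.
    """
    variants = [
        img,                        # the processed image itself
        np.fliplr(img),             # horizontal flip
        np.flipud(img),             # vertical flip
        np.flipud(np.fliplr(img)),  # horizontal + vertical flip
    ]
    probs = [predict_fn(v) for v in variants]
    return float(np.dot(weights, probs))
```

A threshold on the returned average (e.g. 0.5) would then decide whether a result file is output.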
In this embodiment, in step S2, the convolutional neural network is trained with a mini-batch gradient descent method with momentum; the batch size is set to 64, the momentum to 0.9, the learning rate to 0.0001, and the number of iterations to 50. In addition, an early stopping strategy is adopted: after each iteration the updated network is tested on the validation set, and if the validation error increases 6 times in a row, training is stopped and the model with the smallest validation error over the iterations is saved. A convolutional neural network model based on the LeNet structure is used in step S2.
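The early stopping strategy described here (stop after 6 consecutive rises of the error measured on the validation set, keep the best model) can be sketched framework-independently; the callback structure is an illustrative assumption:

```python
def train_with_early_stopping(train_one_epoch, eval_on_val,
                              max_iters=50, patience=6):
    """Early-stopping loop: quit after `patience` consecutive rises of the
    validation error and keep the model state with the smallest error.

    `train_one_epoch()` runs one iteration and returns a model-state snapshot;
    `eval_on_val(state)` returns the validation error for that state.
    """
    best_err, best_state = float("inf"), None
    rises, prev_err = 0, None
    for _ in range(max_iters):
        state = train_one_epoch()
        err = eval_on_val(state)
        if err < best_err:                       # remember the best model
            best_err, best_state = err, state
        # Count consecutive increases relative to the previous iteration.
        rises = rises + 1 if (prev_err is not None and err > prev_err) else 0
        prev_err = err
        if rises >= patience:
            break
    return best_state, best_err
```

In a PyTorch setting `train_one_epoch` would return `model.state_dict()` copies and `eval_on_val` would compute the loss on the held-out 10% validation split.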
In this embodiment, a radar image from the training data passes through the convolutional neural network, which outputs two values; these are mapped by a softmax function into the probabilities of squall line/non-squall line, and the class with the larger probability is the final determination result. The error between the network output and the label is computed with a cross entropy loss function:
Loss = -[y·log ŷ + (1 - y)·log(1 - ŷ)]

where y is the label (0 or 1) and ŷ is the predicted probability. After each round of training, the error is back-propagated to each parameter and the parameters are updated; the squall line discrimination model is finally obtained by training, and the squall line wind speed is then calculated and determined from the model, as shown in FIG. 4.
In this embodiment, the squall line wind speed lookup table is shown in Table 1.
Table 1: squall line wind speed
[Table 1 appears as an image in the original document and is not reproduced here.]
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made without departing from the spirit and scope of the invention as defined in the appended claims. Techniques, shapes, and configurations not described in detail herein are known techniques.

Claims (5)

1. A squall line wind speed prediction method, comprising:
s1: collecting historical squall line samples;
s2: learning and training a CNN-based squall line discrimination model through a convolutional neural network, specifically: a radar reflectivity picture is passed in and read as a grayscale image, and the reflectivity factors in the image are extracted; the following preprocessing is then performed: 1) the image is randomly flipped with a probability of 0.2; 2) moderate dilation/erosion and distortion operations are applied to the image to augment the training data; 3) the image is thresholded: pixels with reflectivity below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; 4) in the thresholded image, connected regions whose pixel area is less than 0.06% of the total image area are removed, so as to keep the main information; through these operations, important features are extracted in advance for the finally trained discrimination model, improving its accuracy, and the squall line discrimination model is obtained by training;
s3: after training of the squall line discrimination model is completed, a radar reflectivity picture to be predicted is input; the picture is first thresholded: pixels below 40 dBZ are set to 0 and pixels at or above 40 dBZ are retained; next, connected regions whose pixel area is less than 0.06% of the total image area are removed, so as to retain the main information; the resulting image is then flipped horizontally, vertically, and both horizontally and vertically, and these three flipped images together with the processed image, 4 images in total, are input into the squall line discrimination model; the model judges whether a squall line exists, yielding four results whose weighted average gives the final result; if a squall line exists, a result file is output, otherwise the method returns to step S3 to wait for the next radar reflectivity picture.
2. The squall line wind speed prediction method according to claim 1, wherein step S1 comprises: the selected training data are radar data from the periods around all squall line cases recorded by the radar monitoring station over nearly 20 years; after extraction, squall line samples and non-squall line samples are obtained from the data; after labeling, the samples are stored in an 8-bit format for convenient reading; during training, 90% of the data is used as the training set and the remaining 10% as the validation set.
3. The squall line wind speed prediction method according to claim 1, wherein in step S2, the convolutional neural network is trained with a mini-batch gradient descent method with momentum; the batch size is set to 64, the momentum to 0.9, the learning rate to 0.0001, and the number of iterations to 50; in addition, an early stopping strategy is adopted: after each iteration the updated network is tested on the validation set, and if the validation error increases 6 times in a row, training is stopped and the model with the smallest validation error over the iterations is saved.
4. The method of claim 1, wherein in step S2, a convolutional neural network based on a LeNet structure is used.
5. The squall line wind speed prediction method according to claim 4, wherein in step S2, a radar image from the training data passes through the convolutional neural network, which outputs two values; these are mapped by a softmax function into the probabilities of squall line/non-squall line, and the class with the larger probability is the final determination result; the error between the network output and the label is computed with a cross entropy loss function, given by formula (1):
Loss = -[y·log ŷ + (1 - y)·log(1 - ŷ)]    (1)

where y is the label (0 or 1) and ŷ is the predicted probability; after each round of training, the error is back-propagated to each parameter and the parameters are updated, finally obtaining the squall line discrimination model by training.
CN202010954611.5A 2020-09-11 2020-09-11 Squall line wind speed prediction method Active CN112149536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010954611.5A CN112149536B (en) 2020-09-11 2020-09-11 Squall line wind speed prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010954611.5A CN112149536B (en) 2020-09-11 2020-09-11 Squall line wind speed prediction method

Publications (2)

Publication Number Publication Date
CN112149536A CN112149536A (en) 2020-12-29
CN112149536B true CN112149536B (en) 2022-11-29

Family

ID=73890882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010954611.5A Active CN112149536B (en) 2020-09-11 2020-09-11 Squall line wind speed prediction method

Country Status (1)

Country Link
CN (1) CN112149536B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779752A (en) * 2021-07-29 2021-12-10 北京玖天气象科技有限公司 Method for manufacturing squall line gale risk map of power transmission line

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197218A (en) * 2019-05-24 2019-09-03 绍兴达道生涯教育信息咨询有限公司 Thunderstorm gale grade forecast classification method based on multi-source convolutional neural networks
CN110988883A (en) * 2019-12-18 2020-04-10 南京信息工程大学 Intelligent squall line characteristic identification early warning method in radar echo image
CN111487695A (en) * 2020-04-28 2020-08-04 国网江苏省电力有限公司电力科学研究院 Prediction method and prediction system for squall line system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2947938B1 (en) * 2009-07-10 2014-11-21 Thales Sa METHOD OF PREDICTING EVOLUTION OF A WEATHER PHENOMENON FROM DATA FROM A WEATHER RADAR
US11237299B2 (en) * 2017-05-01 2022-02-01 I.M. Systems Group, Inc. Self-learning nowcast system for modeling, recording, and predicting convective weather

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197218A (en) * 2019-05-24 2019-09-03 绍兴达道生涯教育信息咨询有限公司 Thunderstorm gale grade forecast classification method based on multi-source convolutional neural networks
CN110988883A (en) * 2019-12-18 2020-04-10 南京信息工程大学 Intelligent squall line characteristic identification early warning method in radar echo image
CN111487695A (en) * 2020-04-28 2020-08-04 国网江苏省电力有限公司电力科学研究院 Prediction method and prediction system for squall line system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A method for intelligent identification of squall lines in radar echoes; Wang Xing et al.; Journal of Tropical Meteorology (热带气象学报); 2020-06-15 (Issue 03); 31-41 *

Also Published As

Publication number Publication date
CN112149536A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN110059694B (en) Intelligent identification method for character data in complex scene of power industry
CN111461190A (en) Deep convolutional neural network-based non-equilibrium ship classification method
CN113409314B (en) Unmanned aerial vehicle visual detection and evaluation method and system for corrosion of high-altitude steel structure
CN113096088B (en) Concrete structure detection method based on deep learning
CN113436169A (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN112307919B (en) Improved YOLOv 3-based digital information area identification method in document image
CN110598693A (en) Ship plate identification method based on fast-RCNN
CN116110036B (en) Electric power nameplate information defect level judging method and device based on machine vision
CN114841972A (en) Power transmission line defect identification method based on saliency map and semantic embedded feature pyramid
CN114155474A (en) Damage identification technology based on video semantic segmentation algorithm
CN111178438A (en) ResNet 101-based weather type identification method
CN112149536B (en) Squall line wind speed prediction method
CN112084860A (en) Target object detection method and device and thermal power plant detection method and device
CN111950357A (en) Marine water surface garbage rapid identification method based on multi-feature YOLOV3
CN111414855B (en) Telegraph pole sign target detection and identification method based on end-to-end regression model
US11908124B2 (en) Pavement nondestructive detection and identification method based on small samples
CN113052217A (en) Prediction result identification and model training method and device thereof, and computer storage medium
CN115830302B (en) Multi-scale feature extraction fusion power distribution network equipment positioning identification method
CN116452899A (en) Deep learning-based echocardiographic standard section identification and scoring method
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN115810006A (en) Reinforcing steel bar counting method and system based on MobileNet V3 improved model
CN113034446A (en) Automatic transformer substation equipment defect identification method and system
CN113344005A (en) Image edge detection method based on optimized small-scale features
CN110135395A (en) A method of train ticket is identified using depth learning technology
Swetha et al. Visual Weather Analytics-Leveraging Image Recognition for Weather Prediction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20211104

Address after: No. 1 Huadian Lane, Zhaohui 8th District, Xiacheng, Hangzhou, Zhejiang 310014

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER COMPANY LIMITED ELECTRIC POWER Research Institute

Address before: No. 1 Huadian Lane, Zhaohui 8th District, Xiacheng, Hangzhou, Zhejiang 310014

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER COMPANY LIMITED ELECTRIC POWER Research Institute

Applicant before: XIANGJI ZHIYUAN (WUHAN) TECHNOLOGY CO.,LTD.

GR01 Patent grant
GR01 Patent grant