CN113487529B - Cloud map target detection method for meteorological satellite based on YOLO - Google Patents


Info

Publication number
CN113487529B
CN113487529B (application CN202110783150.4A)
Authority
CN
China
Prior art keywords
cloud
visible light
picture
pictures
cloud picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110783150.4A
Other languages
Chinese (zh)
Other versions
CN113487529A (en)
Inventor
何丽莉 (He Lili)
付豪 (Fu Hao)
白洪涛 (Bai Hongtao)
曹英晖 (Cao Yinghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202110783150.4A
Publication of CN113487529A
Application granted
Publication of CN113487529B
Active legal status
Anticipated expiration

Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/04 — Neural networks: architecture, e.g. interconnection topology
    • G06N 3/084 — Neural network learning methods: backpropagation, e.g. using gradient descent
    • G06T 7/0002 — Image analysis: inspection of images, e.g. flaw detection
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/10048 — Image acquisition modality: infrared image
    • G06T 2207/20221 — Image combination: image fusion; image merging
    • G06T 2207/30192 — Subject of image: Earth observation — weather; meteorology


Abstract

The invention discloses a meteorological satellite cloud picture target detection method based on YOLO, which comprises the following steps. Step one, extracting a plurality of infrared cloud pictures and a plurality of visible light cloud pictures. Step two, counting the invalid information in the visible light cloud pictures and dividing the visible light cloud pictures, according to their proportion of invalid information, into: first visible light cloud pictures, second visible light cloud pictures and third visible light cloud pictures; the invalid information proportion ψ1 of the first visible light cloud pictures satisfies ψ1 < ψmin, the invalid information proportion ψ2 of the second visible light cloud pictures satisfies ψmin ≤ ψ2 ≤ ψmax, and the invalid information proportion ψ3 of the third visible light cloud pictures satisfies ψ3 > ψmax, where ψmin is the lower threshold of invalid information and ψmax the upper threshold. Step three, fusing the second visible light cloud pictures with their corresponding infrared cloud pictures to obtain fused cloud pictures, and forming the cloud picture set to be detected from the infrared cloud pictures corresponding to the third visible light cloud pictures, the fused cloud pictures and the first visible light cloud pictures. Step four, performing target detection on the cloud pictures in the set to be detected with the YOLO algorithm, identifying typical weather phenomena in the cloud pictures.

Description

Cloud map target detection method for meteorological satellite based on YOLO
Technical Field
The invention belongs to the technical field of meteorological satellite cloud picture identification and detection, and particularly relates to a meteorological satellite cloud picture target detection method based on YOLO.
Background
The meteorological satellite is an important meteorological tool; among the data it transmits, the most important is the meteorological cloud picture file, which plays an important role in weather forecasting, especially in work such as precipitation analysis. A meteorological satellite can acquire visible light cloud pictures in the daytime, infrared cloud pictures day and night, water vapour distribution maps and the like, and transmit them by facsimile, providing meteorological data for receiving stations at home and abroad. For weather forecasting, satellite cloud picture analysis involves distinguishing the cloud pictures of different channels, such as infrared cloud pictures and visible light cloud pictures, and analysing the distribution of large-scale clouds and the weather systems corresponding to them. In satellite cloud picture analysis there are some basic features, such as cloud type and structure, that can be related to different weather systems or physical processes: spiral cloud systems correspond to typhoons, cyclones, low vortices and the like, while banded cloud systems correspond to fronts, torrent (jet-stream) zones and tropical convergence zones [1]. The analysis of such cloud systems is basically judged from the shapes in the cloud pictures of different channels. However, at present there is no good satellite cloud picture analysis method; the work is done mainly by manual visual interpretation and the like, with low speed and precision.
At present satellite cloud pictures have many channels, but generally only the infrared cloud pictures are used for cloud picture analysis. A meteorological satellite can detect infrared cloud pictures both day and night, because infrared cloud pictures are obtained by detecting infrared-band radiation emitted by objects and the atmosphere: anything with a temperature can be detected, independent of illumination. In the imaging process, the higher the temperature, the higher the brightness, and the lower the temperature, the lower the brightness. Since the temperature of a cloud layer differs obviously from that of the surrounding atmosphere, the shape of clouds can be judged from the infrared radiation. The advantage of the visible light cloud picture is that the detected clouds have better shape, texture and similar characteristics (better than in the infrared cloud picture), but visible light cloud pictures cannot be obtained under insufficient illumination. In actual production, the utilization rate of channels other than the infrared cloud picture is low.
Therefore, finding a satellite cloud picture detection algorithm that makes fuller use of satellite cloud picture information, with high precision and real-time performance, has important research significance.
Disclosure of Invention
The invention aims to provide a meteorological satellite cloud picture target detection method based on YOLO, addressing the defects of the prior art: after the infrared cloud picture and the visible light cloud picture are fused, the YOLO algorithm is used to detect and identify typical weather phenomena in the cloud picture. The information in the visible light cloud picture is thereby used effectively, and the detection speed and precision for satellite cloud pictures are improved.
The technical scheme provided by the invention is as follows:
a meteorological satellite cloud target detection method based on yolk comprises the following steps:
Step one, extracting a plurality of infrared cloud pictures and a plurality of visible light cloud pictures from original satellite cloud picture data acquired by a meteorological satellite;
the infrared cloud pictures and the visible light cloud pictures are in one-to-one correspondence;
Step two, counting the invalid information in the visible light cloud pictures, and dividing the visible light cloud pictures, according to their proportion of invalid information, into: first visible light cloud pictures, second visible light cloud pictures and third visible light cloud pictures;
wherein the invalid information proportion ψ1 of the first visible light cloud picture satisfies ψ1 < ψmin;
the invalid information proportion ψ2 of the second visible light cloud picture satisfies ψmin ≤ ψ2 ≤ ψmax;
the invalid information proportion ψ3 of the third visible light cloud picture satisfies ψ3 > ψmax;
where ψmin is the lower threshold of invalid information and ψmax is the upper threshold of invalid information;
step three, fusing the second visible light cloud picture and the infrared cloud picture corresponding to the second visible light cloud picture to obtain a fused cloud picture; and
forming the cloud picture set to be detected from the infrared cloud pictures corresponding to the third visible light cloud pictures, the fused cloud pictures and the first visible light cloud pictures;
and step four, performing target detection on the cloud pictures in the cloud picture set to be detected using the YOLO algorithm, and identifying typical weather phenomena in the cloud pictures.
Preferably, in step two, the open-source opencv library is used to read all pixel points of the visible light cloud picture, and the proportion of useless pixel points to the total number of pixels is determined to obtain the invalid information proportion of the visible light cloud picture.
Preferably, ψmin = 25% and ψmax = 75%.
Preferably, in the third step, the fusing the second visible light cloud image with the corresponding infrared cloud image includes:
step 1, scaling the second visible light cloud picture and the infrared cloud picture to be fused to the same size;
step 2, after the second visible light cloud picture and the infrared cloud picture are respectively subjected to multi-scale expression by adopting a Laplace pyramid, the second visible light cloud picture and the infrared cloud picture are subjected to layered fusion;
when fusion is carried out on each layer, the region where the useful pixel points are located in the second visible light cloud picture is fused with other regions in the infrared cloud picture according to the position information to obtain a fused cloud picture;
the other areas refer to the areas in the infrared cloud picture corresponding to the positions of the useless pixel points in the second visible light cloud picture.
Preferably, before step four, the method further comprises: making a training data set in COCO data set format from the satellite cloud picture data, training the YOLO algorithm model, and testing the speed and detection precision of the trained model to obtain the YOLO meteorological cloud picture detection model.
Preferably, the YOLO-based meteorological satellite cloud picture target detection method further comprises deploying the YOLO meteorological cloud picture detection model at the Web end, using the Flask framework as the back-end framework and the Vue framework as the front-end framework; cloud pictures are uploaded at the web end for detection, achieving the effect of real-time detection.
Preferably, in step four, the YOLOv3 or YOLOv5 algorithm is adopted to perform target detection on the cloud pictures in the cloud picture set to be detected.
Preferably, in step four, the YOLOv5 algorithm is adopted to perform target detection on the cloud pictures in the cloud picture set to be detected.
Preferably, the typical weather phenomena include 6 weather phenomena: cold front, warm front, typhoon, cyclone, strong convection and torrent (jet-stream) cloud systems.
The beneficial effects of the invention are:
(1) The YOLO-based meteorological satellite cloud picture target detection method fuses the visible light cloud picture with the infrared cloud picture, absorbing the advantage of the visible light cloud picture's clear texture while the infrared cloud picture compensates for the defect that visible light cannot be measured at night; integrating the two cloud pictures yields more complete and accurate meteorological data.
(2) The YOLO-based meteorological satellite cloud picture target detection method can effectively improve the detection speed and precision for satellite cloud pictures and achieve the effect of real-time monitoring.
Drawings
Fig. 1 is a visible light cloud picture with useful information in the right half according to the present invention.
Fig. 2 is a visible light cloud picture with partial useful information on the left side, before fusion, according to the present invention.
Fig. 3 is the infrared cloud picture before fusion according to the present invention.
Fig. 4 is the cloud picture fused by the fusion method of the present invention.
Fig. 5 is a structural diagram of the algorithm of YOLOv5 according to the present invention.
FIG. 6 is a schematic representation of a CBL according to the present invention.
Fig. 7 is a structure diagram of a Res unit according to the present invention.
FIG. 8 is a diagram of the structure of SPP according to the present invention.
Fig. 9 is a main flowchart of the YOLOv5 algorithm according to the present invention.
FIG. 10 is a recall image according to the present invention.
FIG. 11 is a mAP 0.5 image according to the present invention.
FIG. 12 is a mAP 0.5:0.95 image according to the present invention.
Fig. 13 is a precision image according to the present invention.
Fig. 14 is a schematic diagram of a cloud detection result according to the present invention.
Fig. 15 is a schematic diagram of an MVVM framework according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
The invention provides a meteorological satellite cloud picture target detection method based on YOLO: the images of two channels — the infrared channel (wavelength 10.5–12.5 μm) and the visible light channel (wavelength 0.4–0.78 μm) — are fused according to their different characteristics for weather phenomena, and the YOLO algorithm is then used to identify typical weather phenomena in the cloud picture.
Firstly, a method for fusing a visible cloud picture and an infrared cloud picture.
Firstly, by observing the VIS images (visible light cloud pictures), two kinds of data from which no useful information can be obtained exist in the images: one where the background presents a large area of blue because there was no illumination to measure, and one where the data is corrupted and the background appears as large areas of yellow. Both kinds are discarded directly, and the corresponding infrared cloud pictures are used in the detection data set instead.
Some visible light cloud pictures contain a large area of background information but still carry much useful information (weather information). As shown in fig. 1, the information in the right half of the visible light cloud picture is useful, while the left half could not be detected due to the absence of illumination. Simply discarding the right half would waste cloud picture resources, so an image fusion algorithm is adopted: the infrared cloud picture is used to fill in the lost information of the left half of the visible light cloud picture, and the fused image is then enhanced.
In this embodiment, a processing method of thresholding image pixels is adopted to determine which images are discarded directly, which images are fused, and which images are used directly.
The flow of the pixel-thresholding method is as follows. First the two pictures (a visible light cloud picture and an infrared cloud picture) are read with the imread function of python's cv2 library, then scaled with the resize function so that the infrared and visible light cloud pictures have the same size for later processing. All pixel points of the cloud picture are then read, the picture data is converted to a plain numeric format, and the proportion of useless pixel points to the whole image is judged: if it exceeds the upper threshold, the visible light cloud picture contains too much useless information and is discarded, with the infrared cloud picture used instead; if it is below the lower threshold, the useless information is limited and the picture can be used directly; if it lies between the two thresholds, useful and useless information are of comparable proportion, and the useless region of the visible light picture is filled in with the infrared cloud picture. A minimal code sketch follows.
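As an illustration, a minimal sketch of this thresholding step is given below; the exact blue-background test used to mark a pixel as useless is an assumption, since the text does not specify it.

```python
import cv2
import numpy as np

PSI_MIN, PSI_MAX = 0.25, 0.75   # lower/upper invalid-information thresholds

def classify(vis_path, ir_path):
    vis = cv2.imread(vis_path)                          # visible light cloud picture
    ir = cv2.imread(ir_path)                            # infrared cloud picture
    vis = cv2.resize(vis, (ir.shape[1], ir.shape[0]))   # same size, easier to process later
    v = vis.astype(np.int32)
    b, g, r = v[..., 0], v[..., 1], v[..., 2]           # OpenCV stores channels as BGR
    useless = (b - np.maximum(g, r)) > 60               # assumed blue-background heuristic
    ratio = float(useless.mean())                       # invalid information proportion
    if ratio > PSI_MAX:
        return "discard", ir        # too much useless info: use infrared instead
    if ratio < PSI_MIN:
        return "keep", vis          # little useless info: use visible light directly
    return "fuse", (vis, ir)        # comparable proportions: fill vis from ir
```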
In this embodiment, the proportions of useless information in the 118 visible light channel images of the satellite cloud pictures are as follows: 47 pictures below 10%, 7 below 20%, 11 below 30%, 6 below 40%, 6 below 50%, 5 below 60%, 2 below 70%, 4 below 80%, 4 below 90% and 26 below 99%. It can be seen that many pictures carry a great deal of useless information. With a lower threshold of 25% and an upper threshold of 75%, 63 pictures contain little useless information, 33 contain much, and 22 have comparable proportions of the two. The visible light cloud pictures with much useless information (above the upper threshold) are replaced directly by infrared cloud pictures; those with little useless information (below the lower threshold) are used directly; and those with moderate useless information (between the two thresholds) are deeply fused with the infrared cloud pictures.
The invention adopts an improved Laplace fusion algorithm to carry out deep fusion on the visible light cloud image and the infrared cloud image.
The image pyramid represents the same picture at different resolutions and can express pictures at various scales, so it can be used in fields such as picture scaling and image segmentation. Multi-scale representation of an image is performed by repeated convolution and down-sampling operations: each cycle reduces the scale of the image and strengthens its expressive power. Pictures closer to the top of the pyramid are more abstract and lose more information. To make up for the excessive information loss of down-sampling, the Laplacian algorithm improves the traditional pyramid: after the image of each pyramid level is obtained, a new image is generated from the up-sampling of the previous level and a linear calculation with the current level, so that the details of the image are retained.
In the specific up-sampling process, the picture size is first increased by interpolation — blank placeholders are inserted according to the parity of the row and column numbers — and a convolution kernel is then used for filtering. The up-sampled result thus has the same size as the lower level, and a linear operation on the two yields the Laplacian pyramid.
The image fusion method of the Laplacian pyramid decomposes the pictures into different scales and then fuses them level by level. Fusion starts from the top of the pyramid: the gradients of the pixel points at the same position in the two pictures are computed and compared with the surrounding pixel points, and if one picture has the larger pixel gradient, its corresponding pixel points are used in the fused picture, because a larger gradient means richer edge and texture information, which is thus better preserved. In the processing of the other layers, gradient information is obtained by matrix operations that combine the picture information of the upper layer, the gradients are compared, and the fusion result is again chosen as the one with the larger gradient of the two pictures.
The invention fuses the infrared cloud picture with the visible light picture. In a visible light image the illumination is continuous, so the useless information is basically concentrated in one region of the image, for example the whole left half or right half. The invention therefore improves the traditional fusion algorithm for the characteristics of the visible light cloud picture. First the Laplacian operation is performed on the visible light image and the infrared cloud image; then, when each layer is fused, position information determines which original image supplies each pixel of the fused image. In this example, combining the 75% and 25% thresholds selected in the previous stage of the experiment, 1/4 of the visible light cloud picture and 3/4 of the infrared cloud picture were used for fusion. The improvement is that the traditional gradient-based decision is abandoned when fusing each layer: 1/4 of the visible light cloud picture is used directly as the pixels of the front part of the fused image and 3/4 of the infrared cloud picture as the pixels of the rear part, after which the layers are fused.
Since the data set picture size is 1222 × 916 and the pyramid fusion requires input sizes that are powers of 2, the experiment first scales the pictures to 512 × 512 and enlarges them back to 1222 × 916 after fusion.
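The following is a minimal sketch of this position-based pyramid fusion, assuming the useful visible light region is the left quarter of the frame; the level count and split logic are illustrative assumptions.

```python
import cv2
import numpy as np

LEVELS = 4  # assumed pyramid depth

def laplacian_pyramid(img, levels=LEVELS):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        lap.append(gauss[i] - up)       # detail retained at this scale
    lap.append(gauss[-1])               # coarsest level
    return lap

def fuse(vis, ir, frac=0.25):
    vis = cv2.resize(vis, (512, 512))   # power-of-two size for the pyramid
    ir = cv2.resize(ir, (512, 512))
    fused_pyr = []
    for lv, li in zip(laplacian_pyramid(vis), laplacian_pyramid(ir)):
        split = int(lv.shape[1] * frac)
        layer = li.copy()
        layer[:, :split] = lv[:, :split]    # position-based, not gradient-based
        fused_pyr.append(layer)
    out = fused_pyr[-1]
    for layer in reversed(fused_pyr[:-1]):  # collapse the pyramid back down
        out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.resize(out, (1222, 916))     # restore the original resolution
```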
The visible light image and the infrared cloud image before fusion are shown in fig. 2 and fig. 3 respectively, and the fused image in fig. 4. The fused picture is naturally spliced, retains the information of both pictures, and shows a good processing effect.
After the visible light and infrared cloud pictures are fused with the method provided by the invention, the utilization rate of picture resources is improved and the picture information is richer, laying the groundwork for the subsequent image detection and recognition.
Secondly, the YOLO algorithm is used to build a target detection model on the satellite cloud pictures to identify 6 typical weather phenomena: cold front, warm front, typhoon, cyclone, strong convection and torrent cloud systems.
1. Satellite image HDF5 data preprocessing
hdf is a new data format that has many advantages, firstly all data can be stored in a single hdf file, and secondly data of different formats can be stored in one hdf file, such as symbolic data, numerical data, graphical data, etc. The user can divide the stored data into different layers, and descriptive information can be added to each layer. Layering can be carried out according to the needs of the user, and is flexible. Meanwhile, the compatibility of the hdf file and other formats is good, and a new data mode can be added. Passing hdf files between different platforms also does not require additional data conversion operations because it is a platform independent format, and the same hdf file can be operated on different platforms.
In satellite cloud data, hdf format is used more. First, the hdf file can store picture information of an image, and unit information of the image can be described. And secondly, information of different channels of the satellite cloud picture can be stored in the hdf file hierarchy, so that a user can conveniently read the information. The experimental cloud chart data adopted by the invention adopts the data in the hdf format.
The satellite cloud picture data used by the invention are cloud pictures covering 0–60° north latitude and 70–150° east longitude, cut from the full-disk picture, from which equal latitude-longitude projection files (hdf files) are generated. Each hdf file contains 14 data sets storing 14 channels of equal latitude-longitude projection data. The data set names are: LonLatChannelB04, LonLatChannelB05, LonLatChannelB06, LonLatChannelB09, LonLatChannelB10, LonLatChannelB11, LonLatChannelB12, LonLatChannelB14, LonLatChannelB16, LonLatChannelIR1, LonLatChannelIR2, LonLatChannelIR3, LonLatChannelIR4 and LonLatChannelVIS. The last 3 characters of each data set name give the channel name, e.g. B04, IR1, VIS. The data in the first row and first column of a data set are the data at 60° north latitude, 70° east longitude, and the data in the last row and last column are the data at 0° north latitude, 150° east longitude. The data set of the VIS channel has 3666 rows and 4888 columns; the data sets of the other channels have 916 rows and 1222 columns. The data type of all data sets is 16-bit unsigned integer (big-endian). The units of data in the B04, B05, B06 and VIS data sets are ALBEDO (%), and the units of the B09, B10, B11, B12, B14, B16, IR1, IR2, IR3 and IR4 data sets are KELVIN.
Using the h5py library from python, the procedure was as follows:
(1) The file is first opened with h5py.File("pathname")
(2) The channel data sets are read using dictionary-style access
(3) Pictures are displayed using the matplotlib library; an image can be displayed simply with the pyplot.imshow and pyplot.show functions.
Taking the "H8 _ ALL _ prog _ L2_20190114_0500. hdf" file as an example, a B04 channel image is read and displayed. Where the image width is 1222 and the height is 916, an index color mode is used.
The displayed picture must be saved at the original resolution for neural network training. Saving directly with the savefig function of matplotlib does not preserve the original resolution, so the invention makes some settings of the matplotlib library.
First the size of the picture is set with the set_size_inches function, the coordinate axes are removed from the display, the blanks around the picture are removed with the subplots_adjust and margins functions, and finally the picture is saved with the savefig function, giving an ideal picture.
Finally, each file and each of its channel data sets are read in a loop, and each channel is saved as a picture file.
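A minimal sketch of this reading-and-saving procedure might look as follows; the DPI handling is an assumption, and the file and data set names follow the examples in the text.

```python
import h5py
import matplotlib.pyplot as plt

# Read one channel of one hdf file (names taken from the examples above).
with h5py.File("H8_ALL_prog_L2_20190114_0500.hdf", "r") as f:
    data = f["LonLatChannelB04"][:]       # dictionary-style access to a channel

h, w = data.shape                         # 916 x 1222 for the non-VIS channels
dpi = 100                                 # assumed; only w/h ratio to dpi matters
fig, ax = plt.subplots()
fig.set_size_inches(w / dpi, h / dpi)     # keep the original resolution
ax.imshow(data)
ax.set_axis_off()                         # remove the coordinate axes
fig.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0, wspace=0)
plt.margins(0, 0)                         # remove blanks around the picture
fig.savefig("B04.png", dpi=dpi)
plt.close(fig)
```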
The annotation file provided by the meteorological experts is in csv format and comprises 7 fields: type, name, top, left, bottom, right and filename. For training, the 6 fields type, top, left, bottom, right and filename are needed.
In the "H8 _ ALL _ prog _ L2_20190114_0500. hdf" file, a portion of the sample may be truncated for a brief display.
2. Constructing a target detection marker dataset
The YOLO algorithm is based on a feed-forward convolutional neural network. YOLOv5 uses CSPDarknet (a cross-stage partial network) as the Backbone, PANet (path aggregation network) and SPP (spatial pyramid pooling) as the Neck, and the same Head as YOLOv3. In YOLOv5 the activation functions of the various parts differ: a Leaky ReLU is used in the middle and hidden layers and a Sigmoid activation function in the detection layer. These activation functions are computationally cheaper than those of the v4 algorithm. YOLOv5 provides two optimization functions, each with its own initialization hyper-parameters for training: stochastic gradient descent, which trains better on large datasets, and the Adam optimizer for smaller custom datasets.
The YOLOv5 network consists primarily of 4 components:
1) Input layer: data enhancement processing of the input image
2) Backbone: forms richer image features by aggregating images at different resolution scales
3) Neck: mixing processing of the input image features
4) Prediction: confidence of the prediction bounding box and category
The algorithm structure is shown in fig. 5.
The individual component structures and tensors are calculated as follows:
(1) CBL (as shown in fig. 6): combines the traditional convolution operation with batch normalization and an activation into a new network element; it is the most fundamental component in the network.
(2) Res unit (as shown in fig. 7): contains different numbers of consecutive residual structures; more res_units allow the network to be built deeper.
(3) Concat: combines two tensors into one tensor along a certain dimension.
(4) Add: adds two tensors of the same dimensions, producing a result of the same size.
(5) SPP structure (as shown in fig. 8): first performs sampling operations at several different sizes, then fuses the different results into one block by calculation, and finally convolves again to adjust the output size.
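For illustration, a minimal PyTorch sketch of the CBL and SPP components is given below; the channel widths and pooling sizes are illustrative, not the exact YOLOv5x configuration.

```python
import torch
import torch.nn as nn

class CBL(nn.Module):
    """Conv + BatchNorm + LeakyReLU: the most basic network element."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPP(nn.Module):
    """Pool at several sizes, concat the results, then convolve once more."""
    def __init__(self, c_in, c_out, ks=(5, 9, 13)):
        super().__init__()
        self.cv1 = CBL(c_in, c_in // 2)
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in ks])
        self.cv2 = CBL(c_in // 2 * (len(ks) + 1), c_out)

    def forward(self, x):
        x = self.cv1(x)
        return self.cv2(torch.cat([x] + [p(x) for p in self.pools], dim=1))
```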
In this embodiment, a single-stage detection YOLOv5 algorithm is adopted, and a main flow thereof is shown in fig. 9.
Training YOLOv5 requires a data set in COCO format; in the present invention a VOC2007-format data set is first prepared and then converted.
A new folder is created for storing the whole data set.
the invention selects a labellimg open source tool, and generates a file with the same name and an xml format for storing the marking information after the file is stored. The object field is used for storing each sample information, two important data exist in the file, the bndbox stores the position of the object, and the name stores the category of the object.
Because the invention uses a large amount of marking data, annotating one by one with the tool would be cumbersome, so a simple python program is used to read the required 6 fields from the csv file storing the marks and to generate the corresponding xml markup files according to the file format shown in the figure (a code sketch is given after this passage).
And then putting the xml markup file into a corresponding folder.
The invention uses a simple python program to generate txt files storing the names of the training files used by each category; the generated data then needs to be converted into a COCO data set, which is also implemented with a python file. Under each train or val directory the corresponding picture names or label file names are stored.
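A minimal sketch of such a csv-to-xml conversion program is shown below; the csv field names follow the 6 fields listed earlier, while the default image size and output paths are illustrative assumptions.

```python
import csv
from xml.etree.ElementTree import Element, SubElement, ElementTree

def csv_to_voc(csv_path, out_dir, w=1222, h=916):
    """Group csv rows by filename and write one VOC-style xml per picture."""
    boxes = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            boxes.setdefault(row["filename"], []).append(row)
    for filename, rows in boxes.items():
        root = Element("annotation")
        SubElement(root, "filename").text = filename
        size = SubElement(root, "size")
        SubElement(size, "width").text = str(w)    # assumed picture size
        SubElement(size, "height").text = str(h)
        for row in rows:
            obj = SubElement(root, "object")
            SubElement(obj, "name").text = row["type"]   # object category
            bb = SubElement(obj, "bndbox")               # object position
            SubElement(bb, "xmin").text = row["left"]
            SubElement(bb, "ymin").text = row["top"]
            SubElement(bb, "xmax").text = row["right"]
            SubElement(bb, "ymax").text = row["bottom"]
        stem = filename.rsplit(".", 1)[0]
        ElementTree(root).write(f"{out_dir}/{stem}.xml")
```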
3. Training parameter settings
After the data set is prepared, the model's parameters are configured; this embodiment uses the YOLOv5x model, the most accurate model of the YOLOv5 network.
(1) Setting model configuration files
The number of object classes to be detected is modified to 6 in the yolov5x.yaml file.
(2) Setting training profiles
The paths of the training folder and validation folder, the categories of objects, and the label of each category are modified according to the coco data configuration file.
4. Training and assessment
The training environment is configured as follows:
(1) Operating system: CentOS Linux release 7.8.2003 (Core)
(2) Graphics cards: 8 × TITAN X (2 used in the experiment)
(3) CPU: 56 cores
(4) Memory: 256 GB
(5) CUDA version: 11.0
(6) PyTorch version: 1.7.1
The algorithm sets the hyper-parameters prior to training.
Training process:
(1) setting parameters of DDP mode
The number of global processes, process numbers, etc. may be set.
(2) Setting whether to resume training
(3) The list of processing hyper-parameters is loaded
(4) Model training
First the path of the training log is obtained. The results file must be read; it contains the training result saved at each iteration, and analysing the previous run makes the next experiment better. Then some model settings are made, for example the parameters controlling checkpoint saving (generally saved when smaller than 3); a log file is generated during training, and the algorithm saves it in an evolve folder. The save path of the weight file generated in each iteration and the save path of each training result are set.

Some information is then obtained from the program, such as the training round, the batch of dataset entries, the total training batch (distributed training), the weight file generated after a picture has been trained once, and the process number (distributed training). Some runtime settings then need to be saved: the hyper-parameter file hyp and the command parameters opt of the project. Whether training runs on a GPU (the cuda device type) is distinguished, and the number and names of the recognition target categories are obtained; if single_cls is set in the command-line parameters, all targets to be detected belong to one category.

Then the model parameter file in pt format is read. If pre-training is selected, a script can be invoked to automatically download the pre-trained models from the network and then create the models; alternatively the parameters can be set with the command line or a configuration file. The difference between using a file and the command line is whether the resume parameter is set: if so, the model is created according to the configuration file, and training continues on the basis of the weight parameters of the previous training. The file provides default training parameters for a data set; the default is to use the self-defined weights in the file. If the parameters are not set, the user directly sets the anchor boxes and loads pre-training weights for training; user-defined settings can be overridden by the file settings, which have higher priority. Therefore, which kind of parameter is used must be decided before the program runs: if training also uses a weight file, the anchor boxes stored in the weight file are discarded and the user-defined anchor boxes are used; if the resume parameter is set, they are not discarded, and the weights are loaded together with the anchor box configuration.

The frozen layers of the model are set, as are the simulated processing batch image size and batch_size; a suitable optimizer is chosen, and the optimization mode of the pg0 group, the weight parameters of the training process, the optimization mode of bn for batch normalization of features, and the optimization mode of the bias parameters are set, after which the optimization information is printed. A learning-rate decay schedule must also be set; the model uses cosine annealing for decay and then resumes training. The first generation of training data is initialized, choosing whether to continue training from the best previous training results. The weights of the best parameter file in the training process are then saved as best.pt.
A judgment is then made: if the previous training process has finished, a new round of experiment is run; the parameters are set, the optimizer and best_fitness are loaded, and the training results are loaded. If weights are backed up for the resume parameter — although the current resume logic may be nearly perfect and error-free, other problems may still occur while the parameters are being restored and the previous weights could be lost — the extra backup operation makes the data safer. If epochs is set to a larger number of training rounds, the new epochs value is the number of rounds that still need to be trained rather than the total number of rounds. Then the total stride of the model and the resolution of the model input picture are read from the model structure file, and the input resolution set by the user is checked to be exactly divisible by the stride gs. It is then checked whether distributed training is set: the rank process number is set, and if the DataParallel mode (supporting only native multi-card) is used, distributed training is skipped when rank is -1 and there is only one gpu. BatchNorm is then synchronized; bn is synchronized across cards. An EMA exponential moving average is created for the model; if the number of GPU processes is greater than 1, this index is not created. A dataloader of the training set is then created, the maximum class value in the labels is obtained and compared with the number of classes; if it is greater, something is wrong.

After this the model set-up starts. The classification loss coefficient, category number and hyper-parameters are set according to the number of categories of the data set. The value of giou is then set as the coefficient of the label in the object loss, the picture sampling weights are initialized from the labels, and the category names are obtained. The category frequencies are then set: the labels of all samples are spliced together and visualized after statistics. The categories of all samples are obtained, and length, width and centre position are set for visualization according to the statistics. The aspect ratio of the default anchors to the data set label boxes is computed; if the number of label boxes satisfying the condition is less than 99% of the total, an unsupervised machine learning method is used to obtain new anchor settings.
Training of the model then starts. The number of warm-up iterations is set, and mAP and results are initialized. The learning-rate decay round is set so that, after training is interrupted, resumed training can connect its learning-rate decay seamlessly to what came before. The resolutions of the training and test input pictures, the number of CPU processes used to load pictures, and the starting epoch number are printed. Then the loop over epochs begins. If a picture sampling strategy is set, picture indices are generated with random.choices from the previously initialized picture sampling weights (class_weights and maps) and the number of categories contained in each picture, and samples are selected accordingly; if DDP mode is set, the sampling strategy is broadcast. The average loss information printed during training is initialized. If data is shuffled in DDP mode, the random sampling of the DDP sampler uses epoch + seed as the random seed, so the seed differs for every epoch. A tqdm progress bar is created to display information during training, and the iteration count is computed.

Warm-up training is used in the first nw iterations, selecting accumulate and the learning rate in a certain pattern: the learning rate of the bias parameters decreases from 0.1 to the reference learning rate lr × lf(epoch), while the other learning rates increase from 0 to lr × lf(epoch); lf is the previously set cosine-annealing decay coefficient, and the momentum parameter is gradually increased from 0.9. Multi-scale training is then performed, with sizes chosen randomly in [imgsz × 0.5, imgsz × 1.5 + gs]. Mixed-precision forward propagation follows, and the losses are computed: classification loss, objectness loss, box regression loss, and the total loss value. The loss value stores loss_items as a tuple containing the classification loss, object loss, box regression loss and total loss. Back-propagation follows; after the model has back-propagated a certain number of times, the parameters are updated once from the accumulated gradients, and the learning rate is then decayed. Whether training uses the GPU or cpu, the mAP needs to be updated, including updating the EMA attributes and adding the include attributes. Whether this is the last round is judged, the test set is evaluated and indexes such as mAP are calculated; the same code writes the indexes into the results file and, if the corresponding command-line parameter is set, also uploads the results file to the network. The visualization tool TensorBoard is then set up, and information such as model indexes and training loss is added to it for display. best_fitness is updated with the best mAP. The model is then saved, together with information such as epoch, results and optimizer; the optimizer is saved after the last round finishes, and other parameters, such as model, store the EMA model. After model training is completed, the optimizer is removed from ckpt with the strip_optimizer function.
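As a small illustration of the cosine-annealing decay mentioned above, the decay coefficient lf can be sketched as follows; the final learning-rate fraction lrf = 0.2 and the use of LambdaLR are illustrative assumptions:

```python
import math

# Decay coefficient lf(epoch): starts at 1 and anneals to lrf over the run.
def one_cycle_lf(epochs, lrf=0.2):
    return lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - lrf) + lrf

lf = one_cycle_lf(300)   # 300 epochs, as in the experiment above
# e.g. scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
```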
After finishing, the results file is visualised, and finally the memory of the graphics card is released.
Evolution of the hyper-parameters:
The training .py script is run in the conda environment; device is set to "cuda 0,1", the parameter configuration file is set, and the batch size of model iteration is set to 16.
Results were obtained after 300 epochs of training, taking 1 hour, 13 minutes and 35 seconds.
The recall curve of the experiment is shown in fig. 10, reaching over 96% at its maximum. The mAP_0.5 curve is shown in fig. 11, reaching 97% or more; the mAP_0.5:0.95 curve is shown in fig. 12. The precision curve is shown in fig. 13, with the highest precision close to 99%.
When mAP is used to evaluate a neural network result, the value reflects how good the model is: the higher the mAP value, the higher the precision of the corresponding classification algorithm and the better its performance. The mAP is calculated by first computing the AP value of each detected target class and then averaging the AP values. The mAP is at least 0 and at most 1, larger being better; this criterion is the most important one for target detection algorithms. The YOLOv5x model used in this experiment gave an average precision value of approximately 99% over all classes. Meanwhile the precision, recall, PR curve and so on of the experiment all achieved good results. With a good training result, detection is very fast, around 30 ms per image.
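Written out in its common (unweighted) form, with N the number of target classes (here N = 6) and p_c(r) the precision of class c at recall r, the computation just described is:

\[
\mathrm{AP}_c = \int_0^1 p_c(r)\,\mathrm{d}r, \qquad
\mathrm{mAP} = \frac{1}{N}\sum_{c=1}^{N} \mathrm{AP}_c .
\]

mAP@0.5 fixes the IoU threshold for counting a detection as correct at 0.5, while mAP@0.5:0.95 averages the mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05, matching the curves in figs. 11 and 12.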
In another example, experiments were performed under the same conditions using the YOLOv3 algorithm. YOLOv3 is slightly weaker than YOLOv5 in precision, but its training time is shorter: 49 minutes and 35 seconds under the same conditions. That is, for the same 300 iterations, the training time of the YOLOv3 algorithm is 24 minutes less than that of YOLOv5 and its recall is 1.43% higher, but its precision is 3.75% lower than YOLOv5's. So if the precision requirements are not especially high, the YOLOv3 algorithm can be chosen for faster training.
5. Results testing and analysis
The test procedure is as follows. First it is judged whether the test is called during training; if so, the devices used in training are obtained, otherwise the call selects the devices directly. The previous test_batch0_gt.jpg and test_batch0_pred.jpg are deleted. The model is then loaded, and it is checked that the resolution of the input picture is divisible by 32. If the device is not cpu and the number of GPUs is 1, the model is converted from Float32 to Float16 to increase the speed of forward propagation. Configuration follows: the data configuration information is loaded and the iou thresholds are set, taken every 0.05 from 0.5 to 0.95. The rect parameter of the DataLoader is then set to true, because yolov5's test evaluation is based on rectangular inference.
Test parameters and path index information are initialized, the display information of the tqdm progress bar is set, and the indexes and timers are initialized. The loss on the test set is initialized, as are the dictionary statistics of the json file and the AP. Gradient propagation is then disabled, the model propagates forward, and the forward-propagation time is accumulated. The loss is computed: if testing during training, the GIoU, obj and cls losses of the test set must be computed and returned through the training result. The confidence threshold and iou threshold are then set and NMS is applied, accumulating the time spent on NMS.
Statistics are then collected for each picture: the prediction information is written to a txt file, the json file dictionary is generated, and true positives and the like are counted. First the label information of the si-th picture is obtained, containing class, x, y, w and h. The label categories are obtained and the number of test pictures is counted; if the prediction is empty, empty information is added to stats. The prediction results are saved to a txt file: the length and width of the corresponding picture are obtained, the txt file path is set from the picture name, the coordinates of the prediction boxes are adjusted to be based on the picture's original length and width, and the categories and coordinates are saved to the txt file. The predicted coordinates are clipped to the inside of the picture.

A json file dictionary in coco format is maintained. First the picture id is obtained; the boxes are obtained, adjusted to be based on the original image size, and converted to xywh format. The box coordinate format in coco's json format is xywh, where xy is the top-left coordinate — that is, the coco json coordinate format is top-left coordinates plus width and height — so the centre-point coordinates must be translated to the top-left corner.

Prediction evaluation then begins. The detected targets are stored, the boxes are obtained in xywh format and multiplied by wh, each class in the picture is processed separately, and the indexes of the label-box classes and prediction-box classes are computed. The iou values of the prediction boxes are computed and the largest iou is selected. The detected target is then added to detected, and the true positives at the different iou thresholds are obtained. The statistics of each picture are then collected in stats. The ground truth and prediction boxes of the first batch of pictures are drawn and saved. The information in the stats list is spliced together, the index results are printed, showing the index of each category in detail, and the time spent on forward propagation, on NMS, and in total is printed.

Using the previously saved json-format predictions and evaluating the metrics through cocoapi, the labels of the test set also need to be converted to coco's json format. First the picture ids are obtained, and the json file path of the prediction boxes is obtained and opened. The json file of the test set labels is obtained and initialized, the prediction box file is initialized, an evaluator is created, the evaluation is run, and the result is displayed. Finally the test index results are returned.
The invention runs detection over all pictures once, framing the cloud types and their confidences in each picture, where 0 represents cold front, 1 warm front, 2 typhoon, 3 cyclone, 4 strong convection and 5 torrent cloud system. The cloud detection effect is shown in fig. 14: two positions are circled with confidence boxes whose detected category is 3 (cyclone), with predicted probability values displayed as 0.87 and 0.91 respectively.
Model deployment and application
1. Model Web-end deployment
(1) Flask framework
Flask is a lightweight framework in the Python language; it provides the most basic functions of a web framework and is freer and more flexible to use than other frameworks. Its core is simple, providing only basic communication functions; users select extensions to implement their specific requirements. Separating the functions from the framework means users need not install unnecessary functions every time, but can add them as they choose.
Flask's great advantage is that it gives developers more options; its design style is extremely simple, so users can build application programs with greater freedom. First, the design of Flask is light and modular, and programs meeting the requirements can be designed with its many rich extensions. Second, the basic UI design of Flask's API is attractive and coherent, so a simple design can produce a beautiful interface. The Flask core is small and, benefiting from the Python platform, easy to deploy in a real environment. In addition, Flask supports various http request-processing functions, the framework is highly flexible, configuration is flexible, and various output requirements can be met.
Since the invention needs to deploy the neural network framework at the web end and only part of the network functions are needed, Flask becomes the better choice among back-end frameworks, and the required functions can be added on top of the micro-framework. Moreover, since the neural network framework used by the invention is written in the Python language, combining it with Flask is more convenient and native.
(2) Vue framework
Vue employs the MVVM architecture: View, ViewModel, Model, as shown in fig. 15. The view layer and the data layer in the MVVM framework do not exchange data directly, but through the ViewModel layer. Vue adopts two-way data binding: the ViewModel monitors user data and notifies the view layer when a change occurs, and when the user operates the view layer, the ViewModel likewise notifies the corresponding data to perform persistence operations. The MVVM architecture provides a layered architectural abstraction for the front-end page, and combined with other js libraries such as Ajax it can realise data communication and persistence, enriching the functions and experience of the system. Vue takes the ViewModel of MVVM as its core and uses two-way data binding to keep the M layer and V layer consistent; it is a lightweight framework, making the development process simple and efficient. Vue uses a unique data-driven mode to replace the earlier manual updating of the DOM view: a change in the data automatically triggers a change in the view. Furthermore, Vue allows views to be componentized: an entire web page is split into different blocks, each block being a component, and the page is formed by splicing and nesting multiple components, giving the code better maintainability and reusability.
The Vue framework has many characteristics. It is a lightweight front-end framework that can automatically track and compute expressions and computed properties in templates, uses the MVVM (Model-View-ViewModel) architecture, has a rich component system, and offers diversified interfaces. Second, it implements two-way data binding, using templates and directives to operate on the document object model so that the DOM renders automatically in real time. Vue has a rich directive system; events in a page under development can almost always be completed using the directive descriptions in Vue, and directives can change with the value of a variable and be applied to the DOM accordingly. Vue also has powerful component functions to extend the elements of the original HTML, achieving code reuse through encapsulation. It has an inheritance-style communication mode between components: parent and child components pass information in attribute parameters, data is transmitted one-way from parent to child, and the child lets the parent update data through information responses. Component development is therefore closely related to html and javascript; users can select the required integrations among components, speeding up development and reducing the amount of code, while component development also supports hot reloading. Vue uses the vue-router plug-in, which allows applications to be built with only one page: the routing plug-in sets the paths the user browses and maps the user's access to a path onto a user-defined component. Compared with the traditional hyperlink approach, this setting realises a single-page application.
When the user accesses the Vue application, the view is displayed and the user sends a request to the server to upload pictures. The server receives and parses the client request: Flask stores the received pictures in an upload directory, then calls the relevant YOLOv5 function to perform image recognition, and finally returns the result to Vue, which updates the view.
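This round trip can be sketched as a minimal Flask back end. The following is an illustrative sketch only, not the code of the invention: the route name /upload, the weight file best.pt, and loading YOLOv5 through torch.hub are assumptions made for the example.

# Minimal sketch of the Flask back end described above (assumptions:
# the "upload" directory, the weight file "best.pt", and loading
# YOLOv5 through torch.hub are illustrative, not the patent's code).
import os
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = "upload"
os.makedirs(UPLOAD_DIR, exist_ok=True)

# Load a custom-trained YOLOv5 model once at start-up.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

@app.route("/upload", methods=["POST"])
def upload():
    # Flask stores the received picture in the upload directory ...
    f = request.files["image"]
    path = os.path.join(UPLOAD_DIR, f.filename)
    f.save(path)
    # ... then the YOLOv5 model is called to recognise the image ...
    results = model(path)
    # ... and the detections are returned to the Vue front end.
    return jsonify(results.pandas().xyxy[0].to_dict(orient="records"))

if __name__ == "__main__":
    app.run()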
The trained model is deployed on the Web side, with the Flask framework as the back-end framework and the Vue framework as the front-end framework, and pictures are detected through the trained weight file. The function of uploading cloud pictures for detection at the Web end is thus completed, achieving the effect of real-time detection.
The invention provides a satellite cloud picture multi-channel image fusion method, which fuses the images of two channels according to how the infrared channel (wavelength 10.5-12.5 micrometers) and the visible light channel (wavelength 0.4-0.78 micrometers) respond to different weather phenomena: a) the invalid information present in each visible light channel cloud picture is counted using the open-source OpenCV library and filtered against set thresholds: pictures with too much invalid information are discarded, pictures with little invalid information are kept in full, and pictures in the middle range are fused; b) a Laplacian operation is applied to the infrared cloud picture and the visible light cloud picture respectively, and a Laplacian image pyramid is built on the basis of the Gaussian image pyramid. A new fusion strategy is then applied: the fusion proportion and the relative position of the two cloud pictures are set according to the thresholds in a), pixel-level fusion is performed on each pyramid layer based on that position, and the picture is linearly filled to obtain the fused result.
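The two steps can be sketched with OpenCV and NumPy as follows. This is a simplified illustration under stated assumptions, not the exact fusion strategy of the invention: invalid pixels are approximated as zero-valued pixels, the 25% and 75% thresholds are taken from claim 3, and a uniform per-level weighted blend stands in for the position-based pixel-level fusion described above.

# Sketch of steps a) and b), under the assumptions stated above:
# "invalid" pixels are approximated as zero-valued pixels, the
# 25%/75% thresholds follow claim 3, and a uniform per-level weighted
# blend stands in for the patent's position-based fusion strategy.
import cv2
import numpy as np

PSI_MIN, PSI_MAX = 0.25, 0.75  # thresholds from claim 3

def invalid_ratio(vis_gray: np.ndarray) -> float:
    # a) count invalid pixels and return their proportion.
    return float(np.count_nonzero(vis_gray == 0)) / vis_gray.size

def laplacian_pyramid(img: np.ndarray, levels: int = 4) -> list:
    # Build the Gaussian pyramid first, then take Laplacian layers
    # as differences between successive Gaussian levels.
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])  # coarsest level
    return lap

def fuse(vis: np.ndarray, ir: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # b) scale to the same size, fuse the pyramids level by level,
    # then reconstruct the fused image from coarse to fine.
    ir = cv2.resize(ir, vis.shape[1::-1])
    lv, li = laplacian_pyramid(vis), laplacian_pyramid(ir)
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(lv, li)]
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical input files, for illustration only.
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
psi = invalid_ratio(vis)
if psi < PSI_MIN:
    result = vis           # keep the visible picture in full
elif psi > PSI_MAX:
    result = ir            # discard it; keep the infrared picture
else:
    result = fuse(vis, ir) # middle range: fuse the two channels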
On the basis of image fusion, the YOLO algorithm is adopted to build a target detection model for satellite cloud pictures that identifies 6 typical weather phenomena: cold front, warm front, typhoon, cyclone, strong convection and jet-stream cloud system. The cloud picture of each channel is extracted from the original satellite cloud picture data (HDF5 format), and the visible light cloud picture is fused with the infrared cloud picture. A training data set in the COCO data set format is then made, the model is trained, and the trained model is tested. In the environment used in this embodiment, the YOLOv5 model achieves an image detection speed of about 30 FPS and a detection accuracy above 95%, which well meets the requirement that satellite cloud picture detection be both fast and accurate.
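Because YOLOv5 consumes labels in its own normalized text format rather than COCO JSON directly, preparing the training data set typically involves a conversion such as the sketch below. The file name instances.json and the labels directory are assumptions for the example; the box arithmetic (corner plus size to normalized centre plus size) is the standard COCO-to-YOLO transformation.

# Hedged sketch: convert COCO-format annotations (assumed file
# "instances.json") into YOLO txt labels, one file per image.
# COCO boxes are [x_min, y_min, width, height] in pixels; YOLO wants
# "class x_center y_center width height", all normalized to [0, 1].
import json
import os

with open("instances.json") as f:
    coco = json.load(f)

images = {im["id"]: im for im in coco["images"]}
# Map COCO category ids to contiguous 0-based class indices.
cat_ids = sorted(c["id"] for c in coco["categories"])
cls = {cid: i for i, cid in enumerate(cat_ids)}

os.makedirs("labels", exist_ok=True)
for ann in coco["annotations"]:
    im = images[ann["image_id"]]
    w, h = im["width"], im["height"]
    x, y, bw, bh = ann["bbox"]
    line = "%d %.6f %.6f %.6f %.6f\n" % (
        cls[ann["category_id"]],
        (x + bw / 2) / w,   # normalized x centre
        (y + bh / 2) / h,   # normalized y centre
        bw / w,             # normalized width
        bh / h,             # normalized height
    )
    stem = os.path.splitext(im["file_name"])[0]
    with open(os.path.join("labels", stem + ".txt"), "a") as out:
        out.write(line)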
The invention also deploys the trained model on the Web side, with the Flask framework as the back-end framework and the Vue framework as the front-end framework, and detects pictures through the trained weight file. The function of uploading cloud pictures for detection at the Web end is completed, achieving the effect of real-time detection.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in the various fields to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, provided such modifications do not depart from the general concept defined by the appended claims and their equivalents.

Claims (9)

1. A meteorological satellite cloud picture target detection method based on YOLO, characterized by comprising the following steps:
step one, extracting a plurality of infrared cloud pictures and a plurality of visible light cloud pictures from original satellite cloud picture data acquired by a meteorological satellite;
wherein the infrared cloud pictures and the visible light cloud pictures are in one-to-one correspondence;
step two, counting the invalid information in the visible light cloud pictures, and dividing the visible light cloud pictures, according to the proportion of invalid information in each, into: a first visible light cloud picture, a second visible light cloud picture and a third visible light cloud picture;
wherein the invalid information proportion ψ1 of the first visible light cloud picture satisfies ψ1 < ψmin;
the invalid information proportion ψ2 of the second visible light cloud picture satisfies ψmin ≤ ψ2 ≤ ψmax;
the invalid information proportion ψ3 of the third visible light cloud picture satisfies ψ3 > ψmax;
in the formulas, ψmin is the lower threshold of invalid information and ψmax is the upper threshold of invalid information;
step three, fusing the second visible light cloud picture with the infrared cloud picture corresponding to it to obtain a fused cloud picture; and
forming a cloud picture set to be detected from the infrared cloud pictures corresponding to the third visible light cloud pictures, the fused cloud pictures and the first visible light cloud pictures;
step four, performing target detection on the cloud pictures in the cloud picture set to be detected by using the YOLO algorithm, and identifying the typical weather phenomena in the cloud pictures.
2. The YOLO-based meteorological satellite cloud picture target detection method of claim 1, wherein in step two the open-source OpenCV library is used to read all the pixel points of a visible light cloud picture, and the proportion of useless pixel points to the total number of pixel points is determined to obtain the invalid information proportion of the visible light cloud picture.
3. The YOLO-based meteorological satellite cloud picture target detection method of claim 2, wherein ψmin = 25% and ψmax = 75%.
4. The YOLO-based meteorological satellite cloud picture target detection method of claim 2 or claim 3, wherein in step three the second visible light cloud picture is fused with its corresponding infrared cloud picture by the following steps:
step 1, scaling the second visible light cloud picture and the infrared cloud picture to be fused to the same size;
step 2, expressing the second visible light cloud picture and the infrared cloud picture at multiple scales with Laplacian pyramids, and then fusing them layer by layer;
wherein, when each layer is fused, the region containing the useful pixel points of the second visible light cloud picture is fused, according to the position information, with the remaining regions of the infrared cloud picture, to obtain the fused cloud picture.
5. The YOLO-based meteorological satellite cloud picture target detection method of claim 4, further comprising, before step four: making a training data set in the COCO data set format from the satellite cloud picture data, training the YOLO algorithm model, and testing the speed and detection precision of the trained model to obtain a YOLO meteorological cloud picture detection model.
6. The YOLO-based meteorological satellite cloud picture target detection method of claim 5, further comprising deploying the YOLO cloud picture detection model on the Web side, with the Flask framework as the back-end framework and the Vue framework as the front-end framework, so that cloud pictures uploaded at the Web end are detected, achieving the effect of real-time detection.
7. The YOLO-based meteorological satellite cloud picture target detection method of claim 6, wherein in step four the YOLOv3 or YOLOv5 algorithm is used to perform target detection on the cloud pictures in the cloud picture set to be detected.
8. The YOLO-based meteorological satellite cloud picture target detection method of claim 7, wherein in step four the YOLOv5 algorithm is used to perform target detection on the cloud pictures in the cloud picture set to be detected.
9. The YOLO-based meteorological satellite cloud picture target detection method of claim 8, wherein the typical weather phenomena comprise 6 weather phenomena: cold front, warm front, typhoon, cyclone, strong convection and jet-stream cloud system.
CN202110783150.4A 2021-07-12 2021-07-12 Cloud map target detection method for meteorological satellite based on yolk Active CN113487529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783150.4A CN113487529B (en) 2021-07-12 2021-07-12 Cloud map target detection method for meteorological satellite based on yolk

Publications (2)

Publication Number Publication Date
CN113487529A (en) 2021-10-08
CN113487529B (en) 2022-07-26

Family

ID=77938093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783150.4A Active CN113487529B (en) 2021-07-12 2021-07-12 Cloud map target detection method for meteorological satellite based on yolk

Country Status (1)

Country Link
CN (1) CN113487529B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565056B (en) * 2022-03-15 2022-09-20 中科三清科技有限公司 Machine learning-based cold-front identification method and device, storage medium and terminal
CN115329493B (en) * 2022-08-17 2023-07-14 兰州理工大学 Impeller machinery fault detection method based on digital twin model of centrifugal pump
CN116342448B (en) * 2023-03-28 2023-09-29 北京华云星地通科技有限公司 Full-disc visible light fitting method, system, equipment and medium
CN117274243B (en) * 2023-11-17 2024-01-26 山东大学 Lightweight meteorological disaster detection method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646272A (en) * 2012-02-23 2012-08-22 南京信息工程大学 Wavelet meteorological satellite cloud image merging method based on local variance and weighing combination
CN106023177A (en) * 2016-05-14 2016-10-12 吉林大学 Thunderstorm cloud cluster identification method and system for meteorological satellite cloud picture
CN110175959A (en) * 2019-05-20 2019-08-27 南京信息工程大学 A kind of typhoon cloud atlas Enhancement Method
CN111784619A (en) * 2020-07-03 2020-10-16 电子科技大学 Fusion method of infrared and visible light images

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130211230A1 (en) * 2012-02-08 2013-08-15 Convergent Life Sciences, Inc. System and method for using medical image fusion
CN103839243B (en) * 2014-02-19 2017-01-11 浙江师范大学 Multi-channel satellite cloud picture fusion method based on Shearlet conversion
KR101580585B1 (en) * 2014-12-02 2015-12-28 서울시립대학교 산학협력단 Method for data fusion of panchromatic and thermal-infrared images and Apparatus Thereof
CN105338262B (en) * 2015-10-09 2018-09-21 浙江大华技术股份有限公司 A kind of graphic images processing method and processing device
CN108073865B (en) * 2016-11-18 2021-10-19 南京信息工程大学 Aircraft trail cloud identification method based on satellite data
CN106780392B (en) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 Image fusion method and device
WO2019055547A1 (en) * 2017-09-13 2019-03-21 Project Concern International System and method for identifying and assessing topographical features using satellite data
CN108983219B (en) * 2018-08-17 2020-04-07 北京航空航天大学 Fusion method and system for image information and radar information of traffic scene
KR102142934B1 (en) * 2018-08-31 2020-08-11 인천대학교 산학협력단 Apparatus and Method for Fusing Using Weighted Least Squares Filter and Sparse Respresentation
CN113077482B (en) * 2018-09-29 2024-01-19 西安工业大学 Quality evaluation method of fusion image
CN111915546B (en) * 2020-08-04 2024-07-26 西安科技大学 Infrared and visible light image fusion method, system, computer equipment and application
CN112288663A (en) * 2020-09-24 2021-01-29 山东师范大学 Infrared and visible light image fusion method and system
CN112233074A (en) * 2020-09-30 2021-01-15 国网山西省电力公司大同供电公司 Power failure detection method based on visible light and infrared fusion image
CN112907493B (en) * 2020-12-01 2024-07-23 航天时代飞鸿技术有限公司 Multi-source battlefield image rapid mosaic fusion algorithm under unmanned aerial vehicle bee colony collaborative reconnaissance
CN112819740B (en) * 2021-02-02 2023-05-12 南京邮电大学 Medical image fusion method based on multi-component low-rank dictionary learning

Also Published As

Publication number Publication date
CN113487529A (en) 2021-10-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant