WO2020156028A1 - A deep-learning-based weather recognition method for outdoor non-fixed scenes - Google Patents
A deep-learning-based weather recognition method for outdoor non-fixed scenes
- Publication number
- WO2020156028A1 WO2020156028A1 PCT/CN2020/070261 CN2020070261W WO2020156028A1 WO 2020156028 A1 WO2020156028 A1 WO 2020156028A1 CN 2020070261 W CN2020070261 W CN 2020070261W WO 2020156028 A1 WO2020156028 A1 WO 2020156028A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- weather
- convolution
- convolutional neural
- neural network
- pictures
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 38
- 238000013135 deep learning Methods 0.000 title claims abstract description 15
- 238000012549 training Methods 0.000 claims abstract description 30
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 28
- 238000012360 testing method Methods 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 14
- 238000000926 separation method Methods 0.000 claims description 12
- 238000012795 verification Methods 0.000 claims description 12
- 238000011176 pooling Methods 0.000 claims description 11
- 230000006870 function Effects 0.000 claims description 10
- 238000000605 extraction Methods 0.000 claims description 9
- 230000009466 transformation Effects 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 7
- 238000010200 validation analysis Methods 0.000 claims description 5
- 238000011423 initialization method Methods 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 230000008034 disappearance Effects 0.000 claims description 3
- 230000007717 exclusion Effects 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000007547 defect Effects 0.000 abstract 1
- 230000007423 decrease Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000004297 night vision Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Definitions
- The invention belongs to the technical field of navigation, and particularly relates to a method for recognizing outdoor weather in non-fixed scenes.
- All-source navigation applies multiple types of sensors and, according to different environments and task requirements, rapidly integrates and reconfigures different sensor combinations, thereby forming a navigation system that can position, navigate and provide timing accurately in a variety of complex environments.
- Optical sensors, however, are limited by the weather environment: weather such as heavy rain, blizzards, sandstorms and haze has a strong impact on lidar, laser altimeters, infrared rangefinders and similar devices; night-vision equipment fails on sunny days; and photos taken by a camera at night without light cannot provide effective navigation information. If these sensors are used in an unsuitable environment, the accuracy of navigation and positioning is greatly reduced. In addition, in some scenarios weather pictures are also needed to identify the current weather environment and then make decisions or judgments. Therefore, accurately identifying the current weather conditions is an urgent need for further research in this field.
- the present invention proposes a method for identifying outdoor weather in an unfixed scene based on deep learning.
- the technical solution of the present invention is:
- a deep learning-based outdoor weather recognition method for non-fixed scenes includes the following steps:
- step (3) Use the data set obtained in step (2) to train the lightweight convolutional neural network
- step (1) the structural characteristics of the lightweight convolutional neural network are as follows:
- the two-dimensional size of the feature map is gradually reduced, and each time the feature map is reduced the ratio of the input size to the output size is kept as close to 2 as possible; the number of channels of the feature map gradually increases and follows a pyramid-shaped distribution.
- the ratio of the number of output channels to the number of input channels should be as close as possible to 1.
- step (1) the structural characteristics of the lightweight convolutional neural network are as follows:
- the convolution kernel has a receptive field of 5×5, 7×7 or 9×9, and the convolutional layer is implemented with dilated convolution or depthwise separable convolution; for the pooling layers, max pooling and average pooling are used to down-sample the feature map.
- step (1) the structural characteristics of the lightweight convolutional neural network are as follows:
- a multi-branch strategy is adopted; one branch is used as a residual structure to reduce gradient vanishing; the remaining branches use dilated convolution or depthwise separable convolution for feature extraction: first, a 1×1 convolution applies a non-linear transformation to the input and expands the number of channels; then dilated convolution or depthwise separable convolution extracts features, where a dilation factor is introduced into the depthwise convolution operation so that the receptive field is enlarged without increasing the parameter and computation cost; finally, a 1×1 convolution linearly transforms the feature maps after the dilated convolution, or combines the feature maps after the depthwise convolution, so that the number of output channels of each branch is the same.
- step (2) is as follows:
- in step (201), the numbers of pictures of the six weather types should be as close to equal as possible; in step (202), the ratio of the training set, validation set and test set is 8:1:1 or 6:2:2, and all three sub-data sets should contain pictures of all six weather types.
- the parameters that need to be initialized and adjusted in the network include but are not limited to: learning rate, training sample batch size, activation function, weight initialization method, loss function, optimizer, and training times.
- step (3) L2 regularization and Dropout are adopted, and a Batch Normalization layer is introduced, where the parameters of L2 regularization and Dropout need to be initialized and adjusted.
- step (3) with the goal of minimizing the loss of the training set and the verification set, the network parameters and network structure are adjusted, training is carried out continuously, and finally the test set is used to test the performance of the network.
- step (4) is as follows:
- (401) Encapsulate the trained lightweight convolutional neural network into an executable program and transplant it to an embedded platform or removable device;
- the present invention overcomes the shortcomings of traditional methods that can only identify fixed scenes, and can be applied to embedded platforms and mobile devices;
- the present invention does not require any assistance, and can recognize the weather conditions in outdoor non-fixed scenes from only a single RGB image; moreover, the present invention places no specific requirements on the camera angle, which makes it highly practical;
- the method realized by the present invention has a very small amount of calculation and relatively high accuracy, so real-time monitoring can be realized.
- Figure 1 is a basic flow chart of the present invention
- Figure 2 is a diagram of a multi-branch structure in the present invention.
- Figure 3 is a structural diagram of a convolutional network in the present invention.
- Figure 4 is a network pre-training method in the present invention.
- the present invention designs an outdoor unfixed scene weather recognition method based on deep learning, as shown in Figure 1, and the steps are as follows:
- Step 1 Build the basic structure of a lightweight convolutional neural network
- Step 2 Collect various weather pictures and make them into a data set in a specific format
- Step 3 Use the data set obtained in step 2 to train the lightweight convolutional neural network
- Step 4 Transplant the trained lightweight convolutional neural network to an embedded platform or a mobile device, use the captured weather pictures as the input of the lightweight convolutional neural network, and output the probabilities corresponding to various weather conditions.
- step 1 can be implemented using the following preferred solutions:
- the convolutional network is designed for scenarios in which the current weather conditions must be recognized from weather pictures and the network must run on an embedded platform; therefore, a relatively small amount of computation is required while accuracy is ensured.
- the input image of the network should be a three-channel RGB image.
- the two-dimensional size (width and height) of the feature map should be gradually reduced, and at each reduction the ratio of the input size to the output size should be as close to 2 as possible.
- the number of channels in the feature map should gradually increase and obey the pyramid distribution.
- the ratio of the number of output channels to the number of input channels should be as close to 1 as possible each time it increases, so as to speed up the network operation.
- the receptive field of the convolution kernel should be 5×5, 7×7 or 9×9 so that global features are extracted better. Dilated convolution can be used to enlarge the receptive field without increasing the amount of computation; at the same time, to speed up the network, depthwise separable convolution can be used, which greatly reduces the amount of computation.
- the max pooling layer and the average pooling layer are used to down-sample the feature map, and the convolution kernel can be of conventional size here. The computation required by depthwise separable convolution and conventional convolution is given by the following formulas:
- D_K denotes the size of the convolution kernel;
- M denotes the number of input channels;
- D_F denotes the size of the output feature map;
- N denotes the number of output channels.
- The computation of depthwise separable convolution is D_K × D_K × M × D_F × D_F + M × N × D_F × D_F;
- the computation of the depthwise convolution is D_K × D_K × M × D_F × D_F;
- the computation of the 1×1 (pointwise) convolution is M × N × D_F × D_F;
- the computation of conventional convolution is D_K × D_K × M × N × D_F × D_F.
- The ratio of the two is 1/N + 1/D_K². If the size of the convolution kernel is 3×3, the computation of depthwise separable convolution is therefore roughly 1/9 to 1/8 of that of conventional convolution, as illustrated by the sketch below.
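As a numerical illustration of the formulas above (not part of the original disclosure), the following sketch compares the multiply-accumulate count of a conventional convolution with that of a depthwise separable convolution; the example values D_K = 3, M = 32, N = 64 and D_F = 112 are assumptions chosen only for demonstration.

```python
# Illustrative sketch (not from the original patent): compare the computation
# of a conventional convolution with a depthwise separable convolution.
def conventional_conv_macs(d_k, m, n, d_f):
    # D_K * D_K * M * N * D_F * D_F
    return d_k * d_k * m * n * d_f * d_f

def depthwise_separable_macs(d_k, m, n, d_f):
    # depthwise part:        D_K * D_K * M * D_F * D_F
    # pointwise (1x1) part:  M * N * D_F * D_F
    return d_k * d_k * m * d_f * d_f + m * n * d_f * d_f

# Example values (assumed for illustration only).
d_k, m, n, d_f = 3, 32, 64, 112
ratio = depthwise_separable_macs(d_k, m, n, d_f) / conventional_conv_macs(d_k, m, n, d_f)
print(ratio)                 # ~0.127
print(1 / n + 1 / d_k ** 2)  # same value: 1/N + 1/D_K^2
```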
- The variables appearing in the output-feature-map size formula and in the dilated-convolution receptive-field formula are defined as follows (an illustrative reconstruction of these formulas is sketched below):
- W is the size of the input image, where the width and height are generally equal;
- F is the size of the convolution kernel;
- S is the stride, and ⌈ ⌉ indicates rounding up;
- rate denotes the dilation factor;
- height and width denote the actual (effective) receptive field of the convolution kernel.
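The formulas to which these variables refer appear only as figures in the original publication. The sketch below is therefore an assumption rather than a reproduction of the original: it uses the standard expressions for the output size of an unpadded convolution and for the effective receptive field of a dilated kernel, which are consistent with the variable definitions above.

```python
import math

# Hedged reconstruction (the original formula images are not reproduced in the
# text); standard expressions consistent with the variable definitions above,
# assuming "valid" (no-padding) convolution.
def effective_kernel_size(f, rate):
    # Actual receptive field of a dilated kernel: F + (F - 1) * (rate - 1).
    return f + (f - 1) * (rate - 1)

def output_size(w, f, s, rate=1):
    # Output feature-map size, with rounding up as indicated in the text.
    return math.ceil((w - effective_kernel_size(f, rate) + 1) / s)

print(effective_kernel_size(3, 2))   # a 3x3 kernel with rate 2 covers 5x5
print(output_size(224, 5, 2))        # 224 input, 5x5 kernel, stride 2 -> 110
```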
- a preferred convolutional neural network structure is given below, and the present invention is not limited to this structure:
- The input image size can be selected between 224 and 448; the receptive field of the convolution kernel is 5×5, realized with dilated convolution; when a convolutional layer is used to extract feature maps, the ratio of the number of output channels to the number of input channels of the feature map can be between 1 and 2; when the max pooling layer is used for down-sampling, the ratio of the two-dimensional size of the input feature map to that of the output feature map can be between 1 and 3.
- a multi-branch strategy is adopted.
- One of the branches is used as the residual structure to reduce gradient vanishing; the other branches first use a 1×1 convolution to apply a non-linear transformation to the input and expand the number of channels, then use dilated convolution or depthwise separable convolution for feature extraction, where the depthwise convolution introduces a dilation factor so that the receptive field is enlarged without increasing the parameter and computation cost, and finally a 1×1 convolution linearly transforms the feature maps after the dilated convolution or combines the feature maps after the depthwise convolution so that the number of output channels of each branch is consistent.
- Figure 2 shows a multi-branch structure of the present invention.
- The number of branches can be chosen between 2 and 5, one of which serves as the residual structure. If the numbers of input and output channels are equal, no transformation is performed on this branch; if they are not equal, a 1×1 convolution applies a linear transformation so that the numbers of input and output channels become equal.
- The remaining branches first use a 1×1 convolution kernel to apply a non-linear transformation to the input feature map and expand the number of channels to enrich the feature information, and then use depthwise separable convolution or dilated convolution for feature extraction, where the depthwise convolution uses a convolution kernel with a larger receptive field.
- The receptive field of the convolution kernel can be between 5×5 and 9×9, and a dilation factor between 2 and 4 can be introduced to further enlarge the receptive field. Finally, a 1×1 convolution linearly transforms the feature maps after the dilated convolution or combines the feature maps after the depthwise convolution, making the number of output channels of each branch consistent.
- The resulting convolutional neural network structure is shown in Figure 3.
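To make the multi-branch structure of Figures 2 and 3 concrete, a minimal TensorFlow/Keras sketch of one such block is given below. It is an illustrative reconstruction rather than the exact network of the invention: the number of branches (a residual branch plus two convolutional branches), the 5×5 kernels, the dilation rate of 2 and the channel expansion factor of 2 are assumptions chosen from within the ranges stated above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def multi_branch_block(x, out_channels, expansion=2, rate=2):
    """Illustrative multi-branch block: one residual branch plus branches using
    1x1 expansion, dilated / depthwise separable convolution and a final 1x1
    projection, so every branch outputs `out_channels` channels."""
    in_channels = x.shape[-1]

    # Residual branch: identity if the channel counts match, otherwise a 1x1
    # convolution performs a linear change to align the channel counts.
    if in_channels == out_channels:
        residual = x
    else:
        residual = layers.Conv2D(out_channels, 1, padding='same')(x)

    # Branch A: 1x1 expansion -> dilated 5x5 depthwise conv -> 1x1 projection.
    a = layers.Conv2D(in_channels * expansion, 1, padding='same', activation='relu')(x)
    a = layers.DepthwiseConv2D(5, padding='same', dilation_rate=rate, activation='relu')(a)
    a = layers.Conv2D(out_channels, 1, padding='same')(a)

    # Branch B: 1x1 expansion -> depthwise separable 5x5 convolution.
    b = layers.Conv2D(in_channels * expansion, 1, padding='same', activation='relu')(x)
    b = layers.SeparableConv2D(out_channels, 5, padding='same', activation='relu')(b)

    # Combine the branches; summation keeps the output channel count equal.
    out = layers.Add()([residual, a, b])
    return layers.ReLU()(out)

inputs = tf.keras.Input(shape=(224, 224, 3))   # input size assumed within 224-448
x = layers.Conv2D(32, 5, strides=2, padding='same', activation='relu')(inputs)
x = multi_branch_block(x, out_channels=64)
```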
- step 2 can be implemented using the following preferred solutions:
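The concrete sub-steps of step 2 are spelled out in claims 5 and 6: collect RGB pictures of six weather types taken in different scenes and from different angles, split them into training, validation and test sets (8:1:1 or 6:2:2), normalize them and convert them to a Tensorflow-specific data format. A minimal sketch of such a pipeline is shown below; the directory layout, image size and the simplified two-way split are assumptions made for illustration only.

```python
import tensorflow as tf

# Illustrative sketch of step 2; the directory layout and image size are
# assumptions: pictures are stored as data/<weather_class>/<image>.jpg for the
# six classes sunny, night, rainstorm, blizzard, sandstorm and haze.
def make_datasets(data_dir='data', image_size=(224, 224), batch_size=32):
    common = dict(validation_split=0.2, seed=42,
                  image_size=image_size, batch_size=batch_size)
    train = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset='training', **common)
    val = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset='validation', **common)
    # Normalize pixel values to [0, 1], as described in claim 5, step (203).
    normalize = tf.keras.layers.Rescaling(1.0 / 255)
    train = train.map(lambda x, y: (normalize(x), y))
    val = val.map(lambda x, y: (normalize(x), y))
    return train, val
```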
- step 3 can be implemented using the following preferred solutions:
- the parameters that need to be initialized and adjusted in the network include learning rate, training sample batch size, activation function, weight initialization method, loss function, optimizer, training times, etc.
- The following are initialization parameters designed for this method; the method includes but is not limited to these parameters:
- The initial parameter settings are as follows: a dynamic learning rate is used, which decreases proportionally with the number of training iterations, in the range 0.00001-0.01.
- The batch_size is a multiple of 16, in the range 16-256.
- The activation function is from the ReLU family, the weight initialization can be Xavier initialization or truncated Gaussian initialization, the loss function is the cross-entropy loss, the optimizer is Adam or RMSprop, and the number of training epochs is between 10 and 50.
- L2 regularization and dropout are used in the network, and the Batch Normalization layer is introduced to strengthen regularization, prevent overfitting, and speed up the convergence of the network.
- The parameters of L2 regularization and Dropout also need to be initialized and tuned. The following are initialization parameters designed for this method; the method includes but is not limited to these parameters: the L2 regularization coefficient can be between 0.00001 and 0.001, and the Dropout coefficient can be between 0.5 and 1.0.
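A minimal sketch of a training configuration along these lines is given below. The concrete values (learning-rate schedule, L2 coefficient, dropout rate, number of epochs) are assumptions chosen from within the ranges stated above, and the backbone here is only a stand-in for the multi-branch network sketched earlier.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Illustrative training configuration; values are assumptions chosen from the
# ranges stated above (learning rate 0.00001-0.01, L2 0.00001-0.001, 10-50 epochs).
def build_model(num_classes=6, image_size=224, l2_coef=1e-4, dropout_rate=0.3):
    inputs = tf.keras.Input(shape=(image_size, image_size, 3))
    # Stand-in backbone; in practice the multi-branch blocks sketched earlier
    # would be used here, with L2, Batch Normalization and Dropout as described.
    x = layers.Conv2D(32, 5, strides=2, padding='same', activation='relu',
                      kernel_regularizer=regularizers.l2(l2_coef))(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(dropout_rate)(x)      # dropout rate is an assumption
    outputs = layers.Dense(num_classes)(x)   # logits for the six weather classes
    return tf.keras.Model(inputs, outputs)

model = build_model()

# Dynamic learning rate that decreases with the number of training steps.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # datasets from step 2
```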
- The network pre-training method of the present invention adjusts the network parameters and network structure with the goal of minimizing the loss on the training set and the validation set. If the training-set loss keeps decreasing and the validation-set loss keeps decreasing, the network is still learning; if the training-set loss keeps decreasing but the validation-set loss levels off, the network is overfitting, and the L2 regularization and Dropout coefficients need to be increased; if the training-set loss levels off but the validation-set loss keeps decreasing, there is a problem with the data set, and the data set built in step 2 should be checked; if both the training-set loss and the validation-set loss level off, learning has hit a bottleneck, and the depth of the network needs to be increased or the learning rate reduced, with the parameters in (1) adjusted accordingly; if both the training-set loss and the validation-set loss keep rising, the network structure is improperly designed or the training hyperparameters are improperly set.
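The tuning rules above can be summarized as a simple decision procedure. The sketch below is only an illustration of that logic; whether a loss curve is falling, flat or rising is assumed to be determined by inspecting the training and validation loss curves.

```python
# Illustrative encoding of the tuning rules above; "down", "flat" and "up"
# are assumed to come from inspecting the training/validation loss curves.
def tuning_advice(train_trend, val_trend):
    if train_trend == 'down' and val_trend == 'down':
        return 'network is still learning, keep training'
    if train_trend == 'down' and val_trend == 'flat':
        return 'overfitting: increase the L2 and Dropout coefficients'
    if train_trend == 'flat' and val_trend == 'down':
        return 'check the data set built in step 2'
    if train_trend == 'flat' and val_trend == 'flat':
        return 'bottleneck: deepen the network or reduce the learning rate'
    if train_trend == 'up' and val_trend == 'up':
        return 'network structure or training hyperparameters are improperly set'
    return 'no rule for this combination'
```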
- step 4 can be implemented using the following preferred solutions:
- The method includes but is not limited to the following platform: the trained network is encapsulated into an executable program in .py format and ported to a Linux platform; using multithreading technology with a mutex lock, the camera is first started to capture a picture of the current weather, the picture is then passed to the encapsulated convolutional network, and finally the probability corresponding to each weather type is output, with the largest probability indicating the current weather condition.
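A minimal sketch of such a deployment loop is shown below. It is an illustration only: the saved-model path, the camera index, the input resolution and the class ordering are assumptions, and OpenCV (cv2) is used for image capture although the original does not name a specific capture library.

```python
import threading

import cv2                    # camera capture; the library choice is an assumption
import numpy as np
import tensorflow as tf

CLASSES = ['sunny', 'night', 'rainstorm', 'blizzard', 'sandstorm', 'haze']
lock = threading.Lock()                               # mutex protecting camera and model
model = tf.keras.models.load_model('weather_cnn')     # hypothetical saved-model path
camera = cv2.VideoCapture(0)                          # camera index is an assumption

def recognize_once():
    """Capture one picture of the current weather and classify it."""
    with lock:
        ok, frame = camera.read()
        if not ok:
            return None
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)             # model expects RGB
        img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        probs = tf.nn.softmax(model(img[np.newaxis, ...])[0]).numpy()
    # The weather with the largest probability is taken as the current weather.
    return dict(zip(CLASSES, probs.tolist())), CLASSES[int(np.argmax(probs))]

worker = threading.Thread(target=recognize_once)      # multithreading as in step (402)
worker.start()
worker.join()
```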
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (10)
- A deep-learning-based weather recognition method for outdoor non-fixed scenes, characterized by comprising the following steps: (1) building the basic structure of a lightweight convolutional neural network; (2) collecting pictures of various weather conditions and making them into a data set in a specific format; (3) training the lightweight convolutional neural network with the data set obtained in step (2); (4) porting the trained lightweight convolutional neural network to an embedded platform or a mobile device, taking captured weather pictures as the input of the lightweight convolutional neural network, and outputting the probabilities corresponding to the various weather conditions.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (1) the structural features of the lightweight convolutional neural network are as follows: during feature transformation, the two-dimensional size of the feature map is gradually reduced, and each time the feature map is reduced the ratio of the input size to the output size is kept as close to 2 as possible; the number of channels of the feature map gradually increases and follows a pyramid-shaped distribution, with the ratio of the number of output channels to the number of input channels kept as close to 1 as possible.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (1) the structural features of the lightweight convolutional neural network are as follows: for the convolutional layers, the convolution kernels have a receptive field of 5×5, 7×7 or 9×9, and the convolutional layers are implemented with dilated convolution or depthwise separable convolution; for the pooling layers, max pooling and average pooling are used to down-sample the feature maps.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (1) the structural features of the lightweight convolutional neural network are as follows: when features are extracted from the feature maps in the middle layers of the network, a multi-branch strategy is adopted; one branch serves as a residual structure to reduce gradient vanishing; the remaining branches use dilated convolution or depthwise separable convolution for feature extraction: first, a 1×1 convolution applies a non-linear transformation to the input and expands the number of channels; then dilated convolution or depthwise separable convolution extracts features, where a dilation factor is introduced into the depthwise convolution operation so that the receptive field is enlarged without increasing the parameter and computation cost; finally, a 1×1 convolution linearly transforms the feature maps after the dilated convolution, or combines the feature maps after the depthwise convolution, so that the number of output channels of each branch is the same.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that the specific process of step (2) is as follows: (201) collecting RGB pictures of six weather types, namely sunny, night, rainstorm, blizzard, sandstorm and haze, the pictures being taken in different scenes and from different angles; (202) dividing all collected pictures proportionally into three sub-data sets, namely a training set, a validation set and a test set, where the training set and the validation set are used to train the network and the test set is used to test the performance of the network; (203) normalizing all pictures and converting them into the specific data format of the Tensorflow framework.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 5, characterized in that in step (201) the numbers of pictures of the six weather types should be as close to equal as possible; in step (202) the ratio of the training set, validation set and test set is 8:1:1 or 6:2:2, and all three sub-data sets should contain pictures of all six weather types.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (3) the parameters that need to be initialized and tuned in the network include but are not limited to: learning rate, training batch size, activation function, weight initialization method, loss function, optimizer and number of training epochs.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (3) L2 regularization and Dropout are adopted and a Batch Normalization layer is introduced, where the parameters of L2 regularization and Dropout need to be initialized and tuned.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that in step (3) the network parameters and the network structure are adjusted with the goal of minimizing the loss on the training set and the validation set, training is carried out continuously, and finally the test set is used to test the performance of the network.
- The deep-learning-based weather recognition method for outdoor non-fixed scenes according to claim 1, characterized in that the specific process of step (4) is as follows: (401) encapsulating the trained lightweight convolutional neural network into an executable program and porting it to an embedded platform or a mobile device; (402) using multithreading technology with a mutex lock, first starting the camera to capture a picture of the current weather, then calling the encapsulated convolutional neural network to recognize the picture, and outputting the probability corresponding to each weather type, where the weather with the largest probability is the current weather condition.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910078620.XA CN109784298A (zh) | 2019-01-28 | 2019-01-28 | 一种基于深度学习的室外非固定场景天气识别方法 |
CN201910078620.X | 2019-01-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020156028A1 true WO2020156028A1 (zh) | 2020-08-06 |
Family
ID=66502629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/070261 WO2020156028A1 (zh) | 2019-01-28 | 2020-01-03 | 一种基于深度学习的室外非固定场景天气识别方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109784298A (zh) |
WO (1) | WO2020156028A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784298A (zh) * | 2019-01-28 | 2019-05-21 | 南京航空航天大学 | 一种基于深度学习的室外非固定场景天气识别方法 |
CN110287849B (zh) * | 2019-06-20 | 2022-01-07 | 北京工业大学 | 一种适用于树莓派的轻量化深度网络图像目标检测方法 |
CN110532878B (zh) * | 2019-07-26 | 2022-11-29 | 中山大学 | 一种基于轻量化卷积神经网络的驾驶员行为识别方法 |
CN110555465B (zh) * | 2019-08-13 | 2022-03-11 | 成都信息工程大学 | 一种基于cnn与多特征融合的天气图像识别方法 |
CN110415544B (zh) * | 2019-08-20 | 2021-05-11 | 深圳疆程技术有限公司 | 一种灾害天气预警方法及汽车ar-hud系统 |
CN110908399B (zh) * | 2019-12-02 | 2023-05-12 | 广东工业大学 | 一种基于轻量型神经网络的无人机自主避障方法及系统 |
CN111476713B (zh) * | 2020-03-26 | 2022-07-22 | 中南大学 | 基于多深度卷积神经网络融合的天气图像智能识别方法及系统 |
CN111832407B (zh) * | 2020-06-08 | 2022-03-15 | 中南民族大学 | 一种激光雷达开机自动判别方法、设备及存储设备 |
CN112214925A (zh) * | 2020-09-17 | 2021-01-12 | 上海微亿智造科技有限公司 | 适用于深度模型训练的烧结工艺产品质量特征处理方法及系统 |
CN112801148A (zh) * | 2021-01-14 | 2021-05-14 | 西安电子科技大学 | 基于深度学习的火情识别定位系统及方法 |
CN112990333A (zh) * | 2021-03-27 | 2021-06-18 | 上海工程技术大学 | 一种基于深度学习的天气多分类识别方法 |
CN114220024B (zh) * | 2021-12-22 | 2023-07-18 | 内蒙古自治区气象信息中心(内蒙古自治区农牧业经济信息中心)(内蒙古自治区气象档案馆) | 基于深度学习的静止卫星沙尘暴识别方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104834912B (zh) * | 2015-05-14 | 2017-12-22 | 北京邮电大学 | 一种基于图像信息检测的天气识别方法及装置 |
CN105242330B (zh) * | 2015-10-15 | 2017-10-20 | 广东欧珀移动通信有限公司 | 一种天气状况的检测方法、装置及移动终端 |
CN109214406B (zh) * | 2018-05-16 | 2021-07-09 | 长沙理工大学 | 基于D-MobileNet神经网络的图像分类方法 |
CN108875593A (zh) * | 2018-05-28 | 2018-11-23 | 上海交通大学 | 基于卷积神经网络的可见光图像天气识别方法 |
- 2019
  - 2019-01-28 CN CN201910078620.XA patent/CN109784298A/zh active Pending
- 2020
  - 2020-01-03 WO PCT/CN2020/070261 patent/WO2020156028A1/zh active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101933856B1 (ko) * | 2017-07-03 | 2018-12-31 | (주)시정 | 콘벌루션 신경망을 이용한 영상 처리 시스템 및 이를 이용한 영상 처리 방법 |
CN108921013A (zh) * | 2018-05-16 | 2018-11-30 | 浙江零跑科技有限公司 | 一种基于深度神经网络的视觉场景识别系统及方法 |
CN109784298A (zh) * | 2019-01-28 | 2019-05-21 | 南京航空航天大学 | 一种基于深度学习的室外非固定场景天气识别方法 |
Non-Patent Citations (1)
Title |
---|
YANG YUANFEI; ZENG SHANGYOU; ZHOU YUE; FENG YANYAN; PAN BING: "Image Recognition Based on Light Weight Convolution Neural Network", VIDEO ENGINEERING, no. 03, 31 March 2018 (2018-03-31), CN, pages 40 - 44, XP009522477, ISSN: 1002-8692, DOI: 10.16280/j.videoe.2018.03.006 * |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112084890A (zh) * | 2020-08-21 | 2020-12-15 | 杭州电子科技大学 | 基于gmm和cqfl的多尺度识别交通信号标志的方法 |
CN112084890B (zh) * | 2020-08-21 | 2024-03-22 | 杭州电子科技大学 | 基于gmm和cqfl的多尺度识别交通信号标志的方法 |
CN112233073A (zh) * | 2020-09-30 | 2021-01-15 | 国网山西省电力公司大同供电公司 | 一种变电设备红外热成像异常实时检测方法 |
CN112434786A (zh) * | 2020-10-22 | 2021-03-02 | 西安交通大学 | 一种基于winograd动态卷积块的图像处理方法 |
CN112434786B (zh) * | 2020-10-22 | 2023-09-19 | 西安交通大学 | 一种基于winograd动态卷积块的图像处理方法 |
CN112396020A (zh) * | 2020-11-30 | 2021-02-23 | 中铁建新疆京新高速公路有限公司 | 基于人工智能算法的雪深监测系统 |
CN112651468A (zh) * | 2021-01-18 | 2021-04-13 | 佛山职业技术学院 | 一种多尺度轻量化图像分类方法及其存储介质 |
CN112651468B (zh) * | 2021-01-18 | 2024-06-04 | 佛山职业技术学院 | 一种多尺度轻量化图像分类方法及其存储介质 |
CN112818893A (zh) * | 2021-02-10 | 2021-05-18 | 北京工业大学 | 一种面向移动终端的轻量化开集地标识别方法 |
CN112836669A (zh) * | 2021-02-22 | 2021-05-25 | 宁波大学 | 一种司机分心驾驶检测方法 |
CN112836669B (zh) * | 2021-02-22 | 2023-12-12 | 宁波大学 | 一种司机分心驾驶检测方法 |
CN112836673A (zh) * | 2021-02-27 | 2021-05-25 | 西北工业大学 | 一种基于实例感知和匹配感知的重识别方法 |
CN112836673B (zh) * | 2021-02-27 | 2024-06-04 | 西北工业大学 | 一种基于实例感知和匹配感知的重识别方法 |
CN112884747B (zh) * | 2021-02-28 | 2024-04-16 | 长安大学 | 一种融合循环残差卷积与上下文提取器网络的自动桥梁裂缝检测系统 |
CN113092531B (zh) * | 2021-03-18 | 2023-06-23 | 西北工业大学 | 一种基于卷积神经网络的机电阻抗连接结构损伤检测方法 |
CN113092531A (zh) * | 2021-03-18 | 2021-07-09 | 西北工业大学 | 一种基于卷积神经网络的机电阻抗连接结构损伤检测方法 |
CN113109782A (zh) * | 2021-04-15 | 2021-07-13 | 中国人民解放军空军航空大学 | 一种直接应用于雷达辐射源幅度序列的新型分类方法 |
CN113050042A (zh) * | 2021-04-15 | 2021-06-29 | 中国人民解放军空军航空大学 | 基于改进UNet3+网络的雷达信号调制类型识别方法 |
CN113050042B (zh) * | 2021-04-15 | 2023-08-15 | 中国人民解放军空军航空大学 | 基于改进UNet3+网络的雷达信号调制类型识别方法 |
CN113109782B (zh) * | 2021-04-15 | 2023-08-15 | 中国人民解放军空军航空大学 | 一种直接应用于雷达辐射源幅度序列的分类方法 |
CN113392701B (zh) * | 2021-05-10 | 2024-04-26 | 南京师范大学 | 一种基于YN-Net卷积神经网络的输电线路障碍物检测方法 |
CN113392701A (zh) * | 2021-05-10 | 2021-09-14 | 南京师范大学 | 一种基于YN-Net卷积神经网络的输电线路障碍物检测方法 |
CN113284042B (zh) * | 2021-05-31 | 2023-11-07 | 大连民族大学 | 一种多路并行图像内容特征优化风格迁移方法及系统 |
CN113284042A (zh) * | 2021-05-31 | 2021-08-20 | 大连民族大学 | 一种多路并行图像内容特征优化风格迁移方法及系统 |
CN113837004A (zh) * | 2021-08-20 | 2021-12-24 | 北京工业大学 | 一种基于深度学习的游梁式抽油机运动学分析方法 |
CN113837004B (zh) * | 2021-08-20 | 2024-05-31 | 北京工业大学 | 一种基于深度学习的游梁式抽油机运动学分析方法 |
CN113887425B (zh) * | 2021-09-30 | 2024-04-12 | 北京工业大学 | 一种面向低算力运算装置的轻量化物体检测方法与系统 |
CN113887425A (zh) * | 2021-09-30 | 2022-01-04 | 北京工业大学 | 一种面向低算力运算装置的轻量化物体检测方法与系统 |
CN114154620B (zh) * | 2021-11-29 | 2024-05-21 | 上海应用技术大学 | 人群计数网络的训练方法 |
CN114154620A (zh) * | 2021-11-29 | 2022-03-08 | 上海应用技术大学 | 人群计数网络的训练方法 |
CN114241344B (zh) * | 2021-12-20 | 2023-05-02 | 电子科技大学 | 一种基于深度学习的植物叶片病虫害严重程度评估方法 |
CN114241344A (zh) * | 2021-12-20 | 2022-03-25 | 电子科技大学 | 一种基于深度学习的植物叶片病虫害严重程度评估方法 |
CN114420151A (zh) * | 2022-01-21 | 2022-04-29 | 陕西师范大学 | 基于并联张量分解卷积神经网络的语音情感识别方法 |
CN114420151B (zh) * | 2022-01-21 | 2024-05-31 | 陕西师范大学 | 基于并联张量分解卷积神经网络的语音情感识别方法 |
CN114781472A (zh) * | 2022-03-02 | 2022-07-22 | 多点(深圳)数字科技有限公司 | 一种基于自适应卷积核的跨门店生鲜识别方法 |
CN114781472B (zh) * | 2022-03-02 | 2024-05-24 | 多点(深圳)数字科技有限公司 | 一种基于自适应卷积核的跨门店生鲜识别方法 |
CN114781708A (zh) * | 2022-04-11 | 2022-07-22 | 东南大学 | 一种基于轻量级自编码网络的短期风功率预测方法 |
CN114998820A (zh) * | 2022-04-25 | 2022-09-02 | 中国海洋大学 | 一种基于多任务学习的天气识别方法及系统 |
CN114881879A (zh) * | 2022-05-17 | 2022-08-09 | 燕山大学 | 一种基于亮度补偿残差网络的水下图像增强方法 |
CN114926821A (zh) * | 2022-06-24 | 2022-08-19 | 长城汽车股份有限公司 | 车辆故障警示方法及系统 |
CN115115890B (zh) * | 2022-07-17 | 2024-03-19 | 西北工业大学 | 一种基于自动化机器学习的轻量化高速公路团雾分类方法 |
CN115115890A (zh) * | 2022-07-17 | 2022-09-27 | 西北工业大学 | 一种基于自动化机器学习的轻量化高速公路团雾分类方法 |
CN115222142A (zh) * | 2022-07-29 | 2022-10-21 | 贵州电网有限责任公司 | 一种极端气象条件下输变电变压器设备故障预测分析方法 |
CN115900712B (zh) * | 2022-11-03 | 2023-08-29 | 深圳大学 | 一种信源可信度评价组合定位方法 |
CN115900712A (zh) * | 2022-11-03 | 2023-04-04 | 深圳大学 | 一种信源可信度评价组合定位方法 |
CN116958717B (zh) * | 2023-09-20 | 2023-12-12 | 山东省地质测绘院 | 基于机器学习的地质大数据智能清洗方法 |
CN116958717A (zh) * | 2023-09-20 | 2023-10-27 | 山东省地质测绘院 | 基于机器学习的地质大数据智能清洗方法 |
Also Published As
Publication number | Publication date |
---|---|
CN109784298A (zh) | 2019-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020156028A1 (zh) | A deep-learning-based weather recognition method for outdoor non-fixed scenes | |
CN110163187B (zh) | 基于f-rcnn的远距离交通标志检测识别方法 | |
Dong et al. | UAV-based real-time survivor detection system in post-disaster search and rescue operations | |
CN108647655B (zh) | 基于轻型卷积神经网络的低空航拍影像电力线异物检测方法 | |
CN103679674B (zh) | 一种无人飞行器实时图像拼接方法及系统 | |
CN113076871B (zh) | 一种基于目标遮挡补偿的鱼群自动检测方法 | |
CN111986240A (zh) | 基于可见光和热成像数据融合的落水人员检测方法及系统 | |
CN113569672A (zh) | 轻量级目标检测与故障识别方法、装置及系统 | |
CN113111740A (zh) | 一种遥感图像目标检测的特征编织方法 | |
CN113495575A (zh) | 一种基于注意力机制的无人机自主着陆视觉引导方法 | |
CN114782298A (zh) | 一种具有区域注意力的红外与可见光图像融合方法 | |
CN117671502A (zh) | 一种森林火灾检测方法及系统 | |
CN115115973A (zh) | 一种基于多感受野与深度特征的弱小目标检测方法 | |
CN114170528B (zh) | 一种基于卫星云图的强对流区域识别方法 | |
CN114998801A (zh) | 基于对比自监督学习网络的森林火灾烟雾视频检测方法 | |
CN110751271A (zh) | 一种基于深度神经网络的图像溯源特征表征方法 | |
CN114463340A (zh) | 一种边缘信息引导的敏捷型遥感图像语义分割方法 | |
CN117557780A (zh) | 一种机载多模态学习的目标检测算法 | |
CN110991305B (zh) | 一种遥感图像下的飞机检测方法及存储介质 | |
CN114494893B (zh) | 基于语义重用上下文特征金字塔的遥感图像特征提取方法 | |
Yin et al. | M2F2-RCNN: Multi-functional faster RCNN based on multi-scale feature fusion for region search in remote sensing images | |
CN114359258B (zh) | 红外移动对象目标部位的检测方法、装置及系统 | |
CN113869151B (zh) | 一种基于特征融合的跨视角步态识别方法及系统 | |
CN116189012A (zh) | 一种基于改进yolox的无人机地面小目标检测方法 | |
CN113034598B (zh) | 一种基于深度学习的无人机电力巡线方法 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20748594; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20748594; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 21.09.2021)
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20748594; Country of ref document: EP; Kind code of ref document: A1