CN113221668B - Frame extraction method in wind generating set blade video monitoring


Info

Publication number
CN113221668B
CN113221668B (application CN202110424657.0A)
Authority
CN
China
Prior art keywords
blade
video
frame
picture
frame extraction
Prior art date
Legal status
Active
Application number
CN202110424657.0A
Other languages
Chinese (zh)
Other versions
CN113221668A (en)
Inventor
雷红涛
李刚
刘磊
陈高科
张苑
梅建刚
任毅
Current Assignee
XI'AN XIANGXUN TECHNOLOGY CO LTD
Original Assignee
XI'AN XIANGXUN TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by XI'AN XIANGXUN TECHNOLOGY CO LTD
Priority to CN202110424657.0A
Publication of CN113221668A
Application granted
Publication of CN113221668B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03 - MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03D - WIND MOTORS
    • F03D17/00 - Monitoring or testing of wind motors, e.g. diagnostics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B10/00 - Integration of renewable energy sources in buildings
    • Y02B10/30 - Wind power
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E - REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00 - Energy generation through renewable energy sources
    • Y02E10/70 - Wind energy
    • Y02E10/72 - Wind turbines with rotation axis in wind direction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Sustainable Development (AREA)
  • Sustainable Energy (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a frame extraction method in wind generating set blade video monitoring. It addresses the shortcomings of existing blade monitoring methods based on video analysis: the volume of video data to be processed is large, and real-time intelligent analysis and fault diagnosis place heavy demands on hardware resources and bandwidth, driving up the cost of use. The method first builds and trains a classification network model based on deep learning, then extracts the video frames containing blades with the trained blade frame extraction model, and finally applies a position priority strategy and a confidence screening strategy to extract the final specified number of video frames, while ensuring that frames are extracted uniformly from every blade of every wind turbine and that the blade positions in the extracted frames favor the detection of subsequent abnormalities such as blade cracks and icing.

Description

Frame extraction method in wind generating set blade video monitoring
Technical Field
The invention relates to a frame extraction method in wind generating set blade video monitoring.
Background
Wind energy is a clean, pollution-free, renewable green energy source. In recent years wind power development and construction have been rapid, and the installed capacity of grid-connected wind power has grown steadily. Blades are key components of a wind generating set: they are expensive to manufacture and to replace, and once cracks, icing, or similar damage occur they can cause fault shutdowns or even scrap the blade. Monitoring the blades is therefore the most effective way to discover potential safety hazards in time.
Blade monitoring based on video analysis is intuitive and effective, and has therefore been widely adopted by wind power operators. However, the volume of video data to be processed is large, and real-time intelligent analysis and fault diagnosis demand substantial hardware resources and bandwidth, making the approach costly and challenging to deploy at low cost in a wind farm. Research into a low-complexity, fast, accurate, and effective video frame extraction method thus provides technical support for low-cost deployment of intelligent blade video monitoring systems in wind farms, and has significant practical value.
Disclosure of Invention
The frame extraction method in wind generating set blade video monitoring is provided to solve the problems of the existing blade monitoring methods based on video analysis: the volume of video data to be processed is large, real-time intelligent analysis and fault diagnosis place high demands on hardware resources and bandwidth, and the cost of use is consequently high.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a frame extraction method in wind generating set blade video monitoring is characterized in that: extracting video frames containing the leaves by a deep learning algorithm, and extracting the video frames with the finally specified number by a position priority strategy and a confidence coefficient screening strategy; the method specifically comprises the following steps:
1) Building an image classification network model based on deep learning
Selecting the network model ResNet18 or SqueezeNet; reducing the depth of the ResNet18 network feature maps, or reducing the depth of the SqueezeNet network feature maps and removing some of the 'Fire Module' blocks of the SqueezeNet network;
2) Model training
Collecting blade videos and organizing them into a video frame data set, and training the built network model with this data set to obtain a blade frame extraction model based on deep learning;
3) Obtaining a specified number of frames containing blades
3.1) Denoting the number of blades of the wind turbine as N, each blade corresponding to an Id;
3.2) Collecting the blade video and numbering the blades in the video from 0, incrementing the count by 1 for each blade pass from entering the picture to leaving it; with the number denoted n, the correspondence between each number and the physical blade is obtained from formula (1):
Id=n%N (1);
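(For example, with N = 3 blades, the blade pass numbered n = 7 corresponds to Id = 7 % 3 = 1, i.e. the second physical blade.)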
3.3) Extracting the blade-containing frames from the video with the blade frame extraction model; for the blade numbered n, denoting the number of frames extracted between its entering and leaving the picture as m and numbering the extracted frames from 0, so that after each video is processed by the blade frame extraction model an extracted-frame information list of the form of formula (2) is obtained:
[…, [n, [conf_0, conf_1, …, conf_m], [pic_0, pic_1, …, pic_m]], …] (2);
wherein conf is the confidence with which a frame is judged to contain a blade, and pic is the full path under which the frame is temporarily stored;
3.4) From the list information of formula (2), applying the position priority strategy and the confidence screening strategy, collecting the selected frames by their paths pic and deleting the useless frames, to obtain the final specified number topN of blade-containing frames.
Further, the position priority strategy is specifically implemented according to the following method:
if the total number of extracted pictures in formula (2) does not exceed topN frames, all of them are taken directly as the final frame extraction result;
otherwise, the list of middle positions of each blade pass from entering to leaving the picture is obtained according to formula (3):
[…, [n, pic_⌊m/2⌋], …] (3);
if n is greater than or equal to topN, the first topN frames in the list of formula (3) are taken as the final frame extraction result; if n is less than topN, all n frames in the list of formula (3) are taken as frame extraction results, and the remaining topN-n frames are obtained by confidence screening.
Further, the confidence screening strategy is specifically implemented according to the following method:
firstly, the blade-containing frames that were not yet selected are obtained by taking the set difference of formula (2) and formula (3);
then, the difference set is sorted in descending order of confidence conf and the first topN-n frames are extracted;
finally, these frames are merged with the frames selected by formula (3) to obtain the final frame extraction result.
Further, the collection and organization of blade videos into the video frame data set in step 2) is implemented according to the following steps:
a) Collecting blade videos, selecting video frame material from daytime, nighttime, sunny, cloudy, and rain/snow scenes, so that the material contains both blade-containing and blade-free video frames;
b) Dividing the pictures into positive and negative samples according to the criterion for judging whether a blade is contained; after removing duplicate data, keeping the number of negative samples at 2-4 times the number of positive samples, the two together forming the data set;
the criterion is: both side edges of the blade appear in the picture and the bounding rectangle of the blade covers no less than 30% of the picture; pictures satisfying this condition are taken as positive samples, and pictures not satisfying it as negative samples.
Compared with the prior art, the invention has the following beneficial effects:
The frame extraction method in wind generating set blade video monitoring is a low-complexity, fast, accurate, and effective video frame extraction algorithm. It provides technical support for intelligent analysis and fault diagnosis of wind turbine blade condition, enables low-cost deployment of an intelligent blade video monitoring system in the wind farm, and ultimately supports all-weather unattended operation of the wind farm.
Drawings
FIG. 1 is a flowchart illustrating the overall processing of a frame extraction method in video monitoring of a wind turbine blade according to the present invention;
FIG. 2 is a diagram illustrating parameters before and after modification of the ResNet18 network model in the present invention;
FIG. 3 is a diagram showing a comparison of the structure of the SqueezeNet network model before and after modification;
FIG. 4 is a comparison of parameters before and after modification of the SqueezeNet network model in the present invention;
FIG. 5 is a flow diagram of the position priority strategy and the confidence screening strategy of the present invention;
FIG. 6 is a diagram of the results of video frames extracted by the method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The flow of the frame extraction method in wind generating set blade video monitoring provided by the invention is shown in FIG. 1. First, a classification network model based on deep learning is built and trained; then the video frames containing blades are extracted by the trained blade frame extraction model; finally, the position priority strategy and the confidence screening strategy extract the final specified number of video frames, while ensuring that frames are extracted uniformly from every blade of every wind turbine and that the blade positions in the extracted frames favor the detection of subsequent abnormalities such as blade cracks and icing. The method specifically comprises the following steps:
1) Determining the blade frame extraction algorithm based on deep learning
Whether a video frame contains a blade can be determined either by blade detection or by picture classification (binary classification into blade and no-blade). A detection algorithm is more complex than classification, which conflicts with the stated goal of a low-complexity algorithm and is difficult to realize on devices with limited computing power, so the invention adopts the classification approach. Traditional classification methods struggle to balance complexity and accuracy; the invention therefore modifies a classical convolutional neural network to build a smaller and faster classification model that balances the two.
The model building method and training process are explained below.
1.1 Model construction
Based on the requirement of low-complexity deployment, the invention selects a small and fast classical network structure and trims it further while preserving model quality. The trimming process is explained below using the classical small networks ResNet18 and SqueezeNet as examples.
(1) ResNet18 network structure modification
The ResNet18 network model is relatively deep, has strong nonlinear expressive power, and can learn more abstract, better-fitting features. The depth (channel count) of the ResNet18 feature maps is therefore reduced without changing the depth of the model; the model parameters before and after modification are shown in FIG. 2.
(2) SqueezeNet network structure modification
For the SqueezeNet network, complexity remains high if only the depth of the feature maps is modified. Since SqueezeNet is composed of 'Fire Module' blocks, some Fire Module blocks can additionally be removed on top of the feature-map depth reduction; the structure and model parameters before and after modification are shown in FIGS. 3 and 4.
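For illustration, a minimal PyTorch-style sketch of this kind of trimming follows (the patent's experiments use MXNet, and its exact channel widths and Fire-module counts are given in FIGS. 2-4; the values below are illustrative assumptions):

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        """SqueezeNet 'Fire Module': 1x1 squeeze, then parallel 1x1/3x3 expand."""
        def __init__(self, in_ch, squeeze_ch, expand_ch):
            super().__init__()
            self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
            self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
            self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.relu(self.squeeze(x))
            return torch.cat([self.relu(self.expand1(x)),
                              self.relu(self.expand3(x))], dim=1)

    class TrimmedSqueezeNet(nn.Module):
        """Blade/no-blade classifier: fewer Fire modules, narrower feature maps."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(inplace=True),
                nn.MaxPool2d(3, stride=2),
                Fire(32, 8, 16),    # reduced widths (illustrative)
                nn.MaxPool2d(3, stride=2),
                Fire(32, 16, 32),   # several original Fire blocks removed
                nn.MaxPool2d(3, stride=2),
                Fire(64, 24, 48),
            )
            self.classifier = nn.Sequential(
                nn.Conv2d(96, num_classes, kernel_size=1),
                nn.AdaptiveAvgPool2d(1),
            )

        def forward(self, x):
            return torch.flatten(self.classifier(self.features(x)), 1)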
1.2 Model training
Since model training methods are mature and the tooling is complete, the training process is not repeated here; only the data set collection method is explained. First, wind turbines are usually installed in open outdoor locations, so the selected material covers daytime, nighttime, sunny, cloudy, and rain/snow scenes. Second, a frame in which only a small part of a blade appears contributes little to subsequent fault analysis while still consuming computing resources, so the criterion for containing a blade is that both side edges of the blade appear in the picture and the bounding rectangle of the blade covers no less than 30% of the picture. Finally, to ensure the classification effect, after duplicate data are removed, the number of pictures not meeting the criterion is kept at 2-4 times the number of pictures meeting it. The built network model is trained with the organized video frame data set to obtain the blade frame extraction model based on deep learning.
2) Obtaining a specified number of frames containing blades
A wind turbine has several blades (typically 3); their number is denoted N, and each blade corresponds to one Id. During operation the blades appear in the picture in turn, and there is usually a time interval between one blade leaving the picture and the next appearing. The blades in the video are numbered from 0, and the count is incremented by 1 for each blade pass from entering the picture to leaving it (in practice, the count is incremented once no blade has appeared for more than a preset threshold of Thr consecutive frames); with the number denoted n, the correspondence between each number and the physical blade is obtained from formula (1):
Id=n%N (1)
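A minimal Python sketch of this counting logic follows, assuming the classifier supplies a per-frame blade/no-blade decision (the class name, method names, and the Thr default are illustrative assumptions):

    class BladePassCounter:
        """Number blade passes; a pass ends after Thr consecutive blade-free frames."""
        def __init__(self, num_blades, thr=5):
            self.N = num_blades       # N: blades per wind turbine
            self.thr = thr            # Thr: absent frames that end a pass
            self.n = -1               # current pass number n of formula (1)
            self.absent = thr         # start out "between passes"

        def update(self, frame_has_blade):
            """Feed one frame's classifier decision; return (n, Id) or None."""
            if frame_has_blade:
                if self.absent >= self.thr:   # a new blade entered the picture
                    self.n += 1
                self.absent = 0
                return self.n, self.n % self.N    # Id = n % N, formula (1)
            self.absent += 1
            return None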
after each blade is extracted by a blade frame extraction model from an entering picture to a leaving picture, the number of the obtained blade-containing frames is recorded as m, and the blade-containing frames are numbered from 0, so that an extraction frame information list shown in a formula (2) is obtained after each video segment is extracted:
[…, [n, [conf_0, conf_1, …, conf_m], [pic_0, pic_1, …, pic_m]], …] (2)
where conf is the confidence with which a frame is judged to contain a blade, and pic is the full path under which the frame is temporarily stored.
The list information of formula (2) is then further screened by the position priority strategy and by confidence: the selected frames are collected by their paths pic and the useless frames are deleted, yielding the final preset number topN of video frames. The processing flow is shown in FIG. 5 and is explained further below.
(1) Position priority strategy
The frame number within a blade pass, from 0 when the blade enters the picture to m when it leaves, encodes both the time order and the blade position: a number near the middle means the blade is closer to the center of the picture, which is the position most useful for subsequent blade condition analysis. Hence one middle position can be taken for each blade pass, giving the position list of formula (3):
[…, [n, pic_⌊m/2⌋], …] (3)
if the total number of the extracted pictures in the formula (2) is not more than the topN frame, directly taking all the pictures as a final frame extraction result; otherwise, the intermediate position list is obtained according to equation (3). Further, if n is larger than or equal to topN, the previous topN result in the list of the formula (3) is taken as the final frame extraction result, and n is numbered continuously, which means that the blades entering the picture can be extracted to the middle position; if n is less than topN, all n results in the list of the formula (3) are taken as frame extraction results, and the rest topN-n frame results are obtained through confidence degree screening.
(2) Confidence screening strategy
The higher the confidence of an extracted frame in formula (2), the more likely the picture contains a complete blade and the more useful it is for subsequent blade condition analysis. Therefore, when position screening cannot supply the specified number of frames, the remaining frames are screened by confidence: the blade-containing frames that were not yet selected are obtained by taking the set difference of formula (2) and formula (3); the difference set is sorted in descending order of confidence conf and the first topN-n frames are extracted; finally, these are merged with the frames selected by formula (3) to obtain the final frame extraction result.
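A minimal Python sketch combining the two strategies follows, taking the extracted-frame information list of formula (2) as nested Python lists (the function and variable names are illustrative assumptions):

    def select_topn(frame_info, topn):
        """frame_info: [[n, [conf_0..conf_m], [pic_0..pic_m]], ...], formula (2)."""
        total = sum(len(pics) for _n, _confs, pics in frame_info)
        if total <= topn:                     # few frames: keep them all
            return [p for _n, _confs, pics in frame_info for p in pics]

        # Position priority (formula (3)): one middle frame per blade pass.
        middles = [pics[(len(pics) - 1) // 2] for _n, _confs, pics in frame_info]
        if len(middles) >= topn:              # n >= topN: first topN middles
            return middles[:topn]

        # Confidence screening: difference set of (2) and (3), descending conf.
        chosen = set(middles)
        rest = sorted(((c, p) for _n, confs, pics in frame_info
                       for c, p in zip(confs, pics) if p not in chosen),
                      key=lambda cp: cp[0], reverse=True)
        return middles + [p for _c, p in rest[:topn - len(middles)]]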
Taking the blade monitoring of one wind turbine as an example, the frame extraction process is as follows:
Step one: data collection and organization
Following the video frame data set collection method of 1.2), material covering daytime, nighttime, sunny, cloudy, and rain/snow scenes is collected, making sure the material contains both blade-containing and blade-free frames. Then, according to the criterion of 1.2), the pictures are divided into positive samples (meeting the criterion) and negative samples (not meeting it); after duplicate data are removed, the negative samples number 2-4 times the positive samples, and together they form the data set.
Step two: model design and training
Selecting a deep learning framework, building the model of 1.1), and training it to obtain the blade frame extraction model based on deep learning.
Step three: model deployment
Deploying the model and the complete algorithm of the flow in FIG. 1 to the device for frame extraction, and sending the results to the blade condition analysis algorithm.
For the test, the MXNet deep learning framework was selected and the modified ResNet18 model was deployed to an ARM platform; with a 1080P video and the number of extracted frames set to 20, the final extraction results are shown in FIG. 6. Table 1 shows the results of a comparison test on the ARM platform between the modified ResNet18 and SqueezeNet models, the MobileNetV3 model, and a traditional method (the figures are overall metrics; the traditional method extracts frames poorly in cloudy and rain/snow scenes). The results show that the proposed method balances complexity and accuracy well and better meets the requirement of low-cost deployment.
TABLE 1
Frame extraction method                               Time/frame   Accuracy
Modified SqueezeNet (invention)                       93 ms        90.67%
Modified ResNet18 (invention)                         109 ms       95.29%
MobileNetV3 model                                     197 ms       95.94%
Traditional method (frame difference + morphology)    137 ms       60.77%

Claims (2)

1. A frame extraction method in wind generating set blade video monitoring, characterized in that: video frames containing blades are extracted by a deep learning algorithm, and the final specified number of video frames is extracted by a position priority strategy and a confidence screening strategy; the method specifically comprises the following steps:
1) Building an image classification network model based on deep learning
Selecting the network model ResNet18 or SqueezeNet; reducing the depth of the ResNet18 network feature maps, or reducing the depth of the SqueezeNet network feature maps and removing some of the 'Fire Module' blocks of the SqueezeNet network;
2) Model training
Collecting blade videos and organizing them into a video frame data set, and training the built network model with this data set to obtain a blade frame extraction model based on deep learning;
3) Obtaining a specified number of frames containing blades
3.1) Denoting the number of blades of the wind turbine as N, each blade corresponding to an Id;
3.2) Collecting the blade video and numbering the blades in the video from 0, incrementing the count by 1 for each blade pass from entering the picture to leaving it; with the number denoted n, the correspondence between each number and the physical blade is obtained from formula (1):
Id=n%N (1);
3.3) Extracting the blade-containing frames from the video with the blade frame extraction model; for the blade numbered n, denoting the number of frames extracted between its entering and leaving the picture as m and numbering the extracted frames from 0, so that after each video is processed by the blade frame extraction model an extracted-frame information list of the form of formula (2) is obtained:
[…, [n, [conf_0, conf_1, …, conf_m], [pic_0, pic_1, …, pic_m]], …] (2);
wherein conf is the confidence with which a frame is judged to contain a blade, and pic is the full path under which the frame is temporarily stored;
3.4) From the list information of formula (2), applying the position priority strategy and the confidence screening strategy, collecting the selected frames by their paths pic and deleting the useless frames, to obtain the final specified number topN of blade-containing frames;
the position priority strategy is specifically implemented as follows:
if the total number of extracted pictures in formula (2) does not exceed topN frames, all of them are taken directly as the final frame extraction result; otherwise, the middle-position list is obtained according to formula (3); if n is greater than or equal to topN, the first topN entries of the list of formula (3) are taken as the final frame extraction result; if n is less than topN, all n entries of the list of formula (3) are taken as frame extraction results, and the remaining topN-n frames are obtained by confidence screening;
[…, [n, pic_⌊m/2⌋], …] (3);
the confidence screening strategy is specifically implemented as follows:
firstly, the blade-containing frames that were not yet selected are obtained by taking the set difference of formula (2) and formula (3); then, the difference set is sorted in descending order of confidence conf and the first topN-n frames are extracted; finally, these frames are merged with the frames selected by formula (3) to obtain the final frame extraction result.
2. The frame extraction method in wind generating set blade video monitoring according to claim 1, wherein the collection and organization of blade videos into the video frame data set in step 2) is implemented according to the following steps:
a) Collecting blade videos, selecting video frame material from daytime, nighttime, sunny, cloudy, and rain/snow scenes, so that the material contains both blade-containing and blade-free video frames;
b) Dividing the pictures into positive and negative samples according to the criterion for judging whether a blade is contained; after removing duplicate data, keeping the number of negative samples at 2-4 times the number of positive samples, the two together forming the data set;
the criterion is: both side edges of the blade appear in the picture and the bounding rectangle of the blade covers no less than 30% of the picture; pictures satisfying this condition are taken as positive samples, and pictures not satisfying it as negative samples.
CN202110424657.0A 2021-04-20 2021-04-20 Frame extraction method in wind generating set blade video monitoring Active CN113221668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110424657.0A CN113221668B (en) 2021-04-20 2021-04-20 Frame extraction method in wind generating set blade video monitoring


Publications (2)

Publication Number Publication Date
CN113221668A (en) 2021-08-06
CN113221668B (en) 2023-04-07

Family

ID=77088247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110424657.0A Active CN113221668B (en) 2021-04-20 2021-04-20 Frame extraction method in wind generating set blade video monitoring

Country Status (1)

Country Link
CN (1) CN113221668B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113803223B (en) * 2021-08-11 2022-12-20 明阳智慧能源集团股份公司 Method, system, medium and equipment for monitoring icing state of fan blade in real time


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8559516B2 (en) * 2007-06-14 2013-10-15 Sony Corporation Video sequence ID by decimated scene signature

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416294A (en) * 2018-03-08 2018-08-17 南京天数信息科技有限公司 A kind of fan blade fault intelligent identification method based on deep learning
CN109002807A (en) * 2018-07-27 2018-12-14 重庆大学 A kind of Driving Scene vehicle checking method based on SSD neural network
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN112506653A (en) * 2020-12-03 2021-03-16 浙江大华技术股份有限公司 Frame extraction frame rate adjusting method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Object detection based on SSD-ResNet; Xin Lu et al; 2019 IEEE 6th International Conference on Cloud Computing and Intelligence Systems (CCIS); 2020-04-23; pp. 89-92 *
A real-time static gesture recognition method based on deep learning (一种基于深度学习的静态手势实时识别方法); Zhang Xun et al; Modern Computer (现代计算机); 2017-12-05; vol. 34; pp. 6-11 *
Shadow detection method for video objects based on local binary maps (基于局部二元图的视频对象阴影检测方法); Zhang Ling et al; Systems Engineering and Electronics (系统工程与电子技术); 2007-06-15; No. 06; pp. 974-977 *

Also Published As

Publication number Publication date
CN113221668A (en) 2021-08-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant