CN117973212A - Yaw angle calculation method and device for wind driven generator - Google Patents

Yaw angle calculation method and device for wind driven generator


Publication number
CN117973212A
CN117973212A (Application CN202410169475.7A)
Authority
CN
China
Prior art keywords
driven generator
wind driven
detected
data set
blade
Prior art date
Legal status
Pending
Application number
CN202410169475.7A
Other languages
Chinese (zh)
Inventor
陈禹明
张晓晔
郑培文
黎佩馨
陈皓
徐琪
朱曦萌
Current Assignee
China Southern Power Grid Power Technology Co Ltd
Original Assignee
China Southern Power Grid Power Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Southern Power Grid Power Technology Co Ltd
Priority to CN202410169475.7A
Publication of CN117973212A


Abstract

The invention discloses a yaw angle calculation method and device for a wind driven generator. The method comprises the following steps: acquiring a first data set, and constructing a first segmentation model according to a deep convolutional neural network and the first data set, wherein the first data set comprises blade images and nacelle images of wind driven generators and their corresponding labels; acquiring a real-time image of a wind driven generator to be detected, and obtaining blade and nacelle information of the wind driven generator to be detected according to the first segmentation model; and performing region straight-line extraction according to the blade and nacelle information, and calculating the yaw angle of the wind driven generator to be detected. The method performs semantic segmentation on the real-time image of the wind driven generator with the first segmentation model, extracts region straight lines for the nacelle and blades, and calculates the yaw angle of the wind driven generator to be detected, thereby improving the accuracy and efficiency of nacelle orientation detection.

Description

Yaw angle calculation method and device for wind driven generator
Technical Field
The invention relates to the technical field of data processing, in particular to a yaw angle calculation method and device of a wind driven generator.
Background
During operation, wind driven generator blades are affected by factors such as strong wind loads, sand scouring, lightning strikes, atmospheric oxidation and corrosion by humid air, so problems such as air holes, cracks, abrasion and corrosion are inevitable. If these are not treated in time, the blades may break, seriously threatening the safe operation of the unit; the health condition of wind driven generator blades therefore needs to be monitored.
In the prior art, inspection of wind driven generator blades is mainly performed by traditional methods such as rope-access technicians and telescope observation. Even when unmanned aerial vehicles are used, the degree of automation is low and inspectors must observe the video images directly, so the labor cost is high and the efficiency and accuracy are difficult to guarantee. In addition, the wind driven generator must pause operation during inspection, which is time-consuming and expensive.
Disclosure of Invention
The invention provides a yaw angle calculation method and device for a wind driven generator, which are used for solving the technical problems of high cost and low efficiency caused by the low degree of automation in existing wind driven generator inspection.
In order to solve the above technical problems, an embodiment of the present invention provides a method for calculating a yaw angle of a wind turbine, including:
Acquiring a first data set, and constructing a first segmentation model according to a deep convolutional neural network and the first data set; the first data set comprises blade images and cabin images of the wind driven generator and corresponding labels;
Acquiring a real-time image of a wind driven generator to be detected, and acquiring blade and cabin information of the wind driven generator to be detected according to the first segmentation model;
And performing region straight-line extraction according to the blade and nacelle information of the wind driven generator to be detected, and calculating the yaw angle of the wind driven generator to be detected.
According to the method, the real-time image of the wind driven generator is semantically segmented by the first segmentation model, region straight-line extraction is carried out on the nacelle and blades of the wind driven generator, and the yaw angle of the wind driven generator to be detected is calculated, improving the accuracy and efficiency of nacelle orientation detection. Meanwhile, the wind driven generator does not need to stop operating during detection, avoiding the time consumption and high cost of manual inspection.
Further, the acquiring the first data set specifically includes:
according to a preset detection rule, controlling the unmanned aerial vehicle to shoot each wind driven generator, and obtaining detection data sets of a plurality of wind driven generators, wherein the detection rule comprises shooting conditions, shooting heights, shooting angles and flight tracks of the unmanned aerial vehicle; the detection data set comprises video data and image data;
extracting key frame data in the video data, and screening the key frame data and the image data according to the picture integrity to generate a second data set;
and identifying and labeling all the wind driven generators, the blades and the cabins in the second data set, and generating a first data set.
Further, the constructing a first segmentation model according to the deep convolutional neural network and the first data set includes:
Converting the first data set according to a preset format to generate a semantic segmentation data set, and dividing the semantic segmentation data set into a training set and a testing set;
Constructing a first segmentation model according to an encoder and a decoder, and training the first segmentation model according to the training set and the testing set;
And evaluating the first segmentation model according to the cross entropy loss function until the model converges to reach a preset condition, and outputting the trained first segmentation model.
Further, the acquiring a real-time image of the wind driven generator to be detected, and acquiring blade and cabin information of the wind driven generator to be detected according to the first segmentation model specifically includes:
Acquiring a real-time image of a wind driven generator to be detected according to an unmanned aerial vehicle, and inputting the real-time image into the first segmentation model for semantic segmentation;
And acquiring a region mask image corresponding to the blades of the wind driven generator to be detected and a point set corresponding to the nacelle.
Further, the area straight line extraction is performed according to the blade and cabin information of the wind driven generator to be detected, and the yaw angle of the wind driven generator to be detected is calculated, specifically:
Detecting a region mask image corresponding to a blade of the wind driven generator according to a preset operator, extracting blade profile information, and generating a blade profile binary image;
extracting a first straight line in the blade profile binary image, wherein the first straight line is the longest straight line in the blade profile binary image;
And generating a cabin aggregation point according to the cabin point set, calculating normal vectors of the cabin aggregation point and a first straight line according to geometric logic, and calculating the yaw angle of the wind driven generator to be detected according to the normal vectors and a preset reference vector.
In a second aspect, the present invention provides a yaw angle calculation apparatus for a wind turbine, comprising a model construction module, a detection module and a calculation module;
The model construction module is used for acquiring a first data set and constructing a first segmentation model according to the deep convolutional neural network and the first data set; the first data set comprises blade images and nacelle images of the wind driven generator and corresponding labels;
The detection module is used for acquiring real-time images of the wind driven generator to be detected and acquiring blade and cabin information of the wind driven generator to be detected according to the first segmentation model;
And the calculating module is used for performing region straight-line extraction according to the blade and nacelle information of the wind driven generator to be detected and calculating the yaw angle of the wind driven generator to be detected.
Further, the model construction module is specifically configured to:
according to a preset detection rule, controlling the unmanned aerial vehicle to shoot each wind driven generator, and obtaining detection data sets of a plurality of wind driven generators, wherein the detection rule comprises shooting conditions, shooting heights, shooting angles and flight tracks of the unmanned aerial vehicle; the detection data set comprises video data and image data;
extracting key frame data in the video data, and screening the key frame data and the image data according to the picture integrity to generate a second data set;
and identifying and labeling all the wind driven generators, the blades and the cabins in the second data set, and generating a first data set.
Further, the model building module is further configured to:
Converting the first data set according to a preset format to generate a semantic segmentation data set, and dividing the semantic segmentation data set into a training set and a testing set;
Constructing a first segmentation model according to an encoder and a decoder, and training the first segmentation model according to the training set and the testing set;
And evaluating the first segmentation model according to the cross entropy loss function until the model converges to reach a preset condition, and outputting the trained first segmentation model.
Further, the detection module is specifically configured to:
Acquiring a real-time image of a wind driven generator to be detected according to an unmanned aerial vehicle, and inputting the real-time image into the first segmentation model for semantic segmentation;
And acquiring a region mask image corresponding to the blades of the wind driven generator to be detected and a point set corresponding to the nacelle.
Further, the computing module is specifically configured to:
Detecting a region mask image corresponding to a blade of the wind driven generator according to a preset operator, extracting blade profile information, and generating a blade profile binary image;
extracting a first straight line in the blade profile binary image, wherein the first straight line is the longest straight line in the blade profile binary image;
And generating a cabin aggregation point according to the cabin point set, calculating normal vectors of the cabin aggregation point and a first straight line according to geometric logic, and calculating the yaw angle of the wind driven generator to be detected according to the normal vectors and a preset reference vector.
Drawings
FIG. 1 is a schematic flow chart of a yaw angle calculation method of a wind driven generator according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first division model of a yaw angle calculation method of a wind turbine according to an embodiment of the present invention;
FIG. 3 is a flow chart for region straight line extraction and yaw angle calculation provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a yaw angle calculation device of a wind driven generator according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a yaw angle calculation method of a wind turbine according to an embodiment of the present invention, including steps 101 to 103, specifically as follows:
Step 101: acquiring a first data set, and constructing a first segmentation model according to a deep convolutional neural network and the first data set; the first data set comprises blade images and cabin images of the wind driven generator and corresponding labels;
in this embodiment, the acquiring the first data set specifically includes:
according to a preset detection rule, controlling the unmanned aerial vehicle to shoot each wind driven generator, and obtaining detection data sets of a plurality of wind driven generators, wherein the detection rule comprises shooting conditions, shooting heights, shooting angles and flight tracks of the unmanned aerial vehicle; the detection data set comprises video data and image data;
extracting key frame data in the video data, and screening the key frame data and the image data according to the picture integrity to generate a second data set;
and identifying and labeling all the wind driven generators, the blades and the cabins in the second data set, and generating a first data set.
In this embodiment, an M300 unmanned aerial vehicle hovers above the wind driven generator and shoots the overall image and video of the wind driven generator vertically downward; video key frames are then extracted, and the first data set is generated.
In this embodiment, when capturing overall images, for the plurality of wind turbines in a wind farm, the overall image of each wind turbine is captured from directly above it according to the detection rule. The detection rule prescribes the shooting height, shooting angle and flight track of the unmanned aerial vehicle; the shooting height and shooting angle are formulated according to the actual conditions of each wind farm.
In this embodiment, when capturing video, for the plurality of wind turbines in a wind farm, video of each wind turbine is likewise captured from directly above it according to the detection rule, which prescribes the shooting height, shooting angle and flight track of the unmanned aerial vehicle.
In this embodiment, by way of example, the unmanned aerial vehicle hovers at distances of 50 m, 70 m, 90 m, 120 m and 150 m from the wind turbine nacelle, with the camera view angle offset from the vertical by 0, 10, 15 and 20 degrees, while the unmanned aerial vehicle spins 360 degrees clockwise, shooting overall video and images of the wind turbine to generate the detection data set and thereby ensure the diversity of nacelle orientation images.
In this embodiment, the detection rule further specifies shooting conditions, including illumination intensity, shooting time and shooting weather. For example, images and video are captured under different illumination (such as morning, noon and afternoon) and different weather (cloudy, rainy and sunny) to guarantee data diversity.
In this embodiment, the detection data set is obtained, key frames are extracted from the video data, and manual screening is performed to remove partially captured wind turbine images, yielding wind turbine images of different equipment and different scenes, until the number of images reaches a preset threshold (2000 by default), thereby generating the second data set to be annotated.
In this embodiment, using the labelme tool, the second data set to be annotated is annotated: for each image, all wind turbine targets are annotated with minimum circumscribed rectangular boxes, and the blades and nacelle are annotated with polygons along their region contours, generating the annotation files, namely the first data set.
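For illustration, the polygon annotations can be rasterized into a per-pixel class-id mask for segmentation training. The sketch below is a hedged, pure-Python version: labelme stores each annotation as a shape with a `label` and polygon `points`, but the class-id mapping and the ray-casting rasterizer are illustrative assumptions (a real pipeline would use library tooling):

```python
# Hypothetical sketch: rasterize labelme-style polygon annotations into a
# per-pixel class-id mask (0 = background; blade/nacelle ids are assumed).
CLASS_IDS = {"blade": 1, "nacelle": 2}

def point_in_polygon(x, y, pts):
    """Even-odd-rule ray-casting test for the pixel center (x, y)."""
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xc:
                inside = not inside
    return inside

def polygons_to_mask(width, height, shapes):
    """shapes: list of {'label': str, 'points': [[x, y], ...]} as in labelme JSON."""
    mask = [[0] * width for _ in range(height)]
    for shape in shapes:
        cid = CLASS_IDS.get(shape["label"], 0)
        for y in range(height):
            for x in range(width):
                if point_in_polygon(x + 0.5, y + 0.5, shape["points"]):
                    mask[y][x] = cid
    return mask
```

For example, a square "blade" polygon over a 4x4 image marks exactly the pixels whose centers fall inside it.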
Said constructing a first segmentation model from a depth convolutional neural network and said first dataset, comprising:
Converting the first data set according to a preset format to generate a semantic segmentation data set, and dividing the semantic segmentation data set into a training set and a testing set;
Constructing a first segmentation model according to an encoder and a decoder, and training the first segmentation model according to the training set and the testing set;
And evaluating the first segmentation model according to the cross entropy loss function until the model converges to reach a preset condition, and outputting the trained first segmentation model.
In this embodiment, the annotation files are converted into the semantic segmentation data set according to a preset format; exemplarily, the VOC format.
In this embodiment, the first segmentation model is constructed based on an Encoder-Decoder architecture with a dilated fully convolutional network (Dilated FCN); the first segmentation model thus includes an encoder and a decoder.
In this embodiment, the first segmentation model is a DeepLabV3+ semantic segmentation model.
In this embodiment, the body of the encoder is composed of two parts: a backbone network and a pooling module. The backbone network is a conventional deep neural network, such as a ResNet-series network, whose main purpose is to gradually reduce the resolution of the feature map and provide high-level semantic information of the image. The pooling module is an Atrous Spatial Pyramid Pooling (ASPP) module built on dilated convolution (Dilated Conv), which enriches context information by performing pooling operations at different resolutions and introducing multi-scale information. The main function of the decoder is to further integrate low-level and high-level features, enrich the spatial information of the image and improve the accuracy of the segmentation boundary.
Referring to fig. 2, fig. 2 is a schematic diagram of a first segmentation model of a yaw angle calculation method of a wind turbine according to an embodiment of the present invention.
In this embodiment, the encoder and decoder are included in the first segmentation model, and the encoder section includes 4 dilated convolution layers (Dilated Conv). As the backbone network continuously extracts image features, the resolution of the feature map keeps decreasing, and two overly large dilation rates are not conducive to extracting information from the low-resolution feature maps; the dilation ("hole") rates of the dilated convolutions are therefore adjusted, exemplarily to 4, 8, 12 and 16, improving the extraction of low-resolution feature map information. The output of a dilated convolution layer with dilation rate r can be expressed as y[i] = Σ_k x[i + r·k]·w[k], where x is the input feature map and w is the convolution kernel.
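The dilated-convolution computation used in this step can be sketched in one dimension. This is a minimal illustration under the standard formulation y[i] = Σ_k x[i + r·k]·w[k] (valid padding; the function name is an illustrative assumption; a dilation rate of 1 recovers an ordinary convolution):

```python
def dilated_conv1d(x, w, rate):
    """Valid-mode 1-D dilated convolution (cross-correlation form):
    y[i] = sum_k x[i + rate*k] * w[k]."""
    k = len(w)
    span = (k - 1) * rate + 1          # effective receptive field of the kernel
    out_len = len(x) - span + 1
    return [
        sum(x[i + rate * j] * w[j] for j in range(k))
        for i in range(out_len)
    ]
```

With kernel [1, 0, -1] and rate 2, each output compares samples four positions apart, illustrating how larger rates widen the receptive field without adding parameters.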
In this embodiment, the Atrous Spatial Pyramid Pooling (ASPP) module stacks dilated convolutions with different dilation rates to obtain information gains at more scales.
In this embodiment, since a 3x3 convolution learns redundant information, which increases the parameter count of the model and affects its inference speed, all 3x3 dilated convolutions of the ASPP module are converted, with the dilation rate unchanged, into a cascade of 3x1 and 1x3 convolutions in the manner of depthwise separable convolution (Depthwise Separable Convolution). Compared with the original structure, this reduces the kernel parameters by 1/3, effectively reducing the parameter count and computation of the ASPP module and improving model training and inference speed.
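The 1/3 saving can be checked with a toy calculation (per-channel kernel parameters only; biases and channel counts ignored):

```python
# Replacing a 3x3 kernel with a cascade of 3x1 and 1x3 kernels keeps the
# receptive field while cutting per-channel kernel parameters from 9 to 6.
def kernel_params(kh, kw):
    return kh * kw

full = kernel_params(3, 3)                             # 9 parameters
factored = kernel_params(3, 1) + kernel_params(1, 3)   # 6 parameters
saving = 1 - factored / full                           # fraction saved
```

Here `saving` comes out to exactly 1/3, matching the reduction described above.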
In this embodiment, the first segmentation model is trained according to the training set and the testing set, and is evaluated according to a cross entropy loss function until the model converges and reaches a preset condition. The cross entropy loss function is a pixel-by-pixel cross entropy loss function:
Loss = -(1/(w·h)) Σ_{i=1..w} Σ_{j=1..h} log p_(i,j)
wherein w and h respectively represent the width and height of the output feature map, and p_(i,j) represents the classification prediction probability of the true class at pixel point (i, j).
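A minimal pixel-by-pixel cross-entropy sketch consistent with this description (pure Python; the nested-list tensor layout and the function name are illustrative assumptions):

```python
import math

def pixelwise_cross_entropy(probs, labels):
    """Average negative log-probability of the true class over all pixels.
    probs:  probs[i][j][c] = predicted probability of class c at pixel (i, j)
    labels: labels[i][j]   = ground-truth class index at pixel (i, j)"""
    w = len(probs)
    h = len(probs[0])
    total = 0.0
    for i in range(w):
        for j in range(h):
            total += -math.log(probs[i][j][labels[i][j]])
    return total / (w * h)
```

For instance, a uniform two-class prediction (probability 0.5 for the true class at every pixel) yields a loss of ln 2 regardless of the labels.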
Step 102: acquiring a real-time image of a wind driven generator to be detected, and acquiring blade and cabin information of the wind driven generator to be detected according to the first segmentation model;
In this embodiment, the acquiring a real-time image of the wind turbine to be detected, and acquiring, according to the first segmentation model, blade and nacelle information of the wind turbine to be detected specifically includes:
Acquiring a real-time image of a wind driven generator to be detected according to an unmanned aerial vehicle, and inputting the real-time image into the first segmentation model for semantic segmentation;
And acquiring a region mask image corresponding to the blades of the wind driven generator to be detected and a point set corresponding to the nacelle.
In this embodiment, the image captured by the unmanned aerial vehicle is received for semantic segmentation, and the region mask image and point set corresponding to the wind turbine blades and nacelle are output and used as input to the region straight-line extraction algorithm.
Step 103: and carrying out area linear extraction according to the blade and cabin information of the wind driven generator to be detected, and calculating the yaw angle of the wind driven generator to be detected.
In this embodiment, the area straight line extraction is performed according to the blade and nacelle information of the wind turbine to be detected, and the yaw angle of the wind turbine to be detected is calculated, which specifically includes:
Detecting a region mask image corresponding to a blade of the wind driven generator according to a preset operator, extracting blade profile information, and generating a blade profile binary image;
extracting a first straight line in the blade profile binary image, wherein the first straight line is the longest straight line in the blade profile binary image;
And generating a cabin aggregation point according to the cabin point set, calculating normal vectors of the cabin aggregation point and a first straight line according to geometric logic, and calculating the yaw angle of the wind driven generator to be detected according to the normal vectors and a preset reference vector.
Referring to fig. 3, fig. 3 is a flowchart of region straight line extraction and yaw angle calculation according to an embodiment of the present invention.
In this embodiment, a Canny operator is applied to detect the contour of the wind turbine blade region mask image, and the blade contour is extracted to obtain a blade contour binary image I_mask;
In this embodiment, the longest straight line in the blade region contour point set is extracted from the blade contour binary image I_mask and recorded as the blade region line L;
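A hedged sketch of selecting the blade region line L, assuming candidate segments have already been produced by a line detector such as a Hough transform (not shown; the segment format and function name are illustrative assumptions):

```python
import math

def longest_segment(segments):
    """Pick the longest candidate line segment.
    segments: list of ((x1, y1), (x2, y2)) endpoint pairs."""
    return max(segments, key=lambda s: math.dist(s[0], s[1]))
```

The chosen segment serves as the blade region line L for the subsequent normal-vector calculation.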
In this embodiment, the nacelle point set is aggregated to obtain a nacelle aggregation point C, and the normal vector V_norm of the aggregation point C relative to the blade region line L is calculated according to geometric logic;
In this embodiment, the angle between the normal vector V_norm and the preset reference vector V_base is calculated, which is the yaw angle; exemplarily, the default forward direction is used as the reference.
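The final geometric step can be sketched as follows. This is a hedged illustration: the aggregation method, sign convention and reference direction are assumptions, and `normal_vector`/`yaw_angle` are illustrative names mirroring V_norm and V_base in the description:

```python
import math

def normal_vector(p1, p2, c):
    """Vector from the foot of the perpendicular on line p1-p2 to point c."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # projection parameter of c onto the line through p1-p2
    t = ((c[0] - p1[0]) * dx + (c[1] - p1[1]) * dy) / (dx * dx + dy * dy)
    foot = (p1[0] + t * dx, p1[1] + t * dy)
    return (c[0] - foot[0], c[1] - foot[1])

def yaw_angle(v_norm, v_base=(0.0, 1.0)):
    """Angle in degrees from the reference vector to the normal vector,
    normalized to [0, 360)."""
    ang = math.degrees(
        math.atan2(v_norm[1], v_norm[0]) - math.atan2(v_base[1], v_base[0])
    )
    return (ang + 360.0) % 360.0
```

For a horizontal blade line with the nacelle point above it, V_norm points straight up and the yaw angle relative to an upward reference is 0 degrees.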
In this embodiment, the yaw angle of the wind driven generator is calculated based on image segmentation and region straight-line extraction, solving the problem of the random nacelle orientation of the wind driven generator; real-time calculation of the yaw angle can be realized simply by hovering an unmanned aerial vehicle over the wind driven generator and shooting an image of the nacelle. The yaw angle can be calculated end to end without installing any additional sensing equipment on the wind driven generator or accessing the wind farm background management system, giving the method strong portability, usability and operability.
In this embodiment, determining the nacelle orientation enables intelligent planning of the unmanned aerial vehicle's route from the take-off point to the front of the wind driven generator and intelligent connection to the close-to-blade inspection route, realizing fully intelligent route planning and automatic inspection by the unmanned aerial vehicle.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a yaw angle calculation device of a wind turbine according to an embodiment of the present invention, including a model building module 401, a detecting module 402 and a calculating module 403;
the model construction module 401 is configured to acquire a first data set, and construct a first segmentation model according to a deep convolutional neural network and the first data set; the first data set comprises blade images and cabin images of the wind driven generator and corresponding labels;
The detection module 402 is configured to obtain a real-time image of a wind turbine to be detected, and obtain blade and nacelle information of the wind turbine to be detected according to the first segmentation model;
The calculating module 403 is configured to perform area straight line extraction according to the blade and nacelle information of the wind turbine to be detected, and calculate a yaw angle of the wind turbine to be detected.
In this embodiment, the model building module is specifically configured to:
according to a preset detection rule, controlling the unmanned aerial vehicle to shoot each wind driven generator, and obtaining detection data sets of a plurality of wind driven generators, wherein the detection rule comprises shooting conditions, shooting heights, shooting angles and flight tracks of the unmanned aerial vehicle; the detection data set comprises video data and image data;
extracting key frame data in the video data, and screening the key frame data and the image data according to the picture integrity to generate a second data set;
and identifying and labeling all the wind driven generators, the blades and the cabins in the second data set, and generating a first data set.
In this embodiment, the model building module is further configured to:
Converting the first data set according to a preset format to generate a semantic segmentation data set, and dividing the semantic segmentation data set into a training set and a testing set;
Constructing a first segmentation model according to an encoder and a decoder, and training the first segmentation model according to the training set and the testing set;
And evaluating the first segmentation model according to the cross entropy loss function until the model converges to reach a preset condition, and outputting the trained first segmentation model.
In this embodiment, the detection module is specifically configured to:
Acquiring a real-time image of a wind driven generator to be detected according to an unmanned aerial vehicle, and inputting the real-time image into the first segmentation model for semantic segmentation;
And acquiring a region mask image corresponding to the blades of the wind driven generator to be detected and a point set corresponding to the nacelle.
In this embodiment, the computing module is specifically configured to:
Detecting a region mask image corresponding to a blade of the wind driven generator according to a preset operator, extracting blade profile information, and generating a blade profile binary image;
extracting a first straight line in the blade profile binary image, wherein the first straight line is the longest straight line in the blade profile binary image;
And generating a cabin aggregation point according to the cabin point set, calculating normal vectors of the cabin aggregation point and a first straight line according to geometric logic, and calculating the yaw angle of the wind driven generator to be detected according to the normal vectors and a preset reference vector.
According to the device, the real-time image of the wind driven generator is semantically segmented by the first segmentation model, region straight-line extraction is carried out on the nacelle and blades of the wind driven generator, and the yaw angle of the wind driven generator to be detected is calculated, improving the accuracy and efficiency of nacelle orientation detection. Meanwhile, the wind driven generator does not need to stop operating during detection, avoiding the time consumption and high cost of manual inspection.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present invention, and are not to be construed as limiting the scope of the invention. It should be noted that any modifications, equivalent substitutions, improvements, etc. made by those skilled in the art without departing from the spirit and principles of the present invention are intended to be included in the scope of the present invention.

Claims (10)

1. A method of calculating a yaw angle of a wind turbine, comprising:
Acquiring a first data set, and constructing a first segmentation model according to a deep convolutional neural network and the first data set; the first data set comprises blade images and cabin images of the wind driven generator and corresponding labels;
Acquiring a real-time image of a wind driven generator to be detected, and acquiring blade and cabin information of the wind driven generator to be detected according to the first segmentation model;
And performing region straight-line extraction according to the blade and nacelle information of the wind driven generator to be detected, and calculating the yaw angle of the wind driven generator to be detected.
2. The method for calculating a yaw angle of a wind turbine according to claim 1, wherein acquiring the first data set specifically comprises:
controlling an unmanned aerial vehicle to photograph each wind turbine according to a preset detection rule to obtain detection data sets of a plurality of wind turbines, wherein the detection rule comprises the shooting conditions, shooting heights, shooting angles, and flight trajectory of the unmanned aerial vehicle, and each detection data set comprises video data and image data;
extracting key-frame data from the video data, and screening the key-frame data and the image data by picture integrity to generate a second data set; and
identifying and labeling all wind turbines, blades, and nacelles in the second data set to generate the first data set.
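The screening-by-picture-integrity step in the claim above can be sketched as a simple scoring filter. The score used here (mean gradient magnitude, which penalizes blurred or mostly empty frames) and the threshold value are assumptions for illustration; the patent does not define a concrete integrity metric.

```python
# Hedged sketch (metric and threshold are assumptions, not from the patent):
# screen candidate key frames by a "picture integrity" score, here a
# gradient-energy proxy -- blurred or empty frames score low.
import numpy as np

def integrity_score(frame):
    """Mean gradient magnitude of a grayscale frame (2-D array)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def screen_frames(frames, threshold=1.0):
    """Keep only frames whose integrity score reaches the threshold."""
    return [f for f in frames if integrity_score(f) >= threshold]
```

A frame with visible structure (e.g. an intensity ramp) passes, while a uniform frame scores zero and is dropped.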
3. The method according to claim 2, wherein constructing the first segmentation model from the deep convolutional neural network and the first data set comprises:
converting the first data set into a preset format to generate a semantic segmentation data set, and dividing the semantic segmentation data set into a training set and a test set;
constructing the first segmentation model from an encoder and a decoder, and training the first segmentation model on the training set and the test set; and
evaluating the first segmentation model with a cross-entropy loss function until the model converges to a preset condition, and outputting the trained first segmentation model.
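The cross-entropy evaluation named in the claim above is a standard per-pixel formulation; the following is a generic sketch of it, since the patent gives no implementation details.

```python
# Sketch of per-pixel cross-entropy for semantic segmentation
# (a generic formulation; specifics are not given in the patent).
import numpy as np

def pixel_cross_entropy(logits, labels):
    """logits: (H, W, C) raw class scores; labels: (H, W) integer class ids.
    Returns the mean cross-entropy over all pixels."""
    z = logits - logits.max(axis=-1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-picked.mean())
```

Uniform scores over C classes give a loss of ln C, and confidently correct scores drive the loss toward zero, which is the convergence behaviour the claim's "preset condition" would monitor.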
4. The method for calculating a yaw angle of a wind turbine according to claim 3, wherein acquiring the real-time image of the wind turbine to be detected and obtaining the blade and nacelle information of the wind turbine to be detected according to the first segmentation model specifically comprises:
acquiring a real-time image of the wind turbine to be detected with an unmanned aerial vehicle, and inputting the real-time image into the first segmentation model for semantic segmentation; and
obtaining a region mask image corresponding to the blades of the wind turbine to be detected and a point set corresponding to the nacelle.
5. The method for calculating a yaw angle of a wind turbine according to claim 4, wherein performing region straight-line extraction according to the blade and nacelle information of the wind turbine to be detected and calculating the yaw angle of the wind turbine to be detected specifically comprises:
detecting the region mask image corresponding to the blades of the wind turbine with a preset operator, extracting blade contour information, and generating a blade contour binary image;
extracting a first straight line from the blade contour binary image, wherein the first straight line is the longest straight line in the blade contour binary image; and
generating a nacelle aggregation point from the nacelle point set, computing the normal vector defined by the nacelle aggregation point and the first straight line according to geometric logic, and calculating the yaw angle of the wind turbine to be detected from the normal vector and a preset reference vector.
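Selecting the "first straight line" as the longest candidate, as in the claim above, can be sketched as follows. The segment format mimics the `(x1, y1, x2, y2)` output of probabilistic Hough transforms such as OpenCV's `HoughLinesP`, but that detector is an assumption: the patent does not name a specific line-extraction operator.

```python
# Hedged sketch: pick the longest segment among line candidates given in
# (x1, y1, x2, y2) form (HoughLinesP-style). The candidate data a caller
# supplies would come from whatever line detector is actually used.
import numpy as np

def longest_segment(segments):
    """Return the (x1, y1, x2, y2) segment with the greatest length."""
    segs = np.asarray(segments, dtype=float)
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    return tuple(segs[int(np.argmax(lengths))])
```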
6. A yaw angle calculation device for a wind turbine, comprising a model construction module, a detection module, and a calculation module, wherein:
the model construction module is configured to acquire a first data set and construct a first segmentation model from a deep convolutional neural network and the first data set, wherein the first data set comprises blade images and nacelle images of wind turbines and corresponding labels;
the detection module is configured to acquire a real-time image of a wind turbine to be detected and obtain blade and nacelle information of the wind turbine to be detected according to the first segmentation model; and
the calculation module is configured to perform region straight-line extraction according to the blade and nacelle information of the wind turbine to be detected and calculate the yaw angle of the wind turbine to be detected.
7. The yaw angle calculation device according to claim 6, wherein the model construction module is specifically configured to:
control an unmanned aerial vehicle to photograph each wind turbine according to a preset detection rule to obtain detection data sets of a plurality of wind turbines, wherein the detection rule comprises the shooting conditions, shooting heights, shooting angles, and flight trajectory of the unmanned aerial vehicle, and each detection data set comprises video data and image data;
extract key-frame data from the video data, and screen the key-frame data and the image data by picture integrity to generate a second data set; and
identify and label all wind turbines, blades, and nacelles in the second data set to generate the first data set.
8. The yaw angle calculation device according to claim 7, wherein the model construction module is further configured to:
convert the first data set into a preset format to generate a semantic segmentation data set, and divide the semantic segmentation data set into a training set and a test set;
construct the first segmentation model from an encoder and a decoder, and train the first segmentation model on the training set and the test set; and
evaluate the first segmentation model with a cross-entropy loss function until the model converges to a preset condition, and output the trained first segmentation model.
9. The yaw angle calculation device according to claim 8, wherein the detection module is specifically configured to:
acquire a real-time image of the wind turbine to be detected with an unmanned aerial vehicle, and input the real-time image into the first segmentation model for semantic segmentation; and
obtain a region mask image corresponding to the blades of the wind turbine to be detected and a point set corresponding to the nacelle.
10. The yaw angle calculation device according to claim 9, wherein the calculation module is specifically configured to:
detect the region mask image corresponding to the blades of the wind turbine with a preset operator, extract blade contour information, and generate a blade contour binary image;
extract a first straight line from the blade contour binary image, wherein the first straight line is the longest straight line in the blade contour binary image; and
generate a nacelle aggregation point from the nacelle point set, compute the normal vector defined by the nacelle aggregation point and the first straight line according to geometric logic, and calculate the yaw angle of the wind turbine to be detected from the normal vector and a preset reference vector.
CN202410169475.7A 2024-02-06 2024-02-06 Yaw angle calculation method and device for wind driven generator Pending CN117973212A (en)

Publication: CN117973212A, published 2024-05-03.



Legal Events: PB01 (Publication)