CN108799011A - Device and method for monitoring blades of wind turbine generator - Google Patents


Info

Publication number
CN108799011A
CN108799011A · Application CN201710295739.3A · Granted as CN108799011B
Authority
CN
China
Prior art keywords
image
frame
blade
gray level
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710295739.3A
Other languages
Chinese (zh)
Other versions
CN108799011B (en)
Inventor
王百方
程庆阳
乔志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Goldwind Science and Creation Windpower Equipment Co Ltd
Original Assignee
Beijing Goldwind Science and Creation Windpower Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Goldwind Science and Creation Windpower Equipment Co Ltd filed Critical Beijing Goldwind Science and Creation Windpower Equipment Co Ltd
Priority to CN201710295739.3A priority Critical patent/CN108799011B/en
Publication of CN108799011A publication Critical patent/CN108799011A/en
Application granted granted Critical
Publication of CN108799011B publication Critical patent/CN108799011B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F05: INDEXING SCHEMES RELATING TO ENGINES OR PUMPS IN VARIOUS SUBCLASSES OF CLASSES F01-F04
    • F05B: INDEXING SCHEME RELATING TO WIND, SPRING, WEIGHT, INERTIA OR LIKE MOTORS, TO MACHINES OR ENGINES FOR LIQUIDS COVERED BY SUBCLASSES F03B, F03D AND F03G
    • F05B2260/00: Function
    • F05B2260/80: Diagnostics
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00: Energy generation through renewable energy sources
    • Y02E10/70: Wind energy
    • Y02E10/72: Wind turbines with rotation axis in wind direction

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A device and method for monitoring the blades of a wind turbine generator are provided. The device comprises: a video acquisition unit that captures video of the blades of the wind turbine generator to obtain a video image sequence; a background image acquisition unit that extracts a background image from the video image sequence obtained by the video acquisition unit; and an image abnormality detection unit that detects, based on the extracted background image, whether an abnormal image feature is present on a blade in an image of the video image sequence.

Description

Device and method for monitoring the blades of a wind turbine generator
Technical field
The present invention relates to the technical field of wind power generation, and more particularly to a device and a method for monitoring the blades of a wind turbine generator.
Background art
With the ever-growing pressure of the energy crisis and environmental pollution, countries are paying increasing attention to the development and utilization of green energy. Wind energy, as one of the most widely developed and utilized green energy sources in the world, has developed rapidly. However, when wind energy is exploited in low-temperature environments, blade icing causes great difficulty. Much research has therefore been done, at home and abroad, on blade icing monitoring, for example: blade icing monitoring devices based on vibration signals, ambient temperature and humidity signals, and wind speed signals; devices that monitor blade icing using technologies such as infrared imaging and acoustic emission; and schemes that analyze and process output power data and compare them against a standard wind power curve to monitor blade icing characteristics.
However, existing blade icing monitoring devices and methods essentially perform indirect monitoring with indirect data, inferring the icing phenomenon from the analysis of other signals. Because the factors leading to blade icing are in practice very complex, and the operating conditions of the blades are not invariant, the indirect data may not actually be caused by blade icing, which easily leads to false alarms or missed detections. In addition, owing to limitations of their monitoring principles, current devices and methods cannot detect icing features in time, so the icing may already be severe when it is finally detected; nor can they effectively estimate the icing location and the amount of ice, which greatly hinders the design of timely and effective anti-icing and de-icing schemes. Furthermore, current devices and methods cannot satisfy the need for follow-up study of the blade icing process, and cannot analyze and identify blade icing characteristics, which makes the design of reliable and effective anti-icing and de-icing schemes difficult.
Therefore, providing a device and a method capable of timely and accurately monitoring abnormal features on the blades of a wind turbine generator is of great practical significance.
Summary of the invention
In order to solve at least the above problems in the prior art, the present invention provides a device and a method for monitoring the blades of a wind turbine generator.
One aspect of the invention provides a device for monitoring the blades of a wind turbine generator, which may include: a video acquisition unit that captures video of the blades of the wind turbine generator to obtain a video image sequence; a background image acquisition unit that extracts a background image from the video image sequence obtained by the video acquisition unit; and an image abnormality detection unit that detects, based on the extracted background image, whether an abnormal image feature is present on a blade in an image of the video image sequence.
The image abnormality detection unit may include a background image segmentation unit that uses the gray-level image of the background image together with the gray-level images of one or more blade-containing frames in the video image sequence to obtain one or more blade gray-level images, the one or more blade gray-level images being used by the image abnormality detection unit to perform the detection.
The image abnormality detection unit may further include a blade position recognition unit that performs blade position recognition on each of the one or more blade gray-level images so as to screen them, the screened-out usable blade gray-level images being used by the image abnormality detection unit to perform the detection.
The image abnormality detection unit may further include an original blade region acquisition unit that, using the usable blade gray-level images screened out by the blade position recognition unit, extracts the original blade region from the original image corresponding to each usable blade gray-level image to generate an image containing only the original blade region, the generated image containing only the blade region being used by the image abnormality detection unit to perform the detection.
The image abnormality detection unit may further include an original blade region processing unit that divides the generated image containing only the blade region into multiple subregions, analyzes the multiple subregions to determine the regions in which abnormal image features are present, and encodes the information of the multiple subregions.
At predetermined time intervals, the video acquisition unit may adjust its lens to an acquisition position point to capture a video image sequence of predetermined time length, and may return the lens to a zero position after the acquisition of the video image sequence.
The background image acquisition unit may estimate the background image from multiple frames of the video image sequence using either mean estimation or median estimation.
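The mean and median estimators mentioned above can be sketched in a few lines of pure Python, with grayscale frames stored as nested lists. This is an illustrative sketch under stated assumptions, not the patent's implementation; the function name and signature are invented here.

```python
from statistics import mean, median

def estimate_background(frames, method="median"):
    """Pixel-wise mean or median over a stack of grayscale frames
    (each frame is a list of rows of gray values)."""
    agg = median if method == "median" else mean
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[agg([f[r][c] for f in frames]) for c in range(cols)]
            for r in range(rows)]
```

With a bright "blade" pixel sweeping one column per frame, the median recovers the static background exactly, while the mean is biased upward by the blade; this is the usual reason median estimation is preferred for suppressing a moving foreground.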
The background image acquisition unit may directly use an image in the video image sequence that contains no blade as the background image.
The background image acquisition unit may compare a first difference, between the entry gray-level mean of the 1st frame image and the entry gray-level mean of the nth frame image, with an entry threshold to determine whether to use the (n-3)th frame image as the background image, thereby updating the background image. After the background image has been updated with the (n-3)th frame image, the background image acquisition unit may compare a second difference, between the exit gray-level mean of the 1st frame image and the exit gray-level mean of an image following the nth frame image, with an exit threshold to determine whether to restart the background image update operation, where 3 < n ≤ m, and m is the number of images in the video image sequence.
Each time the background image acquisition unit computes the first difference and the second difference of a frame image, it may store the first difference and the second difference into a first difference sequence and a second difference sequence, respectively, update the entry threshold to the maximum of all first differences contained in the first difference sequence multiplied by a first predetermined multiple, and update the exit threshold to the maximum of all second differences contained in the second difference sequence multiplied by a second predetermined multiple.
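As a hedged sketch of the adaptive-threshold bookkeeping described above (function names are assumptions, not the patent's): each newly computed difference is appended to its sequence, and the threshold is refreshed to the running maximum times the predetermined multiple.

```python
def update_threshold(diff_sequence, new_diff, multiple):
    """Store the newly computed difference, then return the refreshed
    threshold: (largest difference stored so far) * (predetermined multiple)."""
    diff_sequence.append(new_diff)
    return max(diff_sequence) * multiple

def difference_exceeds(mean_frame_1, mean_frame_n, threshold):
    """True when the difference between two gray-level means (e.g. the
    entry-region means of frame 1 and frame n) exceeds the threshold."""
    return abs(mean_frame_n - mean_frame_1) > threshold
```

Because the threshold tracks the maximum difference seen so far, it only ever grows, which makes the entry/exit decision progressively less sensitive to noise spikes already observed.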
The background image segmentation unit may convert into gray-level images each of the one or more frames of the video image sequence located after the frame whose first difference exceeds the entry threshold and up to the frame whose second difference exceeds the exit threshold, and may perform a difference operation between the gray-level image of each such frame and the gray-level image of the background image to obtain the one or more blade gray-level images.
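The difference operation between a frame and the background can be sketched as a per-pixel absolute difference. The patent does not spell out the exact operator, so the absolute difference used here is an assumption, though a common one for background subtraction:

```python
def blade_gray_image(frame_gray, background_gray):
    """Per-pixel absolute difference between a frame and the background:
    the static background cancels out, the moving blade survives."""
    return [[abs(p - b) for p, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame_gray, background_gray)]
```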
The background image segmentation unit may process the one or more blade gray-level images using either a morphological opening operation or a morphological closing operation.
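Morphological opening (erosion then dilation) removes isolated bright specks, while closing (dilation then erosion) fills small holes in the blade silhouette. Below is a minimal pure-Python sketch over a binary mask with a 3x3 square structuring element; production code would normally use an image-processing library instead.

```python
NEIGHBORS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def erode(img):
    """A pixel survives only if its full 3x3 neighborhood is inside the
    image and set to 1."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= r + dr < h and 0 <= c + dc < w
                      and img[r + dr][c + dc]
                      for dr, dc in NEIGHBORS) else 0
             for c in range(w)] for r in range(h)]

def dilate(img):
    """A pixel is set if any in-bounds 3x3 neighbor is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= r + dr < h and 0 <= c + dc < w
                      and img[r + dr][c + dc]
                      for dr, dc in NEIGHBORS) else 0
             for c in range(w)] for r in range(h)]

def opening(img):   # erosion then dilation: removes small specks
    return dilate(erode(img))

def closing(img):   # dilation then erosion: fills small holes
    return erode(dilate(img))
```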
The blade position recognition unit may use an autoregressive model to perform blade position recognition on the processed one or more blade gray-level images so as to screen them and obtain usable blade gray-level images having a complete blade contour.
When it is determined, by using the autoregressive model, that the candidate intersection point of the blade lower edge line and the blade leading edge line detected via blade edge detection for a given blade gray-level image, as well as the candidate blade tip point of that image, lie on the position corresponding to the blade leading edge line and are not affected by noise, the blade position recognition unit may determine that the given blade gray-level image is a usable blade gray-level image having a complete blade contour.
The original blade region processing unit may evenly divide the generated image containing only the blade region into N first subregions along the image width direction, judge in parallel whether an abnormal image feature is present in each of the N first subregions, evenly divide only those first subregions containing an abnormal image feature into M second subregions along the image length direction, and encode, respectively, the information of each first subregion among the N first subregions in which no abnormal image feature is present and the information of each second subregion among the M second subregions in which an abnormal image feature is present.
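The two-level partitioning can be sketched as follows. The anomaly test here is a deliberately simple stand-in (a brightness threshold), since the patent's actual per-subregion analysis is not specified at this level; all names are assumptions.

```python
def split_width(img, n):
    """Evenly divide an image (a list of pixel rows) into n first
    subregions along the width direction."""
    step = len(img[0]) // n
    return [[row[i * step:(i + 1) * step] for row in img]
            for i in range(n)]

def split_length(subregion, m):
    """Evenly divide a first subregion into m second subregions along
    the length (height) direction."""
    step = len(subregion) // m
    return [subregion[j * step:(j + 1) * step] for j in range(m)]

def has_abnormal_feature(subregion, threshold=200):
    """Placeholder anomaly test: any pixel brighter than the threshold."""
    return any(p > threshold for row in subregion for p in row)
```

Only the first subregions that test abnormal are split a second time, so normal regions cost a single coarse check while abnormal ones are localized to one of N*M cells.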
The encoded information may include: the time information of the current image; the region information of the wind turbine generator; the blade number of the blade in the current image; the y-coordinate of the intersection point, in the gray-level image of the current image, of the blade leading edge line and the blade lower edge line; and index information composed of a first-subregion partition number and a second-subregion partition number. For a first subregion in which no abnormal image feature is present, the first-subregion partition number in the index information is the number of that first subregion among the N first subregions, and the second-subregion partition number is 0. For a second subregion in which an abnormal image feature is present, the first-subregion partition number in the index information is the number, among the N first subregions, of the first subregion containing that second subregion, and the second-subregion partition number is the number of that second subregion among the M second subregions.
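The record layout above can be sketched as a tuple in the listed field order. Field names and types are assumptions for illustration only; the actual encoding format is defined by the patent's Fig. 20.

```python
def encode_subregion(time_info, region_info, blade_no, y_cross,
                     first_no, second_no=0):
    """Pack one subregion report in the listed field order. For a first
    subregion with no abnormal feature, second_no stays 0."""
    return (time_info, region_info, blade_no, y_cross, (first_no, second_no))
```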
Another aspect of the invention provides a method for monitoring the blades of a wind turbine generator, which may include: capturing video of the blades of the wind turbine generator to obtain a video image sequence; extracting a background image from the obtained video image sequence; and detecting, based on the extracted background image, whether an abnormal image feature is present on a blade in an image of the video image sequence.
The detecting step may include: obtaining one or more blade gray-level images using the gray-level image of the background image and the gray-level images of one or more blade-containing frames in the video image sequence, the one or more blade gray-level images being used to perform the detection.
The detecting step may further include: performing blade position recognition on each of the one or more blade gray-level images so as to screen them, the screened-out usable blade gray-level images being used to perform the detection.
The detecting step may further include: using the screened-out usable blade gray-level images, extracting the original blade region from the original image corresponding to each usable blade gray-level image to generate an image containing only the original blade region, the generated image containing only the original blade region being used to perform the detection.
The detecting step may further include: dividing the generated image containing only the original blade region into multiple subregions, analyzing the multiple subregions to determine the subregions in which an abnormal image feature is present, and encoding the information of the multiple subregions.
The step of capturing video of the wind turbine generator to obtain a video image sequence may include: at predetermined time intervals, adjusting the capturing lens to an acquisition position point to capture a video image sequence of predetermined time length, and returning the lens to a zero position after the acquisition of the video image sequence.
The step of extracting a background image from the obtained video image sequence may include: estimating the background image from multiple frames of the video image sequence using either mean estimation or median estimation.
The step of extracting a background image from the video image sequence may include: directly using an image in the video image sequence that contains no blade as the background image.
The step of extracting a background image from the video image sequence may include: comparing a first difference, between the entry gray-level mean of the 1st frame image and the entry gray-level mean of the nth frame image, with an entry threshold to determine whether to use the (n-3)th frame image as the background image, thereby updating the background image; and, after the background image has been updated with the (n-3)th frame image, comparing a second difference, between the exit gray-level mean of the 1st frame image and the exit gray-level mean of an image following the nth frame image, with an exit threshold to determine whether to restart the background image update operation, where 3 < n ≤ m, and m is the number of images in the video image sequence.
The step of extracting a background image from the video image sequence may further include: each time the first difference and the second difference of a frame image are computed, storing the first difference and the second difference into a first difference sequence and a second difference sequence, respectively; updating the entry threshold to the maximum of all first differences contained in the first difference sequence multiplied by a first predetermined multiple; and updating the exit threshold to the maximum of all second differences contained in the second difference sequence multiplied by a second predetermined multiple.
The step of obtaining the one or more blade gray-level images may include: converting into gray-level images each of the one or more frames of the video image sequence located after the frame whose first difference exceeds the entry threshold and up to the frame whose second difference exceeds the exit threshold, and performing a difference operation between the gray-level image of each such frame and the gray-level image of the background image to obtain the one or more blade gray-level images.
The step of obtaining the one or more blade gray-level images may further include: processing the one or more blade gray-level images using either a morphological opening operation or a morphological closing operation.
The step of screening the one or more blade gray-level images may include: using an autoregressive model to perform blade position recognition on the processed one or more blade gray-level images so as to screen them and obtain usable blade gray-level images having a complete blade contour.
The step of screening the processed one or more blade gray-level images to obtain usable blade gray-level images having a complete blade contour may include: when it is determined, by using the autoregressive model, that the candidate intersection point of the blade lower edge line and the blade leading edge line detected via blade edge detection for a given blade gray-level image, as well as the candidate blade tip point of that image, lie on the position corresponding to the blade leading edge line and are not affected by noise, determining that the given blade gray-level image is a usable blade gray-level image having a complete blade contour.
The dividing and encoding steps may include: evenly dividing the generated image containing only the blade region into N first subregions along the image width direction; judging in parallel whether an abnormal image feature is present in each of the N first subregions; evenly dividing only those first subregions containing an abnormal image feature into M second subregions along the image length direction; and encoding, respectively, the information of each first subregion among the N first subregions in which no abnormal image feature is present and the information of each second subregion among the M second subregions in which an abnormal image feature is present.
The encoded information may include: the time information of the current image; the region information of the wind turbine generator; the blade number of the blade in the current image; the y-coordinate of the intersection point, in the gray-level image of the current image, of the blade leading edge line and the blade lower edge line; and index information composed of a first-subregion partition number and a second-subregion partition number. For a first subregion in which no abnormal image feature is present, the first-subregion partition number in the index information is the number of that first subregion among the N first subregions, and the second-subregion partition number is 0. For a second subregion in which an abnormal image feature is present, the first-subregion partition number in the index information is the number, among the N first subregions, of the first subregion containing that second subregion, and the second-subregion partition number is the number of that second subregion among the M second subregions.
Another aspect of the invention provides a computer-readable storage medium storing a program, the program including instructions for performing the above-described operations for monitoring the blades of a wind turbine generator.
Another aspect of the invention provides a computer including a readable medium storing a computer program, the program including instructions for performing the above-described operations for monitoring the blades of a wind turbine generator.
With the above device and method for monitoring blade icing of a wind turbine generator, icing features on the blades can be recognized directly and can be detected at the initial stage of icing, so that the icing process can be tracked and monitored and the de-icing system can be started in time, preventing ice accumulation from causing major faults. Icing locations and ice amounts can also be identified intuitively through visual monitoring, helping users analyze the ice morphology in order to design reliable and effective anti-icing and de-icing schemes. In addition, the invention can help users analyze, through visual monitoring of the icing condition, the influence of icing on the operating state of the turbine, thereby effectively supporting the optimization of turbine performance in low-temperature environments. Beyond this, while monitoring blade icing features, the invention can also use vision techniques to identify and monitor other abnormal features on the blades (such as cracks and notches), so that major faults and shutdowns caused by these abnormal features can be avoided. Furthermore, the invention can track and identify the operating state of the blades through vision techniques, that is, recognize the rotational speed of the impeller and the edgewise vibration state of the blades by analyzing the video image sequence, thereby enabling assessment of the turbine operating state, which aids the analysis of the operating states of other turbine equipment.
Brief description of the drawings
Those skilled in the art will gain a complete understanding of the present invention from the following detailed description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a device for monitoring the blades of a wind turbine generator according to an exemplary embodiment of the present invention;
Fig. 2 is a block diagram of the video acquisition unit 100 according to an exemplary embodiment of the present invention;
Fig. 3 shows a specific example of the video acquisition unit 100;
Fig. 4 is an installation diagram of the video acquisition unit 100 according to an exemplary embodiment of the present invention;
Fig. 5 illustrates a tooling fixture for installing the video acquisition unit 100 according to an exemplary embodiment of the present invention;
Fig. 6 shows 6 frame images of a video image sequence;
Fig. 7 shows the background images obtained by processing the 6 frame images shown in Fig. 6 using mean estimation and median estimation;
Fig. 8 shows the process by which the background image acquisition unit 200 directly uses an image of the video image sequence that contains no blade as the background image;
Fig. 9 is a block diagram of the image abnormality detection unit 300 in the device 10 for monitoring the blades of a wind turbine generator according to an exemplary embodiment of the present invention;
Fig. 10 shows the difference image between the gray-level image of the background image and the gray-level image of a blade-containing image according to an exemplary embodiment of the present invention;
Fig. 11 shows the optimized result obtained after the background image segmentation unit 310 processes the difference image of Fig. 10 using a morphological closing operation;
Fig. 12 shows the optimized results obtained after the background image segmentation unit 310 processes multiple difference images similar to that of Fig. 10;
Fig. 13 shows the blade contour edge image extracted after image (a) in Fig. 12 is processed with the Sobel operator;
Fig. 14 shows the process by which the background image segmentation unit 310 generates an image sequence containing only the original blade region;
Fig. 15 shows an image containing much noise in which the blade tip has not yet entered the detection field of view of the video acquisition unit 100;
Fig. 16 shows an original image of the original video image sequence and the image containing only the original blade region generated by the background image segmentation unit 310;
Figs. 17 to 19 show the images obtained by performing the first division and the second division on the image containing only the original blade region generated by the background image segmentation unit 310;
Fig. 20 shows the format of the image information encoding;
Fig. 21 is a diagram of encoding abnormal image features on a blade;
Fig. 22 is an overall flowchart of a method for monitoring the blades of a wind turbine generator according to an exemplary embodiment of the present invention;
Fig. 23 is a detailed flowchart of capturing video of a wind turbine generator to obtain a video image sequence according to an exemplary embodiment of the present invention;
Fig. 24 is a flowchart of a method of directly using an image of the video image sequence that contains no blade as the background image, according to an exemplary embodiment of the present invention;
Fig. 25 is a flowchart of a method of detecting, based on the extracted background image, whether an abnormal image feature is present on a blade in an image of the video image sequence, according to an exemplary embodiment of the present invention;
Fig. 26 is a flowchart of screening the processed one or more blade gray-level images to obtain usable blade gray-level images having a complete blade contour, according to an exemplary embodiment of the present invention.
Detailed description of embodiments
Hereinafter, embodiments of the present invention are described in detail with reference to the accompanying drawings, in which identical reference numerals denote identical components.
Fig. 1 is a block diagram of a device for monitoring the blades of a wind turbine generator according to an exemplary embodiment of the present invention.
As shown in Fig. 1, the device includes a video acquisition unit 100, a background image acquisition unit 200, and an image abnormality detection unit 300.
The video acquisition unit 100 captures video of the wind turbine generator to obtain a video image sequence. As shown in Fig. 2, the video acquisition unit 100 may include a fill-light unit 101, a pan-tilt control unit 102, a cleaning unit 103, and a camera unit 104.
The fill-light unit 101 may be an infrared fill-light unit that illuminates the subject so that it can be captured more clearly. The pan-tilt control unit 102 can control the fill-light unit 101 and the camera unit 104 to rotate continuously in the horizontal direction (i.e., 360-degree rotation) and to rotate in the vertical direction (i.e., ±90 degrees relative to the horizontal plane). In addition, the pan-tilt control unit 102 can control the cleaning unit 103 (for example, a wiper) to clean the lens of the camera unit 104, so as to ensure that no foreign matter remains on the lens.
The video acquisition unit 100 may be a pan-tilt camera with network remote monitoring functions, video server functions, and high-definition intelligent functions. For example, the video acquisition unit 100 may be the T-type network high-definition integrated pan-tilt camera shown in Fig. 3, which has a built-in compact WebServer, a network video server, a codec, and other processors. In Fig. 3, 301 denotes the fill-light unit 101, 302 denotes the pan-tilt control unit, 303 denotes the cleaning unit, and 304 denotes the camera unit.
In addition, as shown in Fig. 4, the video acquisition unit 100 (i.e., 502 in Fig. 5) can be installed on top of the nacelle of the wind turbine generator via the tooling fixture 503 shown in Fig. 5, and the communication and power cables of the video acquisition unit 100 can be routed into the nacelle via the anemometry mast 501. The installation position of the video acquisition unit 100 should be kept uniform across all turbines, and its field of view should be kept as open and unobstructed as possible. As shown in Fig. 4, when the blade length L1 is 30 to 70 meters, the horizontal distance L2 between the installation position of the video acquisition unit 100 and the impeller should be greater than 2.5 meters; when the blade length L1 exceeds 60 meters, the distance L2 should be greater than 3.5 meters. Furthermore, when the camera unit 104 of the video acquisition unit 100 faces the impeller, there must be no obstacles in its lower field of view, within ±90° of the horizontal direction and ±45° of the vertical direction. The video acquisition unit 100 should also receive lightning protection and waterproofing treatment.
During acquisition of the video image sequence, in order to guarantee the quality of the acquired video and to conserve video storage space, the video acquisition unit 100 can capture video intermittently; for example, it can perform one video acquisition of predetermined duration T at every predetermined time interval t. In particular, after the video acquisition unit 100 has been mounted on top of the nacelle of the wind turbine, a zero position can be set for the video acquisition unit 100, where the zero position is the parking position of the video acquisition unit 100 when no video image sequence is being acquired. In order to protect the lens of the camera unit 104, the position in which the lens points vertically downward can be set as the zero position, and the zero position can be stored as a preset point. In this case, at every predetermined time interval the video acquisition unit 100 can adjust the lens included in the camera unit 104 to the acquisition position point, control the cleaning unit 103 to clean the lens, and then control the camera unit 104 to acquire a video image sequence of predetermined duration, where the acquisition position point is the position, defined relative to the preset point, used for video acquisition. Thereafter, once the video image sequence has been acquired, the video acquisition unit 100 recalls the preset point set in advance to return the lens to the zero position.
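As a minimal sketch of this intermittent acquisition cycle (the controller object and its method names are hypothetical assumptions, since the patent describes the behaviour but names no API):

```python
import time

def capture_cycle(camera, interval_t, duration_T, cycles=1):
    """One acquisition round every interval_t seconds: move the lens from
    the zero preset (lens vertically down) to the acquisition point, clean
    the lens, record for duration_T, then park back at the zero preset."""
    for _ in range(cycles):
        camera.goto_acquisition_point()
        camera.clean_lens()
        camera.record(duration_T)
        camera.goto_preset_zero()   # recall the preset: lens points down again
        time.sleep(interval_t)

# A stand-in controller that just logs the commands it receives.
class FakeCamera:
    def __init__(self):
        self.log = []
    def goto_acquisition_point(self):
        self.log.append("acq")
    def clean_lens(self):
        self.log.append("clean")
    def record(self, T):
        self.log.append(("rec", T))
    def goto_preset_zero(self):
        self.log.append("zero")

cam = FakeCamera()
capture_cycle(cam, interval_t=0, duration_T=5)
```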
The background image acquiring unit 200 can extract a background image from the video image sequence obtained by the video acquisition unit 100.
In particular, the background image acquiring unit 200 can estimate the background image from multiple frames of the video image sequence using either of two methods: mean estimation or median estimation. The processes of estimating the background image by mean estimation and by median estimation are described below with reference to Fig. 6.
Since the background does not change greatly over a short period of time, the background image acquiring unit 200 can estimate the background image from several consecutive frames using mean estimation. As shown in Fig. 6, a, b, c, d, e and f are 6 frames of the video image sequence. Assuming the pixel values at coordinate (x, y) in these 6 frames are P1x,y, P2x,y, P3x,y, P4x,y, P5x,y and P6x,y, the background image acquiring unit 200 can compute the pixel value at coordinate (x, y) using equation (1).

TPx,y = (P1x,y + P2x,y + P3x,y + P4x,y + P5x,y + P6x,y)/6    (1)

Thus, the background image acquiring unit 200 can apply formula (1) in turn to each pixel at the same position in these 6 frames to obtain the estimated background image shown as a in Fig. 7. In the background image obtained by this mean estimation method, the visual features of the pixel regions occupied by the blade are weakened by 5/6, so the obtained background image still retains some blade features.
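As a minimal sketch of the mean estimation of equation (1), assuming small synthetic frames in place of real video (NumPy's mean over the frame axis applies formula (1) at every pixel at once):

```python
import numpy as np

def mean_background(frames):
    """Estimate the background as the per-pixel mean of several
    consecutive frames, i.e. equation (1) applied at every coordinate."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# 6 synthetic 4x4 gray frames: constant sky (value 60) with a bright
# "blade" column (value 200) that moves one column per frame.
frames = []
for k in range(6):
    img = np.full((4, 4), 60.0)
    img[:, k % 4] = 200.0
    frames.append(img)

bg = mean_background(frames)
# Columns crossed twice by the blade average to (2*200 + 4*60)/6;
# columns crossed once average to (200 + 5*60)/6 - the blade leaves
# a weakened trace, as noted above.
```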
In addition, the background image acquiring unit 200 can refine the mean estimation with a neighborhood-mean optimization to estimate the background image. In particular, the background image acquiring unit 200 first processes each frame of the video image sequence with the neighborhood-mean method; that is, it selects a neighborhood averaging template, computes the mean g(x, y) of all pixels in the template using equation (2), and then replaces the value f(x, y) of the current pixel of the original image with the mean g(x, y), where the template consists of the current pixel and several adjacent pixels. For example, the template may consist of the current pixel and the 4 pixels adjacent to it above, below, to its left and to its right.

g(x, y) = (1/m) Σ f(x, y)    (2)

where m is the total number of pixels in the template, including the current pixel.
Then, for each pixel, the background image acquiring unit 200 applies mean estimation to the predetermined number of neighborhood-averaged images to obtain the averaged value STPx,y. For example, when m in equation (2) is 5, the background image can be estimated using equation (3).

STPx,y = (mean5(P1x,y) + mean5(P2x,y) + mean5(P3x,y) + mean5(P4x,y) + mean5(P5x,y) + mean5(P6x,y))/6    (3)

where STPx,y is the average of the pixel values mean5(P1x,y), mean5(P2x,y), mean5(P3x,y), mean5(P4x,y), mean5(P5x,y) and mean5(P6x,y) at (x, y) in the 6 neighborhood-averaged images; the pixel value of image a at (x, y) is mean5(P1x,y) = g(x, y) = (1/5) Σ f(x, y), and mean5(P2x,y), mean5(P3x,y), mean5(P4x,y), mean5(P5x,y) and mean5(P6x,y) at (x, y) for images b, c, d, e and f are computed in the same way as mean5(P1x,y) for image a.
Thus, by using the neighborhood-mean-based mean estimation, the background image acquiring unit 200 can obtain the background image shown as b in Fig. 7, in which the blade features are weakened to some extent compared with a in Fig. 7.
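The neighborhood-mean refinement of equations (2) and (3) might be sketched as below, assuming a 5-pixel cross template (the current pixel plus its four neighbors) and edge replication at the image border, which the patent leaves unspecified:

```python
import numpy as np

def cross_mean(img):
    """Replace each pixel by the mean of itself and its 4 neighbors,
    i.e. equation (2) with m = 5, using edge replication at the borders."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    return (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0

def smoothed_mean_background(frames):
    """Equation (3): average the neighborhood-smoothed frames per pixel."""
    return np.mean([cross_mean(f) for f in frames], axis=0)

# Three flat synthetic frames; each is unchanged by cross_mean, so the
# background is simply their per-pixel mean.
frames = [np.full((3, 3), v, dtype=float) for v in (10.0, 20.0, 30.0)]
bg = smoothed_mean_background(frames)
```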
In addition, the background image acquiring unit 200 can also estimate the background image from multiple frames of the video image sequence using median estimation. Specifically, assuming the pixel values at coordinate (x, y) in the 6 frames of Fig. 6 are P1x,y, P2x,y, P3x,y, P4x,y, P5x,y and P6x,y, the background image acquiring unit 200 can compute the pixel value MPx,y at pixel (x, y) of the background image using equation (4) based on median estimation, thereby obtaining the entire background image.

MPx,y = median(P1x,y, P2x,y, P3x,y, P4x,y, P5x,y, P6x,y)    (4)
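Equation (4)'s median estimation can be sketched in the same synthetic setting; note that the moving bright column is suppressed entirely here, because it covers each pixel in fewer than half of the frames:

```python
import numpy as np

def median_background(frames):
    """Equation (4): per-pixel median across the frame stack. A moving
    object that covers each pixel in fewer than half of the frames
    disappears from the estimate."""
    return np.median(np.stack(frames), axis=0)

# 6 synthetic 4x4 frames: constant sky (60) plus a bright "blade"
# column (200) moving one column per frame.
frames = []
for k in range(6):
    img = np.full((4, 4), 60.0)
    img[:, k % 4] = 200.0
    frames.append(img)

bg = median_background(frames)   # blade trace fully removed
```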
The mean estimation and median estimation described above are used when special circumstances exist in the background; for example, when there is an object in the background that is always moving, the above estimation methods can be used to obtain the background image.
In addition, when the background contains no moving object, the background image acquiring unit 200 can directly use an image of the video image sequence that does not contain a blade as the background image. In particular, while the video acquisition unit 100 acquires the video image sequence, there is a time interval between two blades appearing alternately in the surveillance field of view, and the video images acquired within this interval contain no blade. The background image acquiring unit 200 can therefore directly use one frame acquired within such an interval as the background image, and replace the previously obtained background image with a frame acquired within a later, similar interval, thereby continually updating the background image. This is described in detail below with reference to Fig. 8.
The background image acquiring unit 200 can obtain the background image by the process shown in Fig. 8. In this process, the background image acquiring unit 200 compares a first difference, namely the difference between the inlet gray mean of the 1st frame and the inlet gray mean of the n-th frame, with an inlet threshold to decide whether to use the (n-3)-th frame as the background image, thereby updating the background image. After the background image has been updated with the (n-3)-th frame, the background image acquiring unit 200 compares a second difference, namely the difference between the outlet gray mean of the 1st frame and the outlet gray mean of a frame after the n-th frame, with an outlet threshold to decide whether to restart the background-image update operation, where 3 < n ≤ m and m is the number of images contained in the video image sequence.
In particular, the background image acquiring unit 200 first reads the video image sequence and then obtains the 1st frame from it. After reading the 1st frame, the background image acquiring unit 200 determines a blade inlet region E and a blade outlet region F in the gray image of the 1st frame, where the width WE of the blade inlet region E is a predetermined percentage of the image width W, for example 5% to 10%; the height HE of the blade inlet region E is a predetermined percentage of the image height H, for example 2%; the width WF of the blade outlet region F is a predetermined percentage of the image width W, for example 2%; and the height HF of the blade outlet region F is a predetermined percentage of the image height H, for example 5% to 10%; but the invention is not limited thereto. Thereafter, the background image acquiring unit 200 computes the inlet gray mean GE of the blade inlet region and the outlet gray mean GF of the blade outlet region using equation (5), and uses the inlet gray mean GE of the blade inlet region and the outlet gray mean GF of the blade outlet region respectively as the inlet reference gray mean G1 and the outlet reference gray mean G2 needed for determining the background image.

GE = (1/(row × col)) Σi Σj f(i, j)    (5)

where, when computing the inlet gray mean GE, row and col denote the width WE and height HE of the inlet region in pixels, i and j denote the pixel coordinates within the inlet region E, and f(i, j) denotes the gray value at coordinate (i, j).
Thereafter, the background image acquiring unit 200 reads the n-th frame, determines the blade inlet region E and the blade outlet region F in the n-th frame by the same method as for the 1st frame, and performs, by the same method as for the 1st frame, a first calculating operation that computes the inlet gray mean GEn and the outlet gray mean GFn of the n-th frame using equation (5), where 3 < n ≤ m and m is the number of images contained in the video image sequence.
Thereafter, the background image acquiring unit 200 can perform a threshold calculating operation. In particular, the background image acquiring unit 200 can compute, using equation (6), the first difference C1n between the inlet gray mean GEn of the n-th frame and the inlet reference gray mean G1, and, using equation (7), the second difference C2n between the outlet gray mean GFn of the n-th frame and the outlet reference gray mean G2.

C1n = |GEn - G1|    (6)

C2n = |GFn - G2|    (7)
Thereafter, the background image acquiring unit 200 stores the first difference C1n and the second difference C2n of the n-th frame in difference sequences D1 and D2 respectively, and then computes the maximum value a of the difference sequence D1 and the maximum value b of the difference sequence D2 using equations (8) and (9). The background image acquiring unit 200 then computes the inlet threshold h1n and the outlet threshold h2n of the n-th frame using equations (10) and (11) respectively.
a = max(D1)    (8)

b = max(D2)    (9)

h1n = a × 0.46    (10)

h2n = b × 0.45    (11)
Thereafter, the background image acquiring unit 200 can perform a background image extracting operation. In particular, the background image acquiring unit 200 can perform a first determining operation that checks whether the first difference C1n between the inlet gray mean GEn of the n-th frame and the inlet reference gray mean G1 exceeds the inlet threshold h1n of the n-th frame.
If the first difference C1n is less than or equal to the inlet threshold h1n, the background image acquiring unit 200 performs the update operation n = n + 1, reads the n-th frame, and re-performs the first calculating operation and the first determining operation for the n-th frame by the same method as for the (n-1)-th frame. In the first determining operation, the background image acquiring unit 200 must re-perform the threshold calculating operation by the same method as for the (n-1)-th frame to obtain the inlet threshold h1n and the outlet threshold h2n of the n-th frame, and then perform the first determining operation of the n-th frame using the newly obtained inlet threshold h1n.
If instead the first difference C1n of the n-th frame exceeds the inlet threshold h1n of the n-th frame, the background image acquiring unit 200 can determine the (n-3)-th frame as the background image; that is, the n-th frame satisfies the blade inlet threshold condition, and a blade has just entered the inlet region of the n-th frame.
When the n-th frame satisfies the blade inlet condition, the background image acquiring unit 200 performs the update operation n = n + 1 and determines whether the n-th frame satisfies the blade outlet condition. In particular, the background image acquiring unit 200 reads the n-th frame, performs, by the same method as for the 1st frame, a second calculating operation that computes the inlet gray mean GEn and the outlet gray mean GFn of the n-th frame, and performs a second determining operation that checks whether the second difference between the outlet gray mean GFn of the n-th frame and the outlet reference gray mean G2 exceeds the outlet threshold h2n of the n-th frame. In the second determining operation, the background image acquiring unit 200 must re-perform the threshold calculating operation by the same method as for the (n-1)-th frame to obtain the inlet threshold h1n and the outlet threshold h2n of the n-th frame, and then perform the second determining operation of the n-th frame using the newly obtained outlet threshold h2n.
If the second difference C2 of n-th frame imagenLess than or equal to the outlet threshold value h2 of n-th frame imagen, then background image Acquiring unit 200 can perform the update operation of n=n+1, read n-th frame image, and execute the second calculating operation of n-th frame image Operation is determined with second.
If the second difference C2 of n-th frame imagenMore than the outlet threshold value h2 of n-th frame imagen, then background image acquisition is single Member 200 can determine that n-th frame image meets blade exit threshold condition, that is, blade has just rotated out of blade exit region, to carry on the back Scape image acquisition unit 200 needs to restart the judgement of the vane inlet threshold condition of a new round, that is, executes n=n+1 more New operation, reads n frame images, and executes the first calculating operation of n-th frame image and first and determine operation, and in n-th frame image The first difference C1nMore than the import threshold value h1 of n-th frame imagenWhen, the n-th -3 frame is extracted as background image, the Background is used in combination The background image determined in judging as update previous round vane inlet threshold condition, and then realize the continuous renewal of background image, With weaken or eliminate background change over time and the error that introduces.
The image anomaly detecting unit 300 can detect, based on the background image extracted by the background image acquiring unit 200, whether an abnormal image feature is present on the blade in the images of the video image sequence, where the abnormal image feature can be ice, snow, a crack, a notch, or the like.
Referring to Fig. 9, the image anomaly detecting unit 300 may include a background image segmenting unit 310, which first converts the background image obtained by the background image acquiring unit 200 into a gray image, and then uses the gray image of the background image and the gray images of one or more blade-containing frames of the video image sequence to obtain one or more blade gray images.
In addition, the background image segmenting unit 310 can convert into gray images each frame of the video image sequence located after a frame satisfying a first condition (that is, a frame satisfying the inlet condition) and before a frame satisfying a second condition (that is, a frame satisfying the outlet condition), where the first condition is that the first difference C1p of the p-th frame exceeds the inlet threshold h1p of the p-th frame, and the second condition is that the second difference C2q of the q-th frame exceeds the outlet threshold h2q of the q-th frame, where p and q are integers and p < q. On this basis, the background image segmenting unit 310 can perform a difference operation between the gray image of each such frame and the gray image of the background image to obtain the one or more blade gray images.
Furthermore, since the blade is white and its surface is relatively smooth, the blade reflects light well; consequently, when a blade-containing image of the video image sequence is converted from RGB to a gray image, the gray values of the blade are very large, whereas when the background image is converted from RGB to a gray image, the gray values of the sky region of the background image are smaller. Therefore, in the difference image obtained by subtracting the gray image of the background image from the gray image of a blade-containing image, the gray values of the blade region become large while the gray values of the other regions are close to 0. Thus, as shown in Fig. 10, in order to make the visual effect of the image more distinct, the background image segmenting unit 310 can convert the regions of larger gray value in the difference image (that is, the noise and the blade region) to black, and convert the other regions of the difference image to white. In summary, the one or more blade gray images obtained by the background image segmenting unit 310 can be images such as that shown in Fig. 10, in which the noise and the blade region are black and the other regions are white.
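A minimal sketch of this difference-and-binarize step, assuming an illustrative residual threshold (the patent fixes the black/white convention of Fig. 10 but not a numeric threshold):

```python
import numpy as np

def segment_blade(gray_frame, gray_bg, thresh=50.0):
    """Subtract the gray background from a blade-containing gray frame;
    pixels with a large residual (blade and noise) are rendered black (0)
    and everything else white (255), following the Fig. 10 convention."""
    diff = np.abs(gray_frame.astype(np.float64) - gray_bg.astype(np.float64))
    return np.where(diff > thresh, 0, 255).astype(np.uint8)

# Synthetic data: flat background, one bright blade column in the frame.
bg = np.full((4, 4), 60.0)
frame = bg.copy()
frame[:, 1] = 200.0
mask = segment_blade(frame, bg)   # blade column black, rest white
```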
Since the blade, upon rotating into the field of view of the video acquisition unit 100, slightly changes the brightness conditions of parts of the background as it moves, the one or more blade gray images produced by the above blade segmentation contain many noise points. The background image segmenting unit 310 can therefore process the one or more blade gray images (such as the image shown in Fig. 10) with either a morphological opening operation or a morphological closing operation, to remove the noise in the one or more blade gray images and to fill the notches in them.
In particular, the morphological opening and closing operations are two image-processing operations formed by performing morphological dilation and erosion in different orders. In detail, as shown in equation (12), the opening of a set A by a structuring element B may be expressed as:

A ∘ B = (A ⊖ B) ⊕ B    (12)

That is, set A is first eroded by the structuring element B, and the result of the erosion is then dilated by the structuring element B; the morphological opening can therefore be used to smooth the contour of an object. Equation (12) can be equivalently expressed as equation (13):

A ∘ B = ∪ { (B)z | (B)z ⊆ A }    (13)

that is, the union of all translates of B that fit entirely inside A. In addition, as shown in equation (14), the closing of the set A by the structuring element B may be expressed as:

A • B = (A ⊕ B) ⊖ B    (14)

That is, set A is first dilated by the structuring element B, and the result of the dilation is then eroded by the structuring element B; the morphological closing can therefore be used to join narrow breaks in a figure and to fill elongated gaps (filling holes smaller than the structuring element B).
Therefore, the background image segmenting unit 310 can process the one or more blade gray images with either the morphological opening operation or the closing operation to remove the noise in the blade gray images and fill the notches; for example, the structuring element B can be a circular structuring element 18 pixels in diameter, but the invention is not limited thereto. For example, by applying the morphological closing to the blade gray image in Fig. 10, the background image segmenting unit 310 can obtain the optimized result shown in Fig. 11, that is, the final blade gray image, in which all the noise of the blade gray image in Fig. 10 has been removed and the contour features of the blade have been accurately extracted.
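Pure-NumPy binary opening and closing, assuming a square 3x3 structuring element rather than the 18-pixel disc mentioned above, can be sketched as:

```python
import numpy as np

def dilate(img, r=1):
    """Binary dilation by a (2r+1)x(2r+1) square structuring element."""
    p = np.pad(img, r, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, r=1):
    """Binary erosion by the same square element (background border)."""
    p = np.pad(img, r, mode="constant", constant_values=0)
    out = np.ones_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def opening(img, r=1):
    """Equation (12): erosion then dilation - removes small specks."""
    return dilate(erode(img, r), r)

def closing(img, r=1):
    """Equation (14): dilation then erosion - fills small notches."""
    return erode(dilate(img, r), r)

# A solid "blade" region with a one-pixel notch, plus an isolated speck.
blade = np.zeros((7, 7), dtype=np.uint8)
blade[1:6, 2:5] = 1
blade[3, 3] = 0               # notch: filled by closing
speck = blade.copy()
speck[0, 6] = 1               # isolated noise pixel: removed by opening
```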
The one or more blade gray images processed by the background image segmenting unit 310 are used by the image anomaly detecting unit 300 to detect whether an abnormal image feature is present on the blade in the images of the video image sequence.
In addition, the image anomaly detecting unit 300 may include a blade position identifying unit 320, which can perform blade position identification on each of the one or more blade gray images obtained by the background image segmenting unit 310 in order to screen the one or more blade gray images; the usable blade gray images thus screened out are used by the image anomaly detecting unit 300 to detect whether an abnormal image feature is present on the blade in the images of the video image sequence.
In particular, the blade position identifying unit 320 performs blade position identification on the processed one or more blade gray images using a one-variable (simple) linear regression model, to screen the processed one or more blade gray images and obtain the usable blade gray images that have a complete blade contour. When the preliminary intersection point of the lower boundary line of a blade gray image with the blade leading-edge line, determined via blade edge detection by the regression model, and the preliminary blade tip point of that blade gray image both lie at their proper positions on the blade leading-edge line and are unaffected by noise, the blade position identifying unit 320 determines that the blade gray image is a usable blade gray image having a complete blade contour. This is described in detail below with reference to Figs. 12 and 13.
Fig. 12 shows 4 blade gray images obtained by the background image segmenting unit 310. Since the blade rotates at a certain speed relative to the camera unit of the video acquisition unit 100, over time the blade appears at different positions in the blade gray images, so the blade information contained in different blade gray images differs; for example, the numbers of pixels contained in the blade regions of the 4 blade gray images in Fig. 12 are different. However, since, when the blade rotates at a fixed speed, the intersection point of the blade leading-edge line with the lower boundary of the image slides linearly along the lower boundary at another fixed speed, each position of the intersection point on the lower boundary corresponds to a position of the blade in the image (that is, to blade region information). Therefore, once the position of the intersection point is obtained, the image anomaly detecting unit 300 can use the intersection point as an index to obtain the corresponding blade region information (that is, the position of the blade in the image). First, the blade position identifying unit 320 can perform blade edge detection, using the Sobel operator, on a blade gray image processed by the background image segmenting unit 310 to extract the blade contour edge, as shown as a in Fig. 13. Thereafter, the blade position identifying unit 320 needs to obtain the coordinates of the preliminary blade tip point P and the preliminary intersection point Q of the blade contour.
In particular, the blade position identifying unit 320 can extract all coordinate values (x, y) of the blade edge positions from the image processed by the Sobel operator (such as a in Fig. 13), where, in a of Fig. 13, the upper-left corner of the image is the coordinate origin (0, 0), x is the row coordinate of a blade-edge pixel in the gray image, and y is the column coordinate of a blade-edge pixel in the gray image. As can be seen from a in Fig. 13, the column coordinate (that is, the y-coordinate) of the blade tip point P is the minimum, i.e. yP = y_min, so the blade position identifying unit 320 can use equation (15) to extract, from all the extracted coordinate values (x, y) of the blade edge positions, all coordinate values (x, y) with the minimum column coordinate y = y_min. Since the x and y coordinates of the blade-edge pixels are not in one-to-one correspondence (the same y-coordinate may correspond to several x-coordinates), the x-coordinate x_up_point of the blade tip point P is determined by the median method (that is, using equation (16)). By the above process, the blade position identifying unit 320 can obtain the coordinates (x, y) of the blade tip point P = (x_up_point, y_min).
y_min = min(y)    (15)

x_up_point = med(xy=y_min)    (16)
where med(xy=y_min) denotes the value obtained by taking the median of the x-coordinates of all coordinate values (x, y) of the blade edge positions that have the minimum column coordinate y = y_min. From a in Fig. 13, the x-coordinate x_down_point of the intersection point Q is the total number of rows of the image, that is, the maximum x-coordinate value x_size of the image. Since the x-coordinate at the intersection position corresponds to multiple y-coordinates, the blade position identifying unit 320 can obtain the y-coordinate the_y of the intersection point Q by equation (17), which takes the minimum of all y-coordinates corresponding to x_size. By the above process, the blade position identifying unit 320 can obtain the coordinates (x, y) = (x_size, the_y) of the intersection point Q.

the_y = min(yx=x_down_point)    (17)
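Equations (15) through (17) applied to a synthetic set of edge coordinates can be sketched as follows; the coordinates follow the patent's convention, with x the row, y the column, and the origin at the upper-left:

```python
import numpy as np

def tip_and_root(edge_xy, x_size):
    """From the (x = row, y = column) coordinates of the Sobel edge
    pixels, take the tip P at the minimum column coordinate (median x
    among ties, equations (15)-(16)) and the root intersection Q on the
    bottom image row x_size (minimum y among ties, equation (17))."""
    x, y = edge_xy[:, 0], edge_xy[:, 1]
    y_min = y.min()                          # eq (15)
    x_up_point = np.median(x[y == y_min])    # eq (16)
    the_y = y[x == x_size].min()             # eq (17)
    return (x_up_point, y_min), (x_size, the_y)

# A synthetic diagonal leading edge from tip (row 2, col 5) down to the
# bottom row 9, with two candidate columns on the bottom row.
edge = np.array([[2, 5], [3, 6], [4, 7], [5, 8], [6, 9],
                 [7, 10], [8, 11], [9, 12], [9, 13]])
P, Q = tip_and_root(edge, x_size=9)
```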
Next, the blade position identifying unit 320 can perform simple linear regression analysis on the x- and y-coordinate values of the blade tip point P and of the intersection point Q in a blade gray image to obtain a fitted straight line. In order to reduce the influence of noise in the above preliminary selection of the blade tip point P and the intersection point Q, the blade position identifying unit 320 may consider only pixel coordinates with row coordinate x > 30 and only column coordinates with 30 < y ≤ Y - 30, where Y is the total number of columns of the image. The process of obtaining the fitted straight line by simple linear regression analysis is described in detail below.
In particular, as shown as b in Fig. 13, assume that the y-coordinates between the blade tip point P and the intersection point Q are defined as the sequence y0. The blade position identifying unit 320 performs simple linear regression analysis, using the least-squares method, on the coordinates of the blade tip point P and the intersection point Q obtained by the above process, to fit a straight line satisfying the functional relation of equation (18), as shown by the dashed line of b in Fig. 13.

f = ay + b    (18)
Next, the blade position identifying unit 320 can perform a correlation calculation between a first sequence, composed of the x-coordinate values of the pixels on the fitted straight line located between the blade tip point P and the intersection point Q, and a second sequence composed of the following x-coordinate values, to obtain a first correlation coefficient, where the i-th x-coordinate value of the second sequence is the maximum of all x-coordinate values on the blade region of the current blade gray image that correspond to the i-th y-coordinate value of the sequence y0, where 0 < i ≤ I, I being the total number of y-coordinate values in the sequence y0; moreover, I equals the number of pixels on the fitted straight line located between the intersection point Q and the blade tip point P.
In particular, since each y-coordinate of the y-coordinate sequence y0 is greater than the y-coordinate yP of the blade tip point P and less than the y-coordinate yQ of the intersection point Q, the blade position identifying unit 320 can compute each x-coordinate corresponding to each y-coordinate of the sequence y0 by substituting the y-coordinate sequence y0 into equation (18), and store the computed x-coordinates in the sequence f_x. In addition, the blade position identifying unit 320 can extract each x-coordinate of the blade leading-edge pixels satisfying the condition of equation (19) and store them in the sequence x_val, that is, the x-coordinates of the outermost pixels on the leading-edge line between the blade tip point P and the intersection point Q in b of Fig. 13.

x_val = max(xy=y0)    (19)
Thereafter, the blade position identifying unit 320 can perform cross-correlation analysis on the sequences f_x and x_val according to equation (20) to obtain the cross-correlation coefficient ρ:

ρ = Cov(f_x, x_val) / (σf_x · σx_val)    (20)

where Cov denotes the covariance operation, and σf_x and σx_val denote the standard deviations of the sequences f_x and x_val, respectively.
The blade position identifying unit 320 can compare the correlation coefficient ρ computed according to equation (20) with a threshold ρthreshold, where the threshold ρthreshold can be 0.98 or 0.99, but the invention is not limited thereto.
In particular, if the correlation coefficient ρ is greater than the predetermined threshold ρthreshold, this shows that the preliminary blade tip point P and the preliminary intersection point Q extracted by the above procedure lie at their proper positions on the blade leading edge and are hardly affected by noise. On this basis, the blade position identifying unit 320 can determine that the currently read image is an image containing blade information rather than a blank background image; furthermore, the blade position identifying unit 320 can determine that the currently read image contains no noise, or only noise that does not affect subsequent processing, and that the blade contour edge is regular and complete. The blade position identifying unit 320 can therefore determine that the currently extracted intersection point Q is correct and can be used for subsequent processing. Hence, if the first correlation coefficient is greater than the predetermined threshold, the blade position identifying unit 320 can determine that the current blade gray image is a usable blade gray image.
Conversely, if the correlation coefficient ρ is less than or equal to the predetermined threshold ρthreshold, the blade position identifying unit 320 can determine that the currently extracted intersection point Q is incorrect; that is, it can determine that the current blade gray image is not a usable blade gray image.
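The screening of equations (18) through (20) might be sketched as below. With only the two points P and Q, the least-squares fit of equation (18) reduces to the straight line through them, and the synthetic edge is deliberately straight so that ρ = 1 and the frame passes:

```python
import numpy as np

def blade_is_usable(P, Q, edge_xy, rho_threshold=0.98):
    """Fit the leading edge through tip P and root Q (equation (18)),
    compare the fitted x positions with the outermost edge x at each
    column (equation (19)) via the correlation coefficient of equation
    (20), and accept the frame when rho exceeds the threshold."""
    (xp, yp), (xq, yq) = P, Q
    a = (xq - xp) / (yq - yp)            # line through P and Q
    b = xp - a * yp
    y0 = np.arange(yp + 1, yq)           # columns strictly between P and Q
    f_x = a * y0 + b                     # eq (18)
    x_val = np.array([edge_xy[edge_xy[:, 1] == yi, 0].max()
                      for yi in y0])     # eq (19): outermost edge pixel
    rho = np.corrcoef(f_x, x_val)[0, 1]  # eq (20)
    return bool(rho > rho_threshold), float(rho)

# Synthetic straight leading edge (x = y - 3) plus an inner trailing
# edge (x = y - 5); the max in eq (19) picks the leading edge.
edge = np.array([[k, k + 3] for k in range(2, 10)]
                + [[k - 2, k + 3] for k in range(4, 10)])
usable, rho = blade_is_usable((2, 5), (9, 12), edge)
```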
In addition, as shown in Fig. 9, the image anomaly detecting unit 300 may further include an original blade region acquiring unit 330, which uses the usable blade gray images screened out by the blade position identifying unit 320 to extract the original blade region from the original image corresponding to each usable blade gray image, thereby generating an image containing only the original blade region. The generated images containing only the blade region are used by the image anomaly detecting unit 300 to detect whether an abnormal image feature is present on the blade in the images of the video image sequence.
In particular, FIG. 14 shows the process of acquiring images containing only the original blade region. As shown in FIG. 14, first, after obtaining one or more frames of blade gray level images, the background image segmentation unit 310 stores the index numbers of these blade gray level images in the original video image sequence into the array pic_index0, thereby obtaining a blade gray level image sequence. Thereafter, the blade position recognition unit 320 uses unary linear regression and correlation analysis to screen the one or more frames of blade gray level images, filtering out images in which, as in FIG. 15, the blade tip has not yet entered the monitoring field of view of the video acquisition unit 100, as well as images containing substantial noise, so as to obtain high-quality available blade gray level images with complete blade contours. The index numbers of the obtained available blade gray level images in the original video image sequence are stored in the array pic_index1, thereby obtaining an available blade gray level image sequence. In this case, the original blade region acquisition unit 330 can use the available blade gray level images corresponding to the index numbers saved in pic_index1 as an index of blade image region information, and extract only the image information of the original blade region from the original images in the original video image sequence corresponding to those index numbers, thereby generating images containing only the original blade region and obtaining an image sequence containing only the original blade region.
For example, through the above processing, the image anomaly detection unit 300 can convert an image in the original video image sequence similar to (a) in FIG. 16 into an image as shown in (b) in FIG. 16; that is, all background region image information is filtered out, and the resulting image contains only the image information of the original blade region. This effectively prevents image noise from interfering with the extraction of abnormal image features (for example, icing features). In addition, the original blade region obtained by the above processing helps in judging the icing type. In particular, since the extracted original blade region is essentially the region that moves relative to the background, features fixed to the blade (for example, ice frozen onto the blade) are extracted along with it. Therefore, when the icing feature on the blade extends beyond the standard blade contour, the blade region extracted by the above background segmentation process differs from the standard blade contour, and the icing feature can then be judged directly by analyzing the contour shape of the extracted blade region.
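The background-removal step of FIG. 16 amounts to masking the original frame with the segmented blade mask. A minimal NumPy sketch, with all array names and sizes invented for illustration:

```python
import numpy as np

def keep_blade_region(original, blade_mask):
    """Zero out every background pixel, keeping only the pixels where the
    segmented blade mask is set (cf. FIG. 16 (a) -> (b))."""
    out = np.zeros_like(original)
    out[blade_mask] = original[blade_mask]
    return out

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True          # pretend these pixels belong to the blade

blade_only = keep_blade_region(frame, mask)
print(int(blade_only.sum()))   # 5 + 6 + 9 + 10 = 30
```

Everything outside the mask becomes zero, so later feature analysis only ever sees blade pixels.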
In addition, as shown in FIG. 9, the image anomaly detection unit 300 may further include an original blade region processing unit 340, where the original blade region processing unit 340 can divide the image containing only the blade region generated by the original blade region acquisition unit 330 into multiple subregions, analyze the multiple subregions to determine the region in which an abnormal image feature exists, and encode the information of the multiple subregions.
In particular, the original blade region processing unit 340 evenly divides the image containing only the original blade region generated by the original blade region acquisition unit 330 into N first subregions along the image width direction, judges in parallel whether an abnormal image feature exists in each of the N first subregions, and evenly divides only each first subregion in which an abnormal image feature exists into M second subregions along the image length direction, without dividing the other first subregions in which no abnormal image feature exists. In addition, the original blade region processing unit 340 can encode the information of each first subregion of the N first subregions in which no abnormal image feature exists, and encode the information of each second subregion of the M second subregions in which an abnormal image feature exists. This is described in detail below with reference to FIG. 17 to FIG. 21.
For example, as shown in FIG. 17, the original blade region processing unit 340 can evenly divide the entire image region into N first subregions along the image width direction, for example, 6 first subregions, although the invention is not limited thereto. As an example, assuming that the entire image region is divided into N first subregions and the number of rows of each frame image pixel matrix is Rows = 1080, the number of pixel rows Band_i occupied by each first subregion can be calculated by Equation 21:
Band_i=Rows/N (21)
Then the upper boundary of the i-th first subregion is 1 + Band_i × (i − 1), and the lower boundary is Band_i + Band_i × (i − 1), where i = 1, 2, 3, …, N. By this process, a frame of blade gray level image can thus be divided into N first subregions. By splitting the image region in this manner, the original blade region processing unit 340 ensures that the position and size of each first subregion do not change with the rotation of the blade; that is, this is equivalent to adding multiple abnormal-feature monitoring sections to each frame image. Once the blade rotates into the visual monitoring field of view of the video acquisition unit 100, the original blade region processing unit 340 can use these N first subregions to scan the image region where the blade is located quickly and in parallel, which not only improves the recognition accuracy of abnormal features but also improves the processing speed of video images through fast parallel processing of a large number of video images.
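Equation 21 and the boundary formulas above can be checked with a short sketch (assuming, as in the text, Rows = 1080 and N = 6; the function name is hypothetical):

```python
def first_subregion_bounds(rows=1080, n=6):
    """Upper/lower pixel-row boundaries of the N first subregions (Eq. 21):
    Band_i = Rows / N, and subregion i spans rows
    1 + Band_i*(i-1) .. Band_i + Band_i*(i-1) = Band_i*i."""
    band = rows // n
    return [(1 + band * (i - 1), band * i) for i in range(1, n + 1)]

bounds = first_subregion_bounds()
print(bounds[0])    # (1, 180)   -> first strip of 180 pixel rows
print(bounds[-1])   # (901, 1080) -> last strip ends at the image bottom
```

Because the strips are defined on fixed pixel rows, their position and size never depend on where the blade happens to be in the frame.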
If, after performing analysis processing on the N first subregions, the original blade region processing unit 340 finds that an abnormal image feature exists in one of the N first subregions and no abnormal image feature exists in the other first subregions (as shown in FIG. 18), the original blade region processing unit 340 can evenly divide only the first subregion in which the abnormal image feature exists into M second subregions along the image length direction, for example, into 24 second subregions, as shown in FIG. 19, although the invention is not limited thereto. As an example, assuming that the number of columns of each frame image pixel matrix is Cols = 1920, the width Width_j of each second subregion can be calculated by Equation 22:

Width_j = Cols/M (22)
Then the left boundary of the j-th second subregion is 1 + Width_j × (j − 1), and the right boundary is Width_j + Width_j × (j − 1), where j = 1, 2, 3, …, M. By this process, the first subregion can thus be divided into M second subregions. Thereafter, the original blade region processing unit 340 can determine, by performing analysis processing on these M second subregions in parallel, in which of the M second subregions an abnormal image feature exists. When performing analysis processing on the N first subregions and the M second subregions to determine abnormal image features, the original blade region processing unit 340 can use various methods known in the prior art to analyze each first subregion or each second subregion for abnormal image features, for example, by calculating the first-order gradient, second-order gradient, or average value within a first subregion to determine whether an abnormal image feature exists in that first subregion, although the invention is not limited thereto.
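The second-level split follows the same pattern along the columns (assuming Cols = 1920 and M = 24 as in the example; the function name is hypothetical):

```python
def second_subregion_bounds(cols=1920, m=24):
    """Left/right column boundaries of the M second subregions (Eq. 22):
    Width_j = Cols / M, and subregion j spans columns
    1 + Width_j*(j-1) .. Width_j + Width_j*(j-1) = Width_j*j."""
    width = cols // m
    return [(1 + width * (j - 1), width * j) for j in range(1, m + 1)]

cols_bounds = second_subregion_bounds()
print(cols_bounds[0])    # (1, 80)
print(cols_bounds[-1])   # (1841, 1920)
```

Only the flagged first subregion is subdivided this way, so the fine-grained scan is spent exclusively on the strip that actually showed an anomaly.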
Further, since there is a one-to-one correspondence between the position coordinates of the intersection point Q determined by the blade position recognition unit 320 and the position of the blade region in the image, after the specific second subregion in which an abnormal image feature exists has been determined by the above procedure, the original blade region processing unit 340 can use the position coordinates of the intersection point Q determined by the blade position recognition unit 320 to accurately determine whether the abnormal image feature appears on the blade region. This prevents false positives or missed detections of abnormal blade features, thereby improving the recognition of abnormal features on the blade.
Further, since the blade is constantly rotating when the wind turbine operates normally, the position of the blade in the visual monitoring field of view of the video acquisition unit changes constantly over time; that is, the blade image information contained in each first subregion or second subregion changes constantly from moment to moment, and the image information of different blades in the same subregion may also differ. Therefore, by encoding the information of the second subregions in which abnormal image features exist and the information of the first subregions in which no abnormal image feature exists, the original blade region processing unit 340 can track and identify each first subregion and each second subregion.
In particular, when encoding the information of each first subregion of the N first subregions in which no abnormal image feature exists and the information of each second subregion of the M second subregions in which an abnormal image feature exists, the encoded information may include the following: the time information Time of the current image; the regional information Place of the wind turbine; the blade number Blade of the blade in the current image; the y-coordinate value Node of the intersection point of the blade leading edge line in the current image and the lower edge line of the gray level image of the current image; and the index information Region composed of the partition number of the first subregion and the partition number of the second subregion. When encoding the information of a first subregion in which no abnormal image feature exists, the partition number of the first subregion in the index information Region is the number of that first subregion among the N first subregions, and the partition number of the second subregion in the index information Region is 0. When encoding the information of a second subregion in which an abnormal image feature exists, the partition number of the first subregion in the index information Region is the number, among the N first subregions, of the first subregion containing that second subregion, and the partition number of the second subregion in the index information Region is the number of that second subregion among the M second subregions. In addition, the encoded information may further include the image information of the corresponding region (that is, a second subregion in which an abnormal image feature exists or a first subregion in which no abnormal image feature exists) and an end mark.
For example, FIG. 20 shows the format of the image information encoding. "Time" indicates the time information of the current image, with the coded format "YMDH", that is, the time is accurate to the hour; for example, "2016021220" indicates that the time information of the current image is 8 p.m. on February 12, 2016. "Place" indicates the regional information of the wind turbine, that is, from which wind turbine of which wind farm the video image sequence containing the current image was obtained; the coded format is "F-T", where "F" indicates the wind farm number and "T" indicates the wind turbine number; for example, "9-12" indicates wind turbine No. 12 of wind farm No. 9. "Blade" indicates the blade number of the blade in the current image, with the coded format "I"; when the wind turbine has 3 blades, the value range of "I" is I = 1, 2, 3; for example, "I" = "2" indicates blade No. 2. "Node" indicates the y-coordinate value of the intersection point of the blade leading edge line in the current image and the lower edge line of the gray level image of the current image, with the coded format "P", where, when the image resolution is 1920 × 1080, the value range of "P" is 1 to 1920; for example, "P" = "345" indicates that the coordinate of the intersection point Q is 345. "Region" indicates the index information composed of the partition number n of the first subregion and the partition number m of the second subregion, with the coded format "n-m", where, for a first subregion in which no abnormal image feature exists, n is the number of that first subregion among the N first subregions and m is 0 (that is, no abnormal image feature exists in that first subregion, so it need not be divided a second time into multiple second subregions); for this case, 0 < n ≤ N, where N indicates the total number of first subregions. For a second subregion in which an abnormal image feature exists, n is the number, among all first subregions, of the first subregion containing that second subregion, and m indicates the number of that second subregion among all second subregions of that first subregion; for this case, 0 < n ≤ N and 0 < m ≤ M, where N indicates the total number of first subregions and M indicates the total number of second subregions obtained when a first subregion is divided into multiple second subregions. For example, as shown in FIG. 21, when an abnormal image feature exists in the 9th second subregion of the 4th first subregion, the coded format of the index information stored in Region is "4-9"; however, no abnormal image feature exists in the other first subregions of FIG. 21 (that is, the first subregions other than the 4th first subregion), so when the information of these first subregions is encoded, the coded format of the index information stored in Region is "i-0", where i indicates the number of the respective first subregion among the N first subregions. The image information "Info" indicates the flag words of the image features (for example, the gradient, maximum, minimum, mean, and variance of the gray values) obtained after the corresponding subregion (that is, a first subregion or a second subregion) has undergone analysis processing by the original blade region processing unit 340; the specific information of each image-feature flag word is not stored in the "Info" field itself, but is stored in the database in the form of variables named after the corresponding flag words. The end mark "End" indicates the end of the image information encoded content.
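The encoding format of FIG. 20 can be illustrated with a hypothetical Python record builder. The field layout follows the text, but the dictionary representation and the "END" end-mark literal are assumptions of this sketch:

```python
from datetime import datetime

def encode_subregion(time, place, blade, node, n, m=0):
    """Build the Time/Place/Blade/Node/Region record of FIG. 20.
    m == 0 marks a first subregion with no abnormal feature; m > 0 marks
    the m-th second subregion inside the n-th first subregion."""
    return {
        "Time": time.strftime("%Y%m%d%H"),   # coded format "YMDH"
        "Place": place,                      # "F-T": farm no. - turbine no.
        "Blade": str(blade),                 # blade number I
        "Node": str(node),                   # intersection-point coordinate
        "Region": f"{n}-{m}",                # coded format "n-m"
        "End": "END",                        # assumed end-mark literal
    }

# The worked example from the text: 8 p.m. on Feb 12, 2016, turbine 12 of
# farm 9, blade 2, Node 345, anomaly in 2nd-subregion 9 of 1st-subregion 4.
rec = encode_subregion(datetime(2016, 2, 12, 20), "9-12", 2, 345, 4, 9)
print(rec["Time"])    # 2016021220
print(rec["Region"])  # 4-9
```

A clean first subregion would be encoded with `m=0`, giving for example `Region = "3-0"`.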
FIG. 22 is a flowchart of a method for monitoring a blade of a wind turbine according to an exemplary embodiment of the present invention.
Referring to FIG. 22, in step 2210, video acquisition is performed on the wind turbine to obtain a video image sequence. The step of performing video acquisition on the wind turbine to obtain the video image sequence includes: at predetermined time intervals, adjusting the camera lens used for video acquisition to the acquisition position point, acquiring a video image sequence of predetermined time length, and adjusting the lens back to the zero position point after the acquisition of the video image sequence ends. Step 2210 is described in detail below with reference to FIG. 23.
FIG. 23 is a detailed flowchart of performing video acquisition on the wind turbine to obtain a video image sequence.
In step 2211, a zero position is set for the video acquisition unit 100, where the zero position is the parking position of the video acquisition unit 100 when no video image sequence is being acquired. In order to protect the lens of the camera unit 104, the position in which the lens of the camera unit 104 points vertically downward can be set as the zero position.
After the zero position setting is completed, in step 2212, the zero position is set as a preset point.
In step 2213, the lens included in the camera unit 104 is adjusted to the acquisition position point, and the cleaning unit 103 is controlled to clean the lens of the camera unit 104, where the acquisition position point is a position, defined relative to the preset point, for performing video acquisition.
In step 2214, the camera unit 104 is controlled to acquire a video image sequence of predetermined time length.
After the video image sequence of predetermined time length has been acquired, in step 2215, the preset point set in advance is invoked to adjust the lens back to the zero position. Thereafter, after a predetermined time, the method returns to step 2213 to perform the next round of video image sequence acquisition.
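The acquisition cycle of steps 2211–2215 can be summarized as a mock controller. All class and method names here are invented; a real implementation would drive pan–tilt–zoom hardware and the cleaning unit 103 rather than append to a log:

```python
class CameraRig:
    """Minimal mock of the acquisition cycle of FIG. 23 (steps 2211-2215)."""

    def __init__(self):
        self.log = []

    def set_zero_position(self):
        self.log.append("zero_set")          # step 2211: lens vertically down

    def save_preset(self):
        self.log.append("preset_saved")      # step 2212: zero -> preset point

    def acquisition_cycle(self):
        self.log.append("moved_to_acquisition_point")  # step 2213
        self.log.append("lens_cleaned")                # step 2213
        self.log.append("sequence_captured")           # step 2214
        self.log.append("returned_to_zero")            # step 2215

rig = CameraRig()
rig.set_zero_position()
rig.save_preset()
rig.acquisition_cycle()
print(rig.log[-1])   # returned_to_zero
```

The cycle always ends back at the zero position, so the lens is parked facing down between acquisitions.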
Referring back to FIG. 22, in step 2220, a background image is extracted from the obtained video image sequence. The step of extracting the background image from the obtained video image sequence may include: estimating the background image using multiple frames of the video image sequence based on either the mean estimation method or the median estimation method. Since this has been described in detail above with reference to FIG. 6 and FIG. 7, repeated description is omitted here.
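As a hedged illustration of the median estimation method: the per-pixel median over a frame stack suppresses the moving blade and keeps the static background. The data here is synthetic and the function name is invented; this is not the patent's implementation:

```python
import numpy as np

def estimate_background(frames):
    """Median-estimation background: for each pixel, take the median over
    the frame stack, so the moving blade is suppressed and the static
    background remains."""
    stack = np.stack(frames).astype(np.float64)
    return np.median(stack, axis=0).astype(np.uint8)

# Static background of value 100 with a bright "blade" sweeping by:
frames = []
for k in range(5):
    f = np.full((4, 4), 100, dtype=np.uint8)
    f[:, k % 4] = 255          # blade occupies a different column each frame
    frames.append(f)

bg = estimate_background(frames)
print(int(bg.max()))   # 100: the blade is removed everywhere
```

Because the blade covers any given pixel in only a minority of frames, the median at every pixel is the background value.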
Instead of estimating the background image by the mean estimation method or the median estimation method, an image containing no blade can also be acquired directly from the video image sequence to be used as the background image. When extracting the background image from the video image sequence, a first difference between the inlet gray mean of the 1st frame image and the inlet gray mean of the n-th frame image is compared with an inlet threshold to determine whether the (n−3)-th frame image is to be used as the background image, thereby realizing the update of the background image; after the update of the background image has been realized using the (n−3)-th frame image, a second difference between the outlet gray mean of the 1st frame image and the outlet gray mean of an image after the n-th frame image is compared with an outlet threshold to determine whether the update operation of the background image is restarted, where 3 < n ≤ m, and m is the number of images included in the video image sequence. This is described in detail below with reference to FIG. 24.
Referring to FIG. 24, in step 2401, the 1st frame image is obtained from the video image sequence.
In step 2402, a blade inlet region E and a blade outlet region F are determined in the gray level image of the 1st frame image.
In step 2403, the gray mean G_E of the blade inlet region E and the gray mean G_F of the blade outlet region F are calculated using Equation 5, and the inlet gray mean G_E of the blade inlet region E and the outlet gray mean G_F of the blade outlet region F are respectively determined as the inlet reference gray mean C1 and the outlet reference gray mean C2 required for determining the background image.
In step 2404, the n-th frame image is read, the blade inlet region E and the blade outlet region F are determined in the n-th frame image by the same method as for the 1st frame image, a first calculation operation of calculating the inlet gray mean G_En and the outlet gray mean G_Fn of the n-th frame image is performed using Equation 5 by the same method as for the 1st frame image, and the inlet threshold h1_n and the outlet threshold h2_n of the n-th frame image are calculated, where 3 < n ≤ m, and m is the number of images included in the video image sequence. Since this has been described in detail above, repeated description is omitted here.
In step 2405, it is determined whether the first difference C1_n between the inlet gray mean G_En of the n-th frame image and the inlet reference gray mean C1 is greater than the inlet threshold h1_n of the n-th frame image.
If it is determined in step 2405 that the first difference C1_n is less than or equal to the inlet threshold h1_n, the update operation n = n + 1 is performed, and the method proceeds to step 2407 to determine whether n is less than or equal to the number m of images in the video image sequence.
If it is determined in step 2407 that n is less than or equal to m, the method returns to step 2404 to perform the same operations as for the previous frame. If it is determined in step 2407 that n is greater than m, all images in the current video image sequence have been processed, and the method therefore ends.
If it is determined in step 2405 that the first difference C1_n is greater than the inlet threshold h1_n, the (n−3)-th frame image is determined as the background image, and the method proceeds to step 2409 to perform the update operation n = n + 1.
Thereafter, in step 2410, it is determined whether n is less than or equal to the number m of images in the video image sequence.
If it is determined in step 2410 that n is greater than m, all images in the current video image sequence have been processed, and the method therefore ends.
Conversely, if it is determined in step 2410 that n is less than or equal to m, the method proceeds to step 2411, where the n-th frame image is read, the inlet gray mean G_En and the outlet gray mean G_Fn of the n-th frame image are calculated by the same method as for the 1st frame image, and the inlet threshold h1_n and the outlet threshold h2_n of the n-th frame image are calculated by the same method as for the (n−1)-th frame.
Thereafter, in step 2412, it is determined whether the second difference C2_n of the n-th frame image is greater than the outlet threshold h2_n of the n-th frame image.
If it is determined in step 2412 that the second difference C2_n is less than or equal to the outlet threshold h2_n of the n-th frame image, the method proceeds to step 2409 to perform the update operation n = n + 1 and carry out the subsequent operations.
Conversely, if it is determined in step 2412 that the second difference C2_n is greater than the outlet threshold h2_n of the n-th frame image, the blade has just rotated out of the blade outlet region of the n-th frame, so a new round of blade inlet threshold condition judgment needs to be restarted; the method therefore proceeds to step 2406 to perform the update operation n = n + 1 and carry out the subsequent operations.
Through the above procedure, continuous update of the background image can be achieved, thereby weakening or eliminating the error introduced as the background changes over time.
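Under the simplifying assumption that the inlet and outlet tests reduce to comparing gray-mean drifts against fixed thresholds (the patent computes the thresholds per frame), the FIG. 24 update loop might look like the following sketch. All names, ROI shapes, and the synthetic data are illustrative:

```python
import numpy as np

def update_background(frames, inlet_roi, outlet_roi, h1, h2):
    """Sketch of the FIG. 24 loop: when the inlet-region gray mean of
    frame n drifts from the reference by more than h1 (blade entering),
    frame n-3 becomes the background; the outlet test (h2) then rearms
    the inlet test for the next pass."""
    c1 = frames[0][inlet_roi].mean()          # inlet reference gray mean
    c2 = frames[0][outlet_roi].mean()         # outlet reference gray mean
    background = None
    armed = True                              # inlet test armed?
    for n in range(3, len(frames)):
        if armed:
            if abs(frames[n][inlet_roi].mean() - c1) > h1:
                background = frames[n - 3]    # blade entering: use frame n-3
                armed = False
        elif abs(frames[n][outlet_roi].mean() - c2) > h2:
            armed = True                      # blade left outlet: rearm
    return background

frames = []
for i in range(10):
    f = np.full((8, 8), 50.0)
    f[4, 4] = i                # tag each frame so we can identify it
    if i in (5, 6, 7):
        f[0:2, :] = 200.0      # blade brightening the inlet region
    frames.append(f)

bg = update_background(frames, (slice(0, 2), slice(None)),
                       (slice(6, 8), slice(None)), h1=20.0, h2=20.0)
print(bg[4, 4])   # 2.0: frame n-3 = frame 2 became the background
```

The three-frame lag guarantees the chosen background frame predates the blade's entry into the inlet region.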
Referring back to FIG. 22, after the background image has been obtained in step 2220, in step 2230, whether an abnormal image feature exists on the blade in the images of the video image sequence is detected based on the extracted background image. This is described in detail below with reference to FIG. 25.
As shown in FIG. 25, in step 2510, one or more frames of blade gray level images are obtained using the gray level image of the background image and the gray level images of one or more frames containing the blade in the video image sequence. The step of obtaining the one or more frames of blade gray level images includes: converting each frame image, from a frame image in the video image sequence that satisfies a first condition up to a frame image that satisfies a second condition, into a gray level image, where the first condition is that the first difference C1_n of the p-th frame image is greater than the inlet threshold h1_n of the p-th frame image, and the second condition is that the second difference C2_n of the q-th frame image is greater than the outlet threshold h2_n of the q-th frame image, where p and q are integers and p < q. On this basis, a difference operation can be performed between the gray level image of each frame image and the gray level image of the background image to obtain the one or more frames of blade gray level images. In addition, the step of obtaining the one or more frames of blade gray level images further includes: processing the one or more frames of blade gray level images using either a morphological opening operation or a morphological closing operation. Since this has been described in detail above, it is not repeated here. Through step 2510, the optimized result shown in FIG. 11, that is, the final blade gray level image, can be obtained, in which the noise present in the blade gray level image of FIG. 10 has all been removed and the contour features of the blade have been accurately extracted.
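The difference-plus-morphology step can be sketched with plain NumPy. The 3×3 structuring element, the threshold of 30, and the hand-rolled opening are illustrative stand-ins for a library call such as OpenCV's `cv2.morphologyEx`:

```python
import numpy as np

def blade_mask(frame_gray, background_gray, diff_thresh=30):
    """Difference against the background, then a 3x3 binary opening
    (erosion followed by dilation) to strip isolated noise pixels."""
    diff = np.abs(frame_gray.astype(np.int16) - background_gray.astype(np.int16))
    mask = diff > diff_thresh

    def erode(m):
        out = np.zeros_like(m)
        out[1:-1, 1:-1] = (m[:-2, :-2] & m[:-2, 1:-1] & m[:-2, 2:] &
                           m[1:-1, :-2] & m[1:-1, 1:-1] & m[1:-1, 2:] &
                           m[2:, :-2] & m[2:, 1:-1] & m[2:, 2:])
        return out

    def dilate(m):
        out = np.zeros_like(m)
        out[1:-1, 1:-1] = (m[:-2, :-2] | m[:-2, 1:-1] | m[:-2, 2:] |
                           m[1:-1, :-2] | m[1:-1, 1:-1] | m[1:-1, 2:] |
                           m[2:, :-2] | m[2:, 1:-1] | m[2:, 2:])
        return out

    return dilate(erode(mask))   # opening = erosion then dilation

bg = np.full((7, 7), 100, dtype=np.uint8)
frame = bg.copy()
frame[1:6, 2:5] = 220          # 5x3 "blade" patch
frame[0, 6] = 220              # single-pixel noise speck
mask = blade_mask(frame, bg)
print(int(mask[0, 6]))         # 0: speck removed by the opening
print(int(mask[3, 3]))         # 1: blade interior survives
```

The opening removes structures smaller than the structuring element (the lone speck) while preserving the connected blade blob, which is exactly the cleanup visible between FIG. 10 and FIG. 11.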
In step 2520, blade position recognition is performed for each frame of blade gray level image in the one or more frames of blade gray level images, so as to screen the one or more frames of blade gray level images. The step of screening the one or more frames of blade gray level images includes: performing blade position recognition on the processed one or more frames of blade gray level images using an autoregression model, and screening the processed one or more frames of blade gray level images to obtain available blade gray level images with complete blade contours. The process of screening the processed one or more frames of blade gray level images to obtain available blade gray level images with complete blade contours is described in detail later with reference to FIG. 26.
After the available blade gray level images with complete blade contours have been obtained, in step 2530, the original blade region is extracted, using the screened-out available blade gray level images, from the original images corresponding to the available blade gray level images, so as to generate images containing only the original blade region.
After the images containing only the original blade region have been generated, in step 2540, each generated image containing only the original blade region is divided into multiple subregions, the multiple subregions are analyzed to determine the subregion in which an abnormal image feature exists, and the information of the multiple subregions is encoded. The dividing step and the encoding step may include: evenly dividing the generated image containing only the blade region into N first subregions along the image width direction; judging in parallel whether an abnormal image feature exists in each of the N first subregions; evenly dividing only each first subregion of the N first subregions in which an abnormal image feature exists into M second subregions along the image length direction; encoding the information of each first subregion of the N first subregions in which no abnormal image feature exists; and encoding the information of each second subregion of the M second subregions in which an abnormal image feature exists. When encoding the information of each first subregion of the N first subregions in which no abnormal image feature exists and the information of each second subregion of the M second subregions in which an abnormal image feature exists, the encoded information includes the following: the time information "Time" of the current image; the regional information "Place" of the wind turbine; the blade number "Blade" of the blade in the current image; the y-coordinate value "Node" of the intersection point of the blade leading edge line in the current image and the lower edge line of the gray level image of the current image; and the index information "Region" composed of the partition number of the first subregion and the partition number of the second subregion, where, for a first subregion in which no abnormal image feature exists, the partition number of the first subregion in the index information is the number of that first subregion among the N first subregions and the partition number of the second subregion in the index information is 0, while for a second subregion in which an abnormal image feature exists, the partition number of the first subregion in the index information is the number, among the N first subregions, of the first subregion containing that second subregion, and the partition number of the second subregion in the index information is the number of that second subregion among the M second subregions. In addition, the encoded information may further include the image information "Info" of the corresponding subregion (that is, a second subregion in which an abnormal image feature exists or a first subregion in which no abnormal image feature exists) and the end mark "End". Since this has been described in detail above with reference to FIG. 16 to FIG. 21, it is not repeated here.
When the processed one or more frames of blade gray level images are screened to obtain available blade gray level images with complete blade contours, if it is determined by using the autoregression model that the preliminarily selected intersection point of the lower edge line of a frame of blade gray level image and the blade leading edge line, detected via blade edge detection, and the preliminarily selected blade tip point of that frame of blade gray level image are located at the corresponding positions on the blade leading edge line and are not affected by noise, then that frame of blade gray level image is determined to be an available blade gray level image with a complete blade contour. This is described in detail below with reference to FIG. 26.
FIG. 26 is a flowchart showing the method of screening the processed one or more frames of blade gray level images to obtain available blade gray level images with complete blade contours.
Referring to FIG. 26, in step 2610, blade edge detection is performed, using the Sobel operator, on a frame of blade gray level image in the processed one or more frames of blade gray level images, and the coordinate values of the blade edge positions are extracted.
In step 2620, unary linear regression analysis is performed on the x-coordinate value and y-coordinate value of the preliminarily selected blade tip point in the frame of blade gray level image and on the x-coordinate value and y-coordinate value of the preliminarily selected intersection point of the blade leading edge line and the lower edge line of the frame of blade gray level image, so as to obtain a fitted straight line.
In step 2630, correlation calculation is performed on a first sequence, composed of the x-coordinate values of all pixel points on the fitted straight line between the intersection point and the blade tip point, and a second sequence, composed of the following x-coordinate values, so as to obtain a correlation coefficient, where the i-th x-coordinate value in the second sequence is the maximum of all x-coordinate values corresponding to the i-th y-coordinate value among the y-coordinate values of all pixel points in the blade region of the frame of blade gray level image, with 0 < i ≤ I, where I is the number of all pixel points.
In step 2640, it is determined whether the first correlation coefficient is greater than the predetermined threshold.
If it is determined in step 2640 that the first correlation coefficient is greater than the predetermined threshold, it is determined in step 2650 that the current frame of blade gray level image is an available blade gray level image with a complete blade contour.
If it is determined in step 2640 that the first correlation coefficient is less than or equal to the predetermined threshold, it is determined in step 2660 that the current frame of blade gray level image is not an available blade gray level image with a complete blade contour. Since the above content has been described in detail with reference to FIG. 13, it is not repeated here.
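Steps 2620–2640 can be condensed into a synthetic end-to-end check. The straight-edge test data, the function name, and the 0.9 threshold are assumptions; a real frame would supply the Sobel-extracted edge coordinates from step 2610:

```python
import numpy as np

def screen_frame(edge_points, tip, q, rho_threshold=0.9):
    """Steps 2620-2640 in miniature: fit a line through the preliminary tip
    point and intersection point Q, then correlate the fitted x values with
    the per-row maximum measured edge x values."""
    # Step 2620: unary linear regression x = a*y + b through tip and Q.
    ys = np.array([tip[1], q[1]], dtype=float)
    xs = np.array([tip[0], q[0]], dtype=float)
    a, b = np.polyfit(ys, xs, 1)

    # Step 2630: first sequence = fitted x for every row between tip and Q;
    # second sequence = maximum measured edge x per row.
    rows = np.arange(min(tip[1], q[1]), max(tip[1], q[1]) + 1)
    first = a * rows + b
    second = np.array([edge_points[int(r)].max() for r in rows])

    # Step 2640: accept the frame if the sequences correlate strongly.
    rho = np.corrcoef(first, second)[0, 1]
    return rho > rho_threshold

# Synthetic straight leading edge from tip (x=50, y=0) to Q (x=150, y=100),
# plus a spurious low-x point in every row that the max() step ignores:
edge = {r: np.array([50.0 + r, 40.0]) for r in range(101)}
print(bool(screen_frame(edge, (50, 0), (150, 100))))   # True: frame accepted
```

On a complete, noise-free contour the two sequences coincide and the correlation is essentially 1; a broken or noisy edge drags the per-row maxima off the fitted line and the frame is rejected.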
In addition, the present invention also provides a computer-readable storage medium storing a program, where the program may include instructions for performing the various operations in the above method for monitoring a blade of a wind turbine. Specifically, the program may include instructions for performing each step described in FIG. 22 to FIG. 26.
In addition, the present invention also provides a computer including a readable medium storing a computer program, where the program includes instructions for executing the various operations of the above-described method for monitoring the blades of a wind turbine. Specifically, the program may include instructions for executing each step described in Figures 22 to 26.
The device and method for monitoring the blades of a wind turbine described above can be used to intuitively obtain abnormality information on a blade. For example, blade icing information such as the icing type, position, and area can be obtained intuitively, providing relatively reliable and effective information for designing de-icing methods. In addition, the device and method can provide a basis for improving the blade structure and control strategy of wind turbines in low-temperature environments, can reduce the operating cost of wind turbines and the operation and maintenance workload of personnel, and can provide accumulated data for subsequent research work. The embodiments of the present invention described above are merely exemplary, and the present invention is not limited thereto. Those skilled in the art should understand that these embodiments can be modified without departing from the principles and spirit of the present invention, the scope of which is defined by the claims and their equivalents.

Claims (32)

1. A device for monitoring blades of a wind turbine, characterized by comprising:
a video acquisition unit that performs video acquisition on the blades of the wind turbine to obtain a video image sequence;
a background image acquiring unit that extracts a background image from the video image sequence obtained by the video acquisition unit;
an image abnormality detection unit that detects, based on the extracted background image, whether an abnormal image feature exists on a blade in an image of the video image sequence.
2. The device as claimed in claim 1, characterized in that the image abnormality detection unit comprises:
a background image segmentation unit that obtains one or more frames of blade gray level image using the gray level image of the background image and the gray level images of one or more blade-containing frame images in the video image sequence,
wherein the one or more frames of blade gray level image are used for the detection by the image abnormality detection unit.
3. The device as claimed in claim 2, characterized in that the image abnormality detection unit further comprises:
a blade position recognition unit that performs blade position recognition on each frame of blade gray level image among the one or more frames of blade gray level image, so as to screen the one or more frames of blade gray level image,
wherein the screened-out available blade gray level images are used for the detection by the image abnormality detection unit.
4. The device as claimed in claim 3, characterized in that the image abnormality detection unit further comprises:
an original blade region acquiring unit that, using the available blade gray level images screened out by the blade position recognition unit, extracts an original blade region from the original images corresponding to the available blade gray level images to generate images containing only the original blade region,
wherein the generated images containing only the blade region are used for the detection by the image abnormality detection unit.
5. The device as claimed in claim 4, characterized in that the image abnormality detection unit further comprises:
an original blade region processing unit that divides each generated image containing only the blade region into a plurality of subregions, analyzes the plurality of subregions to determine regions in which an abnormal image feature exists, and encodes information of the plurality of subregions.
6. The device as claimed in claim 1, characterized in that, at predetermined time intervals, the video acquisition unit adjusts a lens included in the video acquisition unit to an acquisition position and acquires a video image sequence of predetermined time length, and returns the lens to a zero position after acquisition of the video image sequence.
7. The device as claimed in claim 1, characterized in that the background image acquiring unit estimates the background image using multiple frame images of the video image sequence based on either an averaging estimation method or a median estimation method.
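Claim 7's two estimation methods reduce to a per-pixel statistic over a stack of frames. The sketch below is an assumption-laden illustration (frames represented as flat lists of gray values, helper name invented), not the patent's implementation:

```python
from statistics import mean, median

def estimate_background(frames, method="mean"):
    """Per-pixel mean or median over a stack of gray level frames,
    each frame given as a flat list of pixel gray values."""
    agg = mean if method == "mean" else median
    # zip(*frames) groups the values of one pixel position across frames
    return [agg(pixels) for pixels in zip(*frames)]
```

The median variant is more robust when a blade sweeps through a minority of the frames, since the moving blade's gray values are outliers at each pixel.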
8. The device as claimed in claim 3, characterized in that the background image acquiring unit directly uses an image not containing a blade in the video image sequence as the background image.
9. The device as claimed in claim 8, characterized in that:
the background image acquiring unit determines whether to use the (n-3)-th frame image as the background image, so as to update the background image, by comparing a first difference, between the inlet gray level mean of the 1st frame image and the inlet gray level mean of the n-th frame image, with an inlet threshold;
after updating the background image with the (n-3)-th frame image, the background image acquiring unit determines whether to restart the background image update operation by comparing a second difference, between the outlet gray level mean of the 1st frame image and the outlet gray level mean of an image after the n-th frame image, with an outlet threshold,
wherein 3 < n ≤ m, and m is the number of images in the video image sequence.
10. The device as claimed in claim 9, characterized in that, whenever the background image acquiring unit calculates the first difference and the second difference of a frame image, the background image acquiring unit stores the first difference and the second difference into a first difference sequence and a second difference sequence respectively, updates the inlet threshold to a value obtained by multiplying the maximum of all first differences in the first difference sequence by a first predetermined multiple, and updates the outlet threshold to a value obtained by multiplying the maximum of all second differences in the second difference sequence by a second predetermined multiple.
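The adaptive thresholding of claim 10 can be sketched as follows; the helper name and the multiple's value used in the test are assumptions, and the same logic applies to both the inlet and the outlet difference sequences:

```python
def record_and_update_threshold(diff, diff_sequence, multiple):
    """Claim 10 (sketch): append the newly computed difference to its
    sequence and return the updated threshold, i.e. the maximum
    difference observed so far scaled by a predetermined multiple."""
    diff_sequence.append(diff)
    return max(diff_sequence) * multiple
```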
11. The device as claimed in claim 9, characterized in that the background image segmentation unit converts into a gray level image each frame image, among the one or more frame images in the video image sequence located after a frame image whose first difference exceeds the inlet threshold and before a frame image whose second difference exceeds the outlet threshold, and performs a difference operation between the gray level image of each such frame image and the gray level image of the background image to obtain the one or more frames of blade gray level image.
12. The device as claimed in claim 11, characterized in that the background image segmentation unit processes the one or more frames of blade gray level image using either a morphological opening operation or a morphological closing operation.
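The difference operation of claim 11 and the morphological operations of claim 12 can be sketched together. This is a minimal pure-Python illustration with a 3x3 structuring element and an assumed noise floor, not the patent's implementation (a production version would use a library such as OpenCV):

```python
def blade_gray_image(frame, background, noise_floor=25):
    """Claim 11 (sketch): per-pixel absolute difference between a frame
    and the background; the noise_floor cut-off is an assumption."""
    out = []
    for fr, br in zip(frame, background):
        row = []
        for f, b in zip(fr, br):
            d = abs(f - b)
            row.append(d if d > noise_floor else 0)
        out.append(row)
    return out

def erode(img):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(img):
    """Morphological opening (erosion then dilation): removes isolated
    noise pixels left by the difference operation."""
    return dilate(erode(img))

def closing(img):
    """Morphological closing (dilation then erosion): fills small holes
    inside the blade silhouette."""
    return erode(dilate(img))
```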
13. The device as claimed in claim 12, characterized in that the blade position recognition unit performs blade position recognition on the processed one or more frames of blade gray level image using an autoregressive model, so as to screen the processed one or more frames of blade gray level image to obtain available blade gray level images having a complete blade profile.
14. The device as claimed in claim 13, characterized in that, when the preliminarily selected intersection point between the lower edge line of a frame of blade gray level image and the blade leading edge line, determined via blade edge detection using the autoregressive model, and the preliminarily selected blade tip point of the frame of blade gray level image are located at positions corresponding to the blade leading edge line and are not affected by noise, the blade position recognition unit determines that the frame of blade gray level image is an available blade gray level image having a complete blade profile.
15. The device as claimed in claim 4, characterized in that the original blade region processing unit evenly divides the generated image containing only the blade region into N first subregions along the image width direction, concurrently determines whether an abnormal image feature exists in each of the N first subregions, evenly divides each first subregion of the N first subregions in which an abnormal image feature exists into M second subregions along the image length direction, and encodes, respectively, information of each first subregion of the N first subregions in which no abnormal image feature exists and information of each second subregion of the M second subregions in which an abnormal image feature exists.
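The two-level subdivision and encoding of claim 15 can be sketched as follows. The tuple-based code format and the `is_abnormal` predicate are assumptions, since the claim does not fix a concrete encoding; the concurrent evaluation of the first subregions is left out for brevity:

```python
def encode_anomalies(img, n, m, is_abnormal):
    """Claim 15 (sketch): split the blade image into n first subregions
    along the width; each abnormal first subregion is further split
    into m second subregions along the length.  Emits (strip, block,
    flag) tuples, with block None for an undivided first subregion."""
    h, w = len(img), len(img[0])
    codes = []
    for i in range(n):
        strip = [row[i * w // n:(i + 1) * w // n] for row in img]
        if not is_abnormal(strip):
            codes.append((i, None, 0))      # normal first subregion
            continue
        for j in range(m):                  # subdivide along the length
            block = strip[j * h // m:(j + 1) * h // m]
            codes.append((i, j, 1 if is_abnormal(block) else 0))
    return codes
```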
16. A method for monitoring blades of a wind turbine, characterized by comprising:
performing video acquisition on the blades of the wind turbine to obtain a video image sequence;
extracting a background image from the obtained video image sequence;
detecting, based on the extracted background image, whether an abnormal image feature exists on a blade in an image of the video image sequence.
17. The method as claimed in claim 16, characterized in that the detecting step comprises:
obtaining one or more frames of blade gray level image using the gray level image of the background image and the gray level images of one or more blade-containing frame images in the video image sequence,
wherein the one or more frames of blade gray level image are used for performing the detection.
18. The method as claimed in claim 17, characterized in that the detecting step further comprises:
performing blade position recognition on each frame of blade gray level image among the one or more frames of blade gray level image, so as to screen the one or more frames of blade gray level image,
wherein the screened-out available blade gray level images are used for performing the detection.
19. The method as claimed in claim 18, characterized in that the detecting step further comprises:
extracting, using the screened-out available blade gray level images, an original blade region from the original images corresponding to the available blade gray level images to generate images containing only the original blade region,
wherein the generated images containing only the original blade region are used for performing the detection.
20. The method as claimed in claim 19, characterized in that the detecting step further comprises:
dividing each generated image containing only the original blade region into a plurality of subregions, analyzing the plurality of subregions to determine subregions in which an abnormal image feature exists, and encoding information of the plurality of subregions.
21. The method as claimed in claim 16, characterized in that the step of performing video acquisition on the wind turbine to obtain a video image sequence comprises: adjusting, at predetermined time intervals, a lens used for video acquisition to an acquisition position and acquiring a video image sequence of predetermined time length, and returning the lens to a zero position after acquisition of the video image sequence.
22. The method as claimed in claim 16, characterized in that the step of extracting a background image from the obtained video image sequence comprises: estimating the background image using multiple frame images of the video image sequence based on either an averaging estimation method or a median estimation method.
23. The method as claimed in claim 19, characterized in that the step of extracting a background image from the video image sequence comprises: directly obtaining an image not containing a blade from the video image sequence as the background image.
24. The method as claimed in claim 23, characterized in that the step of extracting a background image from the video image sequence comprises:
determining whether to use the (n-3)-th frame image as the background image, so as to update the background image, by comparing a first difference, between the inlet gray level mean of the 1st frame image and the inlet gray level mean of the n-th frame image, with an inlet threshold;
after updating the background image with the (n-3)-th frame image, determining whether to restart the background image update operation by comparing a second difference, between the outlet gray level mean of the 1st frame image and the outlet gray level mean of an image after the n-th frame image, with an outlet threshold,
wherein 3 < n ≤ m, and m is the number of images in the video image sequence.
25. The method as claimed in claim 24, characterized in that the step of extracting a background image from the video image sequence further comprises: whenever the first difference and the second difference of a frame image are calculated, storing the first difference and the second difference into a first difference sequence and a second difference sequence respectively, updating the inlet threshold to a value obtained by multiplying the maximum of all first differences in the first difference sequence by a first predetermined multiple, and updating the outlet threshold to a value obtained by multiplying the maximum of all second differences in the second difference sequence by a second predetermined multiple.
26. The method as claimed in claim 24, characterized in that the step of obtaining one or more frames of blade gray level image comprises: converting into a gray level image each frame image, among the one or more frame images in the video image sequence located after a frame image whose first difference exceeds the inlet threshold and before a frame image whose second difference exceeds the outlet threshold, and performing a difference operation between the gray level image of each such frame image and the gray level image of the background image to obtain the one or more frames of blade gray level image.
27. The method as claimed in claim 26, characterized in that the step of obtaining one or more frames of blade gray level image further comprises: processing the one or more frames of blade gray level image using either a morphological opening operation or a morphological closing operation.
28. The method as claimed in claim 27, characterized in that the step of screening the one or more frames of blade gray level image comprises: performing blade position recognition on the processed one or more frames of blade gray level image using an autoregressive model, so as to screen the processed one or more frames of blade gray level image to obtain available blade gray level images having a complete blade profile.
29. The method as claimed in claim 28, characterized in that the step of screening the processed one or more frames of blade gray level image to obtain available blade gray level images having a complete blade profile comprises:
when the preliminarily selected intersection point between the lower edge line of a frame of blade gray level image and the blade leading edge line, determined via blade edge detection using the autoregressive model, and the preliminarily selected blade tip point of the frame of blade gray level image are located at positions corresponding to the blade leading edge line and are not affected by noise, determining that the frame of blade gray level image is an available blade gray level image having a complete blade profile.
30. The method as claimed in claim 19, characterized in that the dividing step and the encoding step comprise: evenly dividing the generated image containing only the blade region into N first subregions along the image width direction; concurrently determining whether an abnormal image feature exists in each of the N first subregions; evenly dividing each first subregion of the N first subregions in which an abnormal image feature exists into M second subregions along the image length direction; and encoding, respectively, information of each first subregion of the N first subregions in which no abnormal image feature exists and information of each second subregion of the M second subregions in which an abnormal image feature exists.
31. A computer readable storage medium storing a program, characterized in that the program includes instructions for executing the operations of any one of claims 16-30.
32. A computer comprising a readable medium storing a computer program, characterized in that the program includes instructions for executing the operations of any one of claims 16-30.
CN201710295739.3A 2017-04-28 2017-04-28 Device and method for monitoring blades of wind turbine generator Active CN108799011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710295739.3A CN108799011B (en) 2017-04-28 2017-04-28 Device and method for monitoring blades of wind turbine generator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710295739.3A CN108799011B (en) 2017-04-28 2017-04-28 Device and method for monitoring blades of wind turbine generator

Publications (2)

Publication Number Publication Date
CN108799011A true CN108799011A (en) 2018-11-13
CN108799011B CN108799011B (en) 2020-01-31

Family

ID=64070325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710295739.3A Active CN108799011B (en) 2017-04-28 2017-04-28 Device and method for monitoring blades of wind turbine generator

Country Status (1)

Country Link
CN (1) CN108799011B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111255636A (en) * 2018-11-30 2020-06-09 北京金风科创风电设备有限公司 Method and device for determining tower clearance of wind generating set
CN111340747A (en) * 2018-11-30 2020-06-26 北京金风科创风电设备有限公司 Method, equipment and system for processing blade image of wind generating set
WO2021036639A1 (en) * 2019-08-31 2021-03-04 深圳市广宁股份有限公司 Intelligent detection method for wind power device and related products
CN113803223A (en) * 2021-08-11 2021-12-17 明阳智慧能源集团股份公司 Method, system, medium and equipment for monitoring icing state of fan blade in real time
CN116823872A (en) * 2023-08-25 2023-09-29 尚特杰电力科技有限公司 Fan inspection method and system based on target tracking and image segmentation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006030183A1 (en) * 2004-09-14 2006-03-23 The University Of Manchester Control of a doubly-fed induction generator
CN101188745A (en) * 2007-11-27 2008-05-28 北京中星微电子有限公司 Intelligent drowning video monitoring system and method for natatorium
CN102526913A (en) * 2011-12-12 2012-07-04 上海东锐风电技术有限公司 Wind power generator cabin fire-extinguishing system
CN103982378A (en) * 2014-04-25 2014-08-13 广东电网公司电力科学研究院 Method for diagnosing surface icing faults of wind power generator blade of power system based on machine visual images
CN104239887A (en) * 2014-09-16 2014-12-24 张鸿 Medical image processing method and device
CN205779505U (en) * 2016-06-30 2016-12-07 大唐陕县风力发电有限责任公司 Wind power plant based on unmanned plane inspection tour system
JP2017053292A (en) * 2015-09-11 2017-03-16 三菱重工業株式会社 Wind power generator and inclusion method thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006030183A1 (en) * 2004-09-14 2006-03-23 The University Of Manchester Control of a doubly-fed induction generator
CN101188745A (en) * 2007-11-27 2008-05-28 北京中星微电子有限公司 Intelligent drowning video monitoring system and method for natatorium
CN102526913A (en) * 2011-12-12 2012-07-04 上海东锐风电技术有限公司 Wind power generator cabin fire-extinguishing system
CN103982378A (en) * 2014-04-25 2014-08-13 广东电网公司电力科学研究院 Method for diagnosing surface icing faults of wind power generator blade of power system based on machine visual images
CN104239887A (en) * 2014-09-16 2014-12-24 张鸿 Medical image processing method and device
JP2017053292A (en) * 2015-09-11 2017-03-16 三菱重工業株式会社 Wind power generator and inclusion method thereof
CN205779505U (en) * 2016-06-30 2016-12-07 大唐陕县风力发电有限责任公司 Wind power plant based on unmanned plane inspection tour system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111255636A (en) * 2018-11-30 2020-06-09 北京金风科创风电设备有限公司 Method and device for determining tower clearance of wind generating set
CN111340747A (en) * 2018-11-30 2020-06-26 北京金风科创风电设备有限公司 Method, equipment and system for processing blade image of wind generating set
CN111255636B (en) * 2018-11-30 2023-07-25 北京金风科创风电设备有限公司 Method and device for determining tower clearance of wind generating set
CN111340747B (en) * 2018-11-30 2024-04-19 北京金风科创风电设备有限公司 Method, equipment and system for processing blade image of wind generating set
WO2021036639A1 (en) * 2019-08-31 2021-03-04 深圳市广宁股份有限公司 Intelligent detection method for wind power device and related products
CN113803223A (en) * 2021-08-11 2021-12-17 明阳智慧能源集团股份公司 Method, system, medium and equipment for monitoring icing state of fan blade in real time
CN113803223B (en) * 2021-08-11 2022-12-20 明阳智慧能源集团股份公司 Method, system, medium and equipment for monitoring icing state of fan blade in real time
CN116823872A (en) * 2023-08-25 2023-09-29 尚特杰电力科技有限公司 Fan inspection method and system based on target tracking and image segmentation
CN116823872B (en) * 2023-08-25 2024-01-26 尚特杰电力科技有限公司 Fan inspection method and system based on target tracking and image segmentation

Also Published As

Publication number Publication date
CN108799011B (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN108799011A (en) Device and method for monitoring blades of wind turbine generator
CN105373135B (en) A kind of method and system of aircraft docking guidance and plane type recognition based on machine vision
JP5325899B2 (en) Intrusion alarm video processor
CN109447168A (en) A kind of safety cap wearing detection method detected based on depth characteristic and video object
KR101183105B1 (en) Method of establishing information of cloud data and establishing system of information of cloud data
CN101635835A (en) Intelligent video monitoring method and system thereof
CN108764167A (en) A kind of target of space time correlation recognition methods and system again
CN104134222A (en) Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
WO2022078182A1 (en) Throwing position acquisition method and apparatus, computer device and storage medium
CN106952293B (en) Target tracking method based on nonparametric online clustering
Gomez-Rodriguez et al. Smoke monitoring and measurement using image processing: application to forest fires
CN109886994A (en) Adaptive sheltering detection system and method in video tracking
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN103888731A (en) Structured description device and system for mixed video monitoring by means of gun-type camera and dome camera
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN115272876A (en) Remote sensing image ship target detection method based on deep learning
CN111767826A (en) Timing fixed-point scene abnormity detection method
CN112329584A (en) Method, system and equipment for automatically identifying foreign matters in power grid based on machine vision
CN112349150A (en) Video acquisition method and system for airport flight guarantee time node
Liu et al. [Retracted] Self‐Correction Ship Tracking and Counting with Variable Time Window Based on YOLOv3
CN118038153A (en) Method, device, equipment and medium for identifying external damage prevention of distribution overhead line
CN112784914B (en) Pipe gallery video intelligent attribute detection method and system based on cloud processing
CN113792452B (en) Method for inverting rainfall intensity based on video of raindrop speed
CN108198422A (en) A kind of road ponding extraction system and method based on video image
CN106791647A A kind of hydroelectric power plant's condition monitoring system and method based on video intelligent identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant