CN115937114A - Fan picture preprocessing method and device - Google Patents

Fan picture preprocessing method and device

Info

Publication number
CN115937114A
Authority
CN
China
Prior art keywords
pixel point
target
target pixel
boundary line
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211489235.2A
Other languages
Chinese (zh)
Other versions
CN115937114B (en)
Inventor
严超
史晨晨
李志轩
何犇
唐东明
刘珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Haina Intelligent Technology Co ltd
Original Assignee
Wuxi Haina Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Haina Intelligent Technology Co ltd filed Critical Wuxi Haina Intelligent Technology Co ltd
Priority to CN202211489235.2A priority Critical patent/CN115937114B/en
Publication of CN115937114A publication Critical patent/CN115937114A/en
Application granted granted Critical
Publication of CN115937114B publication Critical patent/CN115937114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00: Energy generation through renewable energy sources
    • Y02E10/70: Wind energy
    • Y02E10/72: Wind turbines with rotation axis in wind direction

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a fan picture preprocessing method and device. A fan picture preprocessed by the method can be used for automatic diagnosis and positioning of fan blade defects, such as rapid diagnosis, classification and positioning of surface cracks, corrosion, breakage and the like. The method extracts the fan target area through foreground segmentation and crops the data along the fan blade for defect identification, which avoids small defects becoming unrecognizable through image scaling; an automatic-threshold brightness equalization operation is applied specifically to the cropped target area, which prevents overly dark pictures from impairing defect identification, so that even very small defects on the fan can be identified. Meanwhile, cropping only the target area greatly alleviates the technical problem of resources wasted on detecting a large number of foreground-free pictures.

Description

Fan picture preprocessing method and device
Technical Field
The invention relates to the field of fan inspection, in particular to a fan picture preprocessing method and device.
Background
Defect identification of fan blades is an important component of unmanned aerial vehicle fan inspection. Existing fan blade defect identification methods mainly fall into two types: the first is the traditional manual method, in which inspectors detect defects by observing through a telescope or by descending on ropes; the second is the image-based method, in which a computer vision algorithm identifies and locates defects in images shot by an unmanned aerial vehicle.
The first, traditional fan defect inspection method detects defects through manual observation with a telescope and rope descent. It depends heavily on manual work, requires a large investment of manpower, financial resources and time, has low detection efficiency, and increases the maintenance cost of the fan. The second method mainly acquires fan image data through an unmanned aerial vehicle or similar equipment, preprocesses the acquired image data, and identifies fan defects in the images using corresponding image processing and deep learning algorithms. Deep learning methods place a limit on the resolution of the input image, so the image usually needs to be processed before it is fed to the recognition algorithm.
It should be noted that, in the prior art, after an image is acquired, it is directly scaled and cropped to meet the resolution requirement of the algorithm; however, directly scaling the image to a suitable size with an excessively large scaling ratio may make small-resolution defects disappear in the network, reducing the accuracy of image defect detection.
The present invention has been made in view of the above.
Disclosure of Invention
The invention provides a preprocessing method and a preprocessing device for fan pictures, which solve two technical problems of the prior art: first, that after an image is obtained, every part of the picture is directly cut into many small grids to meet the picture resolution requirement of the algorithm, so that a large number of the small grid pictures contain no foreground and resources are wasted; and second, that downsampling the image impairs the identification of undersized defects on the fan blade.
According to a first aspect of the present invention, a method for preprocessing a fan picture is provided, the method including: acquiring a fan picture; performing foreground segmentation on the fan picture to obtain a segmented image, and highlighting an initial blade obtained by segmentation in the segmented image; determining a target region in the segmented image according to the boundary of the initial blade, wherein the display area of the target region is larger than that of the initial blade; and directly cutting the target area into a plurality of grid pictures with preset sizes without changing the resolution of the fan blade area.
Further, the method further comprises: inputting the grid pictures with the preset sizes into the obtained defect identification model to obtain a defect result, wherein before the target area is cut into the grid pictures with the preset sizes, the method comprises the following steps: acquiring the defect identification model; and adjusting the preset size according to the type of the defect identification model.
Further, determining a target region in the segmented image according to the boundary of the initial blade comprises: acquiring boundary pixel points of the segmented blade and boundary lines of the segmented image; and determining a target area in the segmented image according to the relation between the boundary pixel point and the boundary line.
Further, determining a target region in the segmented image according to a relationship between the boundary pixel point and the boundary line includes: judging that the uppermost pixel point is not coincident with the upper boundary line and the lowermost pixel point is not coincident with the lower boundary line; acquiring a first reference point at a preset distance above the uppermost pixel point, and acquiring, on the left boundary line and the right boundary line respectively, a first target pixel point and a second target pixel point with the same vertical coordinate as the first reference point; acquiring a second reference point at a preset distance below the bottommost pixel point, and acquiring, on the left boundary line and the right boundary line respectively, a third target pixel point and a fourth target pixel point with the same vertical coordinate as the second reference point; and determining a target area according to the first target pixel point, the second target pixel point, the third target pixel point and the fourth target pixel point.
Further, determining a target region in the segmented image according to a relationship between the boundary pixel point and the boundary line, including: judging that the leftmost pixel point is not coincident with the left boundary line and the rightmost pixel point is not coincident with the right boundary line; acquiring a third reference point which is a preset distance away from the leftmost pixel point, and respectively acquiring a fifth target pixel point and a sixth target pixel point which are the same as the abscissa of the third reference point on an upper boundary line and a lower boundary line; acquiring a fourth reference point at a preset distance on the right of the rightmost pixel point, and respectively acquiring a seventh target pixel point and an eighth target pixel point which have the same abscissa as the fourth reference point on the upper boundary line and the lower boundary line; and determining a target area according to the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point.
Further, determining a target area according to the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point includes: under the condition that the distance between the fifth target pixel point and the seventh target pixel point does not exceed a preset length, directly determining a region obtained by connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point as a target region; under the condition that the distance between the fifth target pixel point and the seventh target pixel point exceeds a preset length, connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point to obtain an initial region, dividing the initial region into a plurality of sub-initial regions, and determining a target region according to the plurality of sub-initial regions.
Further, determining a target area according to the plurality of sub-initial areas includes: acquiring sub-boundary pixel points of the segmented blade in each sub-initial region and sub-boundary lines of each sub-initial region; determining sub-target areas in each sub-initial area according to the relation between the sub-boundary pixel points and the sub-boundary lines; and determining the combination of the sub target areas in all the sub initial areas as the target area.
Further, before inputting the plurality of grid pictures with preset sizes into the defect identification model, the method further comprises: judging the gray level threshold value of each grid picture with a preset size; determining the grid picture with the gray threshold value smaller than a preset threshold value as a grid picture to be enhanced; and performing brightness enhancement on the grid picture to be enhanced.
Further, performing brightness enhancement on the mesh picture to be enhanced includes: acquiring a first image histogram of the grid picture to be enhanced; obtaining, based on the first image histogram, a first pixel value range in which a first main pixel point set of the grid picture to be enhanced is located; acquiring a second pixel value range in which a second main pixel point set of a sample picture of the defect identification model is located; and adjusting the pixel values of the first main pixel point set from the first pixel value range to the second pixel value range.
The invention provides a method and a device for preprocessing a fan picture, the method including: acquiring a fan picture; performing foreground segmentation on the fan picture to obtain a segmented image, and highlighting an initial blade obtained by segmentation in the segmented image; determining a target region in the segmented image according to the boundary of the initial blade, wherein the display area of the target region is larger than that of the initial blade; and cutting the target area into a plurality of grid pictures with preset sizes. The method solves the technical problem in the prior art that, after the image is obtained, every part of the picture is directly cut into many small grids to meet the resolution requirement of the algorithm, so that a large number of the small grid pictures contain no foreground and resources are wasted.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for pre-processing a fan picture according to the present invention;
fig. 2 to 6 are schematic diagrams of the division of the target area provided by the present invention.
Detailed Description
In order to make the aforementioned and other features and advantages of the invention more apparent, the invention is further described below in conjunction with the accompanying drawings. It is understood that the specific embodiments described herein are for purposes of illustration only and are not intended to be limiting.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the specific details need not be employed to practice the present invention. In other instances, well-known steps or operations are not described in detail to avoid obscuring the invention.
Embodiment 1
The application provides a fan picture preprocessing method, as shown in fig. 1, including:
and S11, acquiring a fan picture.
Specifically, in this scheme, a server or another device with a data processing function may serve as the execution body of the method, and the fan picture may be one of the many visible-light pictures shot by an inspection unmanned aerial vehicle at inspection waypoints.
And S13, performing foreground segmentation on the fan picture to obtain a segmented image, and highlighting the initial blade obtained by segmentation in the segmented image.
Specifically, in the scheme, a foreground segmentation model can be adopted to perform foreground segmentation on the fan picture, and in the image after the foreground segmentation, the initial blades obtained by segmentation can be highlighted.
Optionally, in this scheme, the original fan image may be resized in advance, for example to 256 × 256, and the resized fan image is then labeled for training the foreground segmentation model. It should be noted that, to keep the foreground segmentation model lightweight so that the foreground segmentation stage takes the least time, a small segmentation network may be built in this scheme, with a small ResNet as the backbone of the segmentation model; the model is then trained with the labeled fan image data.
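For illustration only, a minimal sketch of such a lightweight segmentation setup might look as follows in Python/PyTorch; the decoder head, loss and optimizer are assumptions added for completeness, while the small ResNet backbone and the 256 × 256 input follow the description above:

```python
# Minimal sketch, assuming a resnet18 backbone and a naive upsampling head;
# the description only specifies "a small resnet as backbone", not this design.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SmallSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)              # small ResNet backbone
        # keep all stages up to the last residual block (output stride 32)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),           # 1 channel: blade vs. background
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):                              # x: (N, 3, 256, 256)
        return self.head(self.encoder(x))              # logits: (N, 1, 256, 256)

model = SmallSegNet()
criterion = nn.BCEWithLogitsLoss()                     # binary blade mask
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```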
Step S15, determining a target area in the segmented image according to the boundary of the initial blade, wherein the display area of the target area is larger than the display area of the initial blade and smaller than the area of the segmented image.
Optionally, in this solution, the target region may be determined in the segmented image according to the boundary of the initial blade, or it may be determined in the fan picture itself. The segmented image is in fact the fan picture, except that the boundary contour of the initial blade is superimposed on it, that is, the initial blade is highlighted. Since the boundary contour of the initial blade is highlighted in the segmented image, the scheme can directly delimit a target area in the segmented image (that is, expand beyond the initial blade) according to the coordinates of that boundary contour, or it can delimit the target area on the fan picture according to the same coordinates.
And S17, cutting the target area into a plurality of grid pictures with preset sizes.
Specifically, in this scheme, after an initial blade is segmented and displayed, a target region is determined in the segmented image according to the boundary of the initial blade. It should be noted that step S13 can only determine the approximate position of the blade in the image through foreground segmentation; because errors easily occur in the foreground segmentation process, the highlighted initial blade may be an incomplete fan blade, and if the region of the initial blade were cropped directly, part of the actual fan blade might fall outside the cropped region. This scheme therefore complements the initial blade obtained by foreground segmentation: a target region whose display area is larger than that of the initial blade is determined in the segmented image according to the boundary of the initial blade. The completed target region fully contains the initial blade, so cropping the target region ensures that the cropped region completely includes the actual fan blade.
It should be further noted that this scheme differs from the prior art in two respects. On one hand, unlike directly cropping the acquired fan image, and because the actual fan blade occupies less than half of the whole image, which therefore contains a large amount of invalid, foreground-free content, this scheme crops only after extracting the foreground with the foreground segmentation model, avoiding cropping out a large number of foreground-free pictures. On the other hand, before the actual cropping, the scheme does not directly sub-crop the initial blade image produced by foreground segmentation; it first completes the initial blade based on its boundary and then crops, which ensures that the target area fully contains the initial blade while its area stays minimal. The subsequently cropped pictures are therefore more accurate, fewer pictures contain invalid content, and the time the algorithm model spends on anomaly recognition is reduced. The combination of the two technical points, "obtaining the initial blade by foreground segmentation" and "determining the target area according to the initial blade boundary", greatly reduces the number of cropped pictures with invalid content, thereby solving the prior-art problem that, to meet the picture resolution requirement of the algorithm, every part of the picture is directly cut into many small grids, leaving a large number of small grid pictures without foreground and wasting resources.
It should be further noted that when the initial blade is complemented, it is complemented according to its own boundary, which ensures that the target area is minimal while still completely containing the actual blade, improving the efficiency of the subsequent picture cutting and avoiding wasted resources.
It should be further noted that, in this solution, the determined target area is cropped without zooming the picture, which solves the prior-art problem that directly zooming the image to a suitable size with an excessively large zoom ratio makes small-resolution defects disappear in the network and degrades the precision of image defect detection.
Optionally, after step S17, the present solution further includes: and S19, inputting the grid pictures with the preset sizes into the acquired defect identification model to obtain a defect result.
Specifically, the defect identification model is trained in advance on data samples. Optionally, to better identify defects in the data and detect a single piece of data more quickly, the images and label data may be synchronously cut into 512 × 512 patches for training the network model. To obtain a more accurate defect result, this solution may use the existing UNeXt network model as the defect recognition segmentation model for training.
Before the step S17 cuts the target area into a plurality of mesh pictures with preset sizes, the method includes:
step S161, acquiring the defect identification model.
And S162, adjusting the preset size according to the type of the defect identification model.
Specifically, in this scheme, the pre-established defect identification model can be obtained and the preset size can be adjusted according to the type of the defect identification model; that is, the size of the cropped grid picture can be adjusted according to the model type, so that the cropped grid picture suits the defect identification model and achieves the fastest processing speed. Through steps S161 to S162, the cropped mesh picture can be adapted to various defect recognition models.
Optionally, step S15, determining a target region in the segmented image according to the boundary of the initial blade, includes:
and step S151, acquiring boundary pixel points of the segmented blade and boundary lines of the segmented image.
Step S152, determining a target region in the segmented image according to the relationship between the boundary pixel points and the boundary line.
Specifically, referring to fig. 2, fig. 2 is a segmented image in which the initial blade Y is displayed. In this embodiment, the boundary lines of the segmented image are the four sides of the rectangle shown in fig. 2: A, B, C and D, namely an upper boundary line A, a left boundary line B, a lower boundary line C and a right boundary line D. With reference to fig. 2, this scheme can determine a target area according to the relationship between the boundary pixel points of the initial blade Y and the four boundary lines A, B, C and D of the segmented image, the target area being the cropping area from which the grid pictures are to be cut.
It should be noted here that in traditional fan inspection, when the unmanned aerial vehicle photographs the blades, the fan is usually stopped first and the blades are adjusted to a fixed posture, after which the drone camera is controlled to shoot the static fan blades at each waypoint. Because the fan blades can be in different postures, the fan blade pictures shot by the drone may show the blade in a horizontal posture or a longitudinal posture. Whether the posture of the initial blade in the segmented image is horizontal or longitudinal can be judged through the relationship between the boundary pixel points of the initial blade Y and the four boundary lines A, B, C and D of the segmented image, and the cropping areas differ according to the posture of the initial blade Y. In this way, the scheme can be applied to different blade postures; that is, whatever posture the initial blade takes in the image, the scheme can find a suitable way to determine the cropping area quickly.
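As a sketch of this posture test (a hypothetical helper over the binary segmentation mask; the description only requires comparing the blade's extreme pixels with the four boundary lines):

```python
import numpy as np

def blade_posture(mask: np.ndarray) -> str:
    """Return 'horizontal' when the top/bottom extremes of blade Y stay
    clear of boundary lines A/C, 'vertical' when the left/right extremes
    stay clear of boundary lines B/D."""
    ys, xs = np.nonzero(mask)                  # coordinates of blade pixels
    h, w = mask.shape
    if ys.min() > 0 and ys.max() < h - 1:      # clear of lines A and C
        return "horizontal"
    if xs.min() > 0 and xs.max() < w - 1:      # clear of lines B and D
        return "vertical"
    return "undetermined"
```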
Optionally, step S152 determines a target region in the segmented image according to the relationship between the boundary pixel point and the boundary line, and includes:
in step S1521, it is determined that the uppermost pixel point does not coincide with the upper boundary line and the lowermost pixel point does not coincide with the lower boundary line.
Specifically, with reference to fig. 3, the uppermost pixel point of the initial blade Y is E and the lowermost pixel point is I. If the uppermost pixel point E of the initial blade Y does not coincide with the upper boundary line of the image (boundary line A) and the lowermost pixel point I does not coincide with the lower boundary line (boundary line C), this scheme judges that the initial blade Y is in a "horizontal" posture in the segmented image.
Step S1522, a first reference point at a preset distance above the uppermost pixel point is obtained, and a first target pixel point and a second target pixel point with the same vertical coordinate as the first reference point are obtained on the left boundary line and the right boundary line respectively.
Specifically, with reference to fig. 3, the preset distance may be 20 pixels, and in the case that the initial blade Y is in the "horizontal" posture in the segmented image, the present solution may move up by 20 pixels with the uppermost pixel E in the initial blade Y as the upper limit, and then obtain the first reference point F, that is, the first reference point is located 20 pixels right above the uppermost pixel E in the initial blade Y. Then, according to the scheme, a first target pixel point G and a second target pixel point H which are the same as the vertical coordinate of the first reference point F are respectively obtained on the left boundary line (on the boundary line B) and the right boundary line (on the boundary line D) of the segmented image. Or taking the first reference point F as a starting point, making a straight line parallel to the boundary line A or the boundary line C, wherein the intersection point of the straight line and the boundary line B is a first target pixel point G, and the intersection point of the straight line and the boundary line D is a second target pixel point H.
Step S1523, a second reference point at a preset distance below the bottommost pixel point is obtained, and a third target pixel point and a fourth target pixel point which are the same as the second reference point in longitudinal coordinate are obtained on the left boundary line and the right boundary line respectively;
specifically, with reference to fig. 3, the preset distance may be 20 pixels, and in the case that the initial blade Y is in the horizontal posture in the segmented image, the scheme may move the lowermost pixel point I in the initial blade Y downward by 20 pixels as the lower limit, and then obtain the second reference point J, that is, the second reference point J is located 20 pixels right below the lowermost pixel point I in the initial blade Y. Then, according to the scheme, a third target pixel point K and a fourth target pixel point L which are the same as the ordinate of the second reference point J are obtained on the left boundary line (on the boundary line B) and the right boundary line (on the boundary line D) of the segmented image respectively. Or taking the second reference point J as a starting point, making a straight line parallel to the boundary line A or the boundary line C, wherein the intersection point of the straight line and the boundary line B is a third target pixel point K, and the intersection point of the straight line and the boundary line D is a fourth target pixel point L.
Step S1524, determining a target area according to the first target pixel point, the second target pixel point, the third target pixel point and the fourth target pixel point.
Specifically, the target area can be directly or indirectly obtained according to the four points of the first target pixel point, the second target pixel point, the third target pixel point and the fourth target pixel point.
Specifically, with reference to fig. 3, the scheme takes a rectangular region formed by the first target pixel point G, the second target pixel point H, the third target pixel point K, and the fourth target pixel point L as a target region.
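Steps S1521 to S1524 can be summarized in a short sketch (the helper name and the clamping to image bounds are assumptions added for robustness, not spelled out above):

```python
import numpy as np

def horizontal_target_region(mask: np.ndarray, pad: int = 20):
    """Shift the uppermost/lowermost blade pixels (E, I) outward by `pad`
    pixels to get reference points F and J; the target pixels G/H and K/L
    then lie on the left/right boundary lines B and D, so the region spans
    the full image width."""
    ys, _ = np.nonzero(mask)
    h, w = mask.shape
    top = max(int(ys.min()) - pad, 0)          # row of F, 20 px above E
    bottom = min(int(ys.max()) + pad, h - 1)   # row of J, 20 px below I
    return top, bottom, 0, w - 1               # rectangle G-H-L-K
```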
Optionally, in step S152, determining a target area in the segmented image according to a relationship between the boundary pixel point and the boundary line, including:
in step S1525, it is determined that the leftmost pixel point does not coincide with the left boundary line and the rightmost pixel point does not coincide with the right boundary line.
Specifically, with reference to fig. 4, the leftmost pixel point of the initial blade Y is M, and the rightmost pixel point is Q, and if the leftmost pixel point M of the initial blade Y does not coincide with the left boundary line (boundary line B) of the image and the rightmost pixel point Q does not coincide with the right boundary line (boundary line D), the method determines that the initial blade Y is in the vertical posture in the segmented image.
Step S1526, a third reference point at a preset distance to the left of the leftmost pixel point is obtained, and a fifth target pixel point and a sixth target pixel point with the same abscissa as the third reference point are obtained on the upper boundary line and the lower boundary line respectively.
Specifically, with reference to fig. 4, the preset distance may be 20 pixels, and in the case that the initial blade Y is in the "longitudinal" posture in the segmented image, the scheme may move the leftmost pixel point M in the initial blade Y to the right left direction by 20 pixels as the left limit, and then obtain the third reference point N, that is, the third reference point is located at the position of 20 pixels right left of the leftmost pixel point M in the initial blade Y. Then, according to the scheme, a fifth target pixel point O and a sixth target pixel point P which are the same as the abscissa of the third reference point N are obtained on the upper boundary line (on the boundary line A) and the lower boundary line (on the boundary line C) of the divided image respectively. Or taking the third reference point N as a starting point, making a straight line parallel to the boundary line B or the boundary line D, wherein the intersection point of the straight line and the boundary line A is a fifth target pixel point O, and the intersection point of the straight line and the boundary line C is a sixth target pixel point P.
Step S1527, a fourth reference point at a preset distance to the right of the rightmost pixel point is obtained, and a seventh target pixel point and an eighth target pixel point with the same abscissa as the fourth reference point are obtained on the upper boundary line and the lower boundary line respectively.
Specifically, with reference to fig. 4, the preset distance may be 20 pixels, and in the case that the initial blade Y is in the vertical posture in the segmented image, the present solution may move the rightmost pixel point Q in the initial blade Y by 20 pixels in the right-to-right direction as the right limit, and then obtain the fourth reference point R, that is, the fourth reference point is located at the position of 20 pixels right of the rightmost pixel point Q in the initial blade Y. Then, according to the scheme, a seventh target pixel point S and an eighth target pixel point T which are the same as the abscissa of the fourth reference point R are obtained on the upper boundary line (on the boundary line A) and the lower boundary line (on the boundary line C) of the divided image respectively. Or taking the fourth reference point R as a starting point, making a straight line parallel to the boundary line B or the boundary line D, wherein an intersection point of the straight line and the boundary line a is a seventh target pixel point S, and an intersection point of the straight line and the boundary line C is an eighth target pixel point T.
Step S1528, determining a target area according to the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point.
Specifically, the target area can be obtained directly or indirectly from the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point.
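The "longitudinal" case of steps S1525 to S1527 is symmetric to the horizontal one; a corresponding sketch under the same assumptions:

```python
import numpy as np

def vertical_target_region(mask: np.ndarray, pad: int = 20):
    """Shift the leftmost/rightmost blade pixels (M, Q) outward by `pad`
    pixels to get reference points N and R; points O/P and S/T then lie on
    the upper/lower boundary lines A and C, so the region spans the full
    image height."""
    _, xs = np.nonzero(mask)
    h, w = mask.shape
    left = max(int(xs.min()) - pad, 0)         # column of N, 20 px left of M
    right = min(int(xs.max()) + pad, w - 1)    # column of R, 20 px right of Q
    return 0, h - 1, left, right               # rectangle O-P-T-S
```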
Optionally, in step S1528, determining a target area according to the fifth target pixel, the sixth target pixel, the seventh target pixel, and the eighth target pixel includes:
step 15281, when the distance between the fifth target pixel point and the seventh target pixel point does not exceed the predetermined length, directly determining the region obtained by connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point as the target region.
Specifically, with reference to fig. 4, if the distance between the fifth target pixel point O and the seventh target pixel point S does not exceed the predetermined length, the tilt angle of the initial blade Y in the image is relatively small, so the area of the rectangular region OPTS formed by connecting the fifth, sixth, seventh and eighth target pixel points is also relatively small and the target area contains little image without foreground content; the rectangular region OPTS is therefore directly determined as the target area and cropped.
Step S15282, when the distance between the fifth target pixel point and the seventh target pixel point exceeds the predetermined length, connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point, and the eighth target pixel point to obtain an initial region, dividing the initial region into a plurality of sub-initial regions, and determining a target region according to the plurality of sub-initial regions.
Specifically, with reference to fig. 5, if the distance between the fifth target pixel point O and the seventh target pixel point S exceeds the predetermined length, the tilt angle of the initial blade Y in the image is relatively large, so the area of the rectangular region OPTS formed by connecting the fifth, sixth, seventh and eighth target pixel points is also relatively large. In this case, if the rectangular region OPTS were directly determined as the target region and cropped, a large number of images without foreground content would appear in the target region. This scheme therefore does not crop the rectangular region OPTS directly; it first takes the rectangular region OPTS as the initial region, then divides the initial region into a plurality of sub-initial regions, and finally determines the target region from those sub-initial regions. In this way, the problem of obtaining a large number of invalid, foreground-free images by direct cropping when the initial blade is strongly tilted in the image can be avoided.
Optionally, step S15282 determines the target area according to the plurality of sub-initial areas, including:
step 152821 obtains sub-boundary pixel points of the divided blade in each sub-initial region and sub-boundary lines of each sub-initial region.
Step S152822, determining sub-target regions in each sub-initial region according to the relationship between the sub-boundary pixel points and the sub-boundary lines.
In step S152823, a combination of sub target regions in all the sub initial regions is determined as a target region.
Specifically, referring to fig. 6, when the tilt angle of the initial blade Y in the image is relatively large, this embodiment does not directly crop the rectangular area OPTS formed by connecting the fifth, sixth, seventh and eighth target pixel points; it first divides the initial area into a plurality of sub-initial areas. As shown in fig. 6, two points W and X are taken arbitrarily on the upper and lower boundary lines; the rectangular area OPXW is then one of the sub-initial areas, its sub-boundary lines being OW (upper), OP (left), PX (lower) and WX (right). A sub-target area is then determined in each sub-initial area according to the relationship between the sub-boundary pixel points of the initial blade displayed in the sub-initial area OPXW and the sub-boundary lines. More specifically, the scheme may obtain the lowermost sub-boundary pixel point Z of the initial blade displayed in the sub-initial region OPXW; when Z does not coincide with the lower boundary PX, Z is moved by a preset distance to obtain a reference point V, a pixel point U with the same vertical coordinate as V is then taken on OP, and OUVW is used as the sub-target region determined from the current sub-initial region. This removes the invalid (foreground-free) part UPXV of the sub-initial region OPXW, while extending below the sub-boundary pixel point Z ensures that the complete blade is retained. In this way, this embodiment can further reduce the cropping area when the initial blade Y is strongly tilted in the image, avoiding crops of invalid images without foreground content, as illustrated by the sketch below.
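A sketch of this refinement (the number and placement of the split strips are assumptions; the description above only requires arbitrary split points such as W and X on the boundary lines):

```python
import numpy as np

def refine_tilted_region(mask, y0, y1, x0, x1, n_strips=4, pad=20):
    """Split the initial region OPTS into vertical strips and, per strip,
    keep only the rows the blade actually occupies plus a `pad` margin,
    discarding foreground-free corners such as UPXV."""
    sub_targets = []
    bounds = np.linspace(x0, x1 + 1, n_strips + 1, dtype=int)
    for left, right in zip(bounds[:-1], bounds[1:]):
        strip = mask[y0:y1 + 1, left:right]
        rows = np.nonzero(strip.any(axis=1))[0]
        if rows.size == 0:
            continue                           # strip holds no blade pixels
        top = max(y0 + int(rows.min()) - pad, y0)
        bot = min(y0 + int(rows.max()) + pad, y1)
        sub_targets.append((top, bot, int(left), int(right) - 1))
    return sub_targets                         # their union is the target region
```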
It should be noted here that, in traditional fan blade inspection, the blade is usually fixed in a set posture (for example, an inverted-Y posture), and the drone camera is then controlled to shoot the blade at each waypoint. In that case the posture of the fan blade is relatively fixed and so is the way the blade pictures are cropped. In newer fan inspection schemes, however, the fan blade need not be static; inspection is performed while the blades rotate. For example, in the related art, with the wind turbine generator still running, the drone flies from the front of the fan to the back while photographing. During such shooting the blade may be in an arbitrary posture (for example, nominally vertical as in the above embodiment but at an arbitrary tilt angle). Through the above embodiments, this scheme can crop the fan blade effectively at whatever tilt angle, and this cropping mode adapts well to inspection of rotating fan blades.
Optionally, before the step S19 inputs the mesh pictures with the preset sizes into the defect identification model, the method further includes:
step S181, determining a gray level threshold of each grid image of a preset size.
Step S182, determining the grid picture with the gray threshold value smaller than the preset threshold value as the grid picture to be enhanced.
And step S183, performing brightness enhancement on the grid picture to be enhanced.
Specifically, when fan blade images are shot, part of the fan data may be dark because of shooting conditions, weather and illumination, which would affect fan blade defect identification. To prevent such data quality problems from influencing fan defect identification, the brightness of darker data is adjusted automatically. The scheme can judge the gray level of each preset-size grid picture against a threshold; if the gray value (that is, the gray coefficient) is too small, the picture is judged to be dark and to need brightness enhancement, and the scheme performs adaptive brightness adjustment according to a light-intensity (color) averaging criterion. It should be noted that, in this embodiment, only the brightness of pictures below the threshold is adjusted, achieving purposeful equalization of the data and a better recognition result while saving computing power.
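A minimal sketch of this darkness test (the threshold of 60 is an assumed value; the description does not fix a concrete number):

```python
import cv2

def needs_enhancement(grid_bgr, thresh=60):
    """Flag a grid picture as 'to be enhanced' when its mean gray level
    falls below the preset threshold (steps S181-S182)."""
    gray = cv2.cvtColor(grid_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) < thresh
```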
Optionally, in step S183, performing brightness enhancement on the mesh picture to be enhanced includes:
acquiring a first image histogram of the grid picture to be enhanced; obtaining, based on the first image histogram, a first pixel value range in which a first main pixel point set of the grid picture to be enhanced is located; acquiring a second pixel value range in which a second main pixel point set of a sample picture of the defect identification model is located; and adjusting the pixel values of the first main pixel point set from the first pixel value range to the second pixel value range.
Specifically, when a particular grid picture is determined to be dark, the scheme does not directly change the pixel values of all pixel points of the grid to be enhanced to raise the picture's brightness; instead, a histogram is used to obtain the pixel value range (the first pixel value range) over which the main pixel points of the grid picture to be enhanced are distributed. The method then obtains the pixel value range (the second pixel value range) over which the main pixels of the sample pictures of the defect identification model are distributed. It should be noted that the sample pictures used to train the defect identification model have standard brightness, which is also the brightness the defect identification model processes and recognizes most easily; the method adjusts the pixel values of the first main pixel point set of the grid picture to be enhanced into the second pixel value range. Two further points deserve note. On one hand, this solution does not uniformly adjust the pixel values of all pixels in the mesh picture one by one, but only those of the main pixel point set (that is, some rather than all of the pixels in the mesh picture to be enhanced), which saves computing power while enhancing the brightness of the picture. On the other hand, the solution does not adjust toward a fixed standard pixel value; it takes the pixel values of the defect identification model's sample pictures as the target, so the adjusted brightness exactly meets the brightness the model needs for identification, avoiding over-enhancement and wasted computing power.
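The remapping can be sketched as follows; the 90% mass heuristic used to pick the "main pixel point set" is an assumption, since the description does not prescribe how that set is chosen:

```python
import numpy as np

def shift_main_pixel_range(gray, target_lo, target_hi, keep=0.9):
    """Locate the band holding the bulk (`keep`) of the patch's pixels
    (the first main pixel point set) via the histogram, then linearly
    remap that band onto the band occupied by the training samples' main
    pixels (target_lo..target_hi). Pixels outside the band are untouched."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    lo = int(np.searchsorted(cdf, (1 - keep) / 2))      # band lower edge
    hi = int(np.searchsorted(cdf, 1 - (1 - keep) / 2))  # band upper edge
    out = gray.astype(np.float32)
    band = (out >= lo) & (out <= hi)                    # main pixel set only
    scale = (target_hi - target_lo) / max(hi - lo, 1)
    out[band] = (out[band] - lo) * scale + target_lo
    return np.clip(out, 0, 255).astype(np.uint8)
```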
Optionally, step S17 cuts the target area into a plurality of mesh pictures with preset sizes, including:
and moving the sliding window with the preset size in the target area, and cutting the sliding window in the target area to obtain a grid picture in each moving time by a preset step length, wherein after the sliding window moves, under the condition that the boundary of the sliding window exceeds the target boundary line of the target area, the target boundary line is used as the boundary of the sliding window, and the cutting is continuously performed after the position of the sliding window in the target area is adjusted.
Specifically, in this scheme, after the target region is determined, a sliding window of 512 × 512 pixels may be adopted with a moving step of 400, and the target region is cropped accordingly.
For example, if the lower and right bounds of the sliding window exceed the target region, the scheme sets the lower and right bounds of the target region as the lower and right bounds of the sliding window and changes the window's upper and left bounds accordingly, ensuring that every part of the target area is cropped into the mesh pictures.
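A sketch of this sliding-window cropping with the boundary-clamping rule (the 512/400 values follow the description above; `img` is assumed to be a NumPy array and the helper names are hypothetical):

```python
def window_origins(start, end, win=512, step=400):
    """Window origins covering [start, end]; the final window is shifted
    back so the region boundary becomes the window's edge."""
    pos, p = [], start
    while p + win - 1 < end:
        pos.append(p)
        p += step
    last = max(end - win + 1, start)           # clamped last window
    if not pos or pos[-1] != last:
        pos.append(last)
    return pos

def sliding_crops(img, y0, y1, x0, x1, win=512, step=400):
    """Step S17: cut the target region (rows y0..y1, columns x0..x1)
    into overlapping win x win grid pictures."""
    return [img[y:y + win, x:x + win]
            for y in window_origins(y0, y1, win, step)
            for x in window_origins(x0, x1, win, step)]
```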
In an alternative embodiment, after defect detection is completed, the defect detection result of each small mesh picture needs to be mapped back to the original picture. Because the small pictures are cut with overlap, the results are de-duplicated according to the cutting positions, the detection results are merged, and the final defect detection result is output.
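One possible sketch of this merge step (assuming each crop returns a binary defect mask of its own size; the pixelwise maximum de-duplicates detections in the overlapping areas):

```python
import numpy as np

def merge_crop_results(full_shape, crop_masks, origins, win=512):
    """Paste per-crop defect masks back at their crop origins on a blank
    canvas of the original image's shape and OR the overlaps together."""
    merged = np.zeros(full_shape, dtype=np.uint8)
    for mask, (y, x) in zip(crop_masks, origins):
        merged[y:y + win, x:x + win] = np.maximum(
            merged[y:y + win, x:x + win], mask)
    return merged
```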
In summary, the scheme of the application can achieve the following effects. First, the timeliness of recognition is guaranteed: the time needed for fan blade defect recognition is greatly reduced, the recognition effect is improved, and the dependence on hardware is also reduced. Second, the whole data processing and recognition flow minimizes human participation and is completed by the corresponding algorithms, so the invention improves inspection efficiency and indirectly reduces the inspection cost of fan blades.
It will be understood that the specific features, operations and details described herein above with respect to the method of the present invention may be similarly applied to the apparatus and system of the present invention, or vice versa. Further, each step of the method of the invention described above may be performed by a respective component or unit of the device or system of the invention.
It should be understood that the various modules/units of the apparatus of the present invention may be implemented in whole or in part by software, hardware, firmware, or a combination thereof. The modules/units may be embedded in the processor of the computer device in the form of hardware or firmware or independent of the processor, or may be stored in the memory of the computer device in the form of software for being called by the processor to execute the operations of the modules/units. Each of the modules/units may be implemented as a separate component or module, or two or more modules/units may be implemented as a single component or module.
In one embodiment, a computer device (electronic device) is provided that includes a memory and a processor, the memory having stored thereon computer instructions executable by the processor; the computer instructions, when executed by the processor, instruct the processor to perform the steps of the method of an embodiment of the invention. The computer device may broadly be a server, a terminal, or any other electronic device having the necessary computing and/or processing capabilities. In one embodiment, the computer device may include a processor, memory, network interface, and communication interface connected by a system bus. The processor of the computer device may be used to provide the necessary computing, processing and/or control capabilities. The memory of the computer device may include a non-volatile storage medium and an internal memory. An operating system, a computer program, and the like may be stored in or on the non-volatile storage medium. The internal memory may provide an environment for running the operating system and the computer programs in the non-volatile storage medium. The network interface and the communication interface of the computer device may be used to connect and communicate with external devices via a network.
The invention may be implemented as a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the steps of a method of an embodiment of the invention to be performed. In one embodiment, the computer program is distributed across a plurality of computer devices or processors coupled by a network such that the computer program is stored, accessed, and executed by one or more computer devices or processors in a distributed fashion. A single method step/operation, or two or more method steps/operations, may be performed by a single computer device or processor or by two or more computer devices or processors. One or more method steps/operations may be performed by one or more computer devices or processors, and one or more other method steps/operations may be performed by one or more other computer devices or processors. One or more computer devices or processors may perform a single method step/operation, or perform two or more method steps/operations.
Those of ordinary skill in the art will appreciate that the method steps of the present invention may be implemented by instructing associated hardware, such as a computer device or a processor, through a computer program, which may be stored in a non-transitory computer-readable storage medium and which, when executed, causes the steps of the present invention to be performed. Any reference herein to memory, storage, databases, or other media may include non-volatile and/or volatile memory, as appropriate. Examples of non-volatile memory include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), flash memory, magnetic tape, floppy disk, magneto-optical data storage, hard disk, solid state disk, and the like. Examples of volatile memory include Random Access Memory (RAM), external cache memory, and the like.
The respective technical features described above may be arbitrarily combined. Although not all possible combinations of features are described, any combination of features should be considered to be covered by the present specification as long as there is no contradiction between such combinations.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A fan picture preprocessing method is characterized by comprising the following steps:
acquiring a fan picture;
performing foreground segmentation on the fan picture to obtain a segmented image, and highlighting an initial blade obtained by segmentation in the segmented image;
determining a target region in the segmented image according to the boundary of the initial blade, wherein the display area of the target region is larger than that of the initial blade;
and cutting the target area into a plurality of grid pictures with preset sizes.
2. The method of claim 1, further comprising: inputting the grid pictures with the preset sizes into the obtained defect identification model to obtain a defect result, wherein before the target area is cut into the grid pictures with the preset sizes, the method comprises the following steps:
acquiring the defect identification model;
and adjusting the preset size according to the type of the defect identification model.
3. The method of claim 1, wherein determining a target region in the segmented image based on the initial blade boundary comprises:
acquiring boundary pixel points of the segmented blade and boundary lines of the segmented image;
and determining a target area in the segmented image according to the relation between the boundary pixel point and the boundary line.
4. The method according to claim 3, wherein determining a target region in the segmented image according to a relationship between the boundary pixel point and the boundary line comprises:
judging that the uppermost pixel point is not coincident with the upper boundary line and the lowermost pixel point is not coincident with the lower boundary line;
acquiring a first reference point at a preset distance above the uppermost pixel point, and respectively acquiring a first target pixel point and a second target pixel point which are the same as the longitudinal coordinate of the first reference point on the left boundary line and the right boundary line;
acquiring a second reference point at a preset distance below the bottommost pixel point, and respectively acquiring a third target pixel point and a fourth target pixel point which are the same as the second reference point in vertical coordinate on a left boundary line and a right boundary line;
and determining a target area according to the first target pixel point, the second target pixel point, the third target pixel point and the fourth target pixel point.
5. The method of claim 3, wherein determining a target region in the segmented image according to the relationship between the boundary pixel points and the boundary line comprises:
judging that the leftmost pixel point is not coincident with the left boundary line and the rightmost pixel point is not coincident with the right boundary line;
acquiring a third reference point at a preset distance to the left of the leftmost pixel point, and respectively acquiring a fifth target pixel point and a sixth target pixel point which are the same as the abscissa of the third reference point on the upper boundary line and the lower boundary line;
acquiring a fourth reference point at a preset distance on the right of the rightmost pixel point, and respectively acquiring a seventh target pixel point and an eighth target pixel point which are the same as the abscissa of the fourth reference point on the upper boundary line and the lower boundary line;
and determining a target area according to the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point.
6. The method of claim 5, wherein determining a target region according to the fifth target pixel, the sixth target pixel, the seventh target pixel, and the eighth target pixel comprises:
under the condition that the distance between the fifth target pixel point and the seventh target pixel point does not exceed a preset length, directly determining a region obtained by connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point as a target region;
under the condition that the distance between the fifth target pixel point and the seventh target pixel point exceeds a preset length, connecting the fifth target pixel point, the sixth target pixel point, the seventh target pixel point and the eighth target pixel point to obtain an initial region, dividing the initial region into a plurality of sub-initial regions, and determining a target region according to the plurality of sub-initial regions.
7. The method of claim 6, wherein determining a target region from the plurality of sub-initial regions comprises:
acquiring sub-boundary pixel points of the segmented blade in each sub-initial region and sub-boundary lines of each sub-initial region;
determining sub-target areas in each sub-initial area according to the relation between the sub-boundary pixel points and the sub-boundary lines;
and determining the combination of the sub target areas in all the sub initial areas as the target area.
8. The method of claim 2, wherein before inputting the plurality of preset-sized mesh pictures into the defect identification model, the method further comprises:
judging the gray level threshold value of each grid picture with a preset size;
determining the grid picture with the gray threshold value smaller than a preset threshold value as a grid picture to be enhanced;
and performing brightness enhancement on the grid picture to be enhanced.
9. The method of claim 8, wherein performing brightness enhancement on the grid picture to be enhanced comprises:
acquiring a first image histogram of the grid picture to be enhanced;
obtaining, based on the first image histogram, a first pixel value range in which a first main pixel point set of the grid picture to be enhanced is located;
acquiring a second pixel value range in which a second main pixel point set of the sample pictures of the defect identification model is located;
and adjusting the pixel values of the first main pixel point set from the first pixel value range to the second pixel value range.
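Claim 9 fixes neither how the "main pixel point set" is delimited nor how its values are moved between ranges. The sketch below assumes the main set is a central quantile band of the histogram and that the adjustment is a linear remapping; both choices, and all names, are illustrative:

```python
import numpy as np

def main_value_range(gray, mass=0.9):
    """Assumed definition of the 'main pixel point set': the central
    pixel-value range containing `mass` of the histogram (gray is an
    8-bit single-channel image)."""
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    cdf = np.cumsum(hist)
    lo = int(np.searchsorted(cdf, (1.0 - mass) / 2))
    hi = int(np.searchsorted(cdf, 1.0 - (1.0 - mass) / 2))
    return lo, hi

def remap_main_range(gray, src, dst):
    """Linearly map pixel values from range `src` to range `dst`."""
    (s0, s1), (d0, d1) = src, dst
    out = (gray.astype(np.float32) - s0) * ((d1 - d0) / max(s1 - s0, 1)) + d0
    return np.clip(out, 0, 255).astype(np.uint8)
```

A dark tile would then be adjusted with `remap_main_range(gray, main_value_range(gray), sample_range)`, where `sample_range` is measured once from the model's sample pictures.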
10. The method of claim 1, wherein cropping the target region into a plurality of grid pictures of a preset size comprises:
moving a sliding window of the preset size within the target region by a preset step length, and cropping a grid picture from the target region at each move, wherein, when the boundary of the sliding window exceeds a target boundary line of the target region after a move, the target boundary line is used as the boundary of the sliding window, and cropping continues after the position of the sliding window within the target region has been adjusted accordingly.
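Claim 10's boundary rule means the final row and column of tiles overlap their neighbours instead of being zero-padded, so every tile keeps the full preset size. A minimal sketch, assuming a square window and a region at least one window wide and tall; all names are illustrative:

```python
def sliding_window_tiles(region_w, region_h, win, step):
    """Corners (x0, y0, x1, y1) of every grid picture, pulling the last
    window of each row/column back so that the target boundary line
    becomes the window's far edge."""
    def starts(extent):
        s = list(range(0, max(extent - win, 0) + 1, step))
        if s[-1] + win < extent:              # window would cross the
            s.append(extent - win)            # boundary: pull it back
        return s
    return [(x, y, x + win, y + win)
            for y in starts(region_h) for x in starts(region_w)]
```

For example, a 1000 × 900 region tiled with a 640-pixel window and a 640-pixel step yields windows starting at x ∈ {0, 360} and y ∈ {0, 260}, the second of each pair sitting flush against the boundary line.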
CN202211489235.2A 2022-11-25 2022-11-25 Preprocessing method and device for fan pictures Active CN115937114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489235.2A CN115937114B (en) 2022-11-25 2022-11-25 Preprocessing method and device for fan pictures

Publications (2)

Publication Number Publication Date
CN115937114A true CN115937114A (en) 2023-04-07
CN115937114B CN115937114B (en) 2024-06-04

Family

ID=86655342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211489235.2A Active CN115937114B (en) 2022-11-25 2022-11-25 Preprocessing method and device for fan pictures

Country Status (1)

Country Link
CN (1) CN115937114B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651885A (en) * 2016-12-31 2017-05-10 中国农业大学 Image segmentation method and apparatus
CN108986119A (en) * 2018-07-25 2018-12-11 京东方科技集团股份有限公司 Image partition method and device, computer equipment and readable storage medium storing program for executing
US20210019890A1 (en) * 2018-10-16 2021-01-21 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, computer device, and storage medium
CN113554582A (en) * 2020-04-22 2021-10-26 中国科学院长春光学精密机械与物理研究所 Defect detection method, device and system for functional hole in cover plate of electronic equipment
CN114240989A (en) * 2021-11-30 2022-03-25 中国工商银行股份有限公司 Image segmentation method and device, electronic equipment and computer storage medium
CN114266895A (en) * 2021-12-27 2022-04-01 中国电建集团中南勘测设计研究院有限公司 Fan blade image segmentation and splicing method and device
CN114463648A (en) * 2022-01-09 2022-05-10 中国长江三峡集团有限公司 Method for keeping fan blade in middle of camera visual field based on pure vision
CN114972397A (en) * 2022-06-21 2022-08-30 湘潭大学 Infrared image defect contour detection method for wind power blade composite material
CN115359239A (en) * 2022-08-25 2022-11-18 中能电力科技开发有限公司 Wind power blade defect detection and positioning method and device, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
B. Hu et al.: "Surface damage detection method for blade of wind turbine based on image segmentation", 2020 5th International Conference on Communication, Image and Signal Processing (CCISP), 3 December 2020, pages 154-158 *
Tan Xingguo et al.: "Surface defect detection technology for wind turbine blades based on UAV inspection", Electrical Measurement & Instrumentation, 13 October 2022, pages 1-10 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351062A (en) * 2023-12-04 2024-01-05 尚特杰电力科技有限公司 Fan blade defect diagnosis method, device and system and electronic equipment
CN117351062B (en) * 2023-12-04 2024-02-23 尚特杰电力科技有限公司 Fan blade defect diagnosis method, device and system and electronic equipment

Also Published As

Publication number Publication date
CN115937114B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN108876743B (en) Image rapid defogging method, system, terminal and storage medium
CN111507958B (en) Target detection method, training method of detection model and electronic equipment
US10620005B2 (en) Building height calculation method, device, and storage medium
CN109146832B (en) Video image splicing method and device, terminal equipment and storage medium
CN110473221B (en) Automatic target object scanning system and method
CN110610483B (en) Crack image acquisition and detection method, computer equipment and readable storage medium
CN111311487B (en) Rapid splicing method and system for photovoltaic module images
CN112750104B (en) Method and device for automatically matching optimal camera by monitoring ship through multiple cameras
CN115937114B (en) Preprocessing method and device for fan pictures
CN114926407A (en) Steel surface defect detection system based on deep learning
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN116485779A (en) Adaptive wafer defect detection method and device, electronic equipment and storage medium
CN112673801A (en) On-line detection method and system for broken impurities of grain combine harvester
CN114004858B (en) Method and device for identifying surface codes of aerial cables based on machine vision
CN117456371B (en) Group string hot spot detection method, device, equipment and medium
CN114266895A (en) Fan blade image segmentation and splicing method and device
CN111724375B (en) Screen detection method and system
CN116681879B (en) Intelligent interpretation method for transition position of optical image boundary layer
CN116843687A (en) Communication optical cable surface flaw detection method and device
CN117351472A (en) Tobacco leaf information detection method and device and electronic equipment
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
CN113920068B (en) Body part detection method and device based on artificial intelligence and electronic equipment
CN116363097A (en) Defect detection method and system for photovoltaic panel
CN116152191A (en) Display screen crack defect detection method, device and equipment based on deep learning
CN115311443A (en) Oil leakage identification method for hydraulic pump

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant