CN117893509A - Model training method, defect detection method, electronic equipment and storage medium


Info

Publication number
CN117893509A
Authority
CN
China
Prior art keywords
image
training
model
sample image
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410073759.6A
Other languages
Chinese (zh)
Inventor
华志刚
陈洪溪
钟嶒楒
郭荣
林润达
庄伟
高升
武霖
张越
张强
陈家颖
叶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Power Equipment Research Institute Co Ltd
Zhongneng Integrated Smart Energy Technology Co Ltd
Original Assignee
Shanghai Power Equipment Research Institute Co Ltd
Zhongneng Integrated Smart Energy Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Power Equipment Research Institute Co Ltd, Zhongneng Integrated Smart Energy Technology Co Ltd filed Critical Shanghai Power Equipment Research Institute Co Ltd
Priority to CN202410073759.6A
Publication of CN117893509A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a model training method, a defect detection method, electronic equipment and a storage medium. The method comprises the following steps: obtaining abnormal images, wherein each abnormal image corresponds to an anomaly mask indicating the position and type of any defect in that image; preprocessing the abnormal images and their anomaly masks to obtain at least one sample image and at least one segmentation mask, and dividing the sample images into a training image set and a verification image set; training a base model with the training image set to obtain an image segmentation model; inputting the verification image set into the image segmentation model and determining the detection accuracy; and, if the detection accuracy is below a preset threshold, updating the training image set and repeating the training step until the detection accuracy reaches or exceeds the preset threshold. The scheme improves model accuracy while using fewer sample images.

Description

Model training method, defect detection method, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a model training method, a defect detection method, electronic equipment and a storage medium.
Background
Defect detection of the boiler water wall is an important part of boiler overhaul. Traditionally it is performed by inspectors carrying detection equipment into the boiler, or by an unmanned aerial vehicle flown inside it. The water wall images obtained are then examined, either by eye or with an image processing algorithm, to find the images that show defects.
In the prior art, the accuracy of manual visual inspection depends on the condition of the inspection personnel, so its accuracy is poor. Most users therefore choose to detect with image processing algorithms.
However, detection with an image processing algorithm requires a large number of sample images as references to reach good accuracy. Because the internal structure of the boiler water wall is complex and its environment harsh, only a few sample images can be obtained, so the final detection accuracy is low.
Disclosure of Invention
The invention provides a model training method, a defect detection method, electronic equipment and a storage medium that improve the accuracy of a defect detection model while using fewer sample images.
In a first aspect, an embodiment of the present invention provides a model training method, including:
obtaining abnormal images, wherein each abnormal image corresponds to an anomaly mask, and the anomaly mask indicates the position and type of a defect in its corresponding abnormal image;
preprocessing each abnormal image and its anomaly mask to obtain at least one sample image and at least one segmentation mask, and dividing the sample images into a training image set and a verification image set, wherein each sample image corresponds to one segmentation mask and the number of sample images is greater than the number of abnormal images;
training a base model with the training image set to obtain an image segmentation model, wherein the image segmentation model is used to detect whether the boiler water wall has defects;
inputting the verification image set into the image segmentation model and determining the detection accuracy;
if the detection accuracy is below a preset threshold, updating the training image set and returning to the step of training the base model with the training image set to obtain the image segmentation model, until the detection accuracy reaches or exceeds the preset threshold.
According to the model training method provided by the invention, abnormal images and their corresponding anomaly masks are obtained, preprocessed, and divided into a training image set and a verification image set; the base model is trained with the training image set to obtain an image segmentation model, whose detection accuracy is then calculated with the verification image set. When the accuracy reaches or exceeds a preset threshold, training is complete; when it falls below the threshold, the training image set is updated and the base model is trained again with the updated set, until the accuracy reaches or exceeds the threshold. In this scheme, on one hand, the few abnormal images obtained and their anomaly masks are preprocessed into a larger number of sample images, which alleviates the shortage of samples; the preprocessing also increases the diversity of the image data and lays a foundation for later improving the generalization ability of the trained model. On the other hand, the trained image segmentation model is tested with verification images that did not participate in training, so both the detection accuracy and, to an extent, the generalization ability of the model are evaluated. Finally, when the detection accuracy is below the preset threshold, the training image set is updated and the defect descriptions in the training sample images are modified, improving the accuracy and sensitivity of defect detection.
In a second aspect, an embodiment of the present invention further provides a defect detection method, where the method includes:
determining image data of a boiler water wall;
inputting the image data into an image segmentation model to detect whether the boiler water wall has defects, wherein the image segmentation model is obtained with the model training method according to any embodiment of the invention.
According to the defect detection method provided by the embodiment of the invention, after the image segmentation model is trained, image data of the boiler water wall collected in a real scene is obtained and detected with the image segmentation model, yielding the position and type of any defect in the water wall. This replaces manual visual inspection and achieves accurate defect detection of the boiler water wall with a model trained on few sample images.
In a third aspect, an embodiment of the present invention further provides a model training apparatus, where the apparatus includes:
The image acquisition module is used for acquiring abnormal images, wherein each abnormal image corresponds to an anomaly mask indicating the position and type of a defect in its corresponding abnormal image;
the preprocessing module is used for preprocessing each abnormal image and its anomaly mask to obtain at least one sample image and at least one segmentation mask, and for dividing the sample images into a training image set and a verification image set, wherein each sample image corresponds to one segmentation mask and the number of sample images is greater than the number of abnormal images;
the model training module is used for training a base model with the training image set to obtain an image segmentation model for detecting whether the boiler water wall has defects;
the accuracy detection module is used for inputting the verification image set into the image segmentation model and determining the detection accuracy; and
the optimization module is used for updating the training image set if the detection accuracy is below a preset threshold, and for returning to the step of training the base model with the training image set until the detection accuracy reaches or exceeds the preset threshold.
In a fourth aspect, an embodiment of the present invention further provides a defect detection apparatus, including:
The image data acquisition module is used for determining image data of the boiler water wall; and
the detection module is used for inputting the image data into an image segmentation model to detect whether the boiler water wall has defects, wherein the image segmentation model is obtained with the model training method according to any embodiment of the invention.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model training method or the defect detection method of any embodiment of the present invention.
In a sixth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the model training method or the defect detection method of any of the embodiments of the present invention.
It should be noted that the above computer instructions may be stored in whole or in part on a computer-readable storage medium. The computer-readable storage medium may be packaged together with the processor of the model training apparatus or the defect detection apparatus, or packaged separately from it; the invention is not limited in this respect.
For the description of the third, fourth, fifth and sixth aspects, reference may be made to the detailed description of the first or second aspect; likewise, their advantageous effects follow from the analysis of the first or second aspect and are not repeated here.
In the present invention, the names of the model training apparatus and the defect detection apparatus do not limit the devices or functional modules themselves; in actual implementation they may appear under other names. As long as the function of each device or functional module is similar to that of the present invention, it falls within the scope of the claims of the invention and their equivalents.
These and other aspects of the invention will be more readily apparent from the following description.
Drawings
To illustrate the technical solutions of the present invention more clearly, the drawings needed for the embodiments are briefly described below. The following drawings illustrate only some embodiments of the invention and should not be considered limiting of its scope; a person skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a model training method provided by the invention;
FIG. 2 is a diagram illustrating an example of preprocessing an abnormal image according to the present invention;
FIG. 3 is a diagram illustrating an example of a defect description of a modified training sample image provided by the present invention;
FIG. 4 is another flow chart of the model training method provided by the present invention;
FIG. 5 is a schematic flow chart of a defect detection method according to the present invention;
FIG. 6 is a schematic diagram of a model training apparatus according to the present invention;
FIG. 7 is a schematic diagram of a defect detecting device according to the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
It should be noted that the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that A and B exist together, or that B exists alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or between different processes of the same object and not for describing a particular order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present invention are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like. Furthermore, embodiments of the invention and features of the embodiments may be combined with each other without conflict.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the present invention, unless otherwise indicated, the meaning of "a plurality" means two or more.
Fig. 1 is a schematic flow chart of the model training method provided by the invention. The method can be used to train a model to be trained: for example, when detecting defects of a boiler water wall, it can train an original model into an image segmentation model that detects water wall images and identifies the position and/or type of defects in them. The method may be performed by a model training apparatus, which may be implemented in software and/or hardware. In a specific embodiment, the apparatus may be integrated in an electronic device, which may be any intelligent device with network communication functions, such as a computer or a server. For ease of understanding, the embodiment of the invention is described in detail taking a model that, after training, is applied to detecting defects of a boiler water wall as an example. As shown in Fig. 1, the model training method provided in this embodiment may include the following steps:
s101, acquiring an abnormal image, wherein one abnormal image corresponds to an abnormal mask, and the abnormal mask is used for indicating the position and the type of a defect in the corresponding abnormal image.
In this embodiment, an abnormal image is an image in which an abnormality may exist. Taking boiler water wall detection images as an example, the abnormal images may include detection images containing water wall defects as well as detection images without defects. The anomaly mask may be annotated manually on the abnormal image.
Specifically, the abnormal images may come from image sets already published for related industrial scenes and from images acquired in the actual application scene. For example, a small number of images can be shot in an actual boiler water wall scene and then processed to obtain abnormal images. Shooting inside an actual boiler consumes considerable manpower and time and carries high risk, so usually only very few images, for example ten or fewer, can be acquired. Each captured image is then given a corresponding anomaly mask indicating whether a defect is present and, if so, its type and location. Once every image has its anomaly mask, the abnormal images are ready.
S102, preprocessing each abnormal image and its anomaly mask to obtain at least one sample image and at least one segmentation mask, and dividing the sample images into a training image set and a verification image set, wherein each sample image corresponds to one segmentation mask and the number of sample images is greater than the number of abnormal images.
The preprocessing comprises at least one of scaling, cropping, and flipping, and all preprocessed sample images have the same size. The training image set is the set of images used to train the base model; it includes at least one training sample image, each corresponding to a first segmentation mask. The verification image set is used to verify the accuracy of the image segmentation model after the base model has been trained; it includes at least one verification sample image, each corresponding to a second segmentation mask that indicates the position and type of defects in that image.
Specifically, after the abnormal images are acquired, at least one of scaling, cropping, and flipping may be applied to them to expand the sample images, as illustrated in Fig. 2, which shows an example of abnormal image preprocessing. The abnormal image in Fig. 2 has a defect of type coking. Scaling, for example reducing the abnormal image by any proportion to the size of sample image 1, produces sample image 1, the reduced image; cropping, for example cutting away a defect-free blank area of the abnormal image, produces sample image 2, the cropped image; and flipping, for example flipping the abnormal image vertically, produces sample image 3, the flipped image. Besides the methods shown in Fig. 2, the abnormal image may also be enlarged and/or cropped at any scale, scaled by any proportion, flipped horizontally or vertically, or given any combination of these transforms. Preprocessing the abnormal image thus yields at least one preprocessed sample image.
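Illustratively, the joint preprocessing of an image and its mask can be sketched as follows; a production pipeline would use an image library, while this toy version treats images as nested lists of pixel values, and the particular crop window is an arbitrary example:

```python
def vflip(grid):
    """Flip a nested-list 'image' vertically by reversing its rows."""
    return grid[::-1]

def crop(grid, top, left, height, width):
    """Cut a rectangular window out of a nested-list 'image'."""
    return [row[left:left + width] for row in grid[top:top + height]]

def augment(image, mask):
    """Apply each transform identically to the image and to its mask, so
    every derived sample image keeps an aligned segmentation mask."""
    return [
        (image, mask),                                      # original
        (vflip(image), vflip(mask)),                        # vertical flip
        (crop(image, 0, 0, 2, 2), crop(mask, 0, 0, 2, 2)),  # example crop
    ]
```

After augmentation, all derived samples would still need to be resized to a common size, as required at the end of the preprocessing step.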
Notably, since each abnormal image corresponds to an anomaly mask, whenever an abnormal image is preprocessed its anomaly mask is preprocessed in the same way. Meanwhile, to ensure the accuracy of model training, at the end of preprocessing all sample images and their corresponding segmentation masks are brought to a consistent size.
After the abnormal images and their anomaly masks are preprocessed, at least one sample image and its corresponding segmentation mask are obtained. All sample images may then be divided into a training image set and a verification image set. The division ratio can be chosen according to actual requirements: for example, the sample images may be split equally between the two sets, or two thirds may be used as the training image set and the remaining third as the verification image set.
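A minimal sketch of the division step, assuming the two-thirds/one-third ratio mentioned above and a fixed random seed for reproducibility (both are assumed defaults, not prescribed by the method):

```python
import random

def split_samples(samples, train_fraction=2 / 3, seed=0):
    """Shuffle the (image, mask) pairs and divide them into a training
    image set and a verification image set."""
    pairs = list(samples)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for the sketch
    cut = round(len(pairs) * train_fraction)
    return pairs[:cut], pairs[cut:]
```

Every sample ends up in exactly one of the two sets, so the verification images never participate in training.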
In this embodiment, the small number of acquired abnormal images and their anomaly masks are preprocessed into more sample images, alleviating the shortage of samples; the preprocessing also increases the diversity of the image data and lays a foundation for later improving the generalization ability of the trained model.
S103, training the base model with the training image set to obtain an image segmentation model, wherein the image segmentation model is used to detect whether the boiler water wall has defects.
The base model is a general-purpose image model, such as an image segmentation model, that has not yet been trained for the current scene. In this embodiment, the base model may optionally be the Segment Anything Model (SAM).
Specifically, training the base model with the training image set may include:
loading the base model according to the current usage scene and fine-tuning its relevant parameters; illustratively, loading the base model for the scene of detecting boiler water wall defect images, and fine-tuning its parameters according to the actual conditions of that scene;
encoding the sample images with the model's image encoder, for example a Vision Transformer (ViT), and inputting the images in the training image set together with their manually annotated segmentation masks into the model to achieve preliminary training of the base model; illustratively, the water wall images in the training image set and the manually annotated segmentation mask of each water wall image are input into the model for preliminary training. Meanwhile, during training the optimizer and loss function of the base model can be continuously adjusted to drive the update of model parameters, finally yielding a trained image segmentation model.
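The text does not name a concrete framework for this fine-tuning (in practice it would involve a deep-learning library and the released SAM weights). As a framework-free illustration of the inner loss-and-optimizer loop only, a toy one-parameter "model" can be fitted by gradient descent; every name here is illustrative and none is part of the claimed method:

```python
def pixel_loss(pred, truth):
    """Mean squared error between a predicted mask and its annotation."""
    n = sum(len(row) for row in truth)
    return sum((p - t) ** 2
               for prow, trow in zip(pred, truth)
               for p, t in zip(prow, trow)) / n

def fit(predict, param, samples, lr=0.1, epochs=50):
    """Toy stand-in for the optimizer/loss loop that fine-tunes the base
    model: predict, measure the loss against the annotated mask, estimate
    the gradient by finite differences, and update the parameter."""
    eps = 1e-4
    for _ in range(epochs):
        for image, mask in samples:
            base = pixel_loss(predict(param, image), mask)
            grad = (pixel_loss(predict(param + eps, image), mask) - base) / eps
            param -= lr * grad
    return param

# A one-parameter "model" that scales every pixel of the input image.
scale_model = lambda w, img: [[w * p for p in row] for row in img]
```

A real fine-tuning run would update millions of network weights with an optimizer such as Adam, but the control flow, loss against the annotated mask, gradient, parameter update, is the same.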
S104, inputting the verification image set into an image segmentation model, and determining the detection accuracy.
Specifically, after the base model is trained, the accuracy of the resulting image segmentation model can be determined with the verification image set. The water wall images in the verification image set are fed into the image segmentation model one by one, so that the model detects each image and produces a detection result; comparing the detection results with the actual results determines the accuracy of the image segmentation model.
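The comparison of detection results with actual results can be sketched as follows. The text does not fix a concrete metric, so per-image intersection-over-union of the defective pixels, with an assumed acceptance threshold, is used here purely as an example:

```python
def mask_iou(pred, truth):
    """Intersection-over-union of the defective pixels of two masks,
    counting a pixel as matched only when the defect labels agree."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p != 0 and t != 0 and p == t:
                inter += 1
            if p != 0 or t != 0:
                union += 1
    return inter / union if union else 1.0

def detection_accuracy(model, validation_set, iou_threshold=0.5):
    """Fraction of verification images whose predicted mask matches the
    annotated mask well enough; the IoU criterion is an assumption."""
    hits = sum(mask_iou(model(img), msk) >= iou_threshold
               for img, msk in validation_set)
    return hits / len(validation_set)
```

The resulting accuracy is the quantity compared against the preset threshold in the next step.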
In this embodiment, the trained image segmentation model is tested with the verification images that did not participate in training; this determines the detection accuracy of the model and, to a certain extent, also evaluates its generalization ability on samples unseen during training.
S105, if the detection accuracy is below the preset threshold, updating the training image set and returning to the step of training the base model with the training image set to obtain the image segmentation model, until the detection accuracy reaches or exceeds the preset threshold.
Updating the training image set means, for each training sample image in the set, modifying its defect description and then updating the first segmentation mask corresponding to that image. The preset threshold can be set and adjusted according to the actual application scene or user requirements.
Specifically, after the detection accuracy of the image segmentation model is calculated, it is compared with the preset threshold. If the accuracy reaches or exceeds the threshold, the image segmentation model meets the requirement and its training can end. If the accuracy is below the threshold, training has not met the requirement: the first segmentation masks corresponding to the defect descriptions of the sample images in the training image set are updated, the updated training image set is used to train the underperforming model again to obtain an updated image segmentation model, and these steps are repeated until the detection accuracy of the updated model reaches or exceeds the threshold, at which point training stops.
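The loop of S103–S105 can be summarized in a short sketch; the three callables, the concrete threshold value, and the round cap are placeholders for details the method leaves open:

```python
PRESET_THRESHOLD = 0.9  # assumed value; the method leaves it configurable

def train_until_accurate(base_model, train_set, val_set,
                         train_fn, evaluate_fn, update_fn,
                         threshold=PRESET_THRESHOLD, max_rounds=10):
    """Train, validate, and retrain with an updated training set until
    the detection accuracy reaches the preset threshold."""
    model = base_model
    for _ in range(max_rounds):
        model = train_fn(model, train_set)          # S103
        if evaluate_fn(model, val_set) >= threshold:  # S104
            break
        train_set = update_fn(train_set)             # S105: modify defect descriptions
    return model
```

With real components, `train_fn` would fine-tune the base model, `evaluate_fn` would compute the detection accuracy on the verification set, and `update_fn` would modify the defect descriptions and first segmentation masks.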
Modifying the defect description of a training sample image may include modifying the range of the defect prompt, modifying the outline of the defect prompt, and discarding or adding defect types. Illustratively, Fig. 3 shows an example of modifying the defect description of a training sample image; the original sample image in Fig. 3 is a water wall image with a bulge defect. To modify the range of the defect prompt, the range may be narrowed, as in modified sample image 1 in Fig. 3, or enlarged. To modify the outline of the defect prompt, the outline may be brought closer to the shape of the original defect, as in modified sample image 2, or changed into a shape different from the original defect. To discard or add defect types: the coking defect present in the original sample image was not shown there, so it can be added, yielding the image shown as modified sample image 3 in Fig. 3.
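One concrete way to enlarge the prompting range of a defect in a mask is a one-pixel dilation of its labelled region; this is an illustrative stand-in for the editing operations above, not the specific mechanism of the method:

```python
def dilate(mask, label):
    """Grow the region carrying one defect label by one pixel in the four
    cardinal directions, enlarging the defect prompting range."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(h):
        for c in range(w):
            if mask[r][c] == label:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    # only claim background pixels; other labels are kept
                    if 0 <= rr < h and 0 <= cc < w and out[rr][cc] == 0:
                        out[rr][cc] = label
    return out
```

Narrowing the range would be the inverse operation (erosion), and discarding or adding a defect type amounts to zeroing or writing its label values in the mask.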
In this embodiment, when the detection accuracy is smaller than a preset threshold, the training image set is updated, and the defect description in the training sample image is modified, so as to improve the accuracy and sensitivity of defect detection.
According to the model training method provided by the invention, an abnormal image and its corresponding abnormal mask are obtained, preprocessed, and divided into a training image set and a verification image set; the basic model is trained with the training image set to obtain an image segmentation model, the detection accuracy of the image segmentation model is calculated with the verification image set, training is determined to be complete when the detection accuracy is equal to or greater than a preset threshold, and when the detection accuracy is smaller than the preset threshold, the training image set is updated and the basic model is trained again with the updated training image set until the detection accuracy of the image segmentation model is equal to or greater than the preset threshold. In this technical scheme, on the one hand, the small number of obtained abnormal images and their corresponding abnormal masks are preprocessed to obtain more sample images, alleviating the problem of scarce sample images; at the same time, the preprocessing of the abnormal images increases the diversity of the image data and lays a foundation for subsequently improving the generalization ability of the trained model. On the other hand, the trained image segmentation model is tested with images in the verification image set that did not participate in training, so that the detection accuracy of the image segmentation model is determined while the generalization ability of the model is evaluated on unseen sample images. Finally, when the detection accuracy is smaller than the preset threshold, the training image set is updated and the defect descriptions of the training sample images are modified, improving the accuracy and sensitivity of defect detection.
The model training method provided by the embodiment of the invention is further described below. As shown in fig. 4, fig. 4 is another flow chart of the model training method provided by the invention. On the basis of the above embodiments and their various alternative implementations, this embodiment describes in detail the steps of training the basic model with the training image set, inputting the verification image set into the image segmentation model to determine the detection accuracy, and updating the training image set.
Referring to fig. 4, the model training method of the present embodiment may include the steps of:
S401, acquiring an abnormal image.
Specifically, the sources of the abnormal images may include image sets already disclosed for related industrial scenes and image sets collected in actual application scenarios. Each abnormal image corresponds to an abnormal mask, and the abnormal mask can be obtained by manually marking the position and type of the defect in the abnormal image.
S402, preprocessing the abnormal image and the corresponding abnormal mask thereof to obtain at least one sample image and at least one segmentation mask, and dividing the sample image into a training image set and a verification image set.
Specifically, after the abnormal image is acquired, at least one of scaling, cropping, and flipping may be applied to the abnormal image to expand the set of sample images. When an abnormal image is preprocessed, the same processing is applied to its corresponding abnormal mask so that each resulting sample image and its corresponding segmentation mask remain the same size. After the abnormal images and their corresponding abnormal masks are preprocessed, at least one sample image and its corresponding segmentation mask are obtained. All sample images can then be divided into a training image set and a verification image set; the dividing ratio can be determined according to actual requirements.
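The preprocessing and division of S402 can be sketched as follows, assuming images and masks are NumPy arrays. The transforms shown (two flips and a fixed center crop) are minimal stand-ins for the scaling/cropping/flipping mentioned above; the function names are illustrative, not from the source.

```python
import random

import numpy as np


def augment_pair(image, mask):
    """Apply the same transforms to an abnormal image and its abnormal mask,
    so that defect positions in each derived sample image still line up
    with its derived segmentation mask."""
    pairs = [(image, mask)]
    # Horizontal and vertical flips, applied identically to image and mask.
    pairs.append((np.fliplr(image).copy(), np.fliplr(mask).copy()))
    pairs.append((np.flipud(image).copy(), np.flipud(mask).copy()))
    # Center crop to half size (a stand-in for random cropping).
    h, w = image.shape[:2]
    top, left = h // 4, w // 4
    pairs.append((image[top:top + h // 2, left:left + w // 2],
                  mask[top:top + h // 2, left:left + w // 2]))
    return pairs


def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle and divide sample/mask pairs into a training image set and a
    verification image set; the ratio is set according to actual requirements."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Each abnormal image yields several sample images this way, which is how a small number of abnormal images can be expanded into a larger set.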
In this embodiment, the small number of acquired abnormal images and their corresponding abnormal masks are preprocessed to obtain more sample images, alleviating the problem of scarce sample images; at the same time, the preprocessing of the abnormal images increases the diversity of the image data and lays a foundation for subsequently improving the generalization ability of the trained model.
S403, inputting the current training sample image into a basic model, and determining a training result.
The current training sample image is any one training sample image in the training image set. In one implementation, the current training sample image may be any image in the training image set obtained by the initial division of the sample images; in another implementation, it may be any training sample image obtained after the training image set has been updated because the detection accuracy of the image segmentation model was smaller than the preset threshold.
Specifically, the current training sample image is input into the basic model; the basic model marks the type and position of each defect in the current training sample image, and these are compared with the actual defect types and positions to obtain a training result for the current training sample image.
For example, in one implementation, the training result may only indicate whether the model detected the defects completely and accurately. Suppose the current training sample image contains a defect a and a defect b; the position of defect a is A and its type is corrosion; the position of defect b is B and its type is crack. After the basic model processes the training sample image, in one case the training result is: only one defect a exists, at position A, of type corrosion; in another case the training result is: two defects a and b exist, defect a at position A of type corrosion, and defect b at position B of type bulge; in yet another case the training result is: two defects a and b exist, defect a at position C of type corrosion, and defect b at position B of type bulge.
S404, determining a loss function of the basic model according to the training result and the first segmentation mask corresponding to the current training sample image.
Wherein the loss function is used to indicate the degree of similarity between the model's detection results and the actual results.
Specifically, the loss function of the basic model may be determined according to the training result obtained by feeding the current training sample image into the basic model and the actual result indicated by the first segmentation mask corresponding to the current training sample image. Illustratively, the loss function of the basic model is computed as a focal loss. The specific steps are as follows: the segmentation mask corresponding to the current training sample image in the training result and the first segmentation mask are converted into the required data, such as vectors, through an encoder; the segmentation mask corresponding to the current training sample image in the training result is converted into probabilities through a preset function, and the cross-entropy loss is calculated; the preset function may be a sigmoid function; and the accuracy of the segmentation mask in the training result is evaluated against the first segmentation mask and combined with the cross-entropy loss to obtain the focal loss.
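The sigmoid → cross-entropy → focal-weighting steps above can be sketched as follows on per-pixel mask logits. This is a minimal sketch; the γ and α values are common defaults from the focal-loss literature, not values given in the text.

```python
import numpy as np


def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss between predicted mask logits and a ground-truth
    segmentation mask (values in {0, 1})."""
    # Preset function: sigmoid converts logits to probabilities.
    p = 1.0 / (1.0 + np.exp(-logits))
    # Per-pixel binary cross-entropy (eps guards against log(0)).
    eps = 1e-7
    ce = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    # p_t is the probability assigned to the true class of each pixel;
    # (1 - p_t)^gamma down-weights pixels the model already classifies well.
    p_t = targets * p + (1 - targets) * (1 - p)
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return float(np.mean(alpha_t * (1 - p_t) ** gamma * ce))
```

A well-classified mask thus yields a much smaller loss than a misclassified one, which is the signal used to adjust the basic model's parameters in S405.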
S405, adjusting parameters of the basic model according to the loss function.
Specifically, since the loss function indicates the degree of similarity between the model's detection results and the actual results, after the loss function is calculated it can be used to determine whether the basic model meets the user's requirements, and the parameters of the basic model are adjusted when the requirements are not met.
In this embodiment, the updating of the parameters of the base model is facilitated by calculating the loss function of the base model.
S406, determining whether the current training sample image is the last training sample image in the training image set; if so, executing S408, and if not, executing S407.
Specifically, after the loss function is calculated from the training result of each current training sample image and the parameters of the basic model are adjusted, it is necessary to determine whether the current training sample image is the last training sample image in the training image set; that is, to judge whether training has finished. If the current training sample image is not the last training sample image in the training image set, the next training sample image is taken as the current training sample image; if it is the last training sample image, the basic model is taken as the image segmentation model.
S407, taking the next training sample image of the current training sample image as the current training sample image, and returning to execute S403.
Specifically, if the current training sample image is not the last training sample image in the training image set, the next training sample image of the current training sample image may be used as the current training sample image, and a new current training sample image may be input into the base model to determine the training result.
S408, taking the basic model as an image segmentation model.
Specifically, if the current training sample image is the last training sample image in the training image set, the end of the training of the model can be determined, and the basic model is used as the image segmentation model.
In this embodiment, by determining whether the current training sample image is the last training sample image in the training image set, the training is performed on the images in the training image set automatically, a training result is generated, and a loss function is determined according to the training result, so as to continuously adjust parameters of the basic model; meanwhile, after all images in the training image set are trained, the basic model is automatically determined to be an image segmentation model.
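The per-sample loop of S403-S408 can be sketched as follows. Here `forward`, `loss_fn`, and `step` are hypothetical stand-ins for the basic model's forward pass, the loss computation of S404, and the parameter adjustment of S405; they are not names from the source.

```python
def train_until_done(forward, loss_fn, step, train_set):
    """One pass over the training image set: forward pass (S403), loss
    against the first segmentation mask (S404), parameter adjustment (S405),
    repeated until the last sample (S406/S407). Afterwards the basic model
    is taken as the image segmentation model (S408)."""
    losses = []
    for image, first_mask in train_set:
        prediction = forward(image)             # S403: training result
        loss = loss_fn(prediction, first_mask)  # S404: compare with first mask
        step(loss)                              # S405: adjust parameters
        losses.append(loss)
    return losses
```

The loop terminates naturally on the last training sample image, which corresponds to the check in S406.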
S409, for each verification sample image, inputting the verification sample image into the image segmentation model, determining the detected defects, and determining the detection result corresponding to the verification sample image according to the detected defects and the second segmentation mask corresponding to the verification sample image.
Wherein the set of verification images includes at least one verification sample image, one verification sample image corresponding to a second segmentation mask, the second segmentation mask being used to indicate the location and type of defects in its corresponding verification sample image.
Specifically, after the trained image segmentation model is obtained, its accuracy can be determined using the verification sample images in the verification image set. That is, each verification sample image is input into the image segmentation model, the defects detected by the model are determined, and the detection result is determined from the detected defects and the second segmentation mask indicating the true defects of the verification sample image.
Illustratively, assume a verification sample image has a defect a and a defect b; the position of defect a is A and its type is corrosion; the position of defect b is B and its type is crack. After the image segmentation model detects the verification sample image, in one case detected defect 1 is obtained: only one defect a exists, at position A, of type corrosion; in another case detected defect 2 is obtained: two defects a and b exist, defect a at position A of type corrosion, and defect b at position B of type bulge; in yet another case detected defect 3 is obtained: two defects a and b exist, defect a at position C of type corrosion, and defect b at position B of type bulge; in yet another case detected defect 4 is obtained: two defects a and b exist, defect a at position A of type corrosion, and defect b at position B of type crack. At this time, in one implementation, if the detected defects are completely consistent with the true defects of the verification sample image, the detection result is true and recorded as 1; if there is any difference between the detected defects and the true defects, the detection result is false and recorded as 0. For example, the detection results for detected defects 1, 2, and 3 are all 0, and the detection result for detected defect 4 is 1. In another implementation, the detection result is graded according to the degree of difference between the detected defects and the true defects of the verification sample image. For example, the detection result for detected defect 1 is 50%, for detected defect 2 is 83%, for detected defect 3 is 67%, and for detected defect 4 is 100%.
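The two scoring schemes can be sketched as follows, representing each defect as a (position, type) tuple. The strict mode matches the first scheme exactly; the graded mode is only one plausible reading of the second scheme (the text does not define its formula), so the exact percentages it produces are illustrative assumptions.

```python
def score_detection(detected, truth, graded=False):
    """Score one verification sample image.

    detected, truth: sets of (position, type) tuples.
    Strict mode: 1.0 only for an exact match, else 0.0.
    Graded mode (an assumed formula): fraction of defects on which the
    detected set and the true set agree.
    """
    if not graded:
        return 1.0 if detected == truth else 0.0
    if not detected and not truth:
        return 1.0
    overlap = len(detected & truth)
    return overlap / max(len(detected), len(truth))
```

For the example above, detected defect 4 scores 1.0 under either mode, while detected defect 1 scores 0.0 strictly and 0.5 graded.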
It should be noted that, in the above example, detected defects 1 through 4 are possible results obtained by inputting the same verification sample image into the same model. The above representations of the detection result are only examples and should not be taken as limiting the method of the present invention; the concrete form of the detection result can be set and adjusted by the user according to requirements and application scenarios.
S410, determining the detection accuracy according to detection results corresponding to all the verification sample images.
Specifically, after the detection result corresponding to each verification sample image is determined, the detection accuracy of the current image segmentation model can be obtained. For example, the arithmetic mean of the detection results over all verification sample images may be used as the detection accuracy, or a mean-squared-error statistic of the detection results may be used.
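A minimal sketch of the aggregation in S410 and the threshold check of S411, using the arithmetic-mean option mentioned above; the function names are illustrative.

```python
def detection_accuracy(results):
    """S410: arithmetic mean of per-image detection results, one of the
    aggregation options mentioned in the text."""
    return sum(results) / len(results)


def training_done(results, threshold):
    """S411: training may end once the detection accuracy is equal to or
    greater than the preset threshold."""
    return detection_accuracy(results) >= threshold
```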
S411, determining whether the detection accuracy is smaller than the preset threshold; if so, executing S412, and if not, executing S413.
Specifically, after the detection accuracy of the image segmentation model is calculated, it is determined whether the detection accuracy is smaller than the preset threshold. If the detection accuracy is equal to or greater than the preset threshold, training of the image segmentation model meets the requirement and can be ended. If the detection accuracy is smaller than the preset threshold, training does not yet meet the requirement, so the defect descriptions of the sample images in the training image set and the corresponding first segmentation masks need to be updated, and the updated training image set is used to retrain the basic model.
S412, for each training sample image in the training image set, modifying the defect description of the training sample image, updating the first segmentation mask corresponding to the training sample image after the defect description is modified, and returning to S403.
Specifically, when the detection accuracy of the current image segmentation model is smaller than the preset threshold, the training of the image segmentation model does not yet meet the requirement. Therefore, each training sample image in the training image set must be updated and the basic model retrained with the updated training sample images until the detection accuracy of the image segmentation model trained from the basic model is equal to or greater than the preset threshold. For each training sample image in the training image set, the defect description is modified by modifying the scope of the defect prompt, modifying the outline of the defect prompt, or discarding or adding a defect type; when the defect description of a training sample image is modified, its corresponding first segmentation mask is updated accordingly, so that the training sample image and its first segmentation mask remain consistent.
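Rebuilding a first segmentation mask after the defect description changes can be sketched as follows. The type codes and the rectangular defect regions are illustrative assumptions; real defect prompts would use the modified outlines, not plain boxes.

```python
import numpy as np

# Hypothetical label codes; the real label scheme is not given in the text.
TYPE_CODES = {"corrosion": 1, "crack": 2, "bulge": 3, "coking": 4}


def rebuild_mask(shape, defects):
    """Rebuild the first segmentation mask of a training sample image after
    its defect descriptions were modified (scope enlarged/shrunk, types
    added or dropped). Each defect is simplified to a rectangle here,
    given as bbox = (top, left, height, width)."""
    mask = np.zeros(shape, dtype=np.uint8)
    for d in defects:
        top, left, h, w = d["bbox"]
        mask[top:top + h, left:left + w] = TYPE_CODES[d["type"]]
    return mask
```

Narrowing a defect prompt's scope corresponds to passing a smaller bbox; dropping a defect type corresponds to omitting its entry from `defects`.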
After every training sample image in the training image set has been updated, the training sample images can be fed into the basic model one by one as the current training sample image, and S403-S411 are executed again in order until the detection accuracy of the model is equal to or greater than the preset threshold.
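The outer loop of fig. 4 (train, validate, and if accuracy falls short, update the training set and retrain) can be sketched as follows. The three callables are hypothetical stand-ins, and `max_rounds` is a safety cap not present in the text.

```python
def train_with_feedback(train_once, evaluate, update_set, train_set,
                        threshold, max_rounds=10):
    """Outer loop of fig. 4: train the basic model (S403-S408), validate it
    (S409-S410), and while accuracy < threshold, update the training image
    set and retrain (S411-S412)."""
    model = None
    for _ in range(max_rounds):
        model = train_once(train_set)        # S403-S408
        if evaluate(model) >= threshold:     # S409-S411
            return model                     # S413: training finished
        train_set = update_set(train_set)    # S412: modify defect descriptions
    return model
```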
S413, model training is finished.
Specifically, when the detection accuracy of the image segmentation model is equal to or greater than the preset threshold, model training can be determined to be finished; that is, the image segmentation model now meets the needs of the user.
In this embodiment, it is determined whether the detection accuracy of the image segmentation model is smaller than the preset threshold, so that when the detection accuracy does not reach the standard, the training image set is updated and the basic model is trained again. In this way, the detection accuracy of the image segmentation model is improved by continuously varying a small number of original sample images and their defect prompts.
According to the model training method provided by the embodiment of the invention, an abnormal image is acquired and, together with its corresponding abnormal mask, preprocessed to obtain sample images and segmentation masks, which are divided into a training image set and a verification image set. The training sample images are input into the basic model in turn; for each training result, the loss function of the basic model is determined according to the first segmentation mask corresponding to the current training sample image and the parameters of the basic model are adjusted, until all training sample images in the current training image set have been input, at which point the basic model is taken as the image segmentation model. Each verification sample image in the verification image set is then input into the image segmentation model to determine the detected defects, and the detection result for each verification sample image is determined according to its corresponding second segmentation mask; the detection accuracy is determined from the detection results of all verification sample images. When the detection accuracy is smaller than the preset threshold, the defect description of each training sample image is modified and its corresponding first segmentation mask is updated, the updated training sample images are input into the basic model again in turn, and the model parameters are adjusted until the detection accuracy is equal to or greater than the preset threshold, at which point model training is finished.
In this technical scheme, on the one hand, the small number of obtained abnormal images and their corresponding abnormal masks are preprocessed to obtain more sample images, alleviating the problem of scarce sample images; at the same time, the preprocessing of the abnormal images increases the diversity of the image data and lays a foundation for subsequently improving the generalization ability of the trained model. On the other hand, calculating the loss function of the basic model facilitates updating its parameters. In another aspect, by determining whether the current training sample image is the last one in the training image set, training over the images in the training image set proceeds automatically: a training result is generated, a loss function is determined from it, and the parameters of the basic model are adjusted continuously; once all images in the training image set have been trained on, the basic model is automatically determined to be the image segmentation model. Finally, by determining whether the detection accuracy of the image segmentation model is smaller than the preset threshold, the training image set is updated and the basic model retrained when the detection accuracy does not reach the standard, improving the detection accuracy of the image segmentation model by continuously varying a small number of original sample images and their defect prompts.
After the model has been trained according to the above embodiments and their various alternative implementations, the trained model can be applied to the required scene. The following takes as an example acquiring image data of an actual boiler water wall and determining defects of the boiler water wall from that image data.
As shown in fig. 5, fig. 5 is a schematic flow chart of the defect detection method according to the present invention. The image segmentation model in the embodiment of the invention can be obtained by training with the model training method described in the above embodiments.
Referring to fig. 5, the defect detection method of the present embodiment may include the steps of:
S501, determining image data of a boiler water wall.
Wherein the image data comprises data in the form of pictures, data in the form of video and/or data in the form of image frames.
Specifically, the image data can be acquired manually by personnel entering the interior of the boiler water wall, or by an unmanned aerial vehicle carrying a camera that enters the interior of the boiler water wall via automatic cruise or manual navigation.
S502, inputting the image data into an image segmentation model, and detecting whether the boiler water wall has defects.
Specifically, the obtained image data is input into an image segmentation model, and the image segmentation model can automatically read the characteristics in the image data and label the positions and types of defects in the image data.
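The inference step of S502 can be sketched as follows. The model is assumed to return a per-pixel defect-probability map, and the threshold value is a hypothetical default; the real model may instead output per-class masks with positions and types directly.

```python
import numpy as np


def detect_defects(model, image, threshold=0.5):
    """Run the trained image segmentation model on one water-wall image and
    report whether any defect pixels were found, plus the defect region."""
    prob_map = model(image)                 # per-pixel defect probabilities
    defect_pixels = prob_map >= threshold   # boolean defect region
    return bool(defect_pixels.any()), defect_pixels
```

The returned boolean answers "does the boiler water wall have a defect", while the pixel map marks where the defect lies in the image.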
It should be noted that after the image segmentation model has been used to detect the image data of the boiler water wall, that image data can be fed into the sample library of the image segmentation model as new sample data, so that the sample library of the image segmentation model is continuously expanded.
According to the defect detection method provided by the embodiment of the invention, after the image segmentation model is trained, the image data of the boiler water wall collected in a real scene is determined and the boiler water wall is detected with the image segmentation model, obtaining the positions and types of its defects. This replaces manual visual inspection of the boiler water wall and achieves accurate detection of boiler water wall defects with an image segmentation model trained from only a few sample images.
Fig. 6 is a schematic structural diagram of the model training device provided by the invention. As shown in fig. 6, the apparatus includes:
An image obtaining module 601, configured to obtain an abnormal image, where one abnormal image corresponds to one abnormal mask, and the abnormal mask is used to indicate a location and a type of a defect in the corresponding abnormal image;
the preprocessing module 602 is configured to preprocess the abnormal image and the corresponding abnormal mask thereof to obtain at least one sample image and at least one segmentation mask, and divide the sample image into a training image set and a verification image set, where one sample image corresponds to one segmentation mask, and the number of sample images is greater than the number of abnormal images;
The model training module 603 is configured to train the base model by using a training image set to obtain an image segmentation model, where the image segmentation model is used to detect whether a boiler water wall has a defect;
the accuracy detection module 604 is configured to input the verification image set into the image segmentation model, and determine a detection accuracy;
the optimizing module 605 is configured to update the training image set if the detection accuracy is less than the preset threshold, and return to performing the step of training the base model with the training image set to obtain the image segmentation model until the detection accuracy is greater than or equal to the preset threshold.
Optionally, the model training module 603 is specifically configured to:
Inputting a current training sample image into a basic model, and determining a training result, wherein the current training sample image is any one training sample image in a training image set;
according to the training result and the first segmentation mask corresponding to the current training sample image, adjusting parameters of the basic model;
if the current training sample image is not the last training sample image in the training image set, taking the next training sample image of the current training sample image as the current training sample image, and returning to execute the step of inputting the current training sample image into the basic model to determine a training result;
And if the current training sample image is the last training sample image in the training image set, taking the basic model as an image segmentation model.
Optionally, when adjusting parameters of the basic model according to the training result and the first segmentation mask corresponding to the current training sample image, the model training module 603 is specifically configured to:
determining a loss function of the basic model according to the training result and a first segmentation mask corresponding to the current training sample image;
and adjusting parameters of the basic model according to the loss function.
Optionally, the accuracy rate detection module 604 is specifically configured to:
Inputting the verification sample image into an image segmentation model for each verification sample image, determining detection defects, and determining detection results corresponding to the verification sample images according to the detection defects and second segmentation masks corresponding to the verification sample images;
And determining the detection accuracy according to detection results corresponding to all the verification sample images.
Optionally, the apparatus further comprises: the training image set updating module is specifically used for: for each training sample image in the training image set, modifying the defect description of the training sample image, and updating the first segmentation mask corresponding to the training sample image after the defect description is modified.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the functional module described above may refer to the corresponding process in the foregoing method embodiment, and will not be described herein.
The model training device provided by the embodiment of the invention can execute the model training method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the model training method.
Fig. 7 is a schematic structural diagram of a defect detecting device according to the present invention. As shown in fig. 7, the apparatus includes:
an image data acquisition module 701, configured to determine image data of a water wall of a boiler;
The detection module 702 is used for inputting the image data into the image segmentation model and detecting whether the boiler water wall has defects or not; the image segmentation model is obtained by adopting the model training method according to any embodiment of the invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the functional module described above may refer to the corresponding process in the foregoing method embodiment, and will not be described herein.
The defect detection device provided by the embodiment of the invention can execute the defect detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the defect detection method.
Fig. 8 is a schematic structural diagram of an electronic device according to the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 8, the electronic device 8 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 8 can also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components of the electronic device 8 are connected to the I/O interface 15, including: an input unit 16, such as a keyboard or a mouse; an output unit 17, such as various types of displays and speakers; a storage unit 18, such as a magnetic disk or an optical disk; and a communication unit 19, such as a network card, a modem, or a wireless communication transceiver. The communication unit 19 allows the electronic device 8 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the model training method or the defect detection method.
In some embodiments, the model training method or the defect detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 8 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the model training method or the defect detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the model training method or the defect detection method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present invention may be written in any combination of one or more programming languages. Such computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that, when executed by the processor, they cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. A computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network; the client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and poor service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of protection of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and substitutions are possible depending on design requirements and other factors. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A model training method, comprising:
obtaining abnormal images, wherein each abnormal image corresponds to one abnormal mask, and the abnormal mask indicates the position and type of a defect in the corresponding abnormal image;
preprocessing the abnormal images and their corresponding abnormal masks to obtain at least one sample image and at least one segmentation mask, and dividing the sample images into a training image set and a verification image set, wherein each sample image corresponds to one segmentation mask, and the number of sample images is greater than the number of abnormal images;
training a basic model using the training image set to obtain an image segmentation model, wherein the image segmentation model is used for detecting whether a boiler water wall has defects;
inputting the verification image set into the image segmentation model, and determining a detection accuracy; and
if the detection accuracy is less than a preset threshold, updating the training image set and returning to the step of training the basic model using the training image set to obtain an image segmentation model, until the detection accuracy is greater than or equal to the preset threshold.
2. The model training method of claim 1, wherein the training image set includes at least one training sample image, each training sample image corresponding to a first segmentation mask, the first segmentation mask indicating the position and type of a defect in its corresponding training sample image;
wherein training the basic model using the training image set to obtain an image segmentation model comprises:
inputting a current training sample image into the basic model and determining a training result, wherein the current training sample image is any training sample image in the training image set;
adjusting parameters of the basic model according to the training result and the first segmentation mask corresponding to the current training sample image;
if the current training sample image is not the last training sample image in the training image set, taking the next training sample image as the current training sample image and returning to the step of inputting the current training sample image into the basic model and determining a training result; and
if the current training sample image is the last training sample image in the training image set, taking the basic model as the image segmentation model.
3. The model training method of claim 2, wherein adjusting parameters of the basic model according to the training result and the first segmentation mask corresponding to the current training sample image comprises:
determining a loss function of the basic model according to the training result and the first segmentation mask corresponding to the current training sample image; and
adjusting parameters of the basic model according to the loss function.
4. The model training method of claim 1, wherein the verification image set includes at least one verification sample image, each verification sample image corresponding to a second segmentation mask, the second segmentation mask indicating the position and type of a defect in its corresponding verification sample image;
wherein inputting the verification image set into the image segmentation model and determining the detection accuracy comprises:
for each verification sample image, inputting the verification sample image into the image segmentation model, determining a detected defect, and determining a detection result corresponding to the verification sample image according to the detected defect and the second segmentation mask corresponding to the verification sample image; and
determining the detection accuracy according to the detection results corresponding to all the verification sample images.
5. The model training method of claim 2, wherein updating the training image set comprises:
for each training sample image in the training image set, modifying the defect description of the training sample image, and updating the first segmentation mask corresponding to the training sample image after the defect description is modified.
6. The model training method of claim 1, wherein the preprocessing includes at least one of scaling, cropping, and flipping; and
all sample images after preprocessing have the same size.
7. The model training method of claim 1, wherein the number of abnormal images is less than or equal to 10.
8. A defect detection method, comprising:
determining image data of a boiler water wall; and
inputting the image data into an image segmentation model and detecting whether the boiler water wall has defects, wherein the image segmentation model is obtained using the model training method of any one of claims 1-7.
9. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model training method of any one of claims 1-7 or the defect detection method of claim 8.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the model training method of any one of claims 1-7 or the defect detection method of claim 8.
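The iterative train-validate-retrain loop recited in claims 1, 4, and 5 can be sketched as below. This is a minimal illustrative sketch only, not the patented implementation: the function names (`train_until_threshold`, `train_fn`, `eval_fn`, `update_fn`) and the toy usage are hypothetical stand-ins, and a real system would train an image segmentation network on sample images with pixel-level masks.

```python
# Sketch of the claimed loop: train a basic model, validate it,
# and if accuracy falls below the preset threshold, update the
# training set and retrain. All names here are illustrative.

def train_until_threshold(train_set, val_set, train_fn, eval_fn,
                          update_fn, threshold, max_rounds=10):
    """Repeat train -> validate until accuracy >= threshold."""
    model, accuracy = None, 0.0
    for _ in range(max_rounds):
        # Train the basic model on the (possibly updated) training set.
        model = train_fn(train_set)
        # Validate: compute detection accuracy on the verification set.
        accuracy = eval_fn(model, val_set)
        if accuracy >= threshold:
            break
        # Accuracy too low: update the training set (e.g., revise the
        # defect descriptions and their masks, per claim 5), then retrain.
        train_set = update_fn(train_set)
    return model, accuracy

# Toy usage: a fake "model" whose accuracy improves each round.
rounds = {"n": 0}
def toy_train(train_set):
    rounds["n"] += 1
    return rounds["n"]                 # "model" = number of rounds trained
def toy_eval(model, val_set):
    return 0.5 + 0.2 * model           # accuracy grows with each round
def toy_update(train_set):
    return train_set                   # no-op update for the toy example

model, acc = train_until_threshold([], [], toy_train, toy_eval,
                                   toy_update, threshold=0.85)
```

In this toy run the first round yields accuracy 0.7, which is below the 0.85 threshold, so the loop updates the training set and retrains; the second round reaches the threshold and the loop stops, mirroring the conditional return step of claim 1.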
CN202410073759.6A 2024-01-18 2024-01-18 Model training method, defect detection method, electronic equipment and storage medium Pending CN117893509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410073759.6A CN117893509A (en) 2024-01-18 2024-01-18 Model training method, defect detection method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117893509A true CN117893509A (en) 2024-04-16

Family

ID=90644258


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination