CN115760788A - Defect generation method for deep learning task

Defect generation method for deep learning task

Info

Publication number
CN115760788A
Authority
CN
China
Prior art keywords
image
defect
transformation
generating
deep learning
Prior art date
Legal status
Pending
Application number
CN202211466558.XA
Other languages
Chinese (zh)
Inventor
胡凯
彭斌
杨艺
Current Assignee
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date
Filing date
Publication date
Application filed by Luster LightTech Co Ltd
Priority to CN202211466558.XA
Publication of CN115760788A
Legal status: Pending

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a defect generation method for a deep learning task, comprising the following steps: acquiring a defect image from a defect sample, the defect image being a rectangular region of the defect sample that contains a defect; constructing a defect sample library from the defect images of a preset number of defect samples; extracting at least one defect image from the defect sample library as a target image; injecting the target image into a preset background image according to the deep learning task type and the transformation conditions to generate a transformed image; and generating a corresponding task image and annotation file from the transformed image. The method addresses the problem that when defect samples and normal samples are unbalanced in number, deep learning training is very difficult and defect detection performs poorly.

Description

Defect generation method for deep learning task
Technical Field
The application relates to the field of defect detection, in particular to a defect generation method for a deep learning task.
Background
In the field of defect detection, deep learning is a common method for solving detection tasks on complex products. Deep learning learns the intrinsic laws and representation hierarchy of sample data, and the information obtained during learning is very helpful for interpreting data such as text, images and sound. Training a deep learning model requires a training set and a validation set; the model saves its optimal weights, and reading those weights and recording the training-set and validation-set accuracy makes parameter tuning convenient.
In actual production, trainable defect samples are very scarce while normal samples are plentiful, so the two classes are unbalanced. Common remedies are expanding the data set, resampling it, or generating data samples manually; however, expanding the data set adds a large amount of sample data and increases the workload, while resampling and manual generation consume extra manpower and material resources.
Deep learning training also needs enough samples to guarantee the detection effect, so at present such training is very difficult and the defect detection effect is poor.
Disclosure of Invention
The application provides a defect generation method for a deep learning task, aiming to solve the problem that deep learning training is very difficult and the resulting defect detection effect is poor.
The application relates to a defect generation method for a deep learning task, which comprises the following steps:
acquiring a defect image according to a defect sample, wherein the defect image is a rectangular area containing defects in the defect sample;
constructing a defect sample library according to the defect images corresponding to a preset number of defect samples;
extracting at least one defect image from the defect sample library as a target image;
injecting the target image into a preset background image according to the deep learning task type and the transformation conditions to generate a transformed image;
and generating a corresponding task image and an annotation file according to the transformed image.
Optionally, the step of obtaining a defect image according to the defect sample includes:
generating a local-area small image in the defect sample, the small image being a rectangular region containing a defect;
drawing the defect outer contour on the local-area small image, and generating a defect image and a mask image of the same size;
generating the minimal bounding rectangle of the defect outer contour from the contour;
and expanding the minimal bounding rectangle outward on all sides, taking the expanded rectangular region as the defect image.
Optionally, when the minimal bounding rectangle is expanded outward on all sides, the expansion length is 3px-5px per side.
Optionally, the transformation conditions include: horizontal mirroring, vertical mirroring, special-angle rotation, affine transformation, brightness transformation, contrast transformation, blur transformation, noise transformation, and sharpening transformation, where each transformation condition comprises an amplitude and a probability.
Optionally, before the step of generating the corresponding task image and annotation file, the method further includes: judging from the transformed image whether further transformations need to be added; if so, regenerating the transformed image according to the added transformation conditions; if not, generating the corresponding task image and annotation file.
Optionally, the step of generating the corresponding task image and the annotation file includes:
adopting different annotation-file generation modes according to the task corresponding to the background image;
and configuring the transformation conditions of the background image and the transformation conditions of the target image, and generating a corresponding task image and an annotation file.
Optionally, generating a corresponding task image and an annotation file according to the transformed image includes:
generating a corresponding auxiliary image according to the background image;
transforming the background image according to the transformation condition to generate a transformed background image;
randomly generating an injection position according to the width and the height of the background image and the width and the height of the defect image;
generating an auxiliary image subregion and a background image subregion according to the position;
judging whether the target image exists in the auxiliary image sub-area or not;
if the target image exists, randomly generating an injection position again according to the width and the height of the background image and the width and the height of the defect image;
if the target image does not exist, applying the same transformations to the defect image and the mask image according to the transformation conditions;
fusing the defect image into a sub-region of the background image;
and fusing the mask image corresponding to the defect image into the sub-area of the auxiliary image to generate the auxiliary image.
Optionally, for different deep learning task types, whether to generate a markup file is determined:
if the deep learning task type is a classification task, generating no marking file;
if the deep learning task type is a segmentation task, saving the auxiliary image as an annotation file;
and if the deep learning task type is a detection task, performing contour analysis on the auxiliary image, taking the minimal bounding rectangle of each contour as an annotation box, and generating a corresponding json or xml annotation file.
Optionally, when the defect image is fused into the sub-region of the background image, the mask image is first eroded and Poisson fusion is then applied. Poisson fusion places a target image into a background image at a position within the required foreground size centered on a point P in the background image; during fusion the colors and gradients of the target image may change, achieving a seamless blend and a better fusion result.
Optionally, injecting the target image into the background image according to the deep learning task type and the transformation condition, and generating the transformed image, including:
generating an initial transformation list according to the transformation conditions;
setting probability or adjusting a transformation sequence for the initial transformation list to generate a transformation list;
and transforming the background image and the target image according to the sequence of the transformation list.
According to the above technical solution, the application provides a defect generation method for a deep learning task, comprising the following steps: acquiring a defect image from a defect sample, the defect image being a rectangular region of the defect sample that contains a defect; constructing a defect sample library from the defect images of a preset number of defect samples; extracting at least one defect image from the library as a target image; injecting the target image into a preset background image according to the deep learning task type and the transformation conditions to generate a transformed image; and generating a corresponding task image and annotation file from the transformed image. The method generates defects on normal sample images (OK samples) using a small number of defect samples (NG samples) and a large number of normal sample images, thereby increasing the number of defect samples, alleviating the data-imbalance problem, and improving the detection effect.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; it will be obvious to those skilled in the art that other drawings can be derived from them without creative effort.
FIG. 1 is a schematic diagram of the steps of a defect generation method for a deep learning task;
FIG. 2 is a schematic diagram illustrating steps for generating a corresponding task image and an annotation file;
FIG. 3 is a schematic illustration of applying the same transformation to the defect image and the mask image;
FIG. 4 is a schematic diagram of the interactive method for constructing the defect sample library;
FIG. 5 is a schematic diagram of a defect image generation method.
Detailed Description
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following examples do not represent all embodiments consistent with the present application; they are merely examples of systems and methods consistent with certain aspects of the application as recited in the claims.
Defect detection is an important industrial application. Because defects are highly varied, traditional machine-vision algorithms struggle to model and transfer defect features completely; their reusability is low, working conditions must be distinguished case by case, and a large amount of labor cost is wasted. Deep learning has achieved very good results in feature extraction and localization, so deep learning algorithms have been introduced into the field of defect detection.
In deep learning, a machine learning model with multiple hidden layers is built and trained on massive data to learn features, ultimately improving the accuracy and generality of classification or prediction. However, training an effective deep learning model requires a large amount of labeled data, substantial graphics-card resources, and long training times; in many industrial scenarios the acquisition cost of defect images is very high, so the number of samples is very limited and a deep learning model is hard to train directly. As a result, the false-detection and missed-detection rates of the trained defect detection model are high, and it is difficult to meet enterprise requirements.
The present application provides a defect generation method for a deep learning task, referring to fig. 1, which is a schematic step diagram of the defect generation method for the deep learning task, and includes:
s1: acquiring a defect image according to the defect sample, wherein the defect image is a rectangular area containing defects in the defect sample;
specifically, a sample may be a normal sample image (OK sample) or a defect sample (NG sample); since in actual production defect samples are few and normal samples are many, normal sample images account for a large percentage in this embodiment. Common defects include planar-object defects and three-dimensional-object defects, for example: scratches, pits and grooves, cracks, edge chipping, and stains.
S2: constructing a defect sample library according to defect images corresponding to a preset number of defect samples;
referring to fig. 4, an interactive mode schematic diagram is constructed for the defect sample library; in some embodiments, the defect sample library is constructed interactively, first, a rectangular region with defects is selected from the defect samples, i.e. a small map of the local region, as shown in a in fig. 4, then the outline of the defect is drawn on the rectangular region, as shown in B in fig. 4, and finally, a minimum bounding rectangle of the defect is generated according to the outline of the defect, as shown in C in fig. 4.
S3: extracting at least one defect image from a defect sample library as a target image;
it should be understood that the process of extracting the target image is random.
S4: injecting the target image into a preset background image according to the type of the deep learning task and the conversion condition to generate a converted image;
specifically, the preset background image is a normal Sample image (OK Sample);
to improve the diversity of the defect samples, the target image is transformed according to the selected deep learning task type and the transformation conditions.
Referring to fig. 5, a schematic diagram of the defect image generation method: to make the injection of target images into the background image more efficient, multiple target images that have completed the task-type selection and transformation are superimposed, and the defects of these target images are injected into the same background image to generate the transformed image;
s5: and generating a corresponding task image and an annotation file according to the transformed image.
Through steps S1-S5, defects are generated on normal sample images (OK samples) using a small number of defect samples (NG samples) and a large number of normal sample images, increasing the number of defect samples, alleviating the data-imbalance problem, and improving the detection effect.
The step of acquiring a defect image from a defect sample includes:
generating a local-area small image in the defect sample, the small image being a rectangular region containing a defect;
drawing the defect outer contour on the local-area small image, and generating a defect image and a mask image of the same size, the mask having gray value 0 only inside the defect region and 255 elsewhere;
generating the minimal bounding rectangle of the defect outer contour from the contour;
and expanding the minimal bounding rectangle outward on all sides, taking the expanded rectangular region as the defect image, i.e. the defect image is a rectangular region of the defect sample that contains the defect.
In some embodiments, when the minimal bounding rectangle is expanded outward, the expansion length is 3px-5px per side. When the contour drawing is abnormal or the defect is not fully framed, expanding the minimal bounding rectangle outward enlarges the framed range and ensures the defect lies inside the framed area.
For example: the local-area small image generated from the defect sample is 25px × 25px and contains the defect; the defect outer contour is drawn on this 25px × 25px region, producing a defect image and a mask image of the same size. Suppose a 6px × 7px minimal bounding rectangle is generated from the contour; expanding it outward by 3px-5px on each side yields a rectangle between 12px × 13px and 16px × 17px, and this expanded rectangular region is taken as the defect image.
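The expansion step above can be sketched as a small helper, clamping at the image borders (function and parameter names are illustrative, not from the patent):

```python
def expand_bbox(x, y, w, h, img_w, img_h, pad=3):
    """Expand a minimal bounding rectangle (x, y, w, h) outward by
    `pad` pixels per side (3-5 px in the text), clamped to the image
    borders so the expanded region stays inside the patch."""
    x0 = max(0, x - pad)
    y0 = max(0, y - pad)
    x1 = min(img_w, x + w + pad)
    y1 = min(img_h, y + h + pad)
    return x0, y0, x1 - x0, y1 - y0

# A 6x7 rectangle inside a 25x25 patch, expanded by 3 px per side -> 12x13
print(expand_bbox(10, 10, 6, 7, 25, 25, pad=3))  # (7, 7, 12, 13)
```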
In some embodiments, the transformation conditions include size/position/angle transformations, illumination transformations and imaging-detail transformations, each transformation condition comprising an amplitude and a probability.
Specifically, for example: horizontal mirroring, vertical mirroring, and special-angle rotation (e.g. 30, 90 or 180 degrees); affine transformations such as translation, rotation, scaling and shearing; illumination transformations such as brightness and contrast; imaging-detail transformations such as blur, noise and sharpening; other transformations may be added as needed.
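The illumination transformations named above reduce to simple per-pixel arithmetic; the following sketch uses the common linear formula, which is an assumption here since the text names the transform types without giving their math:

```python
def adjust_brightness_contrast(pixels, alpha=1.0, beta=0):
    """Per-pixel illumination transform out = clamp(alpha * p + beta, 0, 255):
    alpha scales contrast, beta shifts brightness."""
    return [max(0, min(255, int(alpha * p + beta))) for p in pixels]

row = [0, 100, 200, 255]
print(adjust_brightness_contrast(row, alpha=1.5, beta=-30))  # -> [0, 120, 255, 255]
```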
When the user (terminal) issues the instruction "rotate 30 degrees", the system rotates the image by 30 degrees; if the generated image does not meet the requirement, the user issues the next instruction, e.g. "scale 3px", the system scales the image by 3px, and whether further transformations are needed is judged from the generated image.
Before the step of generating the corresponding task image and annotation file, the method further includes: judging from the transformed image whether further transformations need to be added; if so, regenerating the transformed image according to the added transformation conditions; if not, generating the corresponding task image and annotation file.
The step of generating the corresponding task image and the annotation file comprises the following steps:
adopting different annotation-file generation modes according to the task corresponding to the background image;
the corresponding tasks may be classified into a classification task, a segmentation task, and a detection task.
And configuring the transformation conditions of the background image and the transformation conditions of the target image, and generating a corresponding task image and an annotation file.
Referring to fig. 2 and fig. 3: fig. 2 is a schematic diagram of the steps for generating the corresponding task image and annotation file, and fig. 3 is a schematic diagram of applying the same transformations to the defect image and the mask image. Generating the corresponding task image and annotation file from the transformed image includes:
s501: generating a corresponding auxiliary image according to the background image;
specifically, the corresponding auxiliary image is an image whose gray values are all 255;
s502: transforming the background image according to the transformation condition to generate a transformed background image;
the background transformation condition and the target transformation condition are configured and executed in an interactive mode, the transformation is designed in advance and then selected from the pre-designed transformation, and the method selects common transformation, such as: carrying out horizontal mirror image processing on the background image to obtain a transformed background image;
s503: randomly generating an injection position according to the width and the height of the background image and the width and the height of the defect image;
s504: generating an auxiliary image subregion and a background image subregion according to the position;
for example: let the background image have width W and height H, and the defect image width w and height h; the randomly generated injection position is (x, y) with x = rand(0, W-w-1) and y = rand(0, H-h-1), so the auxiliary image sub-region is (x, y, x+w, y+h);
s505: judging whether a target image already exists in the auxiliary image sub-region, i.e. checking whether the sub-region contains any pixel with gray value 0;
if the target image exists, randomly generating an injection position again according to the width and the height of the background image and the width and the height of the defect image;
if after 10 attempts at this judgment no satisfactory (x, y) can be obtained, the defect sample is skipped and the next defect sample is processed;
if no target image exists there, applying the same transformations to the defect image and the mask image according to the transformation conditions;
for example: if no target image exists, the brightness of the defect image and mask image can be transformed, and whether a further transformation is needed is judged from the actual situation;
s506: fusing the defect image into a sub-region of the background image;
s507: and fusing the mask image corresponding to the defect image into the sub-area of the auxiliary image to generate the auxiliary image.
Whether an annotation file is generated is determined by the deep learning task type:
if the deep learning task type is a classification task, no marking file is generated;
if the deep learning task type is a segmentation task, saving the auxiliary image as an annotation file;
and if the deep learning task type is a detection task, performing contour analysis on the auxiliary image, taking the minimal bounding rectangle of each contour as an annotation box, and generating a corresponding json or xml annotation file.
When the user (terminal) does not need an annotation file, the system selects the classification task type; when the user needs a png annotation file, the system selects the segmentation task type; and when the user needs a json or xml annotation file, the system selects the detection task type.
Annotation files generally come in formats such as mat, json, xml, png, csv, xlsx, txt and word; this embodiment generates the json, xml or png format.
Contour analysis, also called contour detection, detects the boundaries of objects in an image and acquires the image's topological information. The first step is binarizing the image; the second, labeling connected components; the third, obtaining each contour's centroid, area, perimeter, minimal bounding rectangle, and so on.
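A minimal sketch of this contour analysis for the detection task, using a pure-Python connected-component pass in place of cv2.findContours/cv2.boundingRect (the function name and json layout are illustrative assumptions):

```python
import json

def defect_bboxes(mask):
    """Connected-component pass over a binary mask (0 = defect,
    255 = background, matching the mask convention in the text),
    returning the minimal bounding rectangle (x, y, w, h) of each
    defect region."""
    H, W = len(mask), len(mask[0])
    seen = [[False] * W for _ in range(H)]
    boxes = []
    for sy in range(H):
        for sx in range(W):
            if mask[sy][sx] == 0 and not seen[sy][sx]:
                stack, xs, ys = [(sx, sy)], [], []
                seen[sy][sx] = True
                while stack:  # flood fill one defect region
                    x, y = stack.pop()
                    xs.append(x); ys.append(y)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < W and 0 <= ny < H and mask[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes

mask = [[255] * 6 for _ in range(5)]
for y in range(1, 3):
    for x in range(2, 5):
        mask[y][x] = 0  # one 3x2 defect region
annotation = json.dumps([{"bbox": b} for b in defect_bboxes(mask)])
```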
The defect image is fused into the sub-region of the background image; the mask image is eroded first, and a Poisson function is then used for the fusion.
The mask image is preprocessed: image erosion removes boundary points of an object and shrinks its boundary inward. It can remove objects smaller than the structuring element, and if two objects are connected by a thin bridge, a sufficiently large structuring element separates them.
In some embodiments, the defect image may instead be fused into the background sub-region by plain copying, but copying may blend the images poorly and give a worse result. Poisson fusion places the target image into the background image at a position within the required foreground size centered on a point P; the fusion may change the target image's colors and gradients, achieving a seamless blend and a better fusion effect.
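The erosion and fusion steps can be sketched as follows. This pure-Python sketch implements only the simple copy mode; in practice the erosion would use cv2.erode and the Poisson fusion cv2.seamlessClone, and the helper names here are assumptions.

```python
def erode_defect(mask):
    """One step of binary erosion on the defect mask (0 = defect,
    255 = background): a defect pixel survives only if all four
    neighbours are also defect pixels, shrinking the boundary inward."""
    H, W = len(mask), len(mask[0])
    out = [[255] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            if mask[y][x] == 0 and all(
                0 <= y + dy < H and 0 <= x + dx < W and mask[y + dy][x + dx] == 0
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                out[y][x] = 0
    return out

def copy_fuse(bg, defect, mask, x, y):
    """Copy-mode fusion: paste defect pixels wherever the eroded mask
    is 0. The text prefers Poisson fusion for seamless results; plain
    copying is the simpler alternative it mentions."""
    for j, row in enumerate(mask):
        for i, m in enumerate(row):
            if m == 0:
                bg[y + j][x + i] = defect[j][i]
    return bg

eroded = erode_defect([[0] * 3 for _ in range(3)])  # only the centre pixel survives
bg = copy_fuse([[100] * 7 for _ in range(7)], [[50] * 3 for _ in range(3)], eroded, 2, 2)
```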
Injecting a target image into a background image according to the type of the deep learning task and the transformation condition to generate a transformed image, wherein the method comprises the following steps:
generating an initial transformation list according to transformation conditions;
here, the selectable transformation ranges for the background image and the target image are the same, namely size/position/angle transformations (horizontal mirroring, vertical mirroring, special-angle rotation, affine transformation), illumination transformations (brightness, contrast) and imaging-detail transformations (blur, noise, sharpening). For example, if the background transformation selects horizontal and vertical mirroring, the target transformation may still select any one or more of horizontal mirroring, vertical mirroring, special-angle rotation, affine transformation, brightness, contrast, blur, noise and sharpening. Different amplitudes and probabilities are set for the background and target transformation conditions, which can be combined freely; note that each transformation condition comprises an amplitude and a probability;
for example: horizontal mirror image, translation transformation, 90-degree rotation, brightness transformation, fuzzy transformation and noise transformation are sequentially selected for background transformation of the image, and an initial background transformation list is shown in table 1:
ID | Transform name          | Transform amplitude | Transform probability
0  | Horizontal mirror       | None                | 1
1  | Translation             | [-10, 10]           | 1
2  | Special-angle rotation  | 90 degrees          | 1
3  | Brightness              | [-30, 30]           | 1
4  | Blur                    | [3, 7]              | 1
5  | Noise                   | [10, 20]            | 1
TABLE 1
For example: suppose the target transformation of the image selects, in order, vertical mirroring, scaling, 180-degree rotation, contrast, blur and sharpening; the initial target transformation list is shown in Table 2:
ID | Transform name          | Transform amplitude | Transform probability
0  | Vertical mirror         | None                | 1
1  | Scaling                 | [-0.5, 0.5]         | 1
2  | Special-angle rotation  | 180 degrees         | 1
3  | Contrast                | [0.5, 1.5]          | 1
4  | Blur                    | [3, 7]              | 1
5  | Sharpening              | [3, 7]              | 1
TABLE 2
Setting probability or adjusting a transformation sequence for the initial transformation list to generate a transformation list;
and transforming the background image and the target image according to the sequence of the transformation list.
The step of generating the transformed image according to the background transformation condition and the target transformation condition comprises the following steps:
setting probability for each transformation or adjusting transformation sequence to generate a transformation list;
specifically, for transformations whose amplitude is an interval, a value inside the interval is randomly generated for each application;
specifically, the defect image and the mask image are transformed according to the target transformation list; assuming no probability is changed and the order is not adjusted, the transformations are applied in the list order of Table 2;
the background image is transformed according to the background transformation list; if no probability is changed and the order is not adjusted, the transformations are applied in the list order of Table 1;
before each transformation is executed, randomly generating a probability value between 0 and 1;
if the probability value is less than the probability value set by the transformation, the transformation is executed, otherwise, the transformation is not executed.
According to the above technical solution, the application provides a defect generation method for a deep learning task, comprising: acquiring a defect image from a defect sample, the defect image being a rectangular region of the defect sample that contains a defect; constructing a defect sample library from the defect images of a preset number of defect samples; extracting at least one defect image from the library as a target image; injecting the target image into a preset background image according to the deep learning task type and the transformation conditions to generate a transformed image; and generating a corresponding task image and annotation file from the transformed image. The method generates defects on normal sample images (OK samples) using a small number of defect samples (NG samples) and a large number of normal sample images, thereby increasing the number of defect samples, alleviating the data-imbalance problem, and improving the detection effect.
The detailed description above presents only a few embodiments of the general concept of the present application and does not limit its scope. Any other embodiment that a person skilled in the art can derive from the disclosed solution without inventive effort falls within the scope of protection of the present application.

Claims (10)

1. A defect generation method for a deep learning task, comprising:
acquiring a defect image according to a defect sample, wherein the defect image is a rectangular area containing defects in the defect sample;
constructing a defect sample library according to the defect images corresponding to a preset number of defect samples;
extracting at least one defect image from the defect sample library as a target image;
injecting the target image into a preset background image according to the type of the deep learning task and the transformation conditions to generate a transformed image;
and generating a corresponding task image and an annotation file according to the transformed image.
2. The method of claim 1, wherein the step of acquiring the defect image according to the defect sample comprises:
generating a local-area sub-image in the defect sample, wherein the sub-image is a rectangular area containing a defect;
drawing the defect outer contour on the local-area sub-image, and generating a defect image and a mask image of the same size;
generating the minimum bounding rectangle of the defect outer contour;
and expanding the minimum bounding rectangle outward on all sides, taking the expanded rectangular area as the defect image.
3. The method of claim 2, wherein when the minimum bounding rectangle is expanded outward, the expansion is 3 px to 5 px.
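A minimal numpy sketch of claims 2 and 3 — cropping the defect by the minimal bounding rectangle of the mask foreground, expanded by a few pixels and clipped to the image bounds; the function name and 4 px default are illustrative choices within the claimed 3-5 px range:

```python
import numpy as np

def crop_defect(image, mask, pad=4):
    """Crop the defect region: minimal bounding rectangle of the mask's
    foreground, expanded by `pad` pixels per side, clipped to the image."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1          # tight bounding rows
    x0, x1 = xs.min(), xs.max() + 1          # tight bounding cols
    y0 = max(y0 - pad, 0)                    # expand outward, clipped
    x0 = max(x0 - pad, 0)
    y1 = min(y1 + pad, image.shape[0])
    x1 = min(x1 + pad, image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]
```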
4. The method of claim 1, wherein the transformation condition comprises: at least one of a horizontal mirroring transformation, a vertical mirroring transformation, a special angle rotation transformation, an affine transformation, a luminance transformation, a contrast transformation, a blur transformation, a noise transformation, and a sharpening transformation, each of said transformation conditions comprising an amplitude and a probability.
5. The method for generating the defect of the deep learning task according to claim 1, wherein, before the step of generating the corresponding task image and the annotation file, the method further comprises: judging, according to the transformed image, whether further transformations need to be added; if so, regenerating the transformed image according to the added transformation conditions; if not, generating the corresponding task image and annotation file.
6. The method for generating the defect of the deep learning task according to claim 1, wherein the step of generating the corresponding task image and the annotation file comprises:
adopting different annotation-file generation modes according to the task corresponding to the background image;
and configuring the transformation conditions of the background image and the transformation conditions of the target image, and generating a corresponding task image and an annotation file.
7. The method for generating the defect of the deep learning task according to claim 1, wherein generating the corresponding task image and the annotation file according to the transformed image comprises:
generating a corresponding auxiliary image according to the background image;
transforming the background image according to the transformation condition to generate a transformed background image;
randomly generating an injection position according to the width and the height of the background image and the width and the height of the defect image;
generating an auxiliary image sub-region and a background image sub-region according to the injection position;
judging whether the target image exists in the auxiliary image sub-area or not;
if a target image exists, randomly regenerating an injection position according to the width and height of the background image and the width and height of the defect image;
if no target image exists, applying the same transformation to the defect image and the mask image according to the transformation conditions;
fusing the defect image into a sub-region of the background image;
and fusing the mask image corresponding to the defect image into the sub-area of the auxiliary image to generate the auxiliary image.
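The position search in claim 7 can be sketched as follows: the auxiliary image records already-injected defects, so a candidate sub-region is rejected and redrawn whenever it overlaps one. The retry cap and function name are assumptions, not part of the claim:

```python
import random
import numpy as np

def find_injection_position(aux, defect_h, defect_w, rng, max_tries=50):
    """Randomly pick a top-left corner for the defect inside the background;
    redraw while the auxiliary-image sub-region already contains a defect."""
    bg_h, bg_w = aux.shape[:2]
    for _ in range(max_tries):
        y = rng.randrange(bg_h - defect_h + 1)
        x = rng.randrange(bg_w - defect_w + 1)
        # reject positions whose auxiliary sub-region is already occupied
        if not aux[y:y + defect_h, x:x + defect_w].any():
            return y, x
    return None  # no free slot found within the retry budget
```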
8. The method of claim 7, wherein whether to generate an annotation file is determined according to the deep learning task type:
if the deep learning task type is a classification task, no annotation file is generated;
if the deep learning task type is a segmentation task, the auxiliary image is saved as the annotation file;
and if the deep learning task type is a detection task, contour analysis is performed on the auxiliary image, the minimum bounding rectangle of each contour is acquired as an annotation box, and a corresponding json or xml annotation file is generated.
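For the detection branch of claim 8, a simplified stand-in for the contour analysis is shown below: it labels each connected foreground region of the auxiliary image and returns its minimal bounding rectangle as an (x, y, w, h) annotation box. A production pipeline would more likely use OpenCV's `cv2.findContours` plus `cv2.boundingRect`; this pure-numpy BFS version is only illustrative:

```python
import numpy as np
from collections import deque

def bounding_boxes(aux):
    """Return one (x, y, w, h) box per 4-connected foreground region."""
    h, w = aux.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if aux[sy, sx] and not seen[sy, sx]:
                # flood-fill this region, tracking its extent
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                y0 = y1 = sy
                x0 = x1 = sx
                while q:
                    y, x = q.popleft()
                    y0, y1 = min(y0, y), max(y1, y)
                    x0, x1 = min(x0, x), max(x1, x)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and aux[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))
    return boxes
```

Each box could then be serialized to the json or xml annotation format the claim mentions.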
9. The method of claim 7, wherein, when the defect image is fused into the sub-region of the background image, the mask image is first eroded and the defect image is then fused using Poisson blending.
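A sketch of the fusion step in claim 9, under stated simplifications: the 3x3 binary erosion shrinks the mask so blending acts only on the defect interior, and the paste step here is a plain hard copy rather than true Poisson blending — in practice OpenCV's `cv2.seamlessClone` is the usual Poisson-blending implementation:

```python
import numpy as np

def erode_binary(mask, iterations=1):
    """3x3 binary erosion via 4-neighbour minimum; shrinks the mask edge."""
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        m = (p[:-2, 1:-1] & p[2:, 1:-1] &
             p[1:-1, :-2] & p[1:-1, 2:] & p[1:-1, 1:-1])
    return m

def paste_defect(background, defect, mask, y, x):
    """Simplified stand-in for Poisson fusion: copy the eroded defect
    pixels into the background sub-region at (y, x)."""
    out = background.copy()
    region = out[y:y + defect.shape[0], x:x + defect.shape[1]]
    region[mask] = defect[mask]
    return out
```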
10. The method of claim 1, wherein the injecting the target image into the background image according to the type of the deep learning task and the transformation condition to generate the transformed image comprises:
generating an initial transformation list according to the transformation conditions;
setting probability or adjusting a transformation sequence for the initial transformation list to generate a transformation list;
and transforming the background image and the target image according to the sequence of the transformation list.
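The list construction of claim 10 — each condition of claim 4 carrying an amplitude and a probability, with an optional re-sequencing before application — can be sketched as below; all condition names and the dict layout are assumptions:

```python
def build_transform_list(conditions, order=None):
    """Assemble the transformation list: keep the initial order unless an
    explicit `order` of names re-sequences it."""
    by_name = {c["name"]: c for c in conditions}
    names = order if order is not None else [c["name"] for c in conditions]
    return [by_name[n] for n in names]
```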
CN202211466558.XA 2022-11-22 2022-11-22 Defect generation method for deep learning task Pending CN115760788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211466558.XA CN115760788A (en) 2022-11-22 2022-11-22 Defect generation method for deep learning task


Publications (1)

Publication Number Publication Date
CN115760788A true CN115760788A (en) 2023-03-07

Family

ID=85335335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466558.XA Pending CN115760788A (en) 2022-11-22 2022-11-22 Defect generation method for deep learning task

Country Status (1)

Country Link
CN (1) CN115760788A (en)

Similar Documents

Publication Publication Date Title
CN109934826B (en) Image feature segmentation method based on graph convolution network
CN104376548B (en) A kind of quick joining method of image based on modified SURF algorithm
US8958643B2 (en) Recognition of numerical characters in digital images
CN111191570B (en) Image recognition method and device
CN111680690B (en) Character recognition method and device
US8687895B2 (en) Image processing apparatus, image processing method, and computer-readable medium
CN111507976A (en) Defect detection method and system based on multi-angle imaging
CN112906794A (en) Target detection method, device, storage medium and terminal
CN113284154B (en) Steel coil end face image segmentation method and device and electronic equipment
CN112215925A (en) Self-adaptive follow-up tracking multi-camera video splicing method for coal mining machine
CN115908988B (en) Defect detection model generation method, device, equipment and storage medium
WO2004084138A2 (en) A method for contour extraction for object representation
CN111626145A (en) Simple and effective incomplete form identification and page-crossing splicing method
CN114674826A (en) Visual detection method and detection system based on cloth
CN116167910B (en) Text editing method, text editing device, computer equipment and computer readable storage medium
CN112749741A (en) Hand brake fastening fault identification method based on deep learning
CN115760788A (en) Defect generation method for deep learning task
CN115578362A (en) Defect detection method and device for electrode coating, electronic device and medium
CN113658288B (en) Method for generating and displaying polygonal data vector slices
US4656468A (en) Pattern data processing apparatus
CN112861861B (en) Method and device for recognizing nixie tube text and electronic equipment
US11657511B2 (en) Heuristics-based detection of image space suitable for overlaying media content
CN115311237A (en) Image detection method and device and electronic equipment
CN112396696B (en) Semantic map incremental updating method based on feature point detection and segmentation
EP4218246A1 (en) Detection of image space suitable for overlaying media content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination