CN114820543B - Defect detection method and device - Google Patents

Defect detection method and device

Info

Publication number
CN114820543B
Authority
CN
China
Prior art keywords
defect
picture
sub
pictures
output result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210495447.5A
Other languages
Chinese (zh)
Other versions
CN114820543A (en)
Inventor
冯文龙
王凯耀
胡伟
李思桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Fangshi Technology Co ltd
Original Assignee
Suzhou Fangshi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Fangshi Technology Co ltd
Priority to CN202210495447.5A
Publication of CN114820543A
Application granted
Publication of CN114820543B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30132 Masonry; Concrete
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a defect detection method and a defect detection device. The method comprises the following steps: acquiring a picture to be subjected to defect detection; performing first preprocessing on the picture to obtain a first sub-picture group; inputting the first sub-picture group into a pre-trained model to obtain a first output result, wherein the pre-trained model is obtained by training multiple groups of samples, each group of samples comprises a defect picture and a corresponding defect grade, and the first output result indicates whether each sub-picture in the first sub-picture group has a defect; performing second preprocessing on the picture to obtain a second sub-picture group; inputting the second sub-picture group into the pre-trained model to obtain a second output result, wherein the second output result indicates whether each sub-picture in the second sub-picture group has a defect; and obtaining a defect detection result according to the first output result and the second output result. The method and the device solve the problem of inaccurate wall surface defect detection results and improve the accuracy of wall surface defect detection.

Description

Defect detection method and device
Technical Field
The embodiment of the invention relates to the field of image recognition, in particular to a defect detection method and device.
Background
In recent years, rising labor costs and the rapid development of robotics have led more and more industries to urgently need robots. Demand for finely decorated housing keeps growing, and against this background the construction mode of building decoration is gradually shifting toward specialization, mechanization and automation. Building wall defects include cracks, corrosion spots, holes and other filler-related flaws; before painting a wall surface, a decorating worker needs to inspect the state of the wall and find problems such as bulges, recesses and cracks. Most existing detection methods rely on manual identification, which on the one hand consumes a large amount of labor and on the other hand easily yields inaccurate results, because manual judgment depends on the experience of the workers.
Therefore, the related art suffers from the problem of inaccurate wall surface defect detection results.
In view of the above problem in the related art, no effective solution has been proposed so far.
Disclosure of Invention
The embodiment of the invention provides a defect detection method and device, which at least solve the problem of inaccurate wall surface defect detection result in the related art.
According to an embodiment of the present invention, there is provided a defect detection method including: acquiring a picture to be subjected to defect detection; performing first preprocessing on the picture to obtain a first sub-picture group; inputting the first sub-picture group into a pre-trained model to obtain a first output result, wherein the pre-trained model is obtained by training multiple groups of samples, each group of samples comprises a defect picture and a corresponding defect grade, and the first output result is used for indicating whether each sub-picture in the first sub-picture group has a defect; performing second preprocessing on the picture to obtain a second sub-picture group; inputting the second sub-picture group into the pre-trained model to obtain a second output result, wherein the second output result is used for indicating whether each sub-picture in the second sub-picture group has a defect; and obtaining a defect detection result according to the first output result and the second output result.
Further, the performing a first pre-processing on the picture to obtain a first sub-picture group includes: dividing the picture according to a preset first pixel interval to obtain a plurality of small pictures; zooming the small pictures to obtain zoomed pictures, and taking the zoomed pictures as the first sub-picture group; performing a second preprocessing on the picture to obtain a second sub-picture group, including: and dividing the picture according to a preset second pixel interval to obtain a plurality of small pictures as the second sub-picture group.
Further, before inputting the first sub-picture set into a pre-trained model to obtain a first output result, the method further includes: acquiring a first number of defect sample pictures and a second number of non-defect sample pictures; performing data enhancement on the defect sample pictures, and taking the pictures obtained by enhancement and the defect sample pictures of the first quantity as defect sample pictures of a third quantity; and performing model training based on the third number of defect sample pictures and the second number of non-defect sample pictures to obtain the pre-trained model, wherein the pre-trained model takes a convolutional neural network as a main body, takes a full-connection network as neuron output, is flattened after five times of double convolution, and outputs three elements through the full-connection network.
Further, performing model training based on the third number of defect sample pictures and the second number of non-defect sample pictures to obtain the pre-trained model includes: dividing each sample picture to obtain a plurality of small pictures; labeling each small picture, wherein the label is used for representing the defect degree of the image in the small picture; randomly selecting two small pictures from the plurality of small pictures for random rotation, performing data enhancement through a mixup algorithm to obtain enhanced pictures, and storing the enhanced pictures with tag values meeting expected values; and carrying out model training based on the enhanced pictures with the label values meeting the expected values to obtain the pre-trained model.
Further, inputting the first sub-picture group into a pre-trained model, and obtaining a first output result includes: judging whether defects exist according to whether the values of the three elements abc output by the pre-trained model meet a first preset condition, wherein the first preset condition is as follows: c is greater than 0.95 or a <0.05 or b + c > alpha, judging that the defect exists when the first preset condition is met, and judging that the defect does not exist when the first preset condition is not met; and correspondingly marking the defect judgment result in the picture to be subjected to defect detection.
Further, inputting the second sub-picture group into the pre-trained model, and obtaining a second output result includes: judging the defect grade according to whether the value of the three-element abc output by the pre-trained model meets a second preset condition, wherein the second preset condition is as follows: if c >0.95 or a <0.05, the defect level is high; if c is less than or equal to 0.95, a is more than or equal to 0.05, and b + c is more than alpha, the defect grade is medium; if c is less than or equal to 0.95 and a is more than or equal to 0.05, and b + c is less than or equal to alpha and a is more than beta, no defect exists; if c is less than or equal to 0.95, a is more than or equal to 0.05, b + c is less than or equal to alpha, and a is less than or equal to beta, the defect grade is low; and correspondingly marking the defect judgment result in the picture to be subjected to defect detection.
Further, obtaining a defect detection result according to the first output result and the second output result includes: merging the two pictures marked with the first output result and the second output result, and taking a result with a high defect level as the defect detection result, wherein the first output result comprises whether a defect exists, and if so, marking the position corresponding to the picture to be subjected to defect detection as a color with a first depth; the second output result comprises a defect grade, wherein the defect grade corresponds to the color of the third depth, the color of the second depth, the color of the first depth and the colorless from high to low.
According to another embodiment of the present invention, there is provided a defect detection apparatus including: the device comprises an acquisition unit, a defect detection unit and a defect detection unit, wherein the acquisition unit is used for acquiring a picture to be subjected to defect detection; the first processing unit is used for carrying out first preprocessing on the picture to obtain a first sub-picture group; a first input unit, configured to input the first sub-picture group into a pre-trained model to obtain a first output result, where the pre-trained model is obtained by training multiple groups of samples, each group of samples includes a defect picture and a corresponding defect level, and the first output result is used to indicate whether each sub-picture in each first sub-picture group has a defect; the second processing unit is used for carrying out second preprocessing on the picture to obtain a second sub-picture group; a second input unit, configured to input the second sub-picture group into the pre-trained model to obtain a second output result, where the second output result is used to indicate whether each sub-picture in each second sub-picture group has a defect; and the output unit is used for obtaining a defect detection result according to the first output result and the second output result.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to, when executed, perform the steps of any of the method embodiments described above.
According to yet another embodiment of the present invention, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor configured to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, the picture to be subjected to defect detection is obtained; performing first preprocessing on a picture to obtain a first sub-picture group; inputting the first sub-picture group into a pre-trained model to obtain a first output result, wherein the pre-trained model is obtained by training a plurality of groups of samples, each group of samples comprises a defect picture and a corresponding defect grade, and the first output result is used for indicating whether each sub-picture in each first sub-picture group has a defect; performing second preprocessing on the picture to obtain a second sub-picture group; inputting the second sub-picture group into a pre-trained model to obtain a second output result, wherein the second output result is used for indicating whether each sub-picture in each second sub-picture group has a defect; and obtaining a defect detection result according to the first output result and the second output result, so that the problem of inaccurate wall defect detection result in the related technology can be solved, and the effect of improving the accuracy of the wall defect detection result is achieved.
Drawings
Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a defect detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a defect detection method according to an embodiment of the invention;
FIG. 3 is a schematic model diagram of the present embodiment;
FIG. 4 is a schematic diagram of defect detection in the present embodiment;
Fig. 5 is a block diagram of a defect detecting apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking an example of the method running on a mobile terminal, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a defect detection method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the defect detection method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet via wireless.
Building wall defects include cracks, corrosion spots, holes and other filler-related flaws. Defect detection means judging the defects existing on the surface or inside of an object by means of technologies such as infrared, ultrasonic, laser, machine vision or human vision. Current defect detection generally refers to detecting defects on the surface of an object; the technologies adopted for surface defect detection mainly include infrared measurement, ultra-wavelength division ranging and the like, and they detect holes, spots, color differences, pits, scratches, induced corrosion, overlaps and other flaws on the object surface.
In the present embodiment, a defect detection method is provided, and fig. 2 is a flowchart of the defect detection method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S101, acquiring a picture to be subjected to defect detection;
step S102, carrying out first preprocessing on a picture to obtain a first sub-picture group;
step S103, inputting the first sub-picture group into a pre-trained model to obtain a first output result, wherein the pre-trained model is obtained by training a plurality of groups of samples, each group of samples comprises a defect picture and a corresponding defect grade, and the first output result is used for indicating whether each sub-picture in each first sub-picture group has a defect;
step S104, carrying out second preprocessing on the picture to obtain a second sub-picture group;
step S105, inputting the second sub-picture group into a pre-trained model to obtain a second output result, where the second output result is used to indicate whether each sub-picture in each second sub-picture group has a defect.
And step S106, obtaining a defect detection result according to the first output result and the second output result.
In the above embodiment, the picture to be subjected to defect detection is a picture for wall surface defect detection, and may be a photograph of an outer or inner wall of a building or a video screenshot. The first preprocessing and the second preprocessing correspond to different processing modes, mainly segmentation or scaling in different proportions so as to fit the model; each preprocessing yields a plurality of sub-pictures that serve as a sub-picture group, and the two sub-picture groups are recognized separately. The first sub-picture group is input into the pre-trained model to determine whether a defect exists at each position of the picture, the second sub-picture group is input into the pre-trained model to obtain the defect level at each position, and the two results are superposed to obtain the defect detection result. It should be noted that the pre-trained model may be one and the same model, with the data of the two sub-picture groups fed into different interfaces of the model. Processing the two sub-picture groups separately in this way solves the problem of inaccurate wall surface defect detection results in the related art and improves the accuracy of the wall surface defect detection results.
As an optional implementation manner, the performing the first preprocessing on the picture to obtain the first sub-picture group includes: dividing the picture according to a preset first pixel interval to obtain a plurality of small pictures; zooming the small pictures to obtain zoomed pictures, and taking the zoomed pictures as a first sub-picture group; performing a second pre-processing on the picture to obtain a second sub-picture group, including: and dividing the picture according to a preset second pixel interval to obtain a plurality of small pictures as a second sub-picture group.
For example, the first preprocessing may divide the picture to be recognized into a plurality of 128 × 128 small pictures at a 64-pixel interval and scale them down to 64 × 64 to serve as the first sub-picture group, and the second preprocessing may divide the picture to be recognized into a plurality of 64 × 64 small pictures at a 64-pixel interval to serve as the second sub-picture group.
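As an illustration only (the function names and the use of OpenCV and NumPy are not part of this embodiment), the two preprocessing steps described above may be sketched as follows:

import cv2
import numpy as np

def first_preprocess(gray, tile=128, stride=64, out_size=64):
    # first preprocessing: 128 x 128 patches at a 64-pixel interval, scaled down to 64 x 64
    patches, positions = [], []
    h, w = gray.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = gray[y:y + tile, x:x + tile]
            patches.append(cv2.resize(patch, (out_size, out_size)))
            positions.append((x, y, tile))
    return np.stack(patches), positions

def second_preprocess(gray, tile=64, stride=64):
    # second preprocessing: 64 x 64 patches at a 64-pixel interval, no scaling
    patches, positions = [], []
    h, w = gray.shape[:2]
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patches.append(gray[y:y + tile, x:x + tile])
            positions.append((x, y, tile))
    return np.stack(patches), positions

# usage (file name is hypothetical):
# gray = cv2.imread("wall.jpg", cv2.IMREAD_GRAYSCALE)
# group1, pos1 = first_preprocess(gray)
# group2, pos2 = second_preprocess(gray)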
As an optional implementation manner, before inputting the first sub-picture group into the pre-trained model to obtain the first output result, acquiring a first number of defect sample pictures and a second number of non-defect sample pictures; performing data enhancement on the defect sample pictures, and taking the pictures obtained by enhancement and the defect sample pictures of the first quantity as defect sample pictures of a third quantity; and performing model training based on the third number of defect sample pictures and the second number of non-defect sample pictures to obtain a pre-trained model, wherein the pre-trained model takes a convolutional neural network as a main body and takes a full-connection network as neuron output, the model is flattened after five times of double convolution, and three elements are output through the full-connection network.
Before the model is used, it is trained with a certain amount of sample data. After the collected pictures are cut, the non-defective pictures far outnumber the defective pictures, so data enhancement needs to be carried out on the defective sample pictures to balance the numbers of positive and negative samples. Model training is then performed on the enhanced pictures together with the original defect sample pictures; for example, if there are 100 original defect sample pictures and 500 new pictures are added by enhancement, the 600 pictures together serve as the defect sample pictures for training. The model takes a convolutional neural network as its main body and a fully connected network as the neuron output. In this embodiment the model takes a 64 × 64 grayscale image as input, is flattened after five double convolutions (a same convolution plus a 2× downsampling convolution), and outputs the three elements a, b and c through the fully connected network.
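A minimal PyTorch sketch of such a network is given below. Only the fixed 64 × 64 grayscale input, the five double convolutions (same convolution plus stride-2 downsampling convolution) and the three-element output are taken from this embodiment; the channel widths, the activation functions and the softmax normalisation are assumptions.

import torch
import torch.nn as nn

class DefectNet(nn.Module):
    # convolutional neural network as the main body, fully connected network as the output
    def __init__(self, widths=(16, 32, 64, 128, 256)):    # channel widths are assumed
        super().__init__()
        blocks, in_ch = [], 1                              # single grayscale channel
        for out_ch in widths:                              # five "double convolution" stages
            blocks += [
                nn.Conv2d(in_ch, out_ch, 3, padding=1),              # same convolution
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1),   # 2x downsampling convolution
                nn.ReLU(inplace=True),
            ]
            in_ch = out_ch
        self.backbone = nn.Sequential(*blocks)             # spatial size 64 -> 32 -> 16 -> 8 -> 4 -> 2
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(widths[-1] * 2 * 2, 3),              # three output elements a, b, c
            nn.Softmax(dim=1),                             # assumed normalisation of the outputs
        )

    def forward(self, x):                                  # x: (N, 1, 64, 64) grayscale batch
        return self.head(self.backbone(x))

# usage: abc = DefectNet()(torch.randn(8, 1, 64, 64))      # abc.shape == (8, 3)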
As an optional implementation manner, performing model training based on the third number of defect sample pictures and the second number of non-defect sample pictures to obtain a pre-trained model includes: dividing each sample picture to obtain a plurality of small pictures; labeling each small picture, wherein the label is used for representing the defect degree of the image in the small picture; randomly selecting two small pictures from the plurality of small pictures for random rotation, performing data enhancement through a mixup algorithm to obtain enhanced pictures, and storing the enhanced pictures with tag values meeting expected values; and carrying out model training based on the enhanced pictures with the label values meeting the expected values to obtain a pre-trained model.
During model training, each small picture is marked with a label indicating its defect degree; the data enhancement of the small pictures may adopt the mixup algorithm, the enhanced pictures whose label values meet the expected ranges are saved, and model training is carried out on these pictures to obtain the pre-trained model.
As an optional implementation, inputting the first sub-picture set into a pre-trained model, and obtaining the first output result includes: judging whether defects exist according to whether the value of the three-element abc output by the pre-trained model meets a first preset condition, wherein the first preset condition is as follows: c is greater than 0.95 or a <0.05 or b + c > alpha, judging that the defect exists when the first preset condition is met, and judging that the defect does not exist when the first preset condition is not met; and correspondingly marking the result of the defect judgment in the picture to be subjected to the defect detection.
Whether a defect exists is judged through the preset condition, which is obtained through repeated testing and adjustment during model training and correction: a defect is judged to exist when the first preset condition is met and not to exist otherwise, and the threshold values can be adjusted to adapt to different scenes.
As an optional implementation, inputting the second sub-picture set into the pre-trained model, and obtaining the second output result includes: judging the defect grade according to whether the value of the three-element abc output by the pre-trained model meets a second preset condition, wherein the second preset condition is as follows: if c >0.95 or a <0.05, the defect level is high; if c is less than or equal to 0.95, a is more than or equal to 0.05, and b + c is more than alpha, the defect grade is medium; if c is less than or equal to 0.95 and a is more than or equal to 0.05, and b + c is less than or equal to alpha and a is more than beta, no defect exists; if c is less than or equal to 0.95 and a is more than or equal to 0.05, and b + c is less than or equal to alpha and a is less than or equal to beta, the defect grade is low; and correspondingly marking the result of the defect judgment in the picture to be subjected to the defect detection.
The defect grade is judged according to the values of the three elements output by the model, and the different grades are then marked with colors of different depths; besides color depth, the defect grade may also be marked in other ways.
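For illustration, the two judgment rules can be written as the following sketch, assuming that a, b and c are the model's scores for no defect, weak defect and strong defect respectively (this reading is implied by the thresholds but not stated verbatim), and that alpha and beta are the tunable thresholds mentioned above (their default values here are placeholders):

def has_defect(a, b, c, alpha=0.5):
    # first preset condition: binary defect judgment for the first output result
    return c > 0.95 or a < 0.05 or (b + c) > alpha

def defect_level(a, b, c, alpha=0.5, beta=0.5):
    # second preset condition: defect grade for the second output result
    # returns 3 (high), 2 (medium), 1 (low) or 0 (no defect)
    if c > 0.95 or a < 0.05:
        return 3
    if b + c > alpha:        # here c <= 0.95 and a >= 0.05
        return 2
    if a > beta:
        return 0
    return 1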
As an alternative embodiment, obtaining the defect detection result according to the first output result and the second output result includes: merging the two pictures marked with the first output result and the second output result, and taking the result with high defect grade as a defect detection result, wherein the first output result comprises whether a defect exists, and if so, marking the position corresponding to the picture to be subjected to defect detection as a color with a first depth; the second output result includes a defect level, the defect level corresponding from high to low to a color of a third depth, a color of a second depth, a color of a first depth, and no color.
When the two output results are merged, the darker (higher-grade) result at each position is taken as the output of the defect detection result.
The present embodiment further provides a specific implementation, and the following describes the technical solution of the present embodiment with reference to the specific implementation.
The embodiment of the invention can be applied to the detection of defects (including but not limited to cracks, color blocks and the like) on various wall surfaces in the construction industry, and can be carried in construction robots and various inspection instruments; the images acquired by such equipment are checked for defects by a depth model whose main body is a convolutional neural network. This can alleviate several problems of existing algorithms: a low recall rate for fuzzy defect features; detection results that are mostly binary, i.e. defect or no defect, without grading the defects by severity; and depth models that are complex and slow in detection.
The technical scheme of the embodiment can be divided into three stages of data processing, model building and model prediction.
1) Data processing stage
(1) Data acquisition: the required defect pictures are acquired by a camera and cut into 64 × 64 small pictures;
(2) Data division: the original pictures (64 × 64) are classified into three categories by manual judgment: no defect, weak defect and strong defect, with corresponding label values of 0.0 (no defect), 0.5 (weak defect) and 1.0 (strong defect), respectively;
(3) data enhancement: randomly selecting two pictures for random rotation, and enhancing according to a mixup algorithm;
mixup data enhancement: λ is generated randomly according to λ ~ Beta(0.5, 0.5), and the two pictures are fused according to the following formulas:
x_new = λ·x_1 + (1 - λ)·x_2
y_new = λ·y_1 + (1 - λ)·y_2
where x is the pixel value and y is the label value. Enhanced pictures with label values of 0.45-0.55 (weak defect) and 0.9-1.0 (strong defect) are saved. It should be noted that a weak defect is essentially a picture with a label of 0.5; a tolerance of ±0.05 is allowed here (i.e. the algorithm saves pictures whose labels fall in the range 0.45-0.55), and this tolerance may be adjusted slightly but should not be made too large.
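A sketch of this enhancement step is given below; the choice of 90-degree rotations and the use of NumPy are assumptions, since the embodiment only specifies random rotation, λ ~ Beta(0.5, 0.5) and the saved label ranges:

import numpy as np

rng = np.random.default_rng()

def mixup_pair(img1, y1, img2, y2):
    # randomly rotate both patches, then blend pixels and labels with the same lambda
    img1 = np.rot90(img1, k=int(rng.integers(4)))
    img2 = np.rot90(img2, k=int(rng.integers(4)))
    lam = rng.beta(0.5, 0.5)
    x_new = lam * img1.astype(np.float32) + (1 - lam) * img2.astype(np.float32)
    y_new = lam * y1 + (1 - lam) * y2
    return x_new, y_new

def keep(y_new):
    # save only enhanced pictures whose label falls in the weak (0.45-0.55)
    # or strong (0.9-1.0) defect band
    return 0.45 <= y_new <= 0.55 or 0.9 <= y_new <= 1.0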
2) Model building and training phase
Fig. 3 is a schematic diagram of a model according to this embodiment, in which the convolutional neural network is used as a main body, and the fully-connected network is used as a neuron output. The model is input into a 64 × 64 gray scale image, flattened after five times of double convolution (same convolution +2 times of downsampling convolution), and outputs abc 3 elements through a full-connection network.
It is emphasized that the input size is fixed at 64 × 64 and that five double convolutions (i.e. ten convolution layers) are used. The same output size could of course be obtained with other convolution schemes; however, repeated experiments show that the current model parameters achieve the more preferable effect.
The labels of the pictures are encoded in the three elements a, b and c according to the following rule:
TABLE 1 Label values for pictures
(The table content is not reproduced in this text.)
The proportion of strong defect (original + enhancement), weak defect (original + enhancement) and no defect (original) data in the model training process is set as 1:1:2.
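One simple way to realise this ratio, given separate pools of strong-defect, weak-defect and no-defect samples, is to rebuild the training pool before each epoch; the sampling strategy below is only an illustration and is not prescribed by this embodiment:

import random

def build_training_pool(strong, weak, none):
    # strong / weak: original + enhanced defect samples, none: original no-defect samples
    # assemble a pool with a 1 : 1 : 2 strong : weak : no-defect ratio
    n = min(len(strong), len(weak))
    pool = (random.sample(strong, n) +
            random.sample(weak, n) +
            random.choices(none, k=2 * n))   # with replacement if the no-defect pool is small
    random.shuffle(pool)
    return pool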
The network model as a whole is light and concise, has a high inference speed, and can take multiple pictures at a time for parallel operation.
3) Model prediction phase
This stage is the main body of defect detection. The detection process is carried out in two parallel branches, as shown in fig. 4, which is a schematic diagram of defect detection in the present embodiment.
Detection one (upper):
(1) dividing a picture to be identified into a plurality of 128 x 128 small pictures at 64 pixel intervals, and scaling the pictures into 64 x 64;
(2) feeding the small pictures into the trained model row by row;
(3) the model outputs the values of the three elements a, b and c, and whether a defect exists (T or F) is then judged according to the following rule:
If c > 0.95 or a < 0.05 or b + c > α
then L ← T
else L ← F
The number of true positives (TP) can be changed by adjusting the threshold α, so that the requirements of different detection tasks on R (recall rate) and P (accuracy rate) can be met.
(4) The detection result of each 128 × 128 patch is mapped back to the corresponding region of the original picture (four 64 × 64 cells) and rendered in the original picture as T (light red) or F (colorless).
Detection two (below):
(1) dividing the picture to be identified into a plurality of 64 × 64 small pictures at 64-pixel intervals;
(2) feeding the small pictures into the trained model row by row;
(3) the model outputs the values of the three elements a, b and c, and the defect level L is then judged according to the following rule:
If c > 0.95 or a < 0.05 then L ← L3 (high)
else if b + c > α then L ← L2 (medium)
else if a > β then L ← L0 (no defect)
else L ← L1 (low)
(4) TP and FP (evaluation indexes denoting true positives and false positives) can be changed by adjusting the thresholds α and β, so that the requirements of different detection tasks on R (recall rate) and P (accuracy rate) can be met.
(5) The defect levels from high to low correspond to output colors of L3 (dark red) -L2 (red) -L1 (light red) -L0 (colorless), and are drawn in the original drawing.
Finally, the prediction maps of detection one and detection two are merged, and the darker color (the higher defect level) at each position is taken as the final result.
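Working on per-tile level maps instead of rendered colors, the merge can be sketched as an element-wise maximum; detection one contributes level 1 (light red) wherever it flags a defect, detection two contributes its own level 0-3, and grid-alignment details are omitted here:

import numpy as np

def merge_levels(levels_one, levels_two):
    # 0 = colorless, 1 = light red, 2 = red, 3 = dark red; the darker result wins
    return np.maximum(levels_one, levels_two)

# usage:
# levels_one: 0/1 map from detection one (defect -> 1)
# levels_two: 0-3 map from detection two
# final = merge_levels(levels_one, levels_two)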
The embodiment of the invention treats the defect detection as a three-classification task, adds a weak defect class, substantially expands the functions of the model, and improves the identification precision of the model on the fuzzy defect; the detection model adopts a lightweight high-efficiency convolutional neural network, and can carry out high-precision defect detection at a higher speed by combining the output of the fully-connected neural network; the prediction process in the defect prediction stage is parallel according to two steps, the large-scale detection in the step one improves the detection recall rate, and the small-scale detection in the step two accurately grades the defect target, thereby improving the detection quality.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a defect detection apparatus is further provided, which is used to implement the foregoing embodiments and preferred embodiments, and the description of which is already given is omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram of a defect detecting apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
an acquiring unit 10, configured to acquire a picture to be subjected to defect detection;
a first processing unit 20, configured to perform a first preprocessing on a picture to obtain a first sub-picture group;
a first input unit 30, configured to input the first sub-picture group into a pre-trained model to obtain a first output result, where the pre-trained model is obtained by training multiple groups of samples, each group of samples includes a defect picture and a corresponding defect level, and the first output result is used to indicate whether each sub-picture in each first sub-picture group has a defect;
a second processing unit 40, configured to perform second preprocessing on the picture to obtain a second sub-picture group;
a second input unit 50, configured to input a second sub-picture group into a pre-trained model to obtain a second output result, where the second output result is used to indicate whether each sub-picture in each second sub-picture group has a defect;
and an output unit 60, configured to obtain a defect detection result according to the first output result and the second output result.
Through this embodiment, can solve the inaccurate problem of wall defect testing result that exists among the correlation technique, reach the effect that improves the rate of accuracy of wall defect testing result.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention further provide an electronic device, comprising a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method of defect detection, comprising:
acquiring a picture to be subjected to defect detection;
performing a first pre-processing on the picture to obtain a first sub-picture group, wherein the first pre-processing comprises: dividing the picture according to a preset first pixel interval to obtain a plurality of small pictures; zooming the small pictures to obtain zoomed pictures, and taking the zoomed pictures as the first sub-picture group;
inputting the first sub-picture group into a pre-trained model to obtain a first output result, wherein the pre-trained model is obtained by training a plurality of groups of samples, each group of samples comprises a defect picture and a corresponding defect grade, and the first output result is used for indicating whether each sub-picture in each first sub-picture group has a defect;
performing second preprocessing on the picture to obtain a second sub-picture group, wherein the second preprocessing comprises: dividing the picture according to a preset second pixel interval to obtain a plurality of small pictures as the second sub-picture group;
inputting the second sub-picture group into the pre-trained model to obtain a second output result, wherein the second output result is used for representing the level of each defect of each sub-picture in each second sub-picture group;
and obtaining a defect detection result according to the first output result and the second output result.
2. The method of claim 1, wherein before inputting the first sub-picture set into a pre-trained model to obtain a first output result, the method further comprises:
acquiring a first number of defect sample pictures and a second number of non-defect sample pictures;
performing data enhancement on the defect sample pictures, and taking the pictures obtained by enhancement and the defect sample pictures of the first quantity as defect sample pictures of a third quantity;
and performing model training based on the third number of defect sample pictures and the second number of non-defect sample pictures to obtain the pre-trained model, wherein the pre-trained model takes a convolutional neural network as a main body, takes a full-connection network as neuron output, is flattened after five times of double convolution, and outputs three elements through the full-connection network.
3. The method of claim 2, wherein model training is performed based on the third number of defect sample pictures and the second number of non-defect sample pictures, and obtaining the pre-trained model comprises:
dividing each sample picture to obtain a plurality of small pictures;
labeling each small picture, wherein the label is used for representing the defect degree of the image in the small picture;
randomly selecting two small pictures from the plurality of small pictures for random rotation, performing data enhancement through a mixup algorithm to obtain enhanced pictures, and storing the enhanced pictures with tag values meeting expected values;
and carrying out model training based on the enhanced pictures with the label values meeting the expected values to obtain the pre-trained model.
4. The method of claim 1, wherein inputting the first sub-picture set into a pre-trained model to obtain a first output comprises:
judging whether defects exist according to whether the values of the three elements abc output by the pre-trained model meet a first preset condition, wherein the first preset condition is as follows: c is greater than 0.95 or a <0.05 or b + c > alpha, judging that the defect exists when the first preset condition is met, and judging that the defect does not exist when the first preset condition is not met;
and correspondingly marking the defect judgment result in the picture to be subjected to defect detection.
5. The method of claim 1, wherein inputting the second sub-picture set into the pre-trained model to obtain a second output comprises:
judging the defect grade according to whether the value of the three-element abc output by the pre-trained model meets a second preset condition, wherein the second preset condition is as follows: if c >0.95 or a <0.05, the defect level is high; if c is less than or equal to 0.95, a is more than or equal to 0.05 and b + c is more than alpha, the defect grade is middle; if c is less than or equal to 0.95 and a is more than or equal to 0.05, and b + c is less than or equal to alpha and a is more than beta, no defect exists; if c is less than or equal to 0.95 and a is more than or equal to 0.05, and b + c is less than or equal to alpha and a is less than or equal to beta, the defect grade is low;
and correspondingly marking the defect judgment result in the picture to be subjected to defect detection.
6. The method of claim 1, wherein deriving a defect detection result from the first output result and the second output result comprises:
merging the two pictures marked with the first output result and the second output result, and taking a result with a high defect level as the defect detection result, wherein the first output result comprises whether a defect exists, and if so, marking the position corresponding to the picture to be subjected to defect detection as a color with a first depth; the second output result comprises a defect grade, and the defect grade corresponds to the color of the third depth, the color of the second depth, the color of the first depth and no color from high to low.
7. A defect detection apparatus, comprising:
the device comprises an acquisition unit, a defect detection unit and a defect detection unit, wherein the acquisition unit is used for acquiring a picture to be subjected to defect detection;
a first processing unit, configured to perform first preprocessing on the picture to obtain a first sub-picture group, where the first processing unit includes: dividing the picture according to a preset first pixel interval to obtain a plurality of small pictures; zooming the small pictures to obtain zoomed pictures, and taking the zoomed pictures as the first sub-picture group;
a first input unit, configured to input the first sub-picture group into a pre-trained model to obtain a first output result, where the pre-trained model is obtained by training multiple groups of samples, each group of samples includes a defect picture and a corresponding defect level, and the first output result is used to indicate whether each sub-picture in each first sub-picture group has a defect;
a second processing unit, configured to perform second preprocessing on the picture to obtain a second sub-picture group, where the second preprocessing unit includes: dividing the picture according to a preset second pixel interval to obtain a plurality of small pictures as the second sub-picture group;
a second input unit, configured to input the second sub-picture group into the pre-trained model to obtain a second output result, where the second output result is used to indicate whether each sub-picture in each second sub-picture group has a defect;
and the output unit is used for obtaining a defect detection result according to the first output result and the second output result.
8. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when executed.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
CN202210495447.5A 2022-05-07 2022-05-07 Defect detection method and device Active CN114820543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210495447.5A CN114820543B (en) 2022-05-07 2022-05-07 Defect detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210495447.5A CN114820543B (en) 2022-05-07 2022-05-07 Defect detection method and device

Publications (2)

Publication Number Publication Date
CN114820543A CN114820543A (en) 2022-07-29
CN114820543B true CN114820543B (en) 2023-04-18

Family

ID=82512195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210495447.5A Active CN114820543B (en) 2022-05-07 2022-05-07 Defect detection method and device

Country Status (1)

Country Link
CN (1) CN114820543B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10853959B2 (en) * 2019-04-01 2020-12-01 Sandisk Technologies Llc Optical inspection tool and method
CN111768381A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Part defect detection method and device and electronic equipment
CN113781456A (en) * 2021-09-16 2021-12-10 欧冶云商股份有限公司 Steel surface defect detection method and equipment based on artificial intelligence image recognition
CN113850773A (en) * 2021-09-18 2021-12-28 联想(北京)有限公司 Detection method, device, equipment and computer readable storage medium
CN113989684A (en) * 2021-10-22 2022-01-28 贵州电网有限责任公司 Method and system for marking and grading machine inspection defect picture images

Also Published As

Publication number Publication date
CN114820543A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN107657603B (en) Industrial appearance detection method based on intelligent vision
CN113239930B (en) Glass paper defect identification method, system, device and storage medium
CN107944504B (en) Board recognition and machine learning method and device for board recognition and electronic equipment
CN109584227A (en) A kind of quality of welding spot detection method and its realization system based on deep learning algorithm of target detection
CN111667455A (en) AI detection method for various defects of brush
CN111814850A (en) Defect detection model training method, defect detection method and related device
CN113706495B (en) Machine vision detection system for automatically detecting lithium battery parameters on conveyor belt
JP7372017B2 (en) Steel component learning device, steel component estimation device, steel type determination device, steel component learning method, steel component estimation method, steel type determination method, and program
CN113516651A (en) Welding joint defect detection method and device based on residual error network
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN113255590A (en) Defect detection model training method, defect detection method, device and system
CN111696079A (en) Surface defect detection method based on multi-task learning
CN113627435A (en) Method and system for detecting and identifying flaws of ceramic tiles
CN115861170A (en) Surface defect detection method based on improved YOLO V4 algorithm
CN114359235A (en) Wood surface defect detection method based on improved YOLOv5l network
CN115526852A (en) Molten pool and splash monitoring method in selective laser melting process based on target detection and application
CN114743102A (en) Furniture board oriented flaw detection method, system and device
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method
CN114663382A (en) Surface defect detection method for electronic component based on YOLOv5 convolutional neural network
CN109102486B (en) Surface defect detection method and device based on machine learning
CN114820543B (en) Defect detection method and device
CN116681677A (en) Lithium battery defect detection method, device and system
CN111738991A (en) Method for creating digital ray detection model of weld defects
CN115761257A (en) Method and device for detecting angle of part based on computer vision technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant